Feed aggregator

Kubuntu: Party 2 in Review

Planet Ubuntu - Sun, 04/17/2016 - 14:54

Last night (Friday 15th April, 19:00 UTC) we held the second of our Kubuntu packaging parties. Using the new conference server provided by BigBlueButton (BBB), things worked like a dream.

Our last party was difficult to orchestrate, as we were operating across multiple channels: Internet Relay Chat, a Mumble voice server, and Google Hangouts. Our BBB server provides all three of those functions and allows each user to decide how they prefer to use it: one can listen and watch, use the chat function, turn on a mic, or use both mic and camera.

It was a brilliant party, with a great atmosphere, which is not easy to achieve in a purely online setting. Fred Dixon (Product Manager at BigBlueButton) had told us that BBB could host up to 100 people in each conference room, and we have 4 rooms!

Altogether, we estimate around 40 people came to the party, with 18 in Room 1 at its peak.

During this party, Ovidiu-Florin was productive!

  • Finished Kubuntu Podcast page on the website
  • Added Podcast menu in navigation bar on the website
  • Embedded IRC with Kiwi on the support page
  • Embedded IRC with Kiwi on the Podcast page
  • Some clean-up on Trello
  • Set-up autoposting of Kubuntu news on the Kubuntu Telegram News channel
  • Moved episodes 1-6 of Podcast show notes to the website, with embedded video
  • Discussed adding a link to the Ubuntu Release Notes on the Kubuntu release pages (from now on) (Bug report)
  • Tried to link Trello with the Kubuntu Devel Telegram group, but found that the Integram bot does not work properly with Telegram supergroups
What we learned

The theme for the party was packaging, although the primary objective was to “Make Friends, and Have Fun!” The packaging didn’t really hit the ground running, and as the night went on it took more of a back seat. The bottom line is that we didn’t get further than “Getting Set Up”.


It is clear that Kubuntu Parties are a great idea, but the focus must be on “Making Friends and Having Fun!” The party atmosphere takes on a life of its own, with the conversation flowing in different directions based upon who is talking and who has recently arrived. We feel that this very loose format works best: welcoming guests, getting them involved in the conversation, and in particular asking open questions that invite them to introduce their own topics.

Managing the interactions

Having a single presenter controlling BBB, interacting with guests, running the packaging exercises, and screen sharing is too much for one person. The next party (Friday 20th May, 19:00 UTC) will involve a greater number of hosts, who will share responsibility for hosting, screencast demos, managing the room, and driving invitations via social media.

Tightening the time frame

Both parties were allowed to overrun and take their own directions. The first party resulted in our discovering, and being given, Big Blue Button. Party 2 took on a similar un-conference direction and style. However, this lets the party fizzle out rather than ending with fireworks and leaving people wanting more. We’ll change this for the next event, closing the party on time but allowing 30 minutes to say farewell and wave our guests off.

Kubuntu Dojo

One of the objectives of the Kubuntu party was to share knowledge and enable learning. In reality, the community have shown us that they see a different purpose for the Kubuntu Party: “Having Fun and Making Friends!”

Sharing knowledge is still an important requirement for growing and developing a healthy Kubuntu community, and we are looking at two potential ways to do this.

One way to do this would be separate Kubuntu Dojo learning events; this is appealing, as Big Blue Button is designed specifically for online education.

Our other way of approaching this is to introduce Kubuntu Dojo as a segment of the Kubuntu Podcast. This too has some nice advantages, one in particular being the ability to edit out that specific section and post it as a standalone video. Over time this would build into a library of easily accessible, multimedia-based knowledge.

Conclusion

Kubuntu Parties are fantastic fun, and they work!

They attract a wide audience from beyond the Kubuntu community. At this party we attracted party-goers from the UBports project, Ubuntu, Lubuntu, the Devon and Cornwall Linux User Group and, of course, Kubuntu.

To make these parties even better we need YOUR HELP. Firstly, a party can only be a party when there are party-goers, so make sure you come along to the next event on Friday 20th May @ 19:00 UTC on our Big Blue Button conference server.

Invite your friends, and share this news on your social networks, IRC and the wider community.

David Tomaschik: PlaidCTF 2016: Butterfly

Planet Ubuntu - Sun, 04/17/2016 - 00:00

Butterfly was a 150 point pwnable in the 2016 PlaidCTF. Basic properties:

  • x86_64
  • Not PIE
  • Assume ASLR, NX

It turns out to be a very simple binary, all the relevant code in one function (main), and using only a handful of libc functions. The first thing that jumped out to me was two calls to mprotect, at the same address. I spent some time looking at the disassembly and figuring out what was going on. The relevant portions can be seen here:

I determined that the binary performed the following:

  1. Print a message.
  2. Read a line of user input and convert it to a long with strtol.
  3. Take the read value and right shift by 3 bits. Let’s call this addr.
  4. Find the (4096-byte) page containing addr and call mprotect with PROT_READ|PROT_WRITE|PROT_EXEC.
  5. xor the byte at addr with 1 << (input & 7). In other words, the lowest 3 bits of the user-provided long are used to index the bit within the byte to flip.
  6. Reprotect the page containing addr as PROT_READ|PROT_EXEC.
  7. Print a final message.
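The flip primitive in steps 3-5 can be sketched in a few lines of Python (this is my own simulation of the decoding, not the binary's code, with process memory modeled as a dict):

```python
def butterfly_flip(memory, user_input):
    """Simulate the binary's bit-flip primitive.

    The user-supplied long encodes a byte address in its upper bits and
    a bit index (0-7) in its lowest 3 bits. Memory is modeled as a dict
    mapping address -> byte value.
    """
    addr = user_input >> 3   # which byte to touch
    bit = user_input & 7     # which bit of that byte to flip
    memory[addr] = memory.get(addr, 0) ^ (1 << bit)
    return addr, bit

# Flip bit 1 of the byte at 0x40084d:
memory = {0x40084d: 0x00}
addr, bit = butterfly_flip(memory, (0x40084d << 3) | 1)
print(hex(addr), bit, hex(memory[addr]))  # 0x40084d 1 0x2
```

Note that the same encoding, address shifted left by 3 with the bit index in the low bits, is what the exploit script emits.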

The TL;DR is that we’re able to flip any bit in any mapped address space of the process. Due to ASLR, I decided to focus on the .text section of the binary as my first goal. Specifically, I began looking at the GOT and all of the executable code after the bit being flipped. I couldn’t immediately find anything obvious (there’s no branch to flip into system("/bin/sh"), after all).

I had an idea that redirecting control flow with one of the function calls (either mprotect or puts) seemed like a logical place to flip bits. I didn’t see an immediately obvious choice, so I wrote a script to brute force addresses within the two calls, flipping one bit at a time. I happened upon a bit flip in the call to mprotect that resulted in jumping back to _start+16, which effectively restarted the program. This meant I could continue to flip bits, as the one call had been replaced already.

Along with one of my team mates, we hit upon the idea of replacing the code at the end of the program with our shellcode by flipping the necessary bits. We chose code beginning at 0x40084d because it meant we could flip one bit in a je at the top to get to this code when we were ready to execute our shellcode.

We extracted the bytes originally at that address, xor’d with our shellcode (a simple 25-byte /bin/sh shellcode that I’ve previously featured), and determined which bits needed to be flipped. We then calculated the bit flips and wrote a list of numbers to perform them.

In short, we needed to:

  1. Flip a bit in call mprotect to give us a never-ending loop.
  2. Flip about 100 bits to deploy our shellcode.
  3. Flip one final bit to change the je to jne after the call to fgets.
  4. Provide garbage input for the final call to fgets.

My team mate and I both wrote scripts to do this, because we were playing with different techniques in Python. Here’s mine:

base = 0x40084d
sc = '\x48\xbb\xd1\x9d\x96\x91\xd0\x8c\x97\xff\x48\xf7\xdb\x53\x31\xc0\x99\x31\xf6\x54\x5f\xb0\x3b\x0f\x05'
current = 'dH\x8b\x04%(\x00\x00\x00H;D$@u&D\x89\xf0H\x83\xc4H[A'
flips = ''.join(chr(ord(a) ^ ord(b)) for a, b in zip(sc, current))
start_loop = 0x20041c6
end_loop = 0x2003eb0

print hex(start_loop)
for i, pos in enumerate(flips):
    pos = ord(pos)
    for bit in xrange(8):
        if pos & (1 << bit):
            print hex(((base + i) << 3) | bit)
print hex(end_loop)
print 'whoami'

Redirecting to netcat allowed us to obtain a shell, and the flag. Great challenge and amazing to see how one bit flip can do so much!

Scott Kitterman: Future of secure systems in the US

Planet Ubuntu - Sat, 04/16/2016 - 22:49

As a rule, I avoid writing publicly on political topics, but I’m making an exception.

In case you haven’t been following it, the senior Republican and the senior Democrat on the Senate Intelligence Committee recently announced a legislative proposal misleadingly called the Compliance with Court Orders Act of 2016.  The full text of the draft can be found here.  It would effectively ban devices and software in the United States that the manufacturer cannot retrieve data from.  Here is a good analysis of the breadth of the proposal and a good analysis of the bill itself.

While complying with court orders might sound great in theory, in practice this means these devices and software will be insecure by design.  While that’s probably reasonably obvious to most normal readers here, don’t just take my word for it, take Bruce Schneier‘s.

In my opinion, policy makers (and it’s not just in the United States) are suffering from a perception gap about security and how technically hard it is to get right.  It seems to me that they are convinced that technologists could just do security “right” while still allowing some level of extraordinary access for law enforcement if they only wanted to.  We’ve tried this before and the story never seems to end well.  This isn’t a complaint from wide eyed radicals that such extraordinary access is morally wrong or inappropriate.  It’s hard core technologists saying it can’t be done.

I don’t know how to get the message across.  Here’s President Obama, in my opinion, completely missing the point when he equates a desire for security with “fetishizing our phones above every other value.”  Here are some very smart people trying very hard to be reasonable about some mythical middle ground.  As Riana Pfefferkorn’s analysis that I linked in the first paragraph discusses, this middle ground doesn’t exist and all the arm waving in the world by policy makers won’t create it.

Coincidentally, this same week, the White House announced a new “Commission on Enhancing National Cybersecurity“.  Cybersecurity is certainly something we could use more of; unfortunately, Congress seems to be heading off in the opposite direction, and no one from the executive branch has spoken out against it.

Security and privacy are important to many people.  Given the personal and financial importance of data stored in computers (traditional or mobile), users don’t want criminals to get a hold of it.  Companies know this, which is why Apple iOS and Google Android both encrypt their local file systems by default now.  If a bill anything like what’s been proposed becomes law, users that care about security are going to go elsewhere.  That may end up being non-US companies’ products, or US companies may shift operations to localities more friendly to secure design.  Either way, the US tech sector loses.  A more accurate title would have been the Technology Jobs Off-Shoring Act of 2016.

EDIT: Fixed a typo.



Elizabeth K. Joseph: Color an Ubuntu Xenial Xerus

Planet Ubuntu - Sat, 04/16/2016 - 10:03

Last cycle I reached out to artist and creator of Full Circle Magazine Ronnie Tucker to see if he’d create a coloring page of a werewolf for some upcoming events. He came through and we had a lot of fun with it (blog post here).

With the LTS release coming up, I reached out to him again.

He quickly turned my request around, and now we have a xerus to color!

Click the image or here to download the full size version for printing.

Huge thanks to Ronnie for coming through with this. It’s shared under a CC-SA license, so I encourage people to print and share it at their release events and beyond!

While we’re on the topic of our African ground squirrel friend, thanks to Tom Macfarlane of the Canonical Design Team I was able to update the Animal SVGs section of the Official Artwork page on the Ubuntu wiki. For those of you who haven’t seen the mascot image, it’s a real treat.

It’s a great accompaniment to your release party. Download the SVG version for printing from the wiki page or directly here.

Kubuntu: Kubuntu Podcast #6 – Unleashing the Werewolf

Planet Ubuntu - Fri, 04/15/2016 - 17:22

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt

Show Schedule

Intro

What have we (the hosts) been doing?

Kubuntu News (5-15 minutes)

In Focus (~30 minutes)
  • Kubuntu Wily Werewolf release
    • 4.2 Linux kernel
    • KDE Applications 15.08.2
    • Dolphin – OwnCloud via Webdav
Outro (2-5 minutes)

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

In-show Notes:

Kubuntu Manual

Github: https://github.com/ahoneybun/kubuntu-manual

MediaWiki: https://userbase.kde.org/Kubuntu


Andrea Del Sarto on Google+

Where you can find Andrea’s Images and Artwork.



Wallpapers for Desktop and Mobile.



Appendix / Supporting Notes

Mounting Owncloud as folders in your home directory via Dolphin & WebDAV
  1. Open Dolphin
  2. Right click in the Directory Window
  3. Select “Create New” > “Link to location (URL)”
  4. Enter webdav://your_owncloud_server.org/owncloud/remote.php/webdav
  5. Enter a name for this Folder Share – I used Owncloud
  6. Click “OK”
  7. Dolphin will prompt for your Owncloud Username and Password
  8. Tick to Remember password, if you want to avoid being prompted
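If you first want to verify the WebDAV endpoint outside Dolphin, here is a hedged Python sketch (the server name is the placeholder from step 4, and Dolphin's webdav:// scheme corresponds to plain http(s):// at this level). It only builds the request, using the standard WebDAV PROPFIND method, without sending it:

```python
import base64
import urllib.request

# Placeholder host from the steps above; substitute your own server.
url = "https://your_owncloud_server.org/owncloud/remote.php/webdav/"

# WebDAV lists the contents of a collection with the PROPFIND method.
token = base64.b64encode(b"username:password").decode()
req = urllib.request.Request(
    url, method="PROPFIND",
    headers={"Depth": "1", "Authorization": "Basic " + token})
print(req.get_method(), req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

A 207 Multi-Status response to that request means the credentials and URL you give Dolphin are good.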

Xubuntu: The small details: Wallpapers

Planet Ubuntu - Fri, 04/15/2016 - 10:11

In this series the Xubuntu team present some of the smaller details in Xubuntu to help you use your system more efficiently. Several of the features covered in this series are new for those who will be upgrading from 14.04 LTS to 16.04 LTS. We will also cover some features that have been in Xubuntu for longer, for those that are completely new to the operating system.

We have talked about customizing in this series before, but now we take a look at another aspect of it – wallpapers. Many people use personal ones, many just use whatever the default is. Some people don’t like them at all and change to solid or gradient colors. Let’s have a look at what you can do with them in Xubuntu.

If you’ve got the new 16.04 LTS release then you will have close to 20 wallpapers, including the new community wallpaper selection.

Applying different wallpapers per workspace

One of the easiest things to accomplish is to have separate wallpapers on your workspaces, so let’s assume for a moment that you have enabled more workspaces. Go to Settings Manager → Desktop and, from the Background tab, disable Apply to all workspaces. After that you can set the wallpaper for each workspace by moving the dialog to that workspace and picking a wallpaper as you normally would.

Enabling automatically changing wallpapers

If you want, you can set the wallpaper to change automatically. To do this, enable Change the background in the dialog. After that you can tweak the settings: how often you want the change to happen, from a matter of seconds up to daily. If you use the chronological option, it will cycle through all of the wallpapers in the selected folder, split equally through the day. You can even set it to change at startup only. Finally, you can use the Random Order option to get it all mixed up!
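These desktop settings live in the xfce4-desktop xfconf channel, so they can also be scripted. A small Python sketch follows; the property path uses typical default names (screen0/monitor0/workspace0), which may differ on your machine, so list yours first with xfconf-query -c xfce4-desktop -l:

```python
def set_wallpaper_cmd(image_path, monitor="monitor0", workspace="workspace0"):
    """Build an xfconf-query command that sets the wallpaper image.

    The screen/monitor/workspace names here are assumed defaults, not
    guaranteed to match every setup.
    """
    prop = "/backdrop/screen0/%s/%s/last-image" % (monitor, workspace)
    return ["xfconf-query", "-c", "xfce4-desktop", "-p", prop, "-s", image_path]

cmd = set_wallpaper_cmd("/usr/share/xfce4/backdrops/xubuntu-wallpaper.png")
print(" ".join(cmd))
# In a running Xfce session, execute it with subprocess.run(cmd, check=True).
```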

Disabling wallpapers

If you want to use a single color on your desktop instead, set the Style to none. You’ll see the wallpapers become disabled. Now you can simply choose your Color and it will apply across the whole desktop. You can also use a horizontal or vertical gradient. Once you have enabled either of these options from the dropdown you will get to choose two colors.

Jorge Castro: I hate cloud credentials

Planet Ubuntu - Fri, 04/15/2016 - 05:10

Working on multiple clouds can be a bear sometimes. Multiple clouds, multiple regions per cloud. If your team is organized it gets even better, with multiple accounts with different permissions. Did I use the right keys for the right account for this task?

In Juju 1.x, this was particularly painful. We made you stick everything in an environments.yaml file, which is where we also had you stick other configuration options. For simple Hello Worlds this was fine, but after a while of real-world usage it just became a mess, especially if you are constantly adding and removing credentials across clouds.

So for Juju 2.0 we’re going to be a lot smarter about this. First off, we can finally tell you what clouds we support, right from the CLI:

$ juju list-clouds
CLOUD        TYPE        REGIONS
aws          ec2         us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1, ap-southeast-1, ap-southeast-2 ...
aws-china    ec2         cn-north-1
aws-gov      ec2         us-gov-west-1
azure        azure       centralus, eastus, eastus2, northcentralus, southcentralus, westus, northeurope ...
azure-china  azure       chinaeast, chinanorth
cloudsigma   cloudsigma  hnl, mia, sjc, wdc, zrh
google       gce         us-east1, us-central1, europe-west1, asia-east1
joyent       joyent      eu-ams-1, us-sw-1, us-east-1, us-east-2, us-east-3, us-west-1
lxd          lxd         localhost
maas         maas
manual       manual
rackspace    rackspace   DFW, ORD, IAD, LON, SYD, HKG

We keep this up to date too, which means when a cloud provider adds a new region and it’s ready to go, you can just add it. Also if you’re keeping score, we’re supporting waaaaaay more clouds than we did before, giving you more choice on where to plop down infrastructure. Now you can just:

$ juju update-clouds
Fetching latest public cloud list... done.

Find out about your favorite cloud with juju show-cloud azure, or set a default: juju set-default-region aws us-west-1. Or if you’ve got creds already, just fire up a controller immediately and get modelling fast: juju bootstrap exampledeploy aws/us-west-2.

OK, let’s add some creds to this cloud. We can do it in a prompty, interactive way:

$ juju add-credential google
credential name: testing
select auth-type [jsonfile*, oauth2]: oauth2
client-id: example
client-email: your@email.com
private-key: ****
project-id: project-test
credentials added for cloud google

Okay, so that handles just trying a cloud. Not bad. Now to my favorite feature. If you’re a cloud person, you probably have credentials for all sorts of clouds. These are given to you by your cloud provider and, of course, they’re all different. AWS gave me a credentials.csv, Google has a JSON file, I have a .novarc for OpenStack, and so on. Of course, in order to use the myriad of cloud tools I need, I also have environment variables set all over the place (AWS_ACCESS_KEY_ID and so on). Juju will now just snag all that stuff and add it for you:

juju autoload-credentials

And follow the confirmation prompts, that’s all you need to get going on your existing clouds.
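As an illustration of the kind of environment scanning autoload-credentials does (this is my own sketch of the idea, not Juju's actual code):

```python
import os

def detect_aws_credentials(environ=None):
    """Return an AWS credential dict if the usual env vars are set."""
    environ = os.environ if environ is None else environ
    key = environ.get("AWS_ACCESS_KEY_ID")
    secret = environ.get("AWS_SECRET_ACCESS_KEY")
    if key and secret:
        return {"auth-type": "access-key",
                "access-key": key,
                "secret-key": secret}
    return None

creds = detect_aws_credentials({"AWS_ACCESS_KEY_ID": "key",
                                "AWS_SECRET_ACCESS_KEY": "secret"})
print(creds)
```

The real command repeats this sort of check for each provider's conventional files and variables, then prompts before saving anything.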

We also give you juju add-credential and juju remove-credential. Let’s have a look at ~/.local/share/juju/credentials.yaml so we can show you the structure of what a multi-cloud set up would look like:

credentials:
  aws:
    default-credential: peter
    default-region: us-west-2
    peter:
      auth-type: access-key
      access-key: key
      secret-key: secret
    paul:
      auth-type: access-key
      access-key: key
      secret-key: secret
  homemaas:
    peter:
      auth-type: oauth1
      maas-oauth: mass-oauth-key
  homestack:
    default-region: region-a
    peter:
      auth-type: userpass
      password: secret
      tenant-name: tenant
      username: user
  google:
    peter:
      auth-type: jsonfile
      file: path-to-json-file
  azure:
    peter:
      auth-type: userpass
      application-id: blah
      subscription-id: blah
      application-password: blah
  joyent:
    peter:
      auth-type: userpass
      sdc-user: blah
      sdc-key-id: blah
      private-key: blah (or private-key-path)
      algorithm: blah

So other than Peter having a personal cloud at home, everything here looks pretty normal and straightforward.
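If you accumulate a lot of entries, a few lines of Python with PyYAML (an assumption on my part; any YAML parser will do) can sanity-check the structure. Shown here against an inline snippet rather than the real ~/.local/share/juju/credentials.yaml:

```python
import yaml  # PyYAML

snippet = """
credentials:
  aws:
    default-credential: peter
    peter:
      auth-type: access-key
      access-key: key
      secret-key: secret
    paul:
      auth-type: access-key
      access-key: key
      secret-key: secret
"""

data = yaml.safe_load(snippet)
for cloud, entries in data["credentials"].items():
    # Skip per-cloud defaults such as default-credential/default-region.
    names = [k for k in entries if not k.startswith("default-")]
    print(cloud, "->", ", ".join(sorted(names)))  # aws -> paul, peter
```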

I’ve only really covered the initial user experience of our credentials work, but it’s by far the most boring part. The nice bit is that Juju also now allows full ACLs to the models allowing devops teams to set up infrastructure without giving up the keys to the entire castle to the wrong team. An example looks something like this:

juju add-user jorge --models mymodel --acl=write

But that is a topic for another day!

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2016

Planet Ubuntu - Thu, 04/14/2016 - 23:14

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, 111.75 work hours have been dispatched among 10 paid contributors. Their reports are available:

  • Antoine Beaupré did 8h.
  • Ben Hutchings did 12.75 hours (out of 11 hours allocated + 7.25 extra hours remaining, meaning that he still has 5.50 extra hours to do over April).
  • Brian May did 10 hours.
  • Chris Lamb did 7 hours (instead of the 14.25 hours he was allocated, thus compensating for the extra hours he did last month).
  • Damyan Ivanov did nothing out of the 7.25 remaining hours he had; he opted to give them back and come back to LTS work later.
  • Guido Günther did 13 hours (out of 12 hours allocated + 4.25 remaining hours, leaving 3.25 extra hours for April).
  • Markus Koschany did 14.25 hours.
  • Mike Gabriel did nothing and opted to give back the 8 hours allocated. He will stop LTS work for now as he has other projects taking all his time.
  • Santiago Ruano Rincón did 10 hours (out of 12h allocated + 1.50 remaining, thus keeping 3.50 extra hours for April).
  • Scott Kitterman did a few hours but was not able to provide his report in time due to sickness. His next report will cover two months.
  • Thorsten Alteholz did 14.25 hours.
Evolution of the situation

The number of sponsored hours started to increase for April (116.75 hours, thanks to Sonus Networks) and should increase even further for May (with a new Gold sponsor currently joining us, Babiel GmbH). Hopefully the trend will continue so that we can reach our objective of funding the equivalent of a full-time position.

At the end of the month the LTS team will be fully responsible for all Debian 7 Wheezy updates. For now, paid contributors are still helping the security team by fixing packages that were already fixed in squeeze but are still outstanding in wheezy.

They are also looking for ways to ensure that some of the most complicated packages can be supported over the wheezy LTS timeframe. It is likely that we will seek external help (possibly from credativ which is already handling support of PostgreSQL) for the maintenance of Xen and that some other packages (like libav, vlc, maybe qemu?) will be upgraded to newer versions which are still maintained (either upstream or in Debian Jessie by the Debian maintainers).

Thanks to our sponsors

New sponsors are in bold.


Stéphane Graber: LXD 2.0: LXD in LXD [8/12]

Planet Ubuntu - Thu, 04/14/2016 - 10:30

This is the eighth blog post in this series about LXD 2.0.


In the previous post I covered how to run Docker inside LXD, which is a good way to get access to the portfolio of applications provided by Docker while running in the safety of the LXD environment.

One use case I mentioned was offering an LXD container to your users and then having them use their container to run Docker. Well, what if they themselves want to run other Linux distributions inside their container using LXD, or even allow another group of people to have access to a Linux system by running a container for them?

Turns out, LXD makes it very simple to allow your users to run nested containers.

Nesting LXD

The simplest case can be shown by using an Ubuntu 16.04 image. Ubuntu 16.04 cloud images come with LXD pre-installed. The daemon itself isn’t running, as it’s socket-activated, so it doesn’t use any resources until you actually talk to it.

So let’s start an Ubuntu 16.04 container with nesting enabled:

lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true

You can also set the security.nesting key on an existing container with:

lxc config set <container name> security.nesting true

Or for all containers using a particular profile with:

lxc profile set <profile name> security.nesting true

With that container started, you can now get a shell inside it, configure LXD and spawn a container:

stgraber@dakara:~$ lxc launch ubuntu-daily:16.04 c1 -c security.nesting=true
Creating c1
Starting c1
stgraber@dakara:~$ lxc exec c1 bash
root@c1:~# lxd init
Name of the storage backend to use (dir or zfs): dir
We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.
LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.
Would you like to have your containers share their parent's allocation (yes/no)? yes
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.
root@c1:~# lxc launch ubuntu:14.04 trusty
Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
Creating trusty
Retrieving image: 100%
Starting trusty
root@c1:~# lxc list
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
|  NAME  |  STATE  |         IPV4          |                     IPV6                     |    TYPE    | SNAPSHOTS |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
| trusty | RUNNING |                (eth0) | fd7:f15d:d1d6:da14:216:3eff:fef1:4002 (eth0) | PERSISTENT | 0         |
+--------+---------+-----------------------+----------------------------------------------+------------+-----------+
root@c1:~#

It really is that simple!

The online demo server

As this post is pretty short, I figured I would spend a bit of time talking about the demo server we’re running. We also just reached the 10000-session mark earlier today!

That server is basically just a normal LXD running inside a pretty beefy virtual machine with a tiny daemon implementing the REST API used by our website.

When you accept the terms of service, a new LXD container is created for you with security.nesting enabled as we saw above. You are then attached to that container as you would when using “lxc exec” except that we’re doing it using websockets and javascript.

The containers you then create inside this environment are all nested LXD containers.
You can then nest even further in there if you want to.

We are using the whole range of LXD resource limitations to prevent one user’s actions from impacting the others and pretty closely monitor the server for any sign of abuse.

If you want to run your own similar server, you can grab the code for our website and the daemon with:

git clone github.com/lxc/linuxcontainers.org
git clone github.com/lxc/lxd-demo-server

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Ubuntu Podcast from the UK LoCo: S09E07 – Light in the Window - Ubuntu Podcast

Planet Ubuntu - Thu, 04/14/2016 - 07:00

It’s Episode Seven of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Jono Bacon: Mycroft and Building a Future of Open Artificial Intelligence

Planet Ubuntu - Wed, 04/13/2016 - 22:01

Last year a new project hit Kickstarter called Mycroft that promises to build an artificial intelligence assistant. The campaign set out to raise $99,000 and raised just shy of $128,000.

Now, artificial intelligence assistants are nothing particularly new. There are talking phones and tablets such as Apple’s Siri and Google Now, and of course the talking trash can, the Amazon Echo. Mycroft is different though and I have been pretty supportive of the project, so much so that I serve as an advisor to the team. Let me tell you why.

Here is a recent build in action, demoed by Ryan Sipes, Mycroft CTO and all round nice chap:

Mycroft is interesting both for the product it is designed to be and the way the team are building it.

For the former, artificial intelligence assistants are going to be a prevalent part of our future. Where these devices will be judged though is in the sheer scope of the functions, information, and data they can interact with. They won’t be judged by what they can do, but instead what they can’t do.

This is where the latter piece, how Mycroft is being built, really interests me.

Firstly, Mycroft is open source in not just the software, but also the hardware and service it connects to. You can buy a Mycroft, open it up, and peek into every facet of what it is, how it works, and how information is shared and communicated. Now, for most consumers this might not be very interesting, but from a product development perspective it offers some distinctive benefits:

  • A community can be formed that can play a role in the future development and success of the product. This means that developers, data scientists, advocates, and more can play a part in Mycroft.
  • Capabilities can be crowdsourced to radically expand what Mycroft can do. In much the same way OpenStreetMap has been able to map the world, developers can scratch their own itch and create capabilities to extend Mycroft.
  • The technology can be integrated far beyond the white box sitting on your kitchen counter and into Operating Systems, devices, connected home units, and beyond.
  • The hardware can be iterated by people building support for Mycroft on additional boards. This could potentially lower costs for future units with the integration work reduced.
  • Improved security for users with a wider developer community wrapped around the project.
  • A partner ecosystem can be developed where companies can use and invest in the core Mycroft open source projects to reduce their costs and expand the technology.

There is, though, a far wider set of implications with Mycroft too. Much has been written about the concerns from people such as Elon Musk and Stephen Hawking about the risks of artificial intelligence, primarily if it is owned by a single company, or a small set of companies.

While I don’t think Skynet is taking over anytime soon, these concerns are valid, and this raises the importance of artificial intelligence being something that is open, not proprietary. I think Mycroft can play a credible role in building a core set of services around AI that are part of an open commons that companies can invest in. Think of this as the OpenStack of AI, if you will.

Hacking on Mycroft

So, it would be remiss if I didn’t share a few details of how the curious among you can get involved. Mycroft currently has three core projects:

  • The Adapt Intent Parser converts natural language into machine readable data structures.
  • Mimic takes in text and reads it out loud to create a high quality voice.
  • OpenSTT is aimed at creating an open source speech-to-text model that can be used by individuals and companies to allow for high accuracy, low-latency conversion of speech into text.

You can also find the various projects here on GitHub and find a thriving user and developer community here.

Mycroft are also participating in the IBM Watson AI XPRIZE, where the goal is to create an artificial intelligence platform that interacts with people so naturally that when people speak to it they’ll be unable to tell if they’re talking to a machine or to a person. You can find out more about how Mycroft is participating here.

I know the team are very interested in attracting developers, docs writers, translators, advocates, and more to play a role across these different parts of the project. If this all sounds very exciting to you, be sure to get started by posting to the forum.

Jono Bacon: Going Large on Medium

Planet Ubuntu - Wed, 04/13/2016 - 12:19

I just wanted to share a quick note to let you know that I will be sharing future posts both on jonobacon.org and on my Medium site.

I would love to hear what kind of content you would find interesting for me to share. Feel free to share in the comments!


Andrea Corbellini: Running Docker Swarm inside LXC

Planet Ubuntu - Wed, 04/13/2016 - 11:00

I've been using Docker Swarm inside LXC containers for a while now, and I thought that I could share my experience with you. Due to their nature, LXC containers are pretty lightweight and require very few resources if compared to virtual machines. This makes LXC ideal for development and simulation purposes. Running Docker Swarm inside LXC requires a few steps that I'm going to show you in this tutorial.

Before we begin, a quick premise: LXC, Docker and Swarm can be configured in many different ways. Here I'm showing just my preferred setup: LXC with AppArmor disabled, Docker with the OverlayFS storage driver, and Swarm with etcd discovery. Many other kinds of configuration can work under LXC — leave a comment if you want to know more.


  1. Create the Swarm Manager container
  2. Modify configuration for the Swarm Manager container
  3. Load the OverlayFS module
  4. Start the container and install Docker
  5. Check if Docker is working
  6. Set up the Swarm Manager
  7. Create the Swarm Agents
  8. Play with the Swarm


  • the host is the system that will create and start the LXC containers (e.g. your laptop);
  • the manager is the LXC container that will run the Swarm manager (it'll run the swarm manage command);
  • an agent is one of the many LXC containers that will run a Swarm agent node (it'll run the swarm join command).

To avoid ambiguity, all commands will be prefixed with a prompt such as root@host:~#, root@swarm-manager:~# and root@swarm-agent-1:~#.


This tutorial assumes that you have at least a vague idea of what Docker and Docker Swarm are. You should also be familiar with the shell.

This tutorial has been successfully tested on Ubuntu 15.10 (that ships with Docker 1.6) and Ubuntu 16.04 LTS (Docker 1.10), but it may work on other distributions and Docker versions as well.

Step 1: Create the Swarm Manager container

Create a new LXC container with:

root@host:~# lxc-create -t download -n swarm-manager

When prompted, choose your favorite distribution and architecture. I chose ubuntu / xenial / amd64.

lxc-create needs to run as root: unprivileged containers won't work. We could actually make Docker start inside an unprivileged container, but we wouldn't be allowed to create block and character devices, and many Docker containers need this ability.

Step 2: Modify the configuration for the Swarm Manager container

Before starting the LXC container, open the file /var/lib/lxc/swarm-manager/config on the host and add the following configuration to the bottom of the file:

# Distribution configuration
# ...

# Container specific configuration
# ...

# Network configuration
# ...

# Allow running Docker inside LXC
lxc.aa_profile = unconfined
lxc.cap.drop =

The first rule (lxc.aa_profile = unconfined) disables AppArmor confinement. The second one (lxc.cap.drop =) gives all capabilities to the processes in the LXC container.

These two rules may seem harmful from a security standpoint, and in fact they are. However, we must remember that we will be running Docker inside the LXC container. Docker already ships with its own AppArmor profile, and the two rules above are needed precisely so that Docker can talk to AppArmor.

So, while Docker itself won't be confined, Docker containers will be confined, and this is an encouraging fact.

Step 3: Load the OverlayFS module

OverlayFS is shipped with Ubuntu, but not enabled by default. To enable it:

root@host:~# modprobe overlay

It is important to do this step before installing Docker. Docker supports various storage drivers, and when Docker is installed for the first time it tries to detect the most appropriate one for the system. If Docker detects that OverlayFS is not loaded, it'll fall back to the device mapper. There's nothing wrong with the device mapper, and we could make it work; however, as I said at the beginning, in this tutorial I'm focusing only on OverlayFS.
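If you want to confirm which driver Docker would pick, a quick pre-flight check on the host is possible. This is my own sketch, not from the original post; it assumes a Linux host where /proc is mounted:

```shell
# /proc/filesystems lists the filesystems the running kernel supports,
# so OverlayFS shows up there once the module is loaded (or built in).
if grep -qw overlay /proc/filesystems; then
    echo "overlay available"
else
    echo "overlay not loaded; run: modprobe overlay"
fi
```

Either message is possible depending on the host, which is exactly the point of checking before installing Docker.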

If you want to load OverlayFS at boot, instead of doing it manually after every reboot, add it to /etc/modules-load.d/modules.conf:

root@host:~# echo overlay >> /etc/modules-load.d/modules.conf

Step 4: Start the container and install Docker

It's time to see if we did everything right!

root@host:~# lxc-start -n swarm-manager
root@host:~# lxc-attach -n swarm-manager
root@swarm-manager:~# apt update
root@swarm-manager:~# apt install docker.io

Installation should complete without any problem. If you get an error like this:

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
invoke-rc.d: initscript docker, action "start" failed.
dpkg: error processing package docker.io (--configure):
 subprocess installed post-installation script returned error exit status 1

It means that Docker failed to start. Try checking systemctl status docker as suggested, or run docker daemon manually. You might get an error like this:

root@swarm-manager:~# docker daemon
WARN[0000] devmapper: Udev sync is not supported. This will lead to unexpected behavior, data loss and errors. For more information, see https://docs.docker.com/reference/commandline/daemon/#daemon-storage-driver-option
ERRO[0000] There are no more loopback devices available.
ERRO[0000] [graphdriver] prior storage driver "devicemapper" failed: loopback attach failed
FATA[0000] Error starting daemon: error initializing graphdriver: loopback attach failed

In this case, Docker is using the devicemapper storage driver and is complaining about the lack of loopback devices. Check whether OverlayFS is loaded and reinstall Docker.

Or you might get an error like this:

root@swarm-manager:~# docker daemon
...
FATA[0000] Error starting daemon: AppArmor enabled on system but the docker-default profile could not be loaded.

In this other case, Docker is complaining about the fact that it can't talk to AppArmor. Check the configuration of the LXC container.

Step 5: Check if Docker is working

Once you are all set, you should be able to use Docker: try running docker info, docker ps, or launching a container:

root@swarm-manager:~# docker run --rm docker/whalesay cowsay burp!
Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay
...
Status: Downloaded newer image for docker/whalesay:latest
 _______ 
< burp! >
 ------- 
    \
     \
      \     
                    ##        .            
              ## ## ##       ==            
           ## ## ## ##      ===            
       /""""""""""""""""___/ ===        
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~   
       \______ o          __/            
         \    \        __/             
           \____\______/   

It appears to be working. By the way, we can check whether Docker is correctly confining containers. Try running a Docker container and check on the host the output of aa-status: you should see a process running with the docker-default profile. For example:

root@swarm-manager:~# docker run --rm ubuntu bash -c 'while true; do sleep 1; echo -n zZ; done'
zZzZzZzZzZzZzZzZ...

# On another shell
root@host:~# aa-status
apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
   /sbin/dhclient
   /usr/lib/NetworkManager/nm-dhcp-client.action
   /usr/lib/NetworkManager/nm-dhcp-helper
   /usr/lib/connman/scripts/dhclient-script
   docker-default
0 profiles are in complain mode.
4 processes have profiles defined.
4 processes are in enforce mode.
   /sbin/dhclient (797)
   /sbin/dhclient (2832)
   docker-default (6956)
   docker-default (6973)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
root@host:~# ps -ef | grep 6956
root      6956  4982  0 17:17 ?        00:00:00 bash -c while true; do sleep 1; echo -n zZ; done
root      6973  6956  0 17:17 ?        00:00:00 sleep 1
root      6982  6808  0 17:17 pts/3    00:00:00 grep --color=auto 6956

Yay! Everything is running as expected: we launched a process inside a Docker container, and that process is running with the docker-default AppArmor profile. Once again: even if LXC is running unconfined, our Docker containers are not.

Step 6: Set up the Swarm Manager

That was the hardest part. Now we can proceed to set up Swarm as we usually would.

As I said at the beginning, Swarm can be configured in many ways. In this tutorial I'll show how to set it up with etcd discovery. First of all, we need the IP address of the LXC container:

root@swarm-manager:~# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:16:3e:8e:cb:43
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::216:3eff:fe8e:cb43/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23177 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20859 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:147652946 (147.6 MB)  TX bytes:1455613 (1.4 MB)

The inet addr shown above is my IP address. Let's start etcd:

root@swarm-manager:~# SWARM_MANAGER_IP=
root@swarm-manager:~# docker run -d --restart=always --name=etcd -p 4001:4001 -p 2380:2380 -p 2379:2379 \
    quay.io/coreos/etcd -name etcd0 \
    -advertise-client-urls http://$SWARM_MANAGER_IP:2379,http://$SWARM_MANAGER_IP:4001 \
    -listen-client-urls, \
    -initial-advertise-peer-urls http://$SWARM_MANAGER_IP:2380 \
    -listen-peer-urls \
    -initial-cluster-token etcd-cluster-1 \
    -initial-cluster etcd0=http://$SWARM_MANAGER_IP:2380 \
    -initial-cluster-state new
Unable to find image 'quay.io/coreos/etcd:latest' locally
latest: Pulling from coreos/etcd
...
Status: Downloaded newer image for quay.io/coreos/etcd:latest
e742278a97d2ad3f88658aa871903d20b4094e551969a03aa8332d3876fe5d0d
root@swarm-manager:~# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                                        NAMES
e742278a97d2   quay.io/coreos/etcd   "/etcd -name etcd0 -a"   32 seconds ago   Up 31 seconds>2379-2380/tcp,>4001/tcp, 7001/tcp   etcd

Replace the value with the IP address of your LXC container.

Note that I've started etcd with --restart=always, so that etcd is automatically started every time the LXC container starts. With this option, etcd will restart even if you explicitly stop it. Drop --restart=always if that's not what you want.

Now we can start the Swarm manager:

root@swarm-manager:~# docker run -d --restart=always --name=swarm -p 3375:3375 \
    swarm manage -H etcd://$SWARM_MANAGER_IP:2379
Unable to find image 'swarm:latest' locally
latest: Pulling from library/swarm
...
Status: Downloaded newer image for swarm:latest
8080c93c544ff92cc2cf682ff0bbc82e0d2dfb01e1f98f202c3a0801d3427330
root@swarm-manager:~# docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS                                        NAMES
46b556e73e87   swarm                 "/swarm manage -H 0.0"   3 seconds ago   Up 2 seconds   2375/tcp,>3375/tcp                swarm
e742278a97d2   quay.io/coreos/etcd   "/etcd -name etcd0 -a"   7 minutes ago   Up 7 minutes>2379-2380/tcp,>4001/tcp, 7001/tcp   etcd

Our Swarm manager is up and running. We can connect to it and issue a few commands:

root@swarm-manager:~# docker -H localhost:3375 info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: swarm/1.1.3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 0
Plugins: 
 Volume: 
 Network: 
Kernel Version: 4.4.0-15-generic
Operating System: linux
Architecture: amd64
CPUs: 0
Total Memory: 0 B
Name: d39c33295ef3

As you can see there are no nodes connected, as we would expect. Everything looks good.

Step 7: Create the Swarm Agents

Our Swarm manager can't do anything interesting without agent nodes. Creating new LXC containers for the agents is not much different from what we already did with the manager. To set up new agents in an automatic fashion I've created a script, so that you don't need to repeat the steps manually:

#!/bin/bash

set -eu

SWARM_MANAGER_IP=
DOWNLOAD_DIST=ubuntu
DOWNLOAD_RELEASE=xenial
DOWNLOAD_ARCH=amd64

for LXC_NAME in "$@"
do
    LXC_PATH="/var/lib/lxc/$LXC_NAME"
    LXC_ROOTFS="$LXC_PATH/rootfs"

    # Create the container.
    lxc-create -t download -n "$LXC_NAME" -- \
        -d "$DOWNLOAD_DIST" -r "$DOWNLOAD_RELEASE" -a "$DOWNLOAD_ARCH"
    cat <<EOF >> "$LXC_PATH/config"

# Allow running Docker inside LXC
lxc.aa_profile = unconfined
lxc.cap.drop =
EOF

    # Start the container and wait for networking to start.
    lxc-start -n "$LXC_NAME"
    sleep 10s

    # Install Docker.
    lxc-attach -n "$LXC_NAME" -- apt-get update
    lxc-attach -n "$LXC_NAME" -- apt-get install -y docker.io

    # Tell Docker to listen on all interfaces.
    sed -i -e 's/^#DOCKER_OPTS=.*$/DOCKER_OPTS="-H"/' "$LXC_ROOTFS/etc/default/docker"
    lxc-attach -n "$LXC_NAME" -- systemctl restart docker

    # Join the Swarm.
    SWARM_AGENT_IP="$(lxc-attach -n "$LXC_NAME" -- ifconfig eth0 | grep -Po '(?<=inet addr:)\S+')"
    lxc-attach -n "$LXC_NAME" -- docker run -d --restart=always --name=swarm \
        swarm join --addr="$SWARM_AGENT_IP:2375" "etcd://$SWARM_MANAGER_IP:2379"
done

Be sure to change the values for SWARM_MANAGER_IP, DOWNLOAD_DIST, DOWNLOAD_RELEASE and DOWNLOAD_ARCH to fit your needs.

Thanks to this script, creating 10 new agents is as simple as running one command:

root@host:~# ./swarm-agent-create swarm-agent-{0..9}

Here's an explanation of what the script does:

  • It first sets up a new LXC container following steps 1-5 above, that is: create a new LXC container (with lxc-create), apply the LXC configuration (lxc.aa_profile and lxc.cap.drop rules), start the container and install Docker.

    LXC_PATH="/var/lib/lxc/$LXC_NAME"
    LXC_ROOTFS="$LXC_PATH/rootfs"

    # Create the container.
    lxc-create -t download -n "$LXC_NAME" -- \
        -d "$DOWNLOAD_DIST" -r "$DOWNLOAD_RELEASE" -a "$DOWNLOAD_ARCH"
    cat <<EOF >> "$LXC_PATH/config"

# Allow running Docker inside LXC
lxc.aa_profile = unconfined
lxc.cap.drop =
EOF

    # Start the container and wait for networking to start.
    lxc-start -n "$LXC_NAME"
    sleep 10s

    # Install Docker.
    lxc-attach -n "$LXC_NAME" -- apt-get update
    lxc-attach -n "$LXC_NAME" -- apt-get install -y docker.io
  • Our Swarm agents need to be reachable by the manager. For this reason we need to configure them so that they bind to a public interface. To do so, the script adds DOCKER_OPTS="-H" and restarts Docker.

    # Tell Docker to listen on all interfaces.
    sed -i -e 's/^#DOCKER_OPTS=.*$/DOCKER_OPTS="-H"/' "$LXC_ROOTFS/etc/default/docker"
    lxc-attach -n "$LXC_NAME" -- systemctl restart docker
  • Lastly, the script checks the IP address of the LXC container and launches Swarm.

    # Join the Swarm.
    SWARM_AGENT_IP="$(lxc-attach -n "$LXC_NAME" -- ifconfig eth0 | grep -Po '(?<=inet addr:)\S+')"
    lxc-attach -n "$LXC_NAME" -- docker run -d --restart=always --name=swarm \
        swarm join --addr="$SWARM_AGENT_IP:2375" "etcd://$SWARM_MANAGER_IP:2379"
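The IP-extraction step above relies on GNU grep's Perl-compatible lookbehind (-P), which is easy to test in isolation. Here it runs against a sample line of pre-Xenial ifconfig output; the address is made up purely for illustration:

```shell
# grep -Po with a lookbehind prints every non-whitespace run that
# immediately follows the literal "inet addr:".
sample='          inet addr:  Bcast:  Mask:'
echo "$sample" | grep -Po '(?<=inet addr:)\S+'
# prints:
```

Note that Xenial's ifconfig changed this output format, so on newer releases the pattern would need adjusting.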
Step 8: Play with the Swarm

Now, if we check docker info on the Swarm manager, we should see 10 healthy nodes:

root@swarm-manager:~# docker -H localhost:3375 info
Containers: 10
 Running: 10
 Paused: 0
 Stopped: 0
Images: 10
Server Version: swarm/1.1.3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 10
 swarm-agent-0:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:32:35Z
 swarm-agent-1:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:31:49Z
 swarm-agent-2:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:31:54Z
 swarm-agent-3:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:32:03Z
 swarm-agent-4:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:32:22Z
 swarm-agent-5:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:32:16Z
 swarm-agent-6:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:32:21Z
 swarm-agent-7:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:31:43Z
 swarm-agent-8:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:32:17Z
 swarm-agent-9:
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 4
  └ Reserved Memory: 0 B / 4.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.4.0-15-generic, operatingsystem=Ubuntu 16.04, storagedriver=overlay
  └ Error: (none)
  └ UpdatedAt: 2016-04-13T15:32:30Z
Plugins: 
 Volume: 
 Network: 
Kernel Version: 4.4.0-15-generic
Operating System: linux
Architecture: amd64
CPUs: 40
Total Memory: 40.52 GiB
Name: d39c33295ef3

Let's try running a command on the Swarm:

root@swarm-manager:~# docker -H localhost:3375 run -i --rm docker/whalesay cowsay 'It works!'
 ___________ 
< It works! >
 ----------- 
    \
     \
      \     
                    ##        .            
              ## ## ##       ==            
           ## ## ## ##      ===            
       /""""""""""""""""___/ ===        
  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~   
       \______ o          __/            
         \    \        __/             
           \____\______/   

Conclusion

We created a Swarm cluster consisting of one manager and 10 agents, and we kept memory and disk usage low thanks to LXC containers. We also succeeded in confining our Docker containers with AppArmor. Overall, this setup is probably not ideal for use in a production environment, but very useful for simulating clusters on your laptop.

I hope you enjoyed the tutorial. Feel free to leave a comment if you have questions!

Ubuntu App Developer Blog: Ubuntu Scopes Showdown: Here are the winners!

Planet Ubuntu - Wed, 04/13/2016 - 09:14

After a month of deliberations, it's time to announce the Scopes Showdown 2016 winners!

It's been a blast to see the interaction this contest has generated between scopes developers and the scopes API team, many bugs have been fixed, contributions have been accepted and many suggestions have been considered for inclusion and are now on the roadmap (which will be discussed during the next Ubuntu Online Summit)!

About half of the accepted entries are using the new JavaScript API, which is very exciting, to say the least. All developers have put their heart in these scopes and they all have their merits, but we had to pick the three best and also the one seen as the most innovative...

Thanks to all participants and judges, here are the results!

See the results ›

Thomas Ward: Ubuntu Xenial: NGINX and PHP7.0

Planet Ubuntu - Tue, 04/12/2016 - 11:18

Hello again! NGINX 1.9.14 is now available in Ubuntu Xenial. There are quite a few things that everyone who currently uses nginx with php5-fpm in Ubuntu should know!

HTTP/2 is now enabled

Yes, HTTP/2 is now enabled for nginx-core, nginx-full, and nginx-extras in Ubuntu Xenial. Add http2 to your SSL listener line in your server blocks, and HTTP/2 will be enabled for that port and site.
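As a sketch, a minimal server block with HTTP/2 enabled might look like the following; the server name and certificate paths are placeholders of my own, not from the original post:

```nginx
server {
    listen 443 ssl http2;                            # "http2" added to the SSL listener line
    server_name example.com;                         # placeholder
    ssl_certificate     /etc/ssl/certs/example.pem;  # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.key;
    root /var/www/html;
}
```

HTTP/2 applies per listening port, so once one server block on a port enables it, that port serves HTTP/2.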

For HTTP/2 on non-Xenial Ubuntu releases, you can use the Mainline PPA for Wily and later. Anything before Wily does not have full HTTP/2 support, and very likely will not be usable to get HTTP/2 working as intended.

Ubuntu Xenial ships php7.0-fpm, and not php5-fpm, and this will break existing site configurations

The Ubuntu Xenial packages for nginx have already been updated for this change, pointing to php7.0-fpm instead of php5-fpm.

However, users who have existing site configurations will not benefit from these changes. They must manually apply the changes.

Effectively, this is what a default setup uses to interface with the default php5-fpm setup on Ubuntu versions before Xenial, passing all PHP processing to the php5-fpm backend. This excerpt is from the default configuration file, but the pattern is similar wherever PHP requests are passed:

location ~ \.php$ {
        include snippets/fastcgi-php.conf;

        # With php5-cgi alone:
        #fastcgi_pass;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
}

In Ubuntu Xenial, the TCP listener for php7.0-cgi is unchanged; for php7.0-fpm, however, existing site configurations must be updated to look like this:

location ~ \.php$ {
        include snippets/fastcgi-php.conf;

        # With php7.0-cgi alone:
        #fastcgi_pass;
        # With php7.0-fpm:
        fastcgi_pass unix:/var/run/php7.0-fpm.sock;
}

This will prevent HTTP 502 Bad Gateway errors, and will use the updated php7.0-fpm instead of the php5-fpm packages.
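One way to apply the socket change in bulk is a sed substitution over your site files. This is my own sketch, not from the original post, so back up /etc/nginx first; it is demonstrated here on a scratch file rather than a live config:

```shell
# Rewrite the php5-fpm socket path to php7.0-fpm in a sample config line.
# On a real system the target would be your files under /etc/nginx/sites-available/.
conf=$(mktemp)
echo 'fastcgi_pass unix:/var/run/php5-fpm.sock;' > "$conf"
sed -i.bak 's|unix:/var/run/php5-fpm.sock|unix:/var/run/php7.0-fpm.sock|g' "$conf"
cat "$conf"
# prints: fastcgi_pass unix:/var/run/php7.0-fpm.sock;
```

The -i.bak flag keeps a backup copy of each edited file, which makes it easy to revert.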

(If for some reason you still want to have php5-fpm under Xenial, you will not be able to get support from Ubuntu for this; you will need to use a PPA. I explain this on a different post on my blog.)

Thomas Ward: Ubuntu Xenial: Adding php5.6 to Xenial

Planet Ubuntu - Tue, 04/12/2016 - 11:16

Ubuntu Xenial will not ship php5 at all.

The only way to get continued php5 access is to use a PPA, specifically Ondřej Surý’s PPA for co-installable php5 and php7.0. However, this is not supported by the Ubuntu Server Team or the Ubuntu Security Team, and you accept the risks therein of using PPAs for getting php5.

The packages are *not* named php5 but are instead named php5.6.

So, to add php5.6-fpm to Xenial, you would do something like this to add the PPA, update, and then also install php5.6-fpm and dependencies:

sudo apt-get install python-software-properties
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php5.6-fpm

(Note that I have not tested this myself; it is, however, reportedly usable, based on user feedback I have gathered on Ask Ubuntu.)

This should be a similar process for any of the other php5.6 packages you would need. However, you do NOT need to re-add the PPA if it’s already on your system.

The Fridge: Ubuntu Weekly Newsletter Issue 461

Planet Ubuntu - Mon, 04/11/2016 - 17:55

Welcome to the Ubuntu Weekly Newsletter. This is issue #461 for the week April 4 – 10, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Leonard Viator
  • Daniel Beck
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License


Stephen Kelly: How do you use Grantlee?

Planet Ubuntu - Mon, 04/11/2016 - 14:19

Grantlee has been out in the wild for quite some years now. Development stagnated for a while as I was concentrating on other things, but I’m updating it now to prepare for a new release.

I’m giving a talk about it in just over a week (the first since 2009), and I’m wondering how people are using it these days. The last time I really investigated this was 6 years ago.

I’m really interested in knowing about other users of Grantlee and other use-cases where it fits. Here are some of the places I’m already aware of Grantlee in use:

Many areas of KDE PIM use Grantlee for rendering content such as addressbook entries and rss feeds along with some gui editors for creating a new look. The qgitx tool also uses it for rendering commits in the view with a simple template.

It is also used in the Cutelyst web framework for generating html, templated emails and any other use-cases users of that framework have.

There is also rather advanced Grantlee integration available in KDevelop for new class generation, using the same principles I blogged about some years ago.

It is also used by the subsurface application for creating dive summaries and reports. Skrooge also uses it for report generation.

It is used in the Oyranos build system, seemingly to generate some of the code compiled into the application.

Also on the subject of generating things, it seems to be used in TexturePacker, a tool for game developers to create efficient assets for their games. Grantlee provides one of the core selling points of that software: the ability to work with any game engine.

Others have contacted me about using Grantlee to generate documentation, or to generate unit tests to feed to another DSL. That’s not too far from how kitemmodels uses it to generate test cases for proxy model crashes.

Do you know of any other users or use-cases for Grantlee? Let us know in the comments!

Aurélien Gâteau: Introducing Reposetup

Planet Ubuntu - Mon, 04/11/2016 - 14:03

The other day at work I was considering how we could setup a simple server to host Git repositories for proof-of-concepts and other one-off projects. Sometimes we create a proof-of-concept just to illustrate a point in a pull request, or to explore options before starting work. Git is handy for this: it makes it easy to try things, fail, rewind, explore other options. It felt overkill to ask our IT teams to create official repositories for such potentially short-lived projects which don't need the full infrastructure required to manage long-term projects.

Since we already have an internal server on which we have SSH access, one alternative I considered was installing a tool like GitLab, but that felt heavyweight: I didn't want to create additional load on the server by adding other services. Gitolite was another option, but using it is a bit more involved, and since we all have SSH access on this server, the main point of Gitolite (providing access without giving all users a shell account) was not required.

Hosting a Git repository on a server with SSH access is actually quite simple: create a bare repository with git init --bare, optionally mark it shareable with git-daemon and/or enable the hook to provide read-only HTTP access to it.
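As a rough illustration of those manual steps, here is the bare-repository workflow in a throwaway directory; all paths are examples of my own, not Reposetup's defaults:

```shell
set -e
base=$(mktemp -d)

# "Server side": a bare repository, as you would create over SSH.
git init --bare --quiet "$base/testproj.git"

# "Client side": clone it, commit something, and push it back.
git clone --quiet "$base/testproj.git" "$base/work" 2>/dev/null
cd "$base/work"
echo 'proof of concept' > README
git add README
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial commit'
git push --quiet origin HEAD
```

On a real server the bare repository would live under a path served by git-daemon or the web server, which is exactly the bookkeeping Reposetup takes care of.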

At this point, you can probably guess where I am heading: I created another project... welcome Reposetup! Reposetup is a command-line tool (written in shell for now) which you can install on your server. Once it is installed, you can run it over SSH to create, list, rename or delete repositories. Repositories are accessible for read/write access over SSH for you, but you can also add read-only access for others through Git's dumb HTTP protocol. Read-only access is easy to add if you use an HTTP server like Apache with the mod_userdir module.

Installation boils down to copying the reposetup binary to /usr/local/bin or similar, then creating a /etc/reposetuprc file with the following content:

# Path where repositories will be created
REPO_BASE_DIR=$HOME/public_html/git

# Repository url for read-write access
REPO_RW_URL=$USER@<yourserver>:public_html/git/$REPO_NAME

# Repository url for read-only access
REPO_RO_URL=http://<yourserver>/~$USER/git/$REPO_NAME

REPO_RO_URL can be omitted if you don't want to provide read-only access.

Reposetup is ready. Now you can create a repository with:

$ ssh server.lan reposetup create testproj

Reposetup creates the repository and tells you how to push to it:

The "testproj" repository has been created.

You can clone it with:

    git clone you@server.lan:public_html/git/testproj

If you already have a local repository, you can push its content with:

    git remote add origin you@server.lan:public_html/git/testproj
    git push -u origin master

The url for read-only access is:

    http://server.lan/~you/git/testproj

create is the main command, but there are a few others:

  • rename: to rename an existing repository
  • ls: to list created repositories and their urls
  • rm: to delete a repository

That's it, you can get it on GitHub. Hope you find it useful!

