Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Nathan Haines: UbuCon Europe 2016

Wed, 12/07/2016 - 22:04

If there is one defining aspect of Ubuntu, it's community. All around the world, community members and LoCo teams get together not just to work on Ubuntu, but also to teach, learn, and celebrate it. UbuCon Summit at SCALE was a great example of an event that was supported by the California LoCo Team, Canonical, and community members worldwide coming together to make an event that could host presentations on the newest developer technologies in Ubuntu, community discussion roundtables, and a keynote by Mark Shuttleworth, who answered audience questions thoughtfully, but also hung around in the hallway and made himself accessible to chat with UbuCon attendees.

Thanks to the Ubuntu Community Reimbursement Fund, the UbuCon Germany and UbuCon Paris coordinators were able to attend UbuCon Summit at SCALE, and we were able to compare notes, so to speak, as they prepared to expand by hosting the first UbuCon Europe in Germany this year. Thanks to the community fund, I also had the immense pleasure of attending UbuCon Europe. After I arrived, Sujeevan Vijayakumaran picked me up from the airport and we took the train to Essen, where we walked around the newly-opened Weihnachtsmarkt along with Philip Ballew and Elizabeth Joseph from Ubuntu California. I acted as official menu translator, so there were no missed opportunities for bratwurst, currywurst, glühwein, or beer. Happily fed, we called it a night and got plenty of sleep so that we would last the entire weekend.

UbuCon Europe was a marvelous experience. Friday started things off with social events so everyone could mingle and find shared interests. About 25 people visited the Zeche Zollverein Coal Mine Industrial Complex for a guided tour of the last operating coal extraction and processing site in the Ruhr region, a fascinating look at the industry that defined the region for a century. After that, about 60 people joined in a special dinner at Unperfekthaus, a unique location that is part creative studio, part art gallery, part restaurant, and all experience. With a buffet, large soda fountains, and a hot coffee/chocolate machine, dinner was another chance to mingle as we took over a dining room and pushed all the tables together in a snaking chain. It was there that some Portuguese attendees first recognized me as the default voice for uNav, which was something I had to get used to over the weekend. There's nothing like a good dinner to get people comfortable together, and the Telegram channel that was established for UbuCon Europe attendees was spread around.

The main event began bright and early on Saturday. Attendees were registered on the fifth floor of Unperfekthaus and received their swag bags full of cool stuff from the event sponsors. After some brief opening statements from Sujeevan, Marius Gripsgård announced an exciting new Kickstarter campaign that will bring an easier convergence experience to not just most Ubuntu phones, but many Android phones as well. Then, Jane Silber, the CEO of Canonical, gave a keynote that went into detail about where Canonical sees Ubuntu in the future, how convergence and snaps will factor into future plans, and why Canonical wants to see one single Ubuntu on the cloud, server, desktop, laptop, tablet, phone, and Internet of Things. Afterward, she spent some time answering questions from the community, and she impressed me with her willingness to answer questions directly. Later on, she was chatting with a handful of people and it was great to see the consideration and thought she gave to those answers as well. Luckily, she also had a little time to just relax and enjoy herself without the third degree before she had to leave later that day. I was happy to have a couple of minutes to chat with her.

Microsoft Deutschland GmbH sent Malte Lantin to talk about Bash on Ubuntu on Windows and how the Windows Subsystem for Linux works, and while jokes about Microsoft and Windows were common all weekend, everyone kept their sense of humor and the community showed the usual respect that’s made Ubuntu so wonderful. While being able to run Ubuntu software natively on Windows makes many nervous, it also excites others. One thing is for sure: it’s convenient, and the prospect of having a robust terminal emulator built right in to Windows seemed to be something everyone could appreciate.

After that, I ate lunch and gave my talk, Advocacy for Advocates, where I gave advice on how to effectively recommend Ubuntu and other Free Software to people who aren’t currently using it or aren’t familiar with the concept. It was well-attended and I got good feedback. I also had a chance to speak in German for a minute, as the ambiguity of the term Free Software in English disappears in German, where freie Software is clear and not confused with kostenlose Software. It’s a talk I’ve given before and will definitely give again in the future.

After the talks were over, there was a raffle and then a UbuCon quiz show where the audience could win prizes. I gave away signed copies of my book, Beginning Ubuntu for Windows and Mac Users, in the raffle, and in fact I won a “xenial xerus” USB drive that looks like an origami squirrel, as well as a Microsoft t-shirt. Afterwards was a dinner that was not only delicious, with apple crumble for dessert, but also came with free beer and wine, which rarely detracts from any meal.

Sunday was also full of great talks. I loved Marcos Costales’s talk on uNav, and as the video shows, I was inspired to jump up as the talk was about to begin and improvise the uNav-like announcement “You have arrived at the presentation.” With the crowd warmed up from the joke, Marcos took us on a fascinating journey of the evolution of uNav and finished with tips and tricks for using it effectively. I also appreciated Olivier Paroz’s talk about Nextcloud and its goals, as I run my own Nextcloud server. I was sure to be at the UbuCon Europe feedback and planning roundtable and was happy to hear that next year UbuCon Europe will be held in Paris. I’ll have to brush up on my restaurant French before then!

That was the end of UbuCon, but I hadn’t been to Germany in over 13 years so it wasn’t the end of my trip! Sujeevan was kind enough to put up with me for another four days, and he accompanied me on a couple shopping trips as well as some more sightseeing. The highlight was a trip to the Neanderthal Museum in the aptly-named Neandertal, Germany, and then afterward we met his friend (and UbuCon registration desk volunteer!) Philipp Schmidt in Düsseldorf at their Weihnachtsmarkt, where we tried Feuerzangenbowle: mulled wine improved by soaking a block of sugar in rum, setting it over the wine, and lighting the sugarloaf on fire so that it drips into the wine. After that, we went to the Brauerei Schumacher where I enjoyed not only Schumacher Alt beer, but also the Rhein-style sauerbraten that has been on my to-do list for a decade and a half. (Other variations of sauerbraten—not to mention beer—remain on the list!)

I’d like to thank Sujeevan for his hospitality, on top of the tremendous effort that he, the German LoCo, and the French LoCo put in to make the first UbuCon Europe a stunning success. I’d also like to thank everyone who contributed to the Ubuntu Community Reimbursement Fund for helping out with my travel expenses, and everyone who attended, because of course we put everything together for you to enjoy.

Stéphane Graber: Running snaps in LXD containers

Wed, 12/07/2016 - 07:37

Introduction

The LXD and AppArmor teams have been working for a while to support loading AppArmor policies inside LXD containers. This support, which finally landed in the latest Ubuntu kernels, now makes it possible to install snap packages.

Snap packages are a new way of distributing software, directly from the upstream and with a number of security features wrapped around them so that these packages can’t interfere with each other or cause harm to your system.

Requirements

There are a lot of moving pieces to get all of this working. The initial enablement was done on Ubuntu 16.10 with Ubuntu 16.10 containers, but all the needed bits are now progressively being pushed as updates to Ubuntu 16.04 LTS.

The easiest way to get this to work is with:

  • Ubuntu 16.10 host
  • Stock Ubuntu kernel (4.8.0)
  • Stock LXD (2.4.1 or higher)
  • Ubuntu 16.10 container with “squashfuse” manually installed in it

Installing the nextcloud snap

First, let's get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.

lxc launch ubuntu:16.10 nextcloud
lxc exec nextcloud -- apt update
lxc exec nextcloud -- apt dist-upgrade -y
lxc exec nextcloud -- apt install squashfuse -y

And then, let's install that “nextcloud” snap with:

lxc exec nextcloud -- snap install nextcloud

Finally, grab the container’s IP and access “http://<IP>” with your web browser:

stgraber@castiana:~$ lxc list nextcloud
+-----------+---------+----------------------+----------------------------------------------+
|   NAME    |  STATE  |         IPV4         |                     IPV6                     |
+-----------+---------+----------------------+----------------------------------------------+
| nextcloud | RUNNING | 10.148.195.47 (eth0) | fd42:ee2:5d34:25c6:216:3eff:fe86:4a49 (eth0) |
+-----------+---------+----------------------+----------------------------------------------+
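
If you would rather script that lookup than read it off the table, LXD can emit JSON that is easy to parse. A hedged sketch in Python (this assumes the `lxc list --format json` output shape; adjust for your LXD version):

```python
import json
import subprocess

def first_ipv4(containers):
    """Pick the first non-loopback IPv4 address out of `lxc list` JSON."""
    for container in containers:
        network = (container.get("state") or {}).get("network") or {}
        for iface, net in network.items():
            if iface == "lo":
                continue
            for addr in net.get("addresses", []):
                if addr.get("family") == "inet":
                    return addr["address"]
    return None

def container_ipv4(name):
    """Ask LXD for the container's state and extract its IPv4 address."""
    out = subprocess.check_output(["lxc", "list", name, "--format", "json"])
    return first_ipv4(json.loads(out))
```

With the container above, `container_ipv4("nextcloud")` would return "10.148.195.47", which you can then feed to your browser or curl.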

Installing the LXD snap in a LXD container

First, let's get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it, this time with support for nested containers.

lxc launch ubuntu:16.10 lxd -c security.nesting=true
lxc exec lxd -- apt update
lxc exec lxd -- apt dist-upgrade -y
lxc exec lxd -- apt install squashfuse -y

Now let's remove the LXD that came pre-installed in the container so we can replace it with the snap.

lxc exec lxd -- apt remove --purge lxd lxd-client -y

Because we already have a stable LXD on the host, we’ll make things a bit more interesting by installing the latest build from git master rather than the latest stable release:

lxc exec lxd -- snap install lxd --edge

The rest is business as usual for a LXD user:

stgraber@castiana:~$ lxc exec lxd bash
root@lxd:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

root@lxd:~# lxd.lxc launch images:archlinux arch
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

Creating arch
Starting arch

root@lxd:~# lxd.lxc list
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| arch | RUNNING | 10.106.137.64 (eth0) | fd42:2fcd:964b:eba8:216:3eff:fe8f:49ab (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+

And that’s it, you now have the latest LXD build installed inside a LXD container and running an archlinux container for you. That LXD build will update very frequently as we publish new builds to the edge channel several times a day.

Conclusion

It’s great to have snaps now install properly inside LXD containers. Production users can now set up hundreds of different containers, network them the way they want, configure storage and resource limits through LXD, and then install snap packages inside them to get the latest upstream releases of the software they want to run.

That’s not to say that everything is perfect yet. This is all built on some very recent kernel work, using unprivileged FUSE filesystem mounts and unprivileged AppArmor profile stacking and namespacing. There are very likely still some issues that need to be resolved in order to get most snaps to work identically to when they’re installed directly on the host.

If you notice discrepancies between a snap running directly on the host and a snap running inside a LXD container, look at the “dmesg” output for any DENIED entries, which would indicate AppArmor rejecting a request from the snap.
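
When scanning a saved log for those entries, a few lines of Python can pull out just the denials. This is a hypothetical sketch, not an AppArmor or snapd tool, and the sample audit line is illustrative:

```python
import re

# Matches AppArmor audit entries of the form:
#   audit: type=1400 ... apparmor="DENIED" operation="..." profile="..." name="..."
DENIED_RE = re.compile(
    r'apparmor="DENIED".*?operation="(?P<op>[^"]+)".*?profile="(?P<profile>[^"]+)"')

def denials(dmesg_text):
    """Return the operation and profile of every DENIED entry in the log."""
    return [m.groupdict()
            for m in (DENIED_RE.search(line) for line in dmesg_text.splitlines())
            if m]

sample = ('audit: type=1400 apparmor="DENIED" operation="open" '
          'profile="snap.nextcloud.occ" name="/proc/1/environ" pid=1234')
print(denials(sample))
# → [{'op': 'open', 'profile': 'snap.nextcloud.occ'}]
```

In practice you would feed it `dmesg` output captured to a file; the profile name tells you which snap (and which generated profile) triggered the rejection.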

This typically indicates either a bug in AppArmor itself or in the way the AppArmor profiles are generated by snapd. If you find one of those issues, you can report it in #snappy on irc.freenode.net or file a bug at https://launchpad.net/snappy/+filebug so it can be investigated.

Extra information

More information on snap packages can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

The Fridge: Ubuntu Weekly Newsletter Issue 490

Mon, 12/05/2016 - 21:17

Welcome to the Ubuntu Weekly Newsletter. This is issue #490 for the week November 28 – December 4, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Chris Guiver
  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Seif Lotfy: Playing with .NET (dotnet) and IronFunctions

Mon, 12/05/2016 - 15:15

Again, if you missed it: IronFunctions is an open-source, AWS Lambda-compatible, on-premise, language-agnostic, serverless compute service.

While AWS Lambda only supports Java, Python and Node.js, IronFunctions allows you to use any language you desire by running your code in containers.

With Microsoft being one of the biggest players in open source and .NET going cross-platform, it was only right to add support for it to IronFunctions' fn tool.

TL;DR:

The following demos a .NET function that takes in a URL for an image and generates an MD5 checksum hash for it:

Using dotnet with functions

Make sure you have downloaded and installed dotnet, then create an empty dotnet project in the directory of your function:

dotnet new

By default dotnet creates a Program.cs file with a main method. To make it work with IronFunctions' fn tool, rename it to func.cs:

mv Program.cs func.cs

Now change the code to do whatever magic you need it to do. In our case, the code takes in a URL for an image and generates an MD5 checksum hash for it:

using System;
using System.Text;
using System.Security.Cryptography;
using System.IO;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // if nothing is being piped in, then exit
            if (!IsPipedInput())
                return;

            var input = Console.In.ReadToEnd();
            var stream = DownloadRemoteImageFile(input);
            var hash = CreateChecksum(stream);
            Console.WriteLine(hash);
        }

        private static bool IsPipedInput()
        {
            try
            {
                bool isKey = Console.KeyAvailable;
                return false;
            }
            catch
            {
                return true;
            }
        }

        private static byte[] DownloadRemoteImageFile(string uri)
        {
            var request = System.Net.WebRequest.CreateHttp(uri);
            var response = request.GetResponseAsync().Result;
            var stream = response.GetResponseStream();

            using (MemoryStream ms = new MemoryStream())
            {
                stream.CopyTo(ms);
                return ms.ToArray();
            }
        }

        private static string CreateChecksum(byte[] stream)
        {
            using (var md5 = MD5.Create())
            {
                var hash = md5.ComputeHash(stream);
                var sBuilder = new StringBuilder();

                // Loop through each byte of the hashed data
                // and format each one as a hexadecimal string.
                for (int i = 0; i < hash.Length; i++)
                {
                    sBuilder.Append(hash[i].ToString("x2"));
                }

                // Return the hexadecimal string.
                return sBuilder.ToString();
            }
        }
    }
}

Note: IO with an IronFunctions function is done via stdin and stdout; this code reads the image URL from stdin and writes the checksum to stdout.
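
For comparison, the same read-stdin, write-checksum contract could be sketched in Python. This is a hypothetical port for illustration, not part of the example repository:

```python
import hashlib
import sys
from urllib.request import urlopen

def md5_hex(data):
    """Hex-encoded MD5 checksum of a byte string."""
    return hashlib.md5(data).hexdigest()

def main():
    # IronFunctions passes the request body on stdin; we write the
    # checksum of the downloaded image to stdout.
    url = sys.stdin.read().strip()
    if not url:
        return
    with urlopen(url) as resp:
        print(md5_hex(resp.read()))

if __name__ == "__main__":
    main()
```

Any language that can read stdin and write stdout can be packaged the same way, which is the whole point of the containers-based approach.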

Using with IronFunctions

Let's first init our code to become IronFunctions deployable:

fn init <username>/<funcname>

Since IronFunctions relies on Docker to work (we will add rkt support soon), the <username> is required to publish to Docker Hub. The <funcname> is the identifier of the function.

In our case we will use dotnethash as the <funcname>, so the command will look like:

fn init seiflotfy/dotnethash

Running the command creates the func.yaml file required by IronFunctions. The function can then be built and pushed by running:

Push to docker

fn push

This will create a Docker image and push it to Docker Hub.

Publishing to IronFunctions

To publish to IronFunctions, run:

fn routes create <app_name>

where <app_name> is (no surprise here) the name of the app, which can encompass many functions.

This creates a full path in the form of http://<host>:<port>/r/<app_name>/<function>
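
As a small illustration of that pattern, a hypothetical helper to assemble the route URL:

```python
def route_url(host, port, app_name, function):
    # Mirrors the pattern http://<host>:<port>/r/<app_name>/<function>
    return "http://{}:{}/r/{}/{}".format(host, port, app_name, function)

print(route_url("localhost", 8080, "myapp", "dotnethash"))
# → http://localhost:8080/r/myapp/dotnethash
```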

In my case, I will call the app myapp:

fn routes create myapp

Calling

Now you can

fn call <app_name> <funcname>

or

curl http://<host>:<port>/r/<app_name>/<function>

So in my case

echo http://lorempixel.com/1920/1920/ | fn call myapp /dotnethash

or

curl -X POST -d 'http://lorempixel.com/1920/1920/' http://localhost:8080/r/myapp/dotnethash

What now?

You can find the whole code in the examples on GitHub. Feel free to join the Iron.io Team on Slack.
You are also welcome to write your own examples in any of your favourite programming languages, such as Lua or Elixir, and create a PR :)

Harald Sitter: KDE Frameworks 5 Content Snap Techno

Mon, 12/05/2016 - 09:10

In the previous post on Snapping KDE Applications we looked at the high-level implications and use of the KDE Frameworks 5 content snap to snapcraft snap bundles for binary distribution. Today I want to get a bit more technical and look at the actual building and inner workings of the content snap itself.

The KDE Frameworks 5 snap is a content snap. Content snaps are really just ordinary snaps that define a content interface. Namely, they expose part or all of their file tree for use by another snap but otherwise can be regular snaps and have their own applications etc.

KDE Frameworks 5’s snap is special in terms of size and scope: it contains the whole set of KDE Frameworks 5, combined with Qt 5 and a large chunk of the graphics stack that is not part of the ubuntu-core snap. All in all, just for the Qt5 and KF5 parts we are talking about close to 100 distinct source tarballs that need building to compose the full frameworks stack. KDE is in the fortunate position of already having builds of all these available through KDE neon. This allows us to simply repack existing work into the content snap. This is for the most part just as good as doing everything from scratch, but has the advantage of saving both maintenance effort and build resources.

I do love automation, so the content snap is built by some rather stringy proof of concept code that automatically translates the needed sources into a working snapcraft.yaml that repacks the relevant KDE neon debs into the content snap.

Looking at this snapcraft.yaml we’ll find some fancy stuff.

After the regular snap attributes the actual content-interface is defined. It’s fairly straight forward and simply exposes the entire snap tree as kde-frameworks-5-all content. This is then used on the application snap side to find a suitable content snap so it can access the exposed content (i.e. in our case the entire file tree).

slots:
  kde-frameworks-5-slot:
    content: kde-frameworks-5-all
    interface: content
    read:
      - "."

The parts of the snap itself are where the most interesting things happen. To make things easier to read and follow I’ll only show the relevant excerpts.

The content snap consists of the following parts: kf5, kf5-dev, breeze, plasma-integration.

The kf5 part is the meat of the snap. It tells snapcraft to stage the binary runtime packages of KDE Frameworks 5 and Qt 5. This effectively makes snapcraft pack the named debs along with necessary dependencies into our snap.

kf5:
  plugin: nil
  stage-packages:
    - libkf5coreaddons5
    ...

The kf5-dev part looks almost like the kf5 part but has entirely different functionality. Instead of staging the runtime packages it stages the buildtime packages (i.e. the -dev packages). It additionally has a tricky snap rule which excludes everything from actually ending up in the snap. This is a very cool trick: it effectively means that the buildtime packages will be in the stage and we can build other parts against them, but none of them will end up in the final snap. After all, they would be entirely useless there.

kf5-dev:
  after:
    - kf5
  plugin: nil
  stage-packages:
    - libkf5coreaddons-dev
    ....
  snap:
    - "-*"

Besides those two we also build two runtime integration parts entirely from scratch: breeze and plasma-integration. They aren’t strictly needed, but they ensure sane functionality in terms of icon theme selection etc. These are ordinary build parts that simply rely on the kf5 and kf5-dev parts to provide the necessary dependencies.

An important question to ask here is how one is meant to build against this now. There is this kf5-dev part, but it does not end up in the final snap where it would be entirely useless anyway as snaps are not used at buildtime. The answer lies in one of the rigging scripts around this. In the snapcraft.yaml we configured the kf5-dev part to stage packages but then excluded everything from being snapped. However, knowing how snapcraft actually goes about its business we can “abuse” its inner workings to make use of the part after all. Before the actual snap is created snapcraft “primes” the snap, this effectively means that all installed trees (i.e. the stages) are combined into one tree (i.e. the primed tree), the exclusion rule of the kf5-dev part is then applied on this tree. Or in other words: the primed tree is the snap before exclusion was applied. Meaning the primed tree is everything from all parts, including the development headers and CMake configs. We pack this tree in a development tarball which we then use on the application side to stage a development environment for the KDE Frameworks 5 snap.
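
A toy model of this stage-then-exclude behaviour can make the effect concrete. The file names are hypothetical and snapcraft's real implementation differs; this only illustrates why the primed tree still contains what the snap filter throws away:

```python
import fnmatch

def prime_and_snap(staged, snap_rules):
    """Combine all parts' staged trees, then apply each part's `snap`
    exclusion rules to decide what actually ends up in the snap."""
    primed = set()
    for files in staged.values():
        primed |= files
    snapped = set(primed)
    for part, rules in snap_rules.items():
        for rule in rules:
            if rule.startswith("-"):
                snapped -= set(fnmatch.filter(staged[part], rule[1:]))
    return primed, snapped

staged = {
    "kf5": {"lib/libKF5CoreAddons.so.5"},
    "kf5-dev": {"include/KF5/KCoreAddons/kcoreaddons_version.h"},
}
primed, snapped = prime_and_snap(staged, {"kf5-dev": ["-*"]})
# The primed tree still holds the dev headers (useful for the dev tarball)...
assert "include/KF5/KCoreAddons/kcoreaddons_version.h" in primed
# ...but they are excluded from the final snap.
assert "include/KF5/KCoreAddons/kcoreaddons_version.h" not in snapped
```

Packing `primed` rather than `snapped` is exactly the "abuse" described above: the development tarball is the snap before exclusion was applied.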

Specifically on the application-side we use a boilerplate part that employs the same trick of stage-everything but snap-nothing to provide the build dependencies while not having anything end up in the final snap.

kde-frameworks-5-dev:
  plugin: dump
  snap: [-*]
  source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

Using the KDE Frameworks 5 content snap, KDE can create application snaps that are a fraction of the size they would be if they contained all dependencies themselves. While this gives up some optimization potential by aggregating requirements in a more central fashion, it quickly starts paying off given we are saving upwards of 70 MiB per snap.

Application snaps can of course still add more stuff on top or even override things if needed.

Finally, as we approach the end of the year, we begin the season of giving. What would suit the holidays better than giving to the entire world by supporting KDE with a small donation?

Rafael Carreras: Yakkety Yak release parties

Mon, 12/05/2016 - 06:44

Catalan LoCo Team celebrated on November 5th a release party for the next Ubuntu version, in this case 16.10 Yakkety Yak, in Ripoll, a very historic place. As always, we started by explaining what Ubuntu is and how it adapts to new times and devices.

FreeCad 3D design and Games were both present at the party.

A few weeks later, on December 3rd, we held another release party, this time in Barcelona.

We went to Soko, previously a chocolate factory, which nowadays is a kind of makers' lab, very excited about free software. First, Josep explained the current developments in Ubuntu and we carried out some installations on laptops.

We ate some pizza and had discussions about free software in public administration. Apart from the usual users who came to install Ubuntu on their computers, the person responsible for Soko gave us 10 laptops to install Ubuntu on too. We finished the tasks by installing Wine for some Lego software to run.

That’s some art that is being made at Soko.

I’m publishing this post because we need some documentation on release parties. If you need advice on how to organize a release party, you can contact me or anyone in the Ubuntu community.

 

Svetlana Belkin: Community Service Learning Within Open * Communities

Sun, 12/04/2016 - 14:35

As the name implies, “service-learning is an educational approach that combines learning objectives with community service in order to provide a pragmatic, progressive learning experience while meeting societal needs” (Wikipedia). When you add the “community” part to that definition, it becomes about “leadership development as well as traditional information and skill acquisition” (Janet 1999).

How does this apply to Open * communities?

Simple! Community service learning is an ideal way to get middle school, high school, and college students involved in the various communities, to help them understand the power of Open *, and to keep them active after their term of community service learning ends.

This idea came to me just today (as of writing, Nov. 30th) as a thought on what Open * really is: not the straightforward definition of it, but the effect Open * creates. As I stated on the home page of my site, Open * creates a sense of empowerment. One way is through actions that build skills and improve them. Which skills are those? Mozilla Learning made a map and description of these skills on their Web Literacy pages, shown below:

Most of these skills, along with the ways to gain them (read, write, participate), can be worked on through community service learning.

As stated above, community service learning focuses on gaining skills, including leadership skills, while (in the Open * sense) contributing to projects that impact society around the world. This is really needed now, as there are many local and global issues that Open * can provide solutions to.

I see this as an outreach program for schools and the various organizations/groups such as Ubuntu, System76, Mozilla, and even Linux Padawan. Unlike Google Summer of Code (GSoC), no one receives a stipend, but the idea of having a mentor could be taken from GSoC. No, not could but should, because the student needs someone to guide them; hence Linux Padawan could benefit from this idea.

Having said that, I will try to work out a sample program that could be used and maybe test it with Linux Padawan. Maybe I could have this ready by the spring semester.

Random Fact #1: Simon Quigley, through his middle school, is in a way already doing this type of learning.

Random Fact #2: At one point of time, I wanted to translate that Web Literacy map into one that can be applied to Open *, not just one topic.

Jo Shields: A quick introduction to Flatpak

Sun, 12/04/2016 - 03:44

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There’s also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result is that, as of the upcoming 0.8.0 release of Flatpak, going from a clean install of the flatpak package to a working MonoDevelop is a single command: flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.

There are some lingering experience issues due to the sandbox which are on my radar. “Run on external console” doesn’t work, for example, nor does “open containing folder”. There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs. I don’t think that will ever quite be there, since there’s just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Ross Gammon: My Open Source Contributions June – November 2016

Sat, 12/03/2016 - 04:52

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.

Debian
  • Uploaded a new version of the debian-multimedia blends metapackages
  • Uploaded the latest abcmidi
  • Uploaded the latest node-process-nextick-args
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared a new node-chroma-js package, but this is unfortunately blocked by several out-of-date and missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.
Ubuntu
  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday. I had had no time to chase for endorsements from previous sponsors, and the application was not really clear about the fact that I was not actually applying for upload permission yet. No matter, I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship if there is anyone reading this that can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area in the future.
  • Took part in the testing and release of the Ubuntu Studio Trusty 14.04.5 point release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety Beta 1 release.
  • Prepared a backport of Ansible, but before I could chase up what to do about the fact that ansible-fireball was no longer part of the Ansible package, someone else did the backport without noticing my bug. So I closed the bug.
  • Prepared an update of the Ubuntu Studio meta-packages. This was sponsored by Jeremy Bicha.
  • Prepared an update to the ubuntustudio-default-settings package. This switched the Ubuntu Studio desktop theme to Numix-Blue, and reverted some commits to drop the ubuntustudio-lightdm-theme package from the archive. This had caused quite a bit of controversy and discussion on IRC, due to the transition being a little too close to the release date for Yakkety. This was sponsored by Iain Lane (Laney).
  • Prepared the Numix Blue update for the ubuntustudio-lightdm-theme package. This was also sponsored by Iain Lane (Laney). I should thank Krytarik for the initial Numix Blue theme work (on the lightdm theme & default settings packages).
  • Provided a patch for gfxboot-theme-ubuntu, which had a bug that was regularly reported during ISO testing: the “Try Ubuntu Studio without installing” option was not a translatable string and always appeared in English. Colin Watson merged this, so hopefully it will be translated by the time of the next release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety 16.10 release.
  • After a hint from Jeremy Bicha, I prepared a patch that adds a desktop file for Imagemagick to the ubuntustudio-default-settings package. This will give us a working menu item in Ubuntu Studio whilst we wait for the bug to be fixed upstream in Debian. Next month I plan to finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition, including dropping ubuntustudio-lightdm-theme from the Ubuntu Studio seeds. I will include this fix at the same time.
Other
  • At other times when I have had a spare moment, I have been working on resurrecting my old Family History website. It was originally produced in my Windows XP days, and I was no longer able to edit it in Linux, so I decided to convert it to Jekyll. First I had to extract the old HTML from where the website is hosted using the HTTrack Website Copier. Now I am in the process of switching the structure to the standard Jekyll template approach. I will need to switch to a nice Jekyll-based theme, as the old theming was pretty complex. I pushed the code to my GitHub repository for safe keeping.
Plan for December

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

Ubuntu
  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages.
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages.
  • Reapply to become a Contributing Developer.
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in.
Other
  • Continue working to convert my Family History website to Jekyll.
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.

Timo Aaltonen: Mesa 12.0.4 backport available for testing

Fri, 12/02/2016 - 15:28

Hi!

I’ve uploaded Mesa 12.0.4 for xenial and yakkety to my testing PPA for you to try out. 16.04 shipped with 11.2.0, so it’s a slightly bigger update there, while yakkety is already on 12.0.3. Either way, the new version should give radeon users a 15% performance boost in certain games with complex shaders.

Please give it a spin and report to the (yakkety) SRU bug if it works or not, and mention the GPU you tested with. At least Intel Skylake seems to still work fine here.

UPDATE:

The PPA now has 12.0.5 for both xenial & yakkety. There have been no comments on the SRU bug about success/failure, so feel free to add a comment here instead.


Costales: Fairphone 2 & Ubuntu

Fri, 12/02/2016 - 10:54
At UbuCon Europe I was able to learn first-hand about the progress of Ubuntu Touch on the Fairphone 2.

Ubuntu Touch & Fairphone 2

The Fairphone 2 is a unique phone. As its name suggests, it is a phone that is ethical with the world: it uses no child labour, it is built with conflict-free minerals, and it even cares about the waste it generates.

Front and back

On the software side it runs several operating systems, and at last Ubuntu is one of them.

Your choice

The Ubuntu port is implemented by the UBPorts project, which is advancing by leaps and bounds every week.

When I tried the phone, I was surprised by the speed of Unity, similar to that of my BQ E4.5.
The camera is good enough, and the battery life is acceptable.
I especially loved the quality of the screen; you can see its sharpness at a glance.
As for applications, I tried several from the Store without any problems.

Case

In short: a great operating system for a great phone :) A win-win.

If you are interested in contributing to this port as a developer, I recommend this Telegram group: https://telegram.me/joinchat/AI_ukwlaB6KCsteHcXD0jw

All images are CC BY-SA 2.0.

Ubuntu Insights: Cloud Chatter: November 2016

Fri, 12/02/2016 - 05:01

Welcome to our November edition. We begin with details on our latest partnership with Docker. Next up, we bring you a co-hosted webinar with PLUMgrid exploring how enterprises can build and manage highly scalable OpenStack clouds. We also have a number of exciting announcements with partners including Microsoft, the Cloud Native Computing Foundation and Open Telekom Cloud. Take a look at our top blog posts for interesting tutorials and videos. And finally, don’t miss out on our round-up of industry news.

A Commercial Partnership With Docker

Docker and Canonical have announced an integrated Commercially Supported (CS) Docker Engine offering on Ubuntu providing Canonical customers with a single path for support of the Ubuntu operating system and CS Docker Engine in enterprise Docker operations.

As part of this agreement, Canonical will provide Level 1 and Level 2 technical support for CS Docker Engine, with Docker, Inc. providing Level 3 support.
Learn more

Webinar: Secure, scale and simplify your OpenStack deployments

In our latest on-demand webinar, we explore how enterprises and telcos can build and manage highly scalable OpenStack clouds with BootStack, Juju and PLUMgrid. Arturo Suarez, Product Manager for BootStack at Canonical, and Justin Moore, Principal Solutions Architect at PLUMgrid, discuss common issues users run into when running OpenStack at scale, and how to work around them using solutions such as BootStack, Juju and PLUMgrid ONS.

Watch on-demand

In Other News

Microsoft loves Linux: SQL Server Public Preview available on Ubuntu

Canonical announced that the next public release of Microsoft’s SQL Server is now available for Ubuntu. SQL Server on Ubuntu now provides freedom of choice for developers and organisations alike, whether on premises or in the cloud. With SQL Server on Ubuntu there are significant cost savings, performance improvements, and the ability to scale and deploy additional storage and compute resources more easily without adding more hardware. Learn more

Canonical launches fully managed Kubernetes and joins the CNCF

Canonical recently joined the Cloud Native Computing Foundation (CNCF), expanding the Canonical Distribution of Kubernetes to include consulting, integration, and fully-managed on-prem and on-cloud Kubernetes services. Ubuntu’s leadership in the adoption of Linux containers, and Canonical’s definition of a new class of application and a new approach to operations, are just some of the key contributions being made. Learn more

Open Telekom Cloud joins Certified Public Cloud

T-Systems, a subsidiary of Deutsche Telekom, recently launched its own Open Telekom Cloud platform, based on Huawei’s OpenStack and hardware platforms. Canonical and T-Systems have announced their partnership to provide certified Ubuntu images on all LTS versions of Ubuntu to users of their cloud services. Learn more


Raphaël Hertzog: My Free Software Activities in November 2016

Fri, 12/02/2016 - 04:45

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8, fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting new CVEs every month.

Then I spent quite some time to review all the entries in dla-needed.txt. I wanted to get rid of some misleading/no longer applicable comments and at the same time help Olaf who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough) such as those affecting dwarfutils, dokuwiki, irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately.

Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1, fixing CVE-2016-9427. The patch was large and had to be manually backported as it did not apply cleanly.

The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp which was going to be removed from testing due to #841581. After looking at that bug, it turns out the bug was fixed in libcrypto++ 5.6.4-3 and I thus closed it.

I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbqsql package, which had been rejected from NEW.

I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2 but I still adopted the package in the pkg-security team as it is a security relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications and the discussion sparked quite some interest both in Debian mailing lists and up to LWN.

On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073, which was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour crafting a relatively clean patch that Guillem could apply. Unfortunately, Guillem has not yet managed to put out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332 which was a request to remove live-build from Debian. While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit more on it. I reviewed all pending patches in the BTS and pushed many changes to git. I still have some pending changes to finish to prettify the Grub menu but I plan to upload a new version really soon now.

Misc bugs filed. I filed two upstream tickets on uwsgi to help fix currently open RC bugs on the package. I filed #844583 on sbuild to support arbitrary version suffix for binary rebuild (binNMU). And I filed #845741 on xserver-xorg-video-qxl to get it fixed for the xorg 1.19 transition.

Zim. While trying to fix #834405 and update the required dependencies, I discovered that I had to update pygtkspellcheck first. Unfortunately, its package maintainer was MIA (missing in action) so I adopted it first as part of the python-modules team.

Distro Tracker. I fixed a small bug that resulted in an ugly traceback when we got queries with a non-ASCII HTTP_REFERER.
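This class of bug is easy to reproduce outside Distro Tracker: WSGI presents header values as latin-1-decoded strings, so a UTF-8 referrer must be round-tripped before use. The sketch below is a hypothetical illustration of that kind of defensive decoding, not the actual Distro Tracker fix:

```python
def safe_referer(environ):
    """Return HTTP_REFERER as text without raising on non-ASCII bytes.

    WSGI (PEP 3333) delivers header values as latin-1-decoded strings,
    so a UTF-8 referrer arrives as mojibake and naive handling can
    blow up with a traceback.
    """
    raw = environ.get("HTTP_REFERER", "")
    try:
        # Recover the original bytes, then decode them as UTF-8.
        return raw.encode("latin-1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        # Fall back to replacement characters instead of crashing.
        return raw.encode("latin-1", "replace").decode("utf-8", "replace")
```

Plain ASCII passes through unchanged, while a UTF-8 "café" that arrives as the latin-1 string "caf\xc3\xa9" decodes back to "café".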

Thanks

See you next month for a new summary of my activities.


Ubuntu Insights: Canonical’s Distribution of Kubernetes Reduces Operational Friction

Thu, 12/01/2016 - 08:00

Linux containers (LXC) are one of the hottest technologies in the market today. Developers are adopting containers, especially Docker, as a way to speed-up development cycles and deliver code into testing or production environments much faster than traditional methods. With the largest base of LXC, LXD, and Docker users, Ubuntu has long been the platform of choice for developers driving innovation with containers and is widely used to run infrastructure like Kubernetes as a result. Due to customer demand, Canonical recently announced a partnership with Google to deliver the Canonical Distribution of Kubernetes.


Marco Ceppi, Engineering Manager at Canonical, tells our container story at KubeCon 2016

Explaining Containers and Canonical’s Distribution of Kubernetes

First, a bit of background: containers offer an alternative to traditional virtualization. Containers allow organizations to virtually run multiple Linux systems on a single kernel without the need for a hypervisor. One of the most promising features of containers is the ability to put more applications onto a physical server than you could with virtual machines. There are two types of containers – machine and process.

Machine containers (sometimes called OS containers) allow developers and organizations to configure, install, and run applications, multiple processes, or libraries within a single container. They create an environment where companies can manage distributed software solutions across various environments, operating systems, and configurations. Machine containers are largely used by organizations to “lift and shift” legacy applications from on-premise to the cloud. Process containers (sometimes called application containers), by contrast, share the host kernel but run only a single process or command. Process containers are especially valuable for creating microservices or API function calls that are fast, efficient, and optimized. They also allow developers to deploy services and solutions more efficiently and on time, without having to deploy virtual machines.

Ubuntu is the container OS (operating system) used by a majority of Docker developers and deployments worldwide, and Kubernetes is the leader in coordinating process containers across a cluster, enabling high-efficiency DevOps, horizontal scaling, and support for 12-factor apps. Our Distribution of Kubernetes allows organizations to manage and monitor their containers across all major public clouds, and within private infrastructures. Kubernetes is effectively the air traffic controller for managing how containers are deployed.

Even as the cost of software has declined, the cost to operate today’s complex and distributed solutions has increased, as many companies have found themselves managing these systems in a vacuum. Even for experts, deploying and operating containers and Kubernetes at scale can be a daunting task. However, deploying Ubuntu, Juju for software modeling, and Canonical’s Kubernetes distribution helps organizations make deployment simple. Further, we have certified our distribution of Kubernetes to work with most major public clouds as well as on-premise infrastructure like VMware or Metal as a Service (MAAS) solutions, thereby eliminating many of the integration and deployment headaches.

A new approach to IT operations

Containers are only part of the major change in the way we think about software. Organisations are facing fundamental limits in their ability to manage escalating complexity, and Canonical’s focus on operations has proven successful in enabling cost-effective scale-out infrastructure. Canonical’s approach dramatically increases the ability of IT operations teams to run ever more complex and large scale systems.

Leading open source projects like MAAS, LXD, and Juju help enterprises to operate in a hybrid cloud world. Kubernetes extends the diversity of applications which can now be operated efficiently on any infrastructure.

Moving to Ubuntu and to containers enables an organization to reduce overhead and improve operational efficiency. Canonical’s mission is to help companies to operate software on their public and private infrastructure, bringing Netflix-style efficiency to the enterprise market.

Canonical views containers as one of the key technologies IT and DevOps organizations will use as they work to become more cost effective and based in the cloud. Forward-looking enterprises are moving from proof of concepts (POCs) to actual production deployments, and the window for competitive advantage is closing.

For more information on how we can help with education, consulting, and our fully-managed or on cloud Kubernetes services, check out the Canonical Distribution of Kubernetes.

Ubuntu Podcast from the UK LoCo: S09E40 – Dirty Dan’s Dark Delight - Ubuntu Podcast

Thu, 12/01/2016 - 08:00

It’s Season Nine Episode Forty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Dan Kermac are connected and speaking to your brain.

The same line up as last week are here again for another episode.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We review the nexdock and how it works with the Raspberry Pi 3, Meizu Pro 5 Ubuntu Phone, bq M10 FHD Ubuntu Tablet, Android, Dragonboard 410c, Roku, Chromecast, Amazon FireTV and laptops from Dell and Entroware.

  • We share a Command Line Lurve:

sudo apt install netdiscover
sudo netdiscover

The output looks something like this:

_____________________________________________________________________________
  IP             At MAC Address     Count  Len  MAC Vendor / Hostname
-----------------------------------------------------------------------------
192.168.2.2    fe:ed:de:ad:be:ef     1     42   Unknown vendor
192.168.2.1    da:d5:ba:be:fe:ed     1     60   TP-LINK TECHNOLOGIES CO.,LTD.
192.168.2.11   ba:da:55:c0:ff:ee     1     60   BROTHER INDUSTRIES, LTD.
192.168.2.30   02:02:de:ad:be:ef     1     60   Elitegroup Computer Systems Co., Ltd.
192.168.2.31   de:fa:ce:dc:af:e5     1     60   GIGA-BYTE TECHNOLOGY CO.,LTD.
192.168.2.107  da:be:ef:15:de:af     1     42   16)
192.168.2.109  b1:gb:00:bd:ba:be     1     60   Denon, Ltd.
192.168.2.127  da:be:ef:15:de:ad     1     60   ASUSTek COMPUTER INC.
192.168.2.128  ba:df:ee:d5:4f:cc     1     60   ASUSTek COMPUTER INC.
192.168.2.101  ba:be:4d:ec:ad:e5     1     42   Roku, Inc
192.168.2.106  ba:da:55:0f:f1:ce     1     42   LG Electronics
192.168.2.247  f3:3d:de:ad:be:ef     1     60   Roku, Inc
192.168.3.2    ba:da:55:c0:ff:33     1     60   Raspberry Pi Foundation
192.168.3.1    da:d5:ba:be:f3:3d     1     60   TP-LINK TECHNOLOGIES CO.,LTD.
192.168.2.103  da:be:ef:15:d3:ad     1     60   Unknown vendor
192.168.2.104  b1:gb:00:bd:ba:b3     1     42   Unknown vendor
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Flickr.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Daniel Pocock: Using a fully free OS for devices in the home

Thu, 12/01/2016 - 06:11

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately includes spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.
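As an illustration of that convenience, a minimal Shorewall router setup is sketched below in two small files. The zone names and policies here are illustrative only, not taken from any real configuration:

```
# /etc/shorewall/zones
#ZONE   TYPE
fw      firewall
net     ipv4          # the broadband uplink
loc     ipv4          # the trusted LAN

# /etc/shorewall/policy
#SOURCE DEST    POLICY
loc     net     ACCEPT
fw      all     ACCEPT
net     all     DROP
all     all     REJECT
```

The same two files work unchanged whether the box is a purpose-built router or a repurposed NAS, which is exactly the "same skills across all the different devices" benefit described above.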

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance and in these cases, the decision about what to buy will be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?
Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.

Ubuntu Insights: UbuCon Europe – a sure sign of community strength

Thu, 12/01/2016 - 04:42

UbuCon Europe, 18-20 Nov 2016, Essen, Germany

Recovering from a great UbuCon Europe that took place just a couple of weeks ago! This year’s event was attended by 170 people from 20 countries, about half coming from Germany. Almost everyone was (obviously) running Ubuntu and had been around in the community for years.

The event was organised by the Ubuntu community, led by Sujeevan Vijayakumaran – congrats to him! The venue, the schedule, the social events and the sponsoring were all flawlessly executed, and it showed the level of commitment the European Ubuntu community has. So much so that the community is already looking forward to the next big UbuCon in Paris!

The venue was the Unperfekthaus in central Essen – a beautifully weird and inclusive 4-storey coworking / artist’s / maker-space / café / event location.

Selection of talks
We had a good Canonical attendance at the event that included Jane, Thibaut, Thomas, Alan, Christian, Daniel, Martin and Michael! Many long-standing community members such as Laura Fautley (czajkowski), Elizabeth Krumbach Joseph (pleia2), cm-t arudy, José Antonio Rey, Marcos Costales, Marius Gripsgård (mariogrip), Philip Ballew and Nathan Haines also attended and presented – all worthy of a mention.

Jane gave a keynote at the event to a packed room. The talks and workshop from Alan, Daniel, Michael and Thibaut were focused on snaps and IoT and were well-received. There seemed to be a lot of interest in learning more about the new way of doing things and getting started immediately – what we like to hear.

Malte Lantin from Microsoft gave an overview of Bash on Ubuntu on Windows. The talk started by covering why Microsoft worked on the project and some history of previous attempts at *nix compatibility and POSIX compliance, along with some technical infrastructure details. A few demos were given and technical questions from the audience were answered.

Elizabeth K Joseph gave a talk reminding the audience that while the Ubuntu project has been branching out to new technology, the traditional desktop is still there, and still needs volunteers to help. She outlined a great selection of ways in which new contributors can get involved with practical advice and links to resources all the way through.

Alan gave a “zero to store” talk about snaps and the store. The examples were well picked and lots of fun – the feedback after the session was mostly amazement at how easy it has become to build and publish software. Michael’s session “digging deep into snapcraft”, in the following time-slot, was very well-attended – probably as a result.

In the snap workshop, everyone worked through the codelab examples at their own pace, and we had some upstream participation (LimeSurvey) as well.

During the final session, UbuCon feedback and 2017 planning, some attendees new to the Ubuntu community commented on how they didn’t know anyone coming into the event, yet felt welcome and made many new friends. So much so that there is now some serious interest in creating a UbuCon in Romania… let’s do it!

More info here with the event page and the schedule.
