Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Serge Hallyn: Containers – inspect, don’t introspect

Wed, 01/20/2016 - 12:52

You’ve got a whatzit daemon running in a VM. The VM starts acting suspiciously – a lot more cpu, memory, or i/o than you’d expect. What do you do? You could log in and look around. But if the VM’s been 0wned, you may be running trojaned tools in the VM. In that case, you’d be better off mounting the VM’s root disk and looking around from your (hopefully) safe root context.

The same of course is true in containers. lxc-attach is a very convenient tool, as it doesn’t even require you to be running ssh in the container. But you’re trusting the container to be pristine.

One of the cool things about containers is that you can inspect them pretty flexibly from the host. While the whatzit daemon is still running, you can strace it from the host, you can look at its proc filesystem through /proc/$(pidof whatzit)/root/proc, and you can see its process tree just by running ps on the host (e.g. pstree, ps -axjf).
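For example, here is a quick sketch of that kind of host-side inspection (whatzit stands in for whatever daemon you care about):

# trace the daemon's system calls from the host
strace -f -p $(pidof whatzit)
# browse the container's own view of /proc without entering the container
ls /proc/$(pidof whatzit)/root/proc
# view the daemon's process tree from the host
pstree -p $(pidof whatzit)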

So, the point of this post is mainly to recommend doing so :) Importantly, I’m not claiming here “and therefore containers are better/safer” – that would be nonsense. (The trivial counter-argument would be that the container shares – and can easily exploit – the host's kernel.) Rather, the point is to use the appropriate tools, and then to use them as well as possible by exploiting their advantages.


Xubuntu: Xubuntu at The Working Centre’s Computer Recycling Project

Wed, 01/20/2016 - 10:00

The Xubuntu team hears stories about how it is used in organizations all over the world. In this “Xubuntu at..” series of interviews, we seek to interview organizations who wish to share their stories. If your organization is using Xubuntu and you want to share what you’re doing with us please contact Elizabeth K. Joseph at lyz@ubuntu.com to discuss details about your organization.

Recently Charles McColm and Paul Nijjar took time to write to us about their work at The Working Centre’s Computer Recycling Project in Kitchener, Ontario, Canada where they’re using Xubuntu. So without further ado, here’s what they are working on, in their own words.

Please tell us a bit about yourself, The Working Centre and your Computer Recycling Project

The Working Centre was originally established back in the early 1980s as a response to growing unemployment and poverty in downtown Kitchener, Ontario, Canada. Joe and Stephanie Mancini saw the potential for building a community around responding to unemployment and poverty through creative and engaging action. Different projects have risen out of this vision over the years, divided into six areas: the Job Search Resource Centre, St. John’s Kitchen, Community Tools, Access to Technology, Affordable Supportive Housing and the Waterloo School for Community Development.

The Computer Recycling Project started as the result of creative thinking by an individual who had some serious obstacles to employment. The person couldn’t work, but wanted to help others find work. So in the late 1980s the individual put together a few computers for people to create resumes on. Other people became interested in helping out and the Computer Recycling Project was born.

Computer Recycling and the other projects have the following qualities:

  • They take donated or low cost materials, in our case computers and computer parts.
  • They apply volunteer labour to convert (repair/test) these materials into valuable goods. These goods are offered to the community at accessible prices (our computers generally range from $30 to $140).
  • Projects provide opportunities for people unable to work to contribute back to the community and help those looking to find jobs and upgrade their skills.
  • Projects also focus on creating open, friendly, and inclusive environments where everyone can contribute.

What influenced your decision to use Open Source software in your organization?

Computer Recycling didn’t always use Xubuntu and wasn’t originally a full time venture. Linux adoption started slowly. In 2001, thanks in part to our local Linux User Group, several volunteers (Paul Nijjar, Charles McColm and Daniel Allen) started working on a custom Linux distribution called Working Centre Linux Project (WCLP). At the time Computer Recycling had no legal options for distributing computers with Windows, so free and open source software was very appealing.

A couple of years later the project became a Microsoft Authorized Refurbisher. However restrictions on licensing (computers had to have a pre-existing Certificate of Authenticity and could only be sold to qualified low-income individuals) prevented us from installing Windows on all computers.

Linux didn’t have these kinds of restrictions so we continued to use it on a large number of computers. Over the years the Microsoft program changed and we became a Microsoft Registered Refurbisher (MRR). Microsoft dropped the “must have a pre-existing COA” on the computers we refurbish for low income individuals and provided another (more expensive) license for commercial sales, but we’ve continued to install both Windows and Linux. Last month Xubuntu Linux machines accounted for 63% of the computers we sold (due in part to the fact that we only sell Notebooks/Laptops with Xubuntu).

Price was definitely a factor for us in preferring open source software over other options. Most of the people we work with don’t have a lot of money, so accessing a low-cost ecosystem of reliable software was important. Each closed source license (Windows/Office) we buy costs the project money and this in turn translates into cost we have to pass on to the person buying a computer. We do a fair amount of quality assurance on systems, but if a system suffers some catastrophic failure (as we’ve had with a certain line of systems) we end up spending money on 4 licenses (2 for the original Windows/Office system and 2 for the replacement system). With Xubuntu we can often just pull the hard drive, put it in a different system and it’ll “just work” or need a little bit of work to get going. With Xubuntu the only cost to us is the effort we put into refurbishing the hardware.

Malware and viruses have also driven up the demand for Xubuntu systems. In the past several years we’ve seen many more people adopting Xubuntu because they’re fed up with how easy Windows systems get infected. Although centralized reliable software repositories are starting to become more popular in the proprietary software world, for years the availability and trustability of APT repositories was a big selling point.

What made you select Xubuntu for your deployments?

We mentioned earlier that Computer Recycling didn’t originally use Xubuntu. Instead, we started with The Working Centre Linux Project, our mix of IceWM on Debian designed to look like Windows 9x. Maintaining WCLP proved to be challenging because all 3 of the volunteers had separate jobs and a lot of other projects were starting to appear that made putting an attractive Linux desktop on donated hardware a lot easier. One of those projects was the Ubuntu project.

For a few years Computer Recycling adopted Ubuntu as its Linux OS of choice… right up until Ubuntu 10.10 (Unity) arrived on the scene. At this point we started looking at alternatives and we chose Xubuntu because it doesn’t make heavy demands of video processing or RAM. We also liked the fact that it had an interface that is relatively familiar to Windows users. We considered other desktop environments like LXDE, but small things matter and ease of use features like automounting USB keys tipped our choice in favour of Xubuntu.

Can you tell us a bit about your Xubuntu setup?

Paul has done all the work on our current form of installation: preseed files and some customization scripts. Computers are network booted and depending on the system we’ll install either a 32 or 64 bit version of Xubuntu with the following customizations:

  • Proprietary Flash Player
  • We show the bottom panel with an icon tray and don’t auto hide it
  • We use a customized version of the Whisker menu to show particular items
  • We include LibreOffice instead of depending on Abiword and Gnumeric
  • We include some VNC remote connection scripts so our volunteers can securely (and with the consent of the person at the other end) connect and provide remote help
  • We’ve set VLC as the default media player instead of Parole

We are very grateful for the existence of high-quality free and open source software. In addition to using it for desktop and laptop computers, we use it for information and testing (using a Debian-Live infrastructure) and for several in-house servers. About 8 years ago we hired a programmer looking for work to turn an open source eCommerce project into the point of sale software we still use today. The flexibility of open source was really important and has made a big difference when we needed to adjust to changes (merging of GST and PST in Canada to HST for example). Closed source solutions were carefully considered, but ultimately we decided to go open source because we knew we could adapt it to our needs without depending on a vendor.

Is there anything else you wish to share with us about your organization or how you use Xubuntu?

Paul and I are also grateful for the opportunities open source afforded us. In 2005 The Working Centre hired me (Charles) to advance the Computer Recycling Project into a more regular venture. This opportunity wouldn’t have come if I hadn’t first worked with Paul as a volunteer on the Working Centre Linux Project and worked on several other open source projects (a PHP/MySQL inventory system and a SAMBA server). Around 2007 The Working Centre hired Paul to help with system administration and general IT. Since then he’s spearheaded many FLOSS projects at The Working Centre. Daniel Allen, who also worked with us on WCLP, currently works as a Software Specialist for the Science department at the University of Waterloo.

People can find out more about The Working Centre’s Computer Recycling Project by visiting the web site at http://www.theworkingcentre.org/cr/

Ubuntu Studio: Ubuntustudio 16.04 Wallpaper Contest!

Wed, 01/20/2016 - 04:44
Contest Entries are here!! >>> https://flic.kr/s/aHsksiP3by <<< Where is YOUR entry? Ubuntustudio 16.04 will be officially released in April 2016. As this will be a Long Term Support version, we are most anxious to offer an excellent experience on the desktop. Therefore, the Ubuntustudio community will host a wallpaper design contest! The contest is open to […]

Mattia Migliorini: Best Ways to Save While You are Starting Your Business

Wed, 01/20/2016 - 03:44

Starting your own business can put you on the path to success and financial freedom. You become the master of your own destiny, and you work toward creating your own wealth rather than being at the whim of someone else. Your work enriches you – not someone else.

But starting your own business can also be very expensive. Not everyone can afford to just quit their jobs, rent an office space, hire a staff, and invest in a top-tier marketing campaign, among other expenses. Instead, most people start working for themselves at home, sometimes while they are still working on their full-time jobs.

Here are a few things you can do to save money while you are getting your business started, whether you are working at it full-time or part-time or just working out of your home:

Choose the Most Affordable Service Providers

Depending on where you live, you likely have your choice of service providers. You don’t have to resign yourself to paying the prices you see on your bill for your electricity, Internet, or cable.

Shop around to find out what providers are available in your area and what kind of rates they are currently offering. For example, you can check out Frontier Internet to get the best rates on your Internet service, which is one of the most important services you will need to run your business.

Buy the Best Computer You Can

While you will technically spend more money on a high-powered and sophisticated computer, it will actually save you money in the long run.

If you try to skimp on your computer purchase, you will end up with a machine that runs slowly, is more vulnerable to viruses, and crashes frequently. You will end up losing money on lost productivity, lost data, and lost contracts because of missed deadlines or customer dissatisfaction with your service.

Spring for the high-end computer that will help you get your work done more quickly and deliver better service for your clients. You’ll also keep your data safer and protect yourself against theft.

Invest in the Right Software

The right software can be like a service professional in your computer. For example, if you invest in professional tax software, you can reliably do your own taxes without hiring an accountant. Invest in payroll software, and you can work without a bookkeeper.

Other software will help you improve the efficiency of your operation. For example, the right invoicing software can help you put together accurate invoices in a timely manner, ensuring that you don’t ever miss a payment.

Focus on the Most Effective Marketing Tools

The right marketing campaign is key to your success when you are just starting out with your business. The right marketing campaign will help you build brand awareness and reach your target audience.

However, once you get started, you will quickly realize that there are hundreds of marketing tools available. You can quickly spend a lot of money on these tools without necessarily getting any results. Instead of trying to market with as many of these tools as you can, focus on only the most effective tools as identified by your research.

You will want to invest in a quality email marketing service, social media manager, and advertising network to start. Depending on your business, you may identify a few other tools that are necessary to help you get started.

Whatever you can do to save money while you are getting started with your business can help keep you afloat while you are still trying to get customers and sales. You can also set aside money to invest in growing your business, such as moving into a dedicated space or even starting to hire a staff.

The post Best Ways to Save While You are Starting Your Business appeared first on deshack.

Dustin Kirkland: Data Driven Analysis: /tmp on tmpfs

Tue, 01/19/2016 - 21:16
tl;dr
  • Put /tmp on tmpfs and you'll improve your Linux system's I/O, reduce your carbon footprint and electricity usage, stretch the battery life of your laptop, extend the longevity of your SSDs, and provide stronger security.
  • In fact, we should do that by default on Ubuntu servers and cloud images.
  • Having tested 502 physical and virtual servers in production at Canonical, 96.6% of them could immediately fit all of /tmp in half of the free memory available and 99.2% could fit all of /tmp in (free memory + free swap).
Try /tmp on tmpfs Yourself
$ echo "tmpfs /tmp tmpfs rw,nosuid,nodev" | sudo tee -a /etc/fstab
$ sudo reboot
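After rebooting, a quick sanity check (a sketch using standard util-linux/coreutils tools):

$ findmnt /tmp
$ df -h /tmp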
Background
In April 2009, I proposed putting /tmp on tmpfs (an in memory filesystem) on Ubuntu servers by default -- under certain conditions, like, well, having enough memory. The proposal was "approved", but got hung up for various reasons.  Now, again in 2016, I proposed the same improvement to Ubuntu here in a bug, and there's a lively discussion on the ubuntu-cloud and ubuntu-devel mailing lists.

The benefits of /tmp on tmpfs are:
  • Performance: reads, writes, and seeks are insanely fast in a tmpfs; as fast as accessing RAM
  • Security: data leaks to disk are prevented (especially when swap is disabled), and since /tmp is its own mount point, we should add the nosuid and nodev options (and motivated sysadmins could add noexec, if they desire).
  • Energy efficiency: disk wake-ups are avoided
  • Reliability: fewer NAND writes to SSD disks
In the interest of transparency, I'll summarize the downsides:
  • There's sometimes less space available in memory than in the root filesystem where /tmp may traditionally reside
  • Writing to tmpfs could evict other information from memory to make space
You can learn more about Linux tmpfs here.
Not Exactly Uncharted Territory...
Fedora proposed and implemented this in Fedora 18 a few years ago, citing that Solaris has been doing this since 1994. I just installed Fedora 23 into a VM and confirmed that /tmp is a tmpfs in the default installation, and ArchLinux does the same. Debian debated doing so, in this thread, which starts with all the reasons not to put /tmp on a tmpfs; do make sure you read the whole thread, though, and digest both the pros and cons, as both are represented throughout the thread.
Full Data Treatment
In the current thread on ubuntu-cloud and ubuntu-devel, I was asked for some "real data"...

In fact, across the many debates for and against this feature in Ubuntu, Debian, Fedora, ArchLinux, and others, there is plenty of supposition, conjecture, guesswork, and presumption.  But seeing as we're talking about data, let's look at some real data!

Here's an analysis of a (non-exhaustive) set of 502 of Canonical's production servers that run Ubuntu.com, Launchpad.net, and hundreds of related services, including OpenStack, dozens of websites, code hosting, databases, and more. The servers sampled are slightly biased, with more physical machines than virtual machines, but both are present in the survey, and a wide variety of uptime is represented, from less than a day of uptime, to 1306 days of uptime (with live patched kernels, of course).  Note that this is not an exhaustive survey of all servers at Canonical.

I humbly invite further study and analysis of the raw, tab-separated data, which you can find at:
The column headers are:
  • Column 1: The host names have been anonymized to sequential index numbers
  • Column 2: `du -s /tmp` disk usage of /tmp as of 2016-01-17 (ie, this is one snapshot in time)
  • Column 3-8: The output of the `free` command, memory in KB for each server
  • Column 9-11: The output of the `free` command, swap in KB for each server
  • Column 12: The number of inodes in /tmp
I have imported it into a Google Spreadsheet to do some data treatment. You're welcome to do the same, or use the spreadsheet of your choice.
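If you prefer the command line, here is a rough sketch using standard tools (assuming the file is saved as tmp-usage.tsv and that /tmp usage sits in column 2, in KB -- adjust to the actual layout):

# mean /tmp usage, in KB
awk -F'\t' '{ sum += $2 } END { print sum / NR }' tmp-usage.tsv
# median /tmp usage, in KB
cut -f2 tmp-usage.tsv | sort -n | awk '{ v[NR] = $1 } END { print v[int((NR + 1) / 2)] }'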
For the numbers below, 1 MB = 1000 KB, and 1 GB = 1000 MB, per Wikipedia. (Let's argue MB and MiB elsewhere, shall we?)  The mean is the arithmetic average.  The median is the middle value in a sorted list of numbers.  The mode is the number that occurs most often.  If you're confused, this article might help.  All calculations are accurate to at least 2 significant digits.
Statistical summary of /tmp usage:
  • Max: 101 GB
  • Min: 4.0 KB
  • Mean: 453 MB
  • Median: 16 KB
  • Mode: 4.0 KB
Looking at all 502 servers, there are two extreme outliers in terms of /tmp usage. One server has 101 GB of data in /tmp, and the other has 42 GB. The latter is a very noisy django.log. There are 4 more servers using between 10 GB and 12 GB of /tmp. The remaining 496 servers surveyed (98.8%) are using less than 4.8 GB of /tmp. In fact, 483 of the servers surveyed (96.2%) use less than 1 GB of /tmp. 454 of the servers surveyed (90.4%) use less than 100 MB of /tmp. 414 of the servers surveyed (82.5%) use less than 10 MB of /tmp. And actually, 370 of the servers surveyed (73.7%) -- the overwhelming majority -- use less than 1 MB of /tmp.
Statistical summary of total memory available:
  • Max: 255 GB
  • Min: 1.0 GB
  • Mean: 24 GB
  • Median: 10.2 GB
  • Mode: 4.1 GB
All of the machines surveyed (100%) have at least 1 GB of RAM. 495 of the machines surveyed (98.6%) have at least 2 GB of RAM. 437 of the machines surveyed (87%) have at least 4 GB of RAM. 255 of the machines surveyed (50.8%) have at least 10 GB of RAM. 157 of the machines surveyed (31.3%) have more than 24 GB of RAM. 74 of the machines surveyed (14.7%) have at least 64 GB of RAM.
Statistical summary of total swap available:
  • Max: 201 GB
  • Min: 0.0 KB
  • Mean: 13 GB
  • Median: 6.3 GB
  • Mode: 2.96 GB
485 of the machines surveyed (96.6%) have at least some swap enabled, while 17 of the machines surveyed (3.4%) have zero swap configured. One of these swap-less machines is using 415 MB of /tmp; that machine happens to have 32 GB of RAM. All of the rest of the swap-less machines are using between 4 KB and 52 KB of /tmp (inconsequential), and have between 2 GB and 28 GB of RAM. 5 machines (1.0%) have over 100 GB of swap space.
Statistical summary of swap usage:
  • Max: 19 GB
  • Min: 0.0 KB
  • Mean: 657 MB
  • Median: 18 MB
  • Mode: 0.0 KB
476 of the machines surveyed (94.8%) are using less than 4 GB of swap. 463 of the machines surveyed (92.2%) are using less than 1 GB of swap. And 366 of the machines surveyed (72.9%) are using less than 100 MB of swap.  There are 18 "swappy" machines (3.6%), using 10 GB or more swap.
Modeling /tmp on tmpfs usage
Next, I took the total memory (RAM) in each machine and divided it in half, which is the default allocation for /tmp on tmpfs, then subtracted the total /tmp usage on each system, to determine "if" all of that system's /tmp could actually fit into its tmpfs using free memory alone (ie, without swap and without evicting anything from memory).
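A rough sketch of one version of that check against the same hypothetical tmp-usage.tsv file as above (the column positions are assumptions -- /tmp usage in column 2 and free memory in column 5, both in KB -- so adjust to the actual layout):

awk -F'\t' '$2 <= $5 / 2 { fits++ } END { printf "%d of %d machines fit /tmp in half of free memory\n", fits, NR }' tmp-usage.tsv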

485 of the machines surveyed (96.6%) could store all of their /tmp in a tmpfs, in free memory alone -- i.e. without evicting anything from cache.

Now, if we take each machine, sum each system's "Free memory" and "Free swap", and check its /tmp usage, we'll see that 498 of the systems surveyed (99.2%) could store the entire contents of /tmp in a tmpfs using free memory + free swap. The remaining 4 are our extreme outliers identified earlier, with /tmp usages of [101 GB, 42 GB, 13 GB, 10 GB].
Performance of tmpfs versus ext4-on-SSD
Finally, let's look at some raw (albeit rough) read and write performance numbers, using a simple dd model.

My /tmp is on a tmpfs:
kirkland@x250:/tmp⟫ df -h .
Filesystem Size Used Avail Use% Mounted on
tmpfs 7.7G 2.6M 7.7G 1% /tmp

Let's write 2 GB of data:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=/tmp/zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.56469 s, 1.4 GB/s

And let's write it completely synchronously:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.47235 s, 869 MB/s

Let's try the same thing to my Intel SSD:
kirkland@x250:/local⟫ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 217G 106G 100G 52% /

And write 2 GB of data:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 7.52918 s, 285 MB/s

And let's redo it completely synchronously:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 11.9599 s, 180 MB/s

Let's go back and read the tmpfs data:
kirkland@x250:~⟫ dd if=/tmp/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.94799 s, 1.1 GB/s

And let's read the SSD data:
kirkland@x250:~⟫ dd if=/local/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.55302 s, 841 MB/s

Now, let's create 10,000 small files (1 KB) in tmpfs:
kirkland@x250:/tmp/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real 0m15.518s
user 0m1.592s
sys 0m7.596s

And let's do the same on the SSD:
kirkland@x250:/local/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real 0m26.713s
user 0m2.928s
sys 0m7.540s

For better or worse, I don't have any spinning disks, so I couldn't repeat the tests there.

So on these rudimentary read/write tests via dd, I got 869 MB/s - 1.4 GB/s write to tmpfs and 1.1 GB/s read from tmpfs, and 180 MB/s - 285 MB/s write to SSD and 841 MB/s read from SSD.

Surely there are more scientific ways of measuring I/O to tmpfs and physical storage, but I'm confident that, by any measure, you'll find tmpfs extremely fast when tested against even the fastest disks and filesystems.
Summary
  • /tmp usage
    • 98.8% of the servers surveyed use less than 4.8 GB of /tmp
    • 96.2% use less than 1.0 GB of /tmp
    • 73.7% use less than 1.0 MB of /tmp
    • The mean/median/mode are [453 MB / 16 KB / 4 KB]
  • Total memory available
    • 98.6% of the servers surveyed have at least 2.0 GB of RAM
    • 88.0% have at least 4.0 GB of RAM
    • 57.4% have at least 8.0 GB of RAM
    • The mean/median/mode are [24 GB / 10 GB / 4 GB]
  • Swap available
    • 96.6% of the servers surveyed have some swap space available
    • The mean/median/mode are [13 GB / 6.3 GB / 3 GB]
  • Swap used
    • 94.8% of the servers surveyed are using less than 4 GB of swap
    • 92.2% are using less than 1 GB of swap
    • 72.9% are using less than 100 MB of swap
    • The mean/median/mode are [657 MB / 18 MB / 0 KB]
  • Modeling /tmp on tmpfs
    • 96.6% of the machines surveyed could store all of the data they currently have stored in /tmp, in free memory alone, without evicting anything from cache
    • 99.2% of the machines surveyed could store all of the data they currently have stored in /tmp in free memory + free swap
    • 4 of the 502 machines surveyed (0.8%) would need special handling, reconfiguration, or more swap
Conclusion
  • Can /tmp be mounted as a tmpfs always, everywhere?
    • No, we did identify a few systems (4 out of 502 surveyed, 0.8% of total) consuming inordinately large amounts of data in /tmp (101 GB, 42 GB), and with insufficient available memory and/or swap.
    • But those were very much the exception, not the rule.  In fact, 96.6% of the systems surveyed could fit all of /tmp in half of the freely available memory in the system.
  • Is this the first time anyone has suggested or tried this as a Linux/UNIX system default?
    • Not even remotely.  Solaris has used tmpfs for /tmp for 22 years, and Fedora and ArchLinux for at least the last 4 years.
  • Is tmpfs really that much faster, more efficient, more secure?
    • Damn skippy.  Try it yourself!
:-Dustin

Serge Hallyn: Cgroups are now handled a bit differently in Xenial

Tue, 01/19/2016 - 10:54

In the past, when you logged into an Ubuntu system, you would receive and be logged into a cgroup which you owned, one per controller (e.g. memory, freezer, etc). The main reason for this is so that unprivileged users can use things like lxc.

However this caused some trouble, especially through the cpuset controller. The problem is that when a cpu is plugged in, it is not added to any existing cpusets (in the legacy cgroup hierarchy, which we use). This is true even if you previously unplugged that cpu. So if your system has two cpus, when you first log in your cpuset has cpus 0-1. Cpu 1 gets unplugged and replugged, and now you only have cpu 0. Now cpu 0 gets unplugged…

Cgroup creation was previously done through a systemd patch, and was not configurable. In Xenial, we’ve now reduced that patch to only work on the name=systemd cgroup. Other controllers are to be handled by the new libpam-cgm package. By default it only creates a cgroup for the freezer controller. You can change the list by editing /etc/pam.d/common-session. For instance, to add memory you would change the line

optional pam_cgm.so -c freezer

to

optional pam_cgm.so -c freezer,memory
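After logging back in, a quick way to check which cgroups your session ended up in (a generic sanity check, not specific to pam_cgm):

cat /proc/self/cgroup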

One more change expected to come to Xenial is to switch libpam-cgm to using lxcfs instead of cgmanager (or, just as likely, create a new conflicting libpam-cgroup package which does so). Since Xenial and later systems use systemd, which won’t boot without lxcfs anyway, we’ll lose no functionality by requiring lxcfs for unprivileged container creation on login.

On a side note, reducing the set of user-owned cgroups also required a patch to lxc. This means that in a mixture of nested lxcs, you may run into trouble if using nested unprivileged containers in older releases. For instance, if you create an unprivileged Trusty container on a Xenial host, you won’t own the memory cgroup by default, even if you’re root in the container. At the moment Trusty’s lxc doesn’t know how to handle that yet to create a nested container. The lxc patches should hopefully get SRUd, but in the meantime you can use the ubuntu-lxc ppas to get newer packages if needed. (Note that this is a non-issue when running lxd on the host.)


The Fridge: Ubuntu Weekly Newsletter Issue 450

Tue, 01/19/2016 - 00:27

Jonathan Riddell: In the Mansion House

Mon, 01/18/2016 - 15:35

Here in deepest Padania, a 4-storey mansion provides winter cover to KDE developers working to free your computers.


I woke up this morning and decided I liked it


The mansion keeps a close lock on the important stuff


The pool room has a table with no pockets, it must be posh


Front door


The not so important stuff


Jens will not open the borgen to the Danish


David prefers Flappy Birds to 1000€ renaissance painting


Engineers fail to implement continuous integration


Bring on the 7 course meal!


In the basement Smaug admires the view


Seif Lotfy: Skizze progress and REPL

Mon, 01/18/2016 - 10:41




Over the last 3 weeks, based on feedback, we proceeded with fleshing out the concepts and the code behind Skizze.
Neil Patel suggested the following:

So I've been thinking about the server API. I think we want to choose one thing and do it as well as possible, instead of having six ways to talk to the server. I think that helps to keep things sane and simple overall.

Thinking about usage, I can only really imagine Skizze in an environment like ours, which is high-throughput. I think that is its 'home' and we should be optimising for that all day long.

Taking that into account, I believe we have two options:

  1. We go the gRPC route, provide .proto files and let people use the existing gRPC tooling to build support for their favourite language. That means we can happily give Ruby/Node/C#/etc devs a real way to get started up with Skizze almost immediately, piggy-backing on the gRPC docs etc.

  2. We absorb the Redis Protocol. It does everything we need, is very lean, and we can (mostly) easily adapt it for what we need to do. The downside is that to get support from other libs, there will have to be actual libraries for every language. This could slow adoption, or it might be easy enough if people can reuse existing REDIS code. It's hard to tell how that would end up.

gRPC is interesting because it's built already for distributed systems, across bad networks, and obviously is bi-directional etc. Without us having to spend time on the protocol, gRPC lets us easily add features that require streaming. Like, imagine a client being able to listen for changes in count/size and be notified instantly. That's something that gRPC is built for right now.

I think gRPC is a bit verbose, but I think it'll pay off for ease of third-party lib support and as things grow.

The CLI could easily be built to work with gRPC, including adding support for streaming stuff etc. Which could be pretty exciting.

That being said, we gave Skizze a new home, where based on feedback we developed .proto files and started rewriting big chunks of the code.

We added a new wrapper called "domain" which represents a stream. It wraps around Count-Min-Log, Bloom Filter, Top-K and HyperLogLog++, so when feeding it values it feeds all the sketches. Later we intend to allow attaching and detaching sketches from "domains" (We need a better name).

We also implemented a gRPC API which should allow easy wrapper creation in other languages.

Special thanks go to Martin Pinto for helping out with unit tests and Soren Macbeth for thorough feedback and ideas about the "domain" concept.
Take a look at our initial REPL work there:


click for GIF

Canonical Design Team: Ubuntu Clock Refresh

Mon, 01/18/2016 - 07:47

In the coming months, users will start noticing things looking more and more different on Ubuntu. As we gradually roll out our new and improved Suru visual language, you’ll see apps and various other bits of the platform take on a lighter, more clean and cohesive look. The latest app to undergo a visual refresh is the Clock app and I’ll take you through how we approached its redesign.

The Redesign

Our Suru visual language is based on origami, with graphic elements containing meaningful folds and shadows to create the illusion of paper and draw focus to certain areas. Using the main clock face’s current animation (where the clock flips from analog to digital on touch) as inspiration, it seemed natural to place a fold in the middle of the clock. On touch, the clock “folds” from analog to digital.

To further the paper look, drop shadows are used to give the illusion of layers of paper. The shadow under the clock face elevates it from the page, adding a second layer. The drop shadows on the clock hands add yet another layer.

As for colours, the last clock design featured a grey and purple scheme. In our new design, we make use of our very soon-to-be released new color palette. We brightened the interface with a white background and light grey clock face. On the analog clock, the hour and second hands are now Ubuntu orange. With the lighter UI, this subtle use of branding is allowed to stand out more. Also, the purple text is gone in favor of a more readable dark grey.

The bottom edge hint has also been redesigned. The new design is much more minimal, letting users get used to the gesture without interrupting the content too much.

In the stopwatch section, the fold is absent from the clock face since the element is static. We also utilize our new button styling. In keeping with the origami theme, the buttons now have a subtle drop shadow rather than an inward shadow, to again create a more “paper” feel.

This project has been one of my favorites so far. Although it wasn’t a complete redesign (the functionality remains the same) it was fun seeing how the clock would evolve next. Hope you enjoy the new version of the clock, it’s currently in development so look out for it on your phones soon and stay tuned for more visual changes.

Visual Design: Rae Shambrook

UX Design: James Mulholland

 

Elizabeth K. Joseph: Color me Ubuntu at UbuCon Summit & SCALE14x

Sun, 01/17/2016 - 11:32

This week I’ll be flying down to Pasadena, California to attend the first UbuCon Summit, which is taking place at the Fourteenth Annual Southern California Linux Expo (SCALE14x). The UbuCon Summit was the brainchild of meetings we had over the summer that expressed concern over the lack of in-person collaboration and connection in the Ubuntu community since the last Ubuntu Developer Summit back in 2012. Instead of creating a whole new event, we looked at the community-run UbuCon events around the world and worked with the organizers of the one for SCALE14x to bring in funding and planning help from Canonical, and travel assistance for project members and speakers, to provide a full two days of conference and unconference content.

As an attendee of and speaker at these SCALE UbuCons for several years, I’m proud to see the work that Richard Gaskin and Nathan Haines have put into this event over the years turn into something bigger and more broadly supported. The event will feature two tracks on Thursday, one for Users and one for Developers. Friday will begin with a panel and then lead into an unconference all afternoon with attendee-driven content (don’t worry if you’ve never done an unconference before, a full introduction on how to participate will be provided after the panel).

As we lead up to the UbuCon Summit (you can still register here, it’s free!) on Thursday and Friday, I keep learning that more people from the Ubuntu community will be attending, several of whom I haven’t seen since that last Developer Summit in 2012. Mark Shuttleworth will be coming in to give a keynote for the event, along with various other speakers. On Thursday at 3PM, I’ll be giving a talk on Building a Career with Ubuntu and FOSS in the User track, and on Friday I’ll be one of several panelists participating in an Ubuntu Leadership Panel at 10:30AM, following the morning SCALE keynote by Cory Doctorow. Check out the full UbuCon schedule here: http://ubucon.org/en/events/ubucon-summit-us/schedule/

Over the past few months I’ve been able to hop on some of the weekly UbuCon Summit planning calls to provide feedback from folks preparing to participate and attend. During one of our calls, Abi Birrell of Canonical held up an origami werewolf and said she’d be sending along instructions for making one. Turns out, back in October the design team held a competition that included origami instructions and gave an award for creating an origami werewolf. I joked that I didn’t listen to the rest of the call after seeing the origami werewolf, I had already gone into planning mode!

With instructions in hand, I hosted an Ubuntu Hour in San Francisco last week where I brought along the instructions. I figured I’d use the Ubuntu Hour as a testing ground for UbuCon and SCALE14x. Good news: We had a lot of fun, it broke the ice with new attendees and we laughed a lot. Bad news: We’re not very good at origami. There were no completed animals at the end of the Ubuntu Hour!


The xerus helps at werewolf origami

At 40 steps to create the werewolf, one hour and a crowd inexperienced with origami, it was probably not the best activity if we wanted animals at the end, but it did give me a set of expectations. The success of how fun it was to try it (and even fail) did get me thinking though, what other creative things could we do at Ubuntu events? Then I read an article about adult coloring books. That’s it! I shot an email off to Ronnie Tucker, to see if he could come up with a coloring page. Most people in the Ubuntu community know Ronnie as the creator of Full Circle Magazine: the independent magazine for the Ubuntu Linux community, but he’s also a talented artist whose skills were a perfect match for this task. Lucky for me, it was a stay-home snowy day in Glasgow yesterday and within a couple of hours he had sent a werewolf draft to me. By this morning he had a final version ready for printing in my inbox.

You can download the creative commons licensed original here to print your own. I have printed off several (and ordered some packets of crayons) to bring along to the UbuCon Summit and Ubuntu booth in the SCALE14x expo hall. I’m also bringing along a bunch of origami paper, so people can try their hand at the werewolf… and unicorn too.

Finally, lest we forget that my actual paid job is a systems administrator on the OpenStack Infrastructure team, I’m also doing a talk at DevOpsDayLA on Open Source tools for distributed systems administration. If you think I geek out about Ubuntu and coloring werewolves, you should see how I act when I’m talking about the awesome systems work I get to do at my day job.

Dustin Kirkland: Intercession -- Check out this book!

Sun, 01/17/2016 - 09:57
https://www.inkshares.com/projects/intercession
A couple of years ago, a good friend of mine (who now works for Canonical) bought a book recommended in a byline in Wired magazine.  The book was called Daemon, by Leinad Zeraus.  I devoured it in one sitting.  It was so...different, so...technically correct, so....exciting and forward looking.  Self-driving cars and autonomous drones, as weapons in the hands of an anonymous network of hackers.  Yikes.  A thriller, for sure, but a thinker as well.  I loved it!

I blogged about it here in September of 2008, and that blog was actually read by the author of Daemon, who reached out to thank me for the review.  He sent me a couple of copies of the book, which I gave away to readers of my blog, who solved a couple of crypto-riddles in a series of blog posts linking eCryptfs to some of the techniques and technology used in Daemon.

I can now count Daniel Suarez (the award winning author who originally published Daemon under a pseudonym) as one of my most famous and interesting friends, and I'm always excited to receive an early draft of each of his new books.  I've enjoyed each of Daemon, Freedom™, Kill Decision, and Influx, and gladly recommend them to anyone interested in cutting edge, thrilling fiction.

Knowing my interest in the genre, another friend of mine quietly shared that they were working on their very first novel.  They sent me an early draft, which I loaded on my Kindle and read in a couple of days while on a ski vacation in Utah in February.  While it took me a few chapters to stop thinking about it as-a-story-written-by-a-friend-of-mine, once I did, I was in for a real treat!  I ripped through it in 3 sittings over two snowy days, on a mountain ski resort in Park City, Utah.

The title is Intercession.  It's an adventure story -- a hero, a heroine, and a villain.  It's a story about time -- intriguingly non-linear and thoughtfully complex.  There's subtle, deliberate character development, and a couple of face-palming big reveals, constructed through flashbacks across time.

They have published it now, under a pseudonym, Nyneve Ransom, on InkShares.com -- a super cool self-publishing platform (I've spent hours browsing and reading stories there now!).  If you love sci-fi, adventure, time, heroes, and villains, I'm sure you'll enjoy Intercession!  You can read a couple of chapters for free right now ;-)

Happy reading!
:-Dustin

Marcin Juszkiewicz: Running 32-bit ARM virtual machine on AArch64 hardware

Sun, 01/17/2016 - 03:24

It was a matter of days and finally all pieces are done. Running 32-bit ARM virtual machines on 64-bit AArch64 hardware is possible and quite easy.

Requirements
  • AArch64 hardware (I used APM Mustang as usual)
  • ARM rootfs (fetched Fedora 22 image with “virt-builder” tool)
  • ARM kernel and initramfs (I used Fedora 24 one)
  • Virt Manager (can be done from shell too)
Configuration

Start “virt-manager” and add new machine:

Select rootfs, kernel, initramfs (dtb will be provided internally by qemu) and tell kernel where rootfs is:

Then set the amount of memory and the number of cores. I used 10 GB of RAM and 8 cores. Save the machine.
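Since this can also be done from the shell, here is a rough sketch of an equivalent qemu invocation (file names, the root device and the memory/core counts are placeholders, not the exact setup used above):

qemu-system-aarch64 -M virt -enable-kvm -cpu host,aarch64=off \
  -m 10240 -smp 8 \
  -kernel vmlinuz-armv7hl -initrd initramfs-armv7hl.img \
  -append "console=ttyAMA0 root=/dev/vda3 rw" \
  -drive file=fedora-armhfp.qcow2,format=qcow2,if=virtio \
  -nographic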

Let’s run

Open created machine and press Play button. It should boot:

I upgraded F22 to F24 to have latest development system.

Is it fast?

If I just booted it and wrote about that, there would be questions about performance. So I did a build of gcc 5.3.1-3 using mock (the standard Fedora way). On the arm32 Fedora builder it took 19 hours, on the AArch64 builder only 4.5 hours. On my machine the AArch64 build took 9.5 hours and in this VM it took 12.5 hours (a slow hdd was used). So a builder with plenty of memory and some fast storage will boost arm32 builds a lot.

Numbers from “openssl speed” show performance similar to the host cpu:

The 'numbers' are in 1000s of bytes per second processed.
type               16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
md2                1787.41k     3677.19k     5039.02k     5555.88k     5728.94k
mdc2                   0.00         0.00         0.00         0.00         0.00
md4               24846.05k    81594.07k   226791.59k   418185.22k   554344.45k
md5               18881.79k    60907.46k   163927.55k   281694.58k   357168.47k
hmac(md5)         21345.25k    69033.83k   177675.52k   291996.33k   357250.39k
sha1              20776.17k    65099.46k   167091.03k   275240.62k   338582.71k
rmd160            15867.02k    42659.95k    88652.54k   123879.77k   140571.99k
rc4              167878.11k   186243.61k   191468.46k   192576.51k   193112.75k
des cbc           35418.48k    37327.19k    37803.69k    37954.56k    37991.77k
des ede3          13415.40k    13605.87k    13641.90k    13654.36k    13628.76k
idea cbc          36377.06k    38284.93k    38665.05k    38864.71k    39032.15k
seed cbc          42533.48k    43863.15k    44276.22k    44376.75k    44397.91k
rc2 cbc           29523.86k    30563.20k    30763.09k    30940.50k    30857.44k
rc5-32/12 cbc          0.00         0.00         0.00         0.00         0.00
blowfish cbc      60512.96k    66274.07k    67889.66k    68273.15k    68302.17k
cast cbc          56795.77k    61845.42k    63236.86k    63251.11k    63445.82k
aes-128 cbc       61479.48k    65319.32k    67327.49k    67773.78k    66590.04k
aes-192 cbc       53337.95k    55916.74k    56583.34k    56957.61k    57024.51k
aes-256 cbc       46888.06k    48538.97k    49300.82k    49725.44k    50402.65k
camellia-128 cbc  59413.00k    62610.45k    63400.53k    63593.13k    63660.03k
camellia-192 cbc  47212.40k    49549.89k    50590.21k    50843.99k    50012.16k
camellia-256 cbc  47581.19k    49388.89k    50519.13k    49991.68k    50978.82k
sha256            27232.09k    64660.84k   119572.57k   151862.27k   164874.92k
sha512             9376.71k    37571.93k    54401.88k    74966.36k    84322.99k
whirlpool          3358.92k     6907.67k    11214.42k    13301.08k    14065.66k
aes-128 ige       60127.48k    65397.14k    67277.65k    67428.35k    67584.00k
aes-192 ige       52340.73k    56249.81k    57313.54k    57559.38k    57191.08k
aes-256 ige       46090.63k    48848.96k    49684.82k    49861.32k    49892.01k
ghash            150893.11k   171448.55k   177457.92k   179003.39k   179595.95k
                   sign      verify    sign/s  verify/s
rsa  512 bits   0.000322s  0.000026s   3101.3   39214.9
rsa 1024 bits   0.001446s  0.000073s    691.7   13714.6
rsa 2048 bits   0.008511s  0.000251s    117.5    3987.5
rsa 4096 bits   0.058092s  0.000945s     17.2    1058.4
                   sign      verify    sign/s  verify/s
dsa  512 bits   0.000272s  0.000297s   3680.6    3363.6
dsa 1024 bits   0.000739s  0.000897s   1353.1    1115.2
dsa 2048 bits   0.002762s  0.002903s    362.1     344.5
                               sign     verify   sign/s  verify/s
 256 bit ecdsa (nistp256)    0.0005s    0.0019s  1977.8     538.3
 384 bit ecdsa (nistp384)    0.0015s    0.0057s   663.0     174.6
 521 bit ecdsa (nistp521)    0.0035s    0.0136s   286.8      73.4
                               op       op/s
 256 bit ecdh (nistp256)     0.0016s    616.0
 384 bit ecdh (nistp384)     0.0049s    204.8
 521 bit ecdh (nistp521)     0.0115s     87.2

Related posts:

  1. Fedora 21 RC5 released for AArch64
  2. AArch64 can build OpenEmbedded
  3. Running VMs on Fedora/AArch64

Mathieu Trudel: In full tinfoil hat mode: Using GPG with smartcards

Fri, 01/15/2016 - 21:48
Breaking OPSEC for a bit to write a how-to on using GPG keys with smartcards...

I've thought about experimenting with smartcards for a while. Turns out that my Thinkpad has a built-in smartcard reader, but most of my other systems don't. Also, I'd like to use a smartcard to protect my SSH keys, some of which I may use on systems that I do not fully control (ie. at the university to push code to Github or Bitbucket), or to get to my server. Smartcard readers are great, but they're not much fun to add to a list of stuff to carry everywhere.

There's an alternate option: the Yubikey. Yubico appears to have made a version 4 of the Yubikey which has CCID (smartcard magic), U2F (2-factor for GitHub and Google, on Chrome), and their usual OTP token, all on the same tiny USB key. What's more, it is documented as supporting 4096 bit RSA keys, and includes some ECC support (more on this later).

Setting up GPG keys for use with smartcards is simple. One has the choice of either creating your own keys locally and moving them onto the smartcard, or generating them on the smartcard right away. In order to have a backup of my full key available in a secure location, I've opted to generate the keys off the card and then transfer them.

For this, you will need one (or two) Yubikey 4 (or Yubikey 4 Nano, or if you don't mind being limited to 2048 bit keys, the Yubikey NEO, which can also do NFC), some backup media of your choice, and apparently, at least the following packages:

gnupg2 gnupg-agent libpth20 libccid pcscd scdaemon libksba8 opensc
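On Ubuntu or Debian, that roughly translates to something like the following (package names can vary between releases, so double-check before relying on this):

sudo apt-get install gnupg2 gnupg-agent libpth20 libccid pcscd scdaemon libksba8 opensc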
You should do all of this on a trusted system, not connected to any network.

First, setup gnupg2 to a reasonable level of security. Edit ~/.gnupg/gpg.conf to pick the options you want, I've based my config on Jeffrey Clement's blog entry on the subject:

#default-key AABBCC90DEADBEEF
keyserver hkp://keyserver.ubuntu.com
no-emit-version
no-comments
keyid-format 0xlong
with-fingerprint
use-agent
personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

You'll want to replace default-key later with the key you've created, and uncomment the line.

The downside to all of this is that you'll need to use gpg2 in all cases rather than gpg, which is still the default on Ubuntu and Debian. gpg2 so far seems to work just fine for every use I've had (including debsign, after setting DEBSIGN_PROGRAM=gpg2 in ~/.devscripts).

You can now generate your master key:
gpg2 --gen-key
Then edit the key to add new UIDs (identities) and subkeys, which will each have their own different capabilities:

gpg2 --expert --edit-key 0xAABBCC90DEADBEEF

Best is to follow jclement's blog entry for this. There is no point in reiterating all of it. There's also a pretty complete guide from The Linux Foundation IT here, though it seems to include a lot of stuff that does not appear to be required here on my system, in xenial.

Add the subkeys. You should have one for encryption, one for signing, and one for authentication. Works out pretty well, since there are three slots, one for each of these capabilities, on the Yubikey.

If you also want your master key on a smartcard, you'll probably need a second Yubikey (that's why I wrote two earlier), which would only get used to sign other people's keys, extend expiration dates, generate new subkeys, etc. That one should be left in a very secure location.

This is a great point to backup all the keys you've just created:

gpg2 -a --export-secret-keys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.master.key
gpg2 -a --export-secret-subkeys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.sub.key
gpg2 -a --export 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.pub
Next step is to configure the smartcard/Yubikey to add your name, a URL for the public key, set the PINs, etc. Use the following command for this:
gpg2 --card-edit
Finally, go back to editing your GPG key:
gpg2 --expert --edit-key 0xAABBCC90DEADBEEF
From this point you can use toggle to select each subkey (using key #), move them to the smartcard (keytocard), and deselect them (key #). To move the master key to the card, "toggle" out of toggle mode then back in, then immediately run 'keytocard'. GPG will ask if you're certain. There is no way to get a key back out of the card, if you want a local copy, you needed to make a backup first.
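A rough sketch of what that interactive sequence looks like (the key numbers depend on the order in which your subkeys appear, and keytocard will prompt you for the card slot to use):

gpg> toggle
gpg> key 1
gpg> keytocard
gpg> key 1
gpg> key 2
gpg> keytocard
gpg> key 2
gpg> key 3
gpg> keytocard
gpg> save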

Now's probably a great time to copy your key to a keyserver, so that people may eventually start to use it to send you encrypted mail, etc.

After transferring the keys, you may want to make a "second backup", which would only contain the "clues" for GPG to know on which smartcard to find the private part of your keys. This will be useful if you need to use the keys on another system.

Another option is to use the public portion of your key (saved somewhere, like on a keyserver), then have gpg2 discover that it's on a smartcard using:

gpg2 --card-status
Unfortunately, it appears to pick up either only the master key or only the subkeys, if you use separate smartcards. This may be a blessing in disguise, in that you'd still only use the master key on an offline, very secure system, and only the subkeys in your typical daily use scenario.

Don't forget to generate a revocation certificate. This is essential if you ever lose your key, if it's compromised, or you're ever in a situation where you want to let the world know quickly not to use your key anymore:

gpg2 --gen-revoke 0xAABBCC90DEADBEEF

Store that data in a safe place.

Finally, more on backing up the GPG keys. It could be argued that keeping your master key on a smartcard might be a bad idea. After all, if the smartcard is lost, while it would be difficult to get the key out of the smartcard, you would probably want to treat it as compromised and get the key revoked. The same applies to keys kept on USB drives or on CD. A strong passphrase will help, but you still lost control of your key and at that point, no longer know whether it is still safe.

What's more, USB drives and CDs tend to eventually fail. CDs rot after a number of years, and USB drives just seem to not want to work correctly when you really need them. Paper is another option for backing up your keys, since there are ways (paperkey, for instance) to represent the data in a way that it could either be retyped or scanned back into digital data to be retrieved. Further securing a backup key could involve using gfshare to split it into multiple bits, in the hope that while one of its locations could be compromised (lost), you'll still have some of the others sufficient to reconstruct the key.

With the subkeys on the Yubikey, and provided gpg2 --card-status reports your card as detected, if you have the gpg-agent running with SSH support enabled you should be able to just run:

ssh-add -l
And have it list your card serial number. You can then use ssh-add -L to get the public key to use to add to authorized_keys files to use your authentication GPG subkey as a SSH key. If it doesn't work, make sure the gpg-agent is running and that ssh-add uses the right socket, and make sure pcscd isn't interfering (it seemed to get stuck in a weird state, and not shutting down automatically as it should after dealing with a request).
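If the agent is not yet set up for SSH, a minimal sketch (socket paths differ between GnuPG versions, so treat this as a starting point rather than a recipe):

echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
# restart the agent so it picks up the change
gpgconf --kill gpg-agent
# point SSH at the agent's SSH socket; on newer GnuPG versions this is:
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)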

Whenever you try to use one of the subkeys (or the master key), rather than being asked for the passphrase for the key (which you should have set as a very difficult, absolutely unguessable string that you and only you could remember and think of), you will be asked to enter the User PIN set for the smartcard.

You've achieved proper two-factor authentication.

Note on ECC on the Yubikey: according to the marketing documentation, the Yubikey knows about ECC p256 and ECC p384. Unfortunately, it looks like safecurves.cr.yp.to considers these unsafe, since they do not meet all the SafeCurves requirements. I'm not especially versed in cryptography, but this means I'll read up more on the subject, and stay away from the ECC implementation on the Yubikey 4 for now. However, it doesn't seem, at first glance, that this ECC implementation is meant for GPG at all. The Yubikey also has PIV magic which would allow it to be used as a pure SSH smartcard (rather than using a GPG authentication subkey for SSH), with a private certificate being generated by the card. These certificates could be created using RSA or ECC. I tried to play a bit with it (using RSA), following the SSH with PIV and PKCS11 document on developers.yubico.com; but I didn't manage to make it work. It looks like the GPG functions might interfere with PIV in some way, or I could just not handle the ssh-agent the right way. I'm happy to be shown how to use this correctly.




The Fridge: Ubuntu IRCC Nominations 2016

Fri, 01/15/2016 - 16:49

The current IRC council has 3 members whose 2 year terms are ending. This means it is now election season.

Details about the IRC Council and its charter may be viewed here. Council members normally serve a two year term, and may stand for multiple terms.

From the wiki page the election process is as follows:

Elections of new IRC Council members will be held in the following way:

  • An open call for nominations should be announced in the IRC Community, and people can nominate themselves for a seat on the council. Everyone is welcome to apply.
  • To apply for a seat the candidate creates a Wiki page outlining their work in the community, and inviting others to provide testimonials.
  • When the application deadline has passed, the IRC Council will review the applications and provide feedback on the candidates for the Community Council to review.
  • The Community Council will identify a shortlist for the board and circulate the list publicly for feedback from the community.
  • The shortlist identified by the Community Council will be voted upon by team members as described at CommunityCouncil/Delegation. Members of the Ubuntu IRC Members Team are eligible to vote.
  • The Community Council will then finalize the appointment of IRC Council members.

As you may have guessed, this is our call for nominations. Please feel free to nominate yourself, and remember to talk to others who you intend to nominate first.

All Ubuntu Members are welcome to apply. If you’re not a member but believe you meet the criteria to be one, then visit the Membership page and learn how to make it happen. IRC contributions are highly regarded in the search for IRC Council members, but are not essential.

To nominate yourself, create a wiki page for yourself and announce it and your candidacy on the IRC team mailing list. Nominations will be open through to 14 February, 2016, when a full list of applicants will be forwarded to the Community Council for checking. If there are more qualified applicants than positions, a vote will be announced to take place at the end of February.

Originally posted to the ubuntu-irc mailing list on Fri Jan 15 08:17:28 UTC 2016 by Melissa Draper on behalf of the Ubuntu IRC Council

Carla Sella: RockWork the Ubuntu clockwork for the Pebble smartwatch

Fri, 01/15/2016 - 14:11

RockWork is a community-driven project that aims to provide an unofficial open-source app for using a Pebble smartwatch with an Ubuntu phone/device.
You can find it on Launchpad here:
https://launchpad.net/rockwork.


So far this is what it looks like. I am using it with my Pebble Classic watch, and the app is installed on my BQ Aquaris E4.5.



You connect your Pebble watch to your Ubuntu device using Bluetooth. You can manage notifications by deciding which ones to activate...

install apps directly from the app and manage their settings...
   



manage watchfaces...

and take screenshots of your Pebble display that you can then share on social media, e-mail or SMS.

Lubuntu Blog: Lubuntu Xenial Xerus (with LXQt) in a Raspberry Pi 2

Fri, 01/15/2016 - 11:27
A nice experiment made by wxl from the Lubuntu QA Team: running Lubuntu Xenial Xerus on a Raspberry Pi 2, with LXQt desktop. Made with Ubuntu Pi Flavour Maker, and following simple instructions:
  • Get the image from here
  • Install the image
  • do-release-upgrade to Xenial
  • Install LXQt packages following the wiki guide
And that’s all. Enjoy Lubuntu in your […]

Lubuntu Blog: Lubuntu 16.04 LTS will use kernel 4.4 LTS

Fri, 01/15/2016 - 10:42
The Ubuntu Linux kernel team has announced that the Linux kernel in Ubuntu 16.04 LTS has been upgraded to version 4.4, the latest stable release made available. Linus Torvalds released a new stable version of the Linux kernel just a few days ago, and the Ubuntu developers have been quick to integrate it into the […]

Ubuntu App Developer Blog: Announcing the Ubuntu Scopes Showdown 2016!

Fri, 01/15/2016 - 10:11

Today we announce the launch of our second Ubuntu Scopes Showdown! We are excited to bring you yet another engaging developer competition, where the Ubuntu app developer community brings innovative and interesting new experiences for Ubuntu on mobile devices.

Scopes in Javascript and Go were introduced recently and are the hot topic of this competition!

Contestants will have six weeks to build and publish their Unity8 scopes to the store using the Ubuntu SDK and Scopes API (JavaScript, Go or C++), starting Monday January 18th.

A great number of exciting prizes are up for grabs: a System76 Meerkat computer, BQ E5 Ubuntu phones, Steam Controllers, Steam Link, Raspberry Pi 2 and convergence packs for Ubuntu phones!

Find out more details on how to enter the competition. Good luck and get developing! We look forward to seeing your scopes on our Ubuntu homescreens!

Sebastian Kügler: Is privacy Free software’s next milestone?

Fri, 01/15/2016 - 07:35

I am concerned. In the past years, it has become clear that real privacy has become harder to come by. Our society is quickly heading into a situation where an unknown number of entities and people can follow my every single step, and it’s not possible to keep to myself what I don’t want others to know. With every step in that direction, there are fewer and fewer things about my life for which I control who knows about them.

Privacy as product or weapon

Realistically, I won’t be able to do that, however, since in this modern age, tools that need to share data are rather the norm, than the exception. Most of the time, this sharing of data (even if only between my own devices) goes through the hand of a third party. On top of that, there’s a whole lot of spying going on, and of course malicious hackers which are keen to acquire large personal sets of identity data. My personal data can make me a product, and worse, it can be used as a weapon against myself. It is really in my best interest to share only the absolute minimal amount of data with as little others as possible.

Traditionally, this urgency for privacy has been closely connected to the goals of Free software. This is not a coincidence. Free software was intended as a way to give control to the users, and copyleft is an effective tool to achieve “software democracy”, in the best interest of the user. Conversely, someone who is not in control of his data cannot truly be free. Privacy and freedom are in fact closely related concepts.

Software Freedom: economics and ideology

I prefer Free software over proprietary solutions. It puts me in control of what my machine does, it allows me to fulfill my needs and to influence the tools I use for communication, work and entertainment in a direction that is driven by value to the user, rather than return-on-investment measured in money.

When I started using computers, Free software was sub-par to proprietary solutions; that is largely not the case anymore. In many cases, Free software surpasses what proprietary alternatives offer. In a lot of areas, Free software has come to dominate the market.
This is not surprising, given the economic model behind Free software. In the long run, building on the shoulder of giants, sharing the work across more stakeholders, open code and processes are more economical, scale better and tend to be more sustainable.
The ideological point of view benefits from that: I can lead a fully functional digital life using almost exclusively Free software, and I enjoy certain guarantees of continuity often unmet in the proprietary world.

Shifting purpose

To me, the purpose of Free software has shifted a bit, or rather expanded, to enabling privacy. A good measurement of whether the Free software movement has achieved its goal is the degree of privacy it allows me to have, while enabling all the modern amenities that our digital age makes possible, or even just to have a private conversation with a friend.

Effective privacy

Effective privacy needs network effects, so it doesn’t work very well for niche products. Of what use is a secure and private communication tool if I can’t use it to talk with my friends? Luckily the initial successes of Free software still play to our advantage: being able to collaboratively develop and share the work across many shoulders, we should be able to not just build all the pieces, but put together a complete set of solutions that makes better privacy achievable for more people. In terms of achieving network effects, we’re not starting at zero, but our adversaries are strong, often ahead of our game, and some tend to play unfair.

Purpose means responsibility

Is it not our responsibility as Free software community (or even just as citizens) to provide the tools that maximize privacy for the users? If the answer is yes, then I suppose the measurement for success is how much can we make possible while maximizing privacy? How attractive can we make the tools in terms of functionality, effectiveness and availability?

A happy user is one who finds that a useful and fun-to-use tool also protects him from threats that he often may not fully appreciate until it’s too late.
