Feed aggregator

Ubuntu Podcast from the UK LoCo: S09E05 – Dark side of the Toon - Ubuntu Podcast

Planet Ubuntu - Thu, 03/31/2016 - 07:00

It’s Episode Five of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org, tweet us, comment on our Facebook page, comment on our Google+ page, or comment on our sub-Reddit.

Xubuntu: The small details: Theme configuration

Planet Ubuntu - Wed, 03/30/2016 - 13:37

In this series the Xubuntu team present some of the smaller details in Xubuntu to help you use your system more efficiently. Several of the features covered in this series are new for those who will be upgrading from 14.04 LTS to 16.04 LTS. We will also cover some features that have been in Xubuntu for longer, for those that are completely new to the operating system.

Customization is one of the strengths of Xubuntu and Xfce alike. To make the user experience complete, the Xubuntu team has created a few small applications to make Xubuntu even more customizable.

One of the customization options provided by practically every operating system is the ability to change the theming of the user interface. However, the themes that are provided often come in specific colors only, and modifying them to use a different color manually can be a tedious task.

As originally highlighted in the Xubuntu 14.10 release, the Xubuntu team has created the Theme configuration application, with which you can change the colors of any theme on the fly, without having to modify the theme source!

Modifying the theme colors

To change the individual colors in your theme, open Theme configuration from Menu → Settings Manager → Theme Configuration. From this dialog you can change the highlight colors, panel colors and menu colors individually.

Simply turn the custom colors on or off as you wish and use the color picker to pick your preferred colors. Finally, press the Apply button to apply the changes. If you ever want to get back to the original colors, use the Reset button.

Note that some applications might need a restart even after pressing the Apply button for the changes to take effect.

Dustin Kirkland: Ubuntu on Windows -- The Ubuntu Userspace for Windows Developers

Planet Ubuntu - Wed, 03/30/2016 - 09:58
See also Scott Hanselman's blog here.

I'm in San Francisco this week, attending Microsoft's Build developer conference, as a sponsored guest of Microsoft.



That's perhaps a bit odd for me, as I hadn't used Windows in nearly 16 years.  But that changed a few months ago, as I embarked on a super secret (and totally mind boggling!) project between Microsoft and Canonical, as unveiled today in a demo during Kevin Gallo's opening keynote of the Build conference....



An Ubuntu user space and bash shell, running natively in a Windows 10 cmd.exe console!


Did you get that?!?  Don't worry, it took me a few laps around that track before I fully comprehended it when I first heard such crazy talk a few months ago :-)

Here, let's break it down slowly...
  1. Windows 10 users
  2. Can open the Windows Start menu
  3. And type "bash" [enter]
  4. Which opens a cmd.exe console
  5. Running Ubuntu's /bin/bash
  6. With full access to all of Ubuntu user space
  7. Yes, that means apt, ssh, rsync, find, grep, awk, sed, sort, xargs, md5sum, gpg, curl, wget, apache, mysql, python, perl, ruby, php, gcc, tar, vim, emacs, diff, patch...
  8. And most of the tens of thousands of binary packages available in the Ubuntu archives!
"Right, so just Ubuntu running in a virtual machine?"  Nope!  This isn't a virtual machine at all.  There's no Linux kernel booting in a VM under a hypervisor.  It's just the Ubuntu user space.
"Ah, okay, so this is Ubuntu in a container then?"  Nope!  This isn't a container either.  It's native Ubuntu binaries running directly in Windows.
"Hum, well it's like cygwin perhaps?"  Nope!  Cygwin includes open source utilities are recompiled from source to run natively in Windows.  Here, we're talking about bit-for-bit, checksum-for-checksum Ubuntu ELF binaries running directly in Windows.
[long pause]
"So maybe something like a Linux emulator?"  Now you're getting warmer!  A team of sharp developers at Microsoft has been hard at work adapting some Microsoft research technology to basically perform real time translation of Linux syscalls into Windows OS syscalls.  Linux geeks can think of it sort of the inverse of "wine" -- Ubuntu binaries running natively in Windows.  Microsoft calls it their "Windows Subsystem for Linux".  (No, it's not open source at this time.)
Oh, and it's totally shit hot!  The sysbench utility is showing nearly equivalent cpu, memory, and io performance.
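If you're curious, a couple of illustrative one-liners to check those claims yourself (package versions and exact numbers will of course vary):
md5sum /bin/bash    # compare against the same package version on a native Ubuntu 14.04 system; the sums should match
sysbench --test=cpu --cpu-max-prime=20000 run    # compare the timings against a native run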
So as part of the engineering work, I needed to wrap the stock Ubuntu root filesystem into a Windows application package (.appx) file for suitable upload to the Windows Store.  That required me to use Microsoft Visual Studio to clone a sample application, edit a few dozen XML files, create a bunch of icon .png's of various sizes, and so on.
Not being a Windows developer, I struggled and fought with Visual Studio on this Windows desktop for a few hours, until I was about ready to smash my coffee mug through the damn screen!
Instead, I pressed the Windows key, typed "bash", hit enter.  Then I found the sample application directory in /mnt/c/Users/Kirkland/Downloads, and copied it using "cp -a".  I used find | xargs | rename to update a bunch of filenames.  And a quick grep | xargs | sed to comprehensively search and replace s/SampleApp/UbuntuOnWindows/. And Ubuntu's convert utility quickly resized a bunch of icons.   Then I let Visual Studio do its thing, compiling the package and uploading to the Windows Store.  Voila!
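For the curious, that session roughly amounted to something like this (names and paths are illustrative, not the exact ones used):
cd /mnt/c/Users/Kirkland/Downloads
cp -a SampleApp UbuntuOnWindows && cd UbuntuOnWindows
find . -name '*SampleApp*' | xargs rename 's/SampleApp/UbuntuOnWindows/'     # rename files carrying the old name
grep -rl SampleApp . | xargs sed -i 's/SampleApp/UbuntuOnWindows/g'          # replace it inside the remaining files
convert icon.png -resize 44x44 icon-44x44.png                                # repeat for each icon size the .appx wants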
Did you catch that bit about /mnt/c...  That's pretty cool...  All of your Windows drives, like C: are mounted read/write directly under /mnt.  And, vice versa, you can see all of your Ubuntu filesystem from Windows Explorer in C:\Users\Kirkland\AppData\Local\Lxss\rootfs\


Meanwhile, I also needed to ssh over to some of my other Ubuntu systems to get some work done.  No need for Putty!  Just ssh directly from within the Ubuntu shell.



Of course apt install and upgrade as expected.



Is everything working exactly as expected?  No, not quite.  Not yet, at least.  The vast majority of the LTP passes and works well.  But there are some imperfections still, especially around ttys and the vt100.  My beloved byobu, screen, and tmux don't quite work yet, but they're getting close!

And while the current image is Ubuntu 14.04 LTS, we're expecting to see Ubuntu 16.04 LTS replacing Ubuntu 14.04 in the Windows Store very, very soon.

Finally, I imagine some of you -- long-time Windows and Ubuntu users alike -- are still wondering, perhaps, "Why?!?"  Having dedicated most of the past two decades of my career to free and open source software, this is an almost surreal endorsement by Microsoft on the importance of open source to developers.  Indeed, what a fantastic opportunity to bridge the world of free and open source technology directly into any Windows 10 desktop on the planet.  And what a wonderful vector into learning and using more Ubuntu and Linux in public clouds like Azure.  From Microsoft's perspective, a variety of surveys and user studies have pointed to demand for bash and Linux tools -- very specifically, Ubuntu -- being available in Windows, and without resource-heavy full virtualization.

So if you're a Windows Insider and have access to the early beta of this technology, we certainly hope you'll try it out!  Let us know what you think!

If you want to hear more, hopefully you'll tune into the Channel 9 Panel discussion at 16:30 PDT on March 30, 2016.

Cheers,
Dustin

Colin Watson: Re-signing PPAs

Planet Ubuntu - Wed, 03/30/2016 - 02:20

Julian has written about their efforts to strengthen security in APT, and shortly before that notified us that Launchpad’s signatures on PPAs use weak SHA-1 digests. Unfortunately we hadn’t noticed that before; GnuPG’s defaults tend to result in weak digests unless carefully tweaked, which is a shame.

I started on the necessary fixes for this immediately we heard of the problem, but it’s taken a little while to get everything in place, and I thought I’d explain why since some of the problems uncovered are interesting in their own right.

Firstly, there was the relatively trivial matter of using SHA-512 digests on new signatures. This was mostly a matter of adjusting our configuration, although writing the test was a bit tricky since PyGPGME isn’t as helpful as it could be. (Simpler repository implementations that call gpg from the command line should probably just add the --digest-algo SHA512 option instead of imitating this.)
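
For a simple repository signed from the command line, that boils down to passing the digest algorithm explicitly when generating the Release signatures; a minimal sketch, with the key ID and file names as placeholders:

gpg --default-key KEYID --digest-algo SHA512 --clearsign -o InRelease Release
gpg --default-key KEYID --digest-algo SHA512 --armor --detach-sign -o Release.gpg Release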

After getting that in place, any change to a suite in a PPA will result in it being re-signed with SHA-512, which is good as far as it goes, but we also want to re-sign PPAs that haven’t been modified. Launchpad hosts more than 50000 active PPAs, though, a significant percentage of which include packages for sufficiently recent Ubuntu releases that we’d want to re-sign them for this. We can’t expect everyone to push new uploads, and we need to run this through at least some part of our usual publication machinery rather than just writing a hacky shell script to do the job (which would have no idea which keys to sign with, to start with); but forcing full reprocessing of all those PPAs would take a prohibitively long time, and at the moment we need to interrupt normal PPA publication to do this kind of work. I therefore had to spend some quality time working out how to make things go fast enough.

The first couple of changes (1, 2) were to add options to our publisher script to let us run just the one step we need in “careful” mode: that is, forcibly re-run the Release file processing step even if it thinks nothing has changed, and entirely disable the other steps such as generating Packages and Sources files. Then last week I finally got around to timing things on one of our staging systems so that we could estimate how long a full run would take. It was taking a little over two seconds per archive, which meant that if we were to re-sign all published PPAs then that would take more than 33 hours! Obviously this wasn’t viable; even just re-signing xenial would be prohibitively slow.

The next question was where all that time was going. I thought perhaps that the actual signing might be slow for some reason, but it was taking about half a second per archive: not great, but not enough to account for most of the slowness. The main part of the delay was in fact when we committed the database transaction after processing each archive, but not in the actual PostgreSQL commit, rather in the ORM invalidate method called to prepare for a commit.

Launchpad uses the excellent Storm for all of its database interactions. One property of this ORM (and possibly of others; I’ll cheerfully admit to not having spent much time with other ORMs) is that it uses a WeakValueDictionary to keep track of the objects it’s populated with database results. Before it commits a transaction, it iterates over all those “alive” objects to note that if they’re used in future then information needs to be reloaded from the database first. Usually this is a very good thing: it saves us from having to think too hard about data consistency at the application layer. But in this case, one of the things we did at the start of the publisher script was:

def getPPAs(self, distribution):
    """Find private package archives for the selected distribution."""
    if (self.isCareful(self.options.careful_publishing) or
            self.options.include_non_pending):
        return distribution.getAllPPAs()
    else:
        return distribution.getPendingPublicationPPAs()

def getTargetArchives(self, distribution):
    """Find the archive(s) selected by the script's options."""
    if self.options.partner:
        return [distribution.getArchiveByComponent('partner')]
    elif self.options.ppa:
        return filter(is_ppa_public, self.getPPAs(distribution))
    elif self.options.private_ppa:
        return filter(is_ppa_private, self.getPPAs(distribution))
    elif self.options.copy_archive:
        return self.getCopyArchives(distribution)
    else:
        return [distribution.main_archive]

That innocuous-looking filter means that we do all the public/private filtering of PPAs up-front and return a list of all the PPAs we intend to operate on. This means that all those objects are alive as far as Storm is concerned and need to be considered for invalidation on every commit, and the time required for that stacks up when many thousands of objects are involved: this is essentially accidentally quadratic behaviour, because all archives are considered when committing changes to each archive in turn. Normally this isn’t too bad because only a few hundred PPAs need to be processed in any given run; but if we’re running in a mode where we’re processing all PPAs rather than just ones that are pending publication, then suddenly this balloons to the point where it takes a couple of seconds. The fix is very simple, using an iterator instead so that we don’t need to keep all the objects alive:

from itertools import ifilter

def getTargetArchives(self, distribution):
    """Find the archive(s) selected by the script's options."""
    if self.options.partner:
        return [distribution.getArchiveByComponent('partner')]
    elif self.options.ppa:
        return ifilter(is_ppa_public, self.getPPAs(distribution))
    elif self.options.private_ppa:
        return ifilter(is_ppa_private, self.getPPAs(distribution))
    elif self.options.copy_archive:
        return self.getCopyArchives(distribution)
    else:
        return [distribution.main_archive]

After that, I turned to that half a second for signing. A good chunk of that was accounted for by the signContent method taking a fingerprint rather than a key, despite the fact that we normally already had the key in hand; this caused us to have to ask GPGME to reload the key, which requires two subprocess calls. Converting this to take a key rather than a fingerprint gets the per-archive time down to about a quarter of a second on our staging system, about eight times faster than where we started.

Using this, we’ve now re-signed all xenial Release files in PPAs using SHA-512 digests. On production, this took about 80 minutes to iterate over around 70000 archives, of which 1761 were modified. Most of the time appears to have been spent skipping over unmodified archives; even a few hundredths of a second per archive adds up quickly there. The remaining time comes out to around 0.4 seconds per modified archive. There’s certainly still room for speeding this up a bit.

We wouldn’t want to do this procedure every day, but it’s acceptable for occasional tasks like this. I expect that we’ll similarly re-sign wily, vivid, and trusty Release files soon in the same way.

Didier Roche: Ubuntu Make 16.03 features Eclipse JEE, Intellij EAP, Kotlin and a bunch of fixes!

Planet Ubuntu - Wed, 03/30/2016 - 02:15

I'm really delighted to announce a new Ubuntu Make release, scoring 16.03, bringing updates for a bunch of frameworks while introducing new support!

I'm also really proud as this new release features three new awesome contributors: Tankypon, adding the Superpowers game editor framework, Eakkapat Pattarathamrong, adding more tests for Visual Studio Code, and Almeida, doing some great updates to the Portuguese translations!

The returning awesome work from Galileo Sartor and Omer Sheikh got us new Eclipse JEE installation support, IntelliJ IDEA EAP and the Kotlin compiler. In addition to those new features, we have a lot of fixes for Unity3D, Android-NDK, Clang, Visual Studio Code and IntelliJ-based IDEs as the server counterpart changed. The usual polish and a bunch of additional smaller incremental improvements joined the party as well! If you are interested in the nifty details, you can head over to the changelog.

If you can't wait to try it, grab this latest version directly through its PPA for the Ubuntu 14.04 LTS, 15.10 and xenial releases. This release wouldn't have been possible without our awesome contributor community, thanks to them again!
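
For reference, installing or updating through the PPA looks roughly like this (assuming the ubuntu-desktop/ubuntu-make PPA, which is where releases are normally published):

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make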

Our issue tracker is full of ideas and opportunities, and pull requests remain open for any issues or suggestions! If you want to be the next featured contributor and want to give a hand, you can refer to this post with useful links!

Ubuntu Insights: LXD 2.0: Resource control [4/12]

Planet Ubuntu - Wed, 03/30/2016 - 01:00

This is part 4 of a series about LXD 2.0: applying CPU, memory, disk and network limits to your container.

Available resource limits

LXD offers a variety of resource limits. Some of those are tied to the container itself, like memory quotas, CPU limits and I/O priorities. Some are tied to a particular device instead, like I/O bandwidth or disk usage limits.

As with all LXD configuration, resource limits can be dynamically changed while the container is running. Some may fail to apply, for example if setting a memory value smaller than the current memory usage, but LXD will try anyway and report back on failure.

All limits can also be inherited through profiles in which case each affected container will be constrained by that limit. That is, if you set limits.memory=256MB in the default profile, every container using the default profile (typically all of them) will have a memory limit of 256MB.

We don’t support resource limit pooling, where a limit would be shared by a group of containers; there is simply no good way to implement something like that with the existing kernel APIs.

Disk

This is perhaps the most requested and obvious one: simply set a size limit on the container’s filesystem and have it enforced against the container.

And that’s exactly what LXD lets you do!
Unfortunately this is far more complicated than it sounds. Linux doesn’t have path-based quotas; instead, most filesystems only have user and group quotas which are of little use to containers.

This means that right now LXD only supports disk limits if you’re using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too but this depends on the filesystem being used with it and gets tricky when combined with live updates as not all filesystems allow online growth and pretty much none of them allow online shrink.

CPU

When it comes to CPU limits, we support 4 different things:

  • Just give me X CPUs
    In this mode, you let LXD pick a bunch of cores for you and then load-balance things as more containers and CPUs go online/offline.
    The container only sees that number of CPUs.
  • Give me a specific set of CPUs (say, core 1, 3 and 5)
    Similar to the first mode except that no load-balancing is happening, you’re stuck with those cores no matter how busy they may be.
  • Give me 20% of whatever you have
    In this mode, you get to see all the CPUs but the scheduler will restrict you to 20% of the CPU time but only when under load! So if the system isn’t busy, your container can have as much fun as it wants. When containers next to it start using the CPU, then it gets capped.
  • Out of every measured 200ms, give me 50ms (and no more than that)
    This mode is similar to the previous one in that you get to see all the CPUs but this time, you can only use as much CPU time as you set in the limit, no matter how idle the system may be. On a system without over-commit this lets you slice your CPU very neatly and guarantees constant performance to those containers.

It’s also possible to combine one of the first two with one of the last two, that is, request a set of CPUs and then further restrict how much CPU time you get on those.
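
For example, using the keys shown later in this post, pinning a container to two specific cores and then capping it to half of their time could look like:

lxc config set my-container limits.cpu 1,3
lxc config set my-container limits.cpu.allowance 100ms/200ms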

On top of that, we also have a generic priority knob which is used to tell the scheduler who wins when you’re under load and two containers are fighting for the same resource.

Memory

Memory sounds pretty simple, just give me X MB of RAM!

And it absolutely can be that simple. We support that kind of limits as well as percentage based requests, just give me 10% of whatever the host has!
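
For instance, both of the following forms should be accepted, with the percentage resolved against the host’s total memory:

lxc config set my-container limits.memory 256MB
lxc config set my-container limits.memory 10%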

Then we support some extra stuff on top. For example, you can choose to turn swap on and off on a per-container basis and, if it’s on, set a priority so you can choose which containers will have their memory swapped out to disk first!

Oh and memory limits are “hard” by default. That is, when you run out of memory, the kernel out of memory killer will start having some fun with your processes.

Alternatively you can set the enforcement policy to “soft”, in which case you’ll be allowed to use as much memory as you want so long as nothing else is using it. As soon as something else wants that memory, you won’t be able to allocate anything until you’re back under your limit or until the host has memory to spare again.

Network I/O

Network I/O is probably our simplest looking limit, trust me, the implementation really isn’t simple though!

We support two things. The first is a basic bit/s limit on network interfaces. You can set a limit on ingress and egress or just set the “max” limit which then applies to both. This is only supported for “bridged” and “p2p” type interfaces.

The second thing is a global network I/O priority which only applies when the network interface you’re trying to talk through is saturated.

Block I/O

I kept the weirdest for last. It may look straightforward and feel like that to the user but there are a bunch of cases where it won’t exactly do what you think it should.

What we support here is basically identical to what I described in Network I/O.

You can set IOps or byte/s read and write limits directly on a disk device entry and there is a global block I/O priority which tells the I/O scheduler who to prefer.

The weirdness comes from how and where those limits are applied. Unfortunately the underlying feature we use to implement those uses full block devices. That means we can’t set per-partition I/O limits let alone per-path.

It also means that when using ZFS or btrfs which can use multiple block devices to back a given path (with or without RAID), we effectively don’t know what block device is providing a given path.

This means that it’s entirely possible, in fact likely, that a container may have multiple disk entries (bind-mounts or straight mounts) which are coming from the same underlying disk.

And that’s where things get weird. To make things work, LXD has logic to guess what block devices back a given path, this does include interrogating the ZFS and btrfs tools and even figures things out recursively when it finds a loop mounted file backing a filesystem.

That logic, while not perfect, usually yields a set of block devices that should have a limit applied. LXD then records that and moves on to the next path. When it’s done looking at all the paths, it gets to the very weird part: it averages the limits you’ve set for every affected block device and then applies those.

That means that “on average” you’ll be getting the right speed in the container, but it also means that you can’t have a “/fast” and a “/slow” directory both coming from the same physical disk and with differing speed limits. LXD will let you set it up, but in the end they’ll both give you the average of the two values: for example, setting “/fast” to 100MB/s and “/slow” to 20MB/s on the same physical disk effectively limits both to 60MB/s.

How does it all work?

Most of the limits described above are applied through the Linux kernel Cgroups API. That’s with the exception of the network limits which are applied through good old “tc”.

LXD at startup time detects what cgroups are enabled in your kernel and will only apply the limits which your kernel supports. Should you be missing some cgroups, a warning will also be printed by the daemon which will then get logged by your init system.

On Ubuntu 16.04, everything is enabled by default with the exception of swap memory accounting, which requires you to pass the “swapaccount=1” kernel boot parameter.
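
If swap accounting is currently disabled on your host, one way to enable it (assuming a standard GRUB setup) is to add the parameter to the kernel command line and reboot:

# add swapaccount=1 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
sudo update-grub
sudo reboot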

Applying some limits

All the limits described above are applied directly to the container or to one of its profiles. Container-wide limits are applied with:

lxc config set CONTAINER KEY VALUE

or for a profile:

lxc profile set PROFILE KEY VALUE

while device-specific ones are applied with:

lxc config device set CONTAINER DEVICE KEY VALUE

or for a profile:

lxc profile device set PROFILE DEVICE KEY VALUE

The complete list of valid configuration keys, device types and device keys can be found here.

CPU

To just limit a container to any 2 CPUs, do:

lxc config set my-container limits.cpu 2

To pin to specific CPU cores, say the second and fourth:

lxc config set my-container limits.cpu 1,3

More complex pinning ranges like this work too:

lxc config set my-container limits.cpu 0-3,7-11

The limits are applied live, as can be seen in this example:

stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1

Note that to avoid utterly confusing userspace, lxcfs arranges the /proc/cpuinfo entries so that there are no gaps.

As with just about everything in LXD, those settings can also be applied in profiles:

stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc profile set default limits.cpu 3
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2

To limit the CPU time of a container to 10% of the total, set the CPU allowance:

lxc config set my-container limits.cpu.allowance 10%

Or to give it a fixed slice of CPU time:

lxc config set my-container limits.cpu.allowance 25ms/200ms

And lastly, to reduce the priority of a container to a minimum:

lxc config set my-container limits.cpu.priority 0

Memory

To apply a straightforward memory limit run:

lxc config set my-container limits.memory 256MB

(The supported suffixes are kB, MB, GB, TB, PB and EB)

To turn swap off for the container (defaults to enabled):

lxc config set my-container limits.memory.swap false

To tell the kernel to swap this container’s memory first:

lxc config set my-container limits.memory.swap.priority 0

And finally if you don’t want hard memory limit enforcement:

lxc config set my-container limits.memory.enforce soft

Disk and block I/O

Unlike CPU and memory, disk and I/O limits are applied to the actual device entry, so you either need to edit the original device or mask it with a more specific one.

To set a disk limit (requires btrfs or ZFS):

lxc config device set my-container root size 20GB

For example:

stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 179G 542M 178G 1% /
stgraber@dakara:~$ lxc config device set zerotier root size 20GB
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem Size Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier 20G 542M 20G 3% /

To restrict speed you can do the following:

lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB

Or to restrict IOps instead:

lxc config device set my-container root limits.read 20Iops
lxc config device set my-container root limits.write 10Iops

And lastly, if you’re on a busy system with over-commit, you may want to also do:

lxc config set my-container limits.disk.priority 10

To increase the I/O priority for that container to the maximum.

Network I/O

Network I/O is basically identical to block I/O as far as the available knobs go.

For example:

stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 58.7MB/s in 1.7s
2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]

stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'
/dev/null 100%[===================>] 100.00M 11.4MB/s in 8.8s
2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]

And that’s how you throttle an otherwise nice gigabit connection to a mere 100Mbit/s one!

And as with block I/O, you can set an overall network priority with:

lxc config set my-container limits.network.priority 5

Getting the current resource usage

The LXD API exports quite a bit of information on current container resource usage, you can get:

  • Memory: current, peak, current swap and peak swap
  • Disk: current disk usage
  • Network: bytes and packets received and transferred for every interface

And now if you’re running a very recent LXD (only in git at the time of this writing), you can also get all of those in “lxc info”:

stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 29258
Ips:
  eth0: inet 172.17.0.101
  eth0: inet6 2607:f2c0:f00f:2700:216:3eff:feec:65a8
  eth0: inet6 fe80::216:3eff:feec:65a8
  lo: inet 127.0.0.1
  lo: inet6 ::1
  lxcbr0: inet 10.0.3.1
  lxcbr0: inet6 fe80::f0bd:55ff:feee:97a2
  zt0: inet 29.17.181.59
  zt0: inet6 fd80:56c2:e21c:0:199:9379:e711:b3e1
  zt0: inet6 fe80::79:e7ff:fe0d:5123
Resources:
  Processes: 33
  Disk usage:
    root: 808.07MB
  Memory usage:
    Memory (current): 106.79MB
    Memory (peak): 195.51MB
    Swap (current): 124.00kB
    Swap (peak): 124.00kB
  Network usage:
    lxcbr0:
      Bytes received: 0 bytes
      Bytes sent: 570 bytes
      Packets received: 0
      Packets sent: 0
    zt0:
      Bytes received: 1.10MB
      Bytes sent: 806 bytes
      Packets received: 10957
      Packets sent: 10957
    eth0:
      Bytes received: 99.35MB
      Bytes sent: 5.88MB
      Packets received: 64481
      Packets sent: 64481
    lo:
      Bytes received: 9.57kB
      Bytes sent: 9.57kB
      Packets received: 81
      Packets sent: 81
Snapshots:
  zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)

Conclusion

The LXD team spent quite a few months iterating over the language we’re using for those limits. It’s meant to be as simple as it can get while remaining very powerful and specific when you want it to.

Live application of those limits and inheritance through profiles makes it a very powerful tool to live manage the load on your servers without impacting the running services.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you don’t want or can’t install LXD on your own machine, you can always try it online instead!

Original article

Stéphane Graber: LXD 2.0: Image management [5/12]

Planet Ubuntu - Tue, 03/29/2016 - 23:10

This is the fifth blog post in this series about LXD 2.0.

Container images

If you’ve used LXC before, you probably remember those LXC “templates”, basically shell scripts that spit out a container filesystem and a bit of configuration.

Most templates generate the filesystem by doing a full distribution bootstrapping on your local machine. This may take quite a while, won’t work for all distributions and may require significant network bandwidth.

Back in LXC 1.0, I wrote a “download” template which would allow users to download pre-packaged container images, generated on a central server from the usual template scripts and then heavily compressed, signed and distributed over https. A lot of our users switched from the old style container generation to using this new, much faster and much more reliable method of creating a container.

With LXD, we’re taking this one step further by being all-in on the image based workflow. All containers are created from an image and we have advanced image caching and pre-loading support in LXD to keep the image store up to date.

Interacting with LXD images

Before digging deeper into the image format, let’s quickly go through what LXD lets you do with those images.

Transparently importing images

All containers are created from an image. The image may have come from a remote image server and have been pulled using its full hash, short hash or an alias, but in the end, every LXD container is created from a local image.

Here are a few examples:

lxc launch ubuntu:14.04 c1
lxc launch ubuntu:75182b1241be475a64e68a518ce853e800e9b50397d2f152816c24f038c94d6e c2
lxc launch ubuntu:75182b1241be c3

All of those refer to the same remote image (at the time of this writing). The first time one of those is run, the remote image will be imported in the local LXD image store as a cached image, then the container will be created from it.

The next time one of those commands is run, LXD will only check that the image is still up to date (when not referring to it by its fingerprint); if it is, it will create the container without downloading anything.

Now that the image is cached in the local image store, you can also just start it from there without even checking if it’s up to date:

lxc launch 75182b1241be c4

And lastly, if you have your own local image under the name “my-image”, you can just do:

lxc launch my-image c5

If you want to change some of that automatic caching and expiration behavior, there are instructions in an earlier post in this series.

Manually importing images

Copying from an image server

If you want to copy some remote image into your local image store but not immediately create a container from it, you can use the “lxc image copy” command. It also lets you tweak some of the image flags, for example:

lxc image copy ubuntu:14.04 local:

This simply copies the remote image into the local image store.

If you want to be able to refer to your copy of the image by something easier to remember than its fingerprint, you can add an alias at the time of the copy:

lxc image copy ubuntu:12.04 local: --alias old-ubuntu
lxc launch old-ubuntu c6

And if you would rather just use the aliases that were set on the source server, you can ask LXD to copy them for you:

lxc image copy ubuntu:15.10 local: --copy-aliases
lxc launch 15.10 c7

All of the copies above were one-shot copies, copying the current version of the remote image into the local image store. If you want to have LXD keep the image up to date, as it does for the ones stored in its cache, you need to request it with the --auto-update flag:

lxc image copy images:gentoo/current/amd64 local: --alias gentoo --auto-update

Importing a tarball

If someone provides you with a LXD image as a single tarball, you can import it with:

lxc image import <tarball>

If you want to set an alias at import time, you can do it with:

lxc image import <tarball> --alias random-image

Now if you were provided with two tarballs, identify which contains the LXD metadata. Usually the tarball name gives it away, if not, pick the smallest of the two, metadata tarballs are tiny. Then import them both together with:

lxc image import <metadata tarball> <rootfs tarball>

Importing from a URL

“lxc image import” also works with some special URLs. If you have an https web server which serves a path with the LXD-Image-URL and LXD-Image-Hash headers set, then LXD will pull that image into its image store.

For example you can do:

lxc image import https://dl.stgraber.org/lxd --alias busybox-amd64

When pulling the image, LXD also sets some headers which the remote server could check to return an appropriate image. Those are LXD-Server-Architectures and LXD-Server-Version.

This is meant as a poor man’s image server. It can be made to work with any static web server and provides a user-friendly way to import your image.

Managing the local image store

Now that we have a bunch of images in our local image store, let’s see what we can do with them. We’ve already covered the most obvious, creating containers from them, but there are a few more things you can do with the local image store.

Listing images

To get a list of all images in the store, just run “lxc image list”:

stgraber@dakara:~$ lxc image list
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| alpine-32 | 6d9c131efab3 | yes | Alpine edge (i386) (20160329_23:52) | i686 | 2.50MB | Mar 30, 2016 at 4:36am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| gentoo | 1a134c5951e0 | no | Gentoo current (amd64) (20160329_14:12) | x86_64 | 232.50MB | Mar 30, 2016 at 4:34am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| my-image | c9b6e738fae7 | no | Scientific Linux 6 x86_64 (default) (20160215_02:36) | x86_64 | 625.34MB | Mar 2, 2016 at 4:56am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+---------------+--------------+--------+------------------------------------------------------+--------+----------+------------------------------+

You can filter based on the alias or fingerprint simply by doing:

stgraber@dakara:~$ lxc image list amd64
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| busybox-amd64 | 74186c79ca2f | no | Busybox x86_64 | x86_64 | 0.79MB | Mar 30, 2016 at 4:33am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+---------------+--------------+--------+-----------------------------------------+--------+----------+------------------------------+

Or by specifying a key=value filter of image properties:

stgraber@dakara:~$ lxc image list os=ubuntu
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| old-ubuntu | 4d558b08f22f | no | ubuntu 12.04 LTS amd64 (release) (20160315) | x86_64 | 155.09MB | Mar 30, 2016 at 4:30am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| w (11 more) | d3703a994910 | no | ubuntu 15.10 amd64 (release) (20160315) | x86_64 | 153.35MB | Mar 30, 2016 at 4:31am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+
| | 75182b1241be | no | ubuntu 14.04 LTS amd64 (release) (20160314) | x86_64 | 118.17MB | Mar 30, 2016 at 4:27am (UTC) |
+-------------+--------------+--------+---------------------------------------------+--------+----------+------------------------------+

To see everything LXD knows about a given image, you can use “lxc image info”:

stgraber@castiana:~$ lxc image info ubuntu
Fingerprint: e8a33ec326ae7dd02331bd72f5d22181ba25401480b8e733c247da5950a7d084
Size: 139.43MB
Architecture: i686
Public: no
Timestamps:
  Created: 2016/03/15 00:00 UTC
  Uploaded: 2016/03/16 05:50 UTC
  Expires: 2017/04/26 00:00 UTC
Properties:
  version: 12.04
  aliases: 12.04,p,precise
  architecture: i386
  description: ubuntu 12.04 LTS i386 (release) (20160315)
  label: release
  os: ubuntu
  release: precise
  serial: 20160315
Aliases:
  - ubuntu
Auto update: enabled
Source:
  Server: https://cloud-images.ubuntu.com/releases
  Protocol: simplestreams
  Alias: precise/i386

Editing images

A convenient way to edit image properties and some of the flags is to use:

lxc image edit <alias or fingerprint>

This opens up your default text editor with something like this:

autoupdate: true
properties:
  aliases: 14.04,default,lts,t,trusty
  architecture: amd64
  description: ubuntu 14.04 LTS amd64 (release) (20160314)
  label: release
  os: ubuntu
  release: trusty
  serial: "20160314"
  version: "14.04"
public: false

You can change any property you want, turn auto-update on and off or mark an image as publicly available (more on that later).

Deleting images

Removing an image is a simple matter of running:

lxc image delete <alias or fingerprint>

Note that you don’t have to remove cached entries, those will automatically be removed by LXD after they expire (by default, after 10 days since they were last used).

Exporting images

If you want to get image tarballs from images currently in your image store, you can use “lxc image export”, like:

stgraber@dakara:~$ lxc image export old-ubuntu .
Output is in .
stgraber@dakara:~$ ls -lh *.tar.xz
-rw------- 1 stgraber domain admins 656 Mar 30 00:55 meta-ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz
-rw------- 1 stgraber domain admins 156M Mar 30 00:55 ubuntu-12.04-server-cloudimg-amd64-lxd.tar.xz

Image formats

LXD right now supports two image layouts, unified or split. Both of those are effectively LXD-specific, though the latter makes it easier to re-use the filesystem with other container or virtual machine runtimes.

LXD, being solely focused on system containers, doesn’t support any of the application container “standard” image formats out there, nor do we plan to.

Our images are pretty simple, they’re made of a container filesystem, a metadata file describing things like when the image was made, when it expires, what architecture it’s for, … and optionally a bunch of file templates.

See this document for up to date details on the image format.

Unified image (single tarball)

The unified image format is what LXD uses when generating images itself. They are a single big tarball, containing the container filesystem inside a “rootfs” directory, have the metadata.yaml file at the root of the tarball and any template goes into a “templates” directory.

Any compression (or none at all) can be used for that tarball. The image hash is the sha256 of the resulting compressed tarball.

Split image (two tarballs)

This format is most commonly used by anyone rolling their own images and who already have a compressed filesystem tarball.

They are made of two distinct tarballs; the first contains just the metadata bits that LXD uses, so the metadata.yaml file at the root and any template in the “templates” directory.

The second tarball contains only the container filesystem directly at its root. Most distributions already produce such tarballs as they are common for bootstrapping new machines. This image format allows re-using them unmodified.

Any compression (or none at all) can be used for either tarball, they can absolutely use different compression algorithms. The image hash is the sha256 of the concatenation of the metadata and rootfs tarballs.
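
In other words, you can compute an image fingerprint yourself; the file names below are just examples:

sha256sum image.tar.xz                      # unified image: hash of the single tarball
cat meta.tar.xz rootfs.tar.xz | sha256sum   # split image: hash of metadata followed by rootfs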

Image metadata

A typical metadata.yaml file looks something like:

architecture: "i686" creation_date: 1458040200 properties: architecture: "i686" description: "Ubuntu 12.04 LTS server (20160315)" os: "ubuntu" release: "precise" templates: /var/lib/cloud/seed/nocloud-net/meta-data: when: - start template: cloud-init-meta.tpl /var/lib/cloud/seed/nocloud-net/user-data: when: - start template: cloud-init-user.tpl properties: default: | #cloud-config {} /var/lib/cloud/seed/nocloud-net/vendor-data: when: - start template: cloud-init-vendor.tpl properties: default: | #cloud-config {} /etc/init/console.override: when: - create template: upstart-override.tpl /etc/init/tty1.override: when: - create template: upstart-override.tpl /etc/init/tty2.override: when: - create template: upstart-override.tpl /etc/init/tty3.override: when: - create template: upstart-override.tpl /etc/init/tty4.override: when: - create template: upstart-override.tpl Properties

The only two mandatory fields are the creation date (UNIX EPOCH) and the architecture. Everything else can be left unset and the image will import fine.

The extra properties are mainly there to help the user figure out what the image is about. The “description” property for example is what’s visible in “lxc image list”. The other properties can be used by the user to search for specific images using key/value search.

Those properties can then be edited by the user through “lxc image edit”; in contrast, the creation date and architecture fields are immutable.

Templates

The template mechanism allows for some files in the container to be generated or re-generated at some point in the container lifecycle.

We use the pongo2 templating engine for those and we export just about everything we know about the container to the template. That way you can have custom images which use user-defined container properties or normal LXD properties to change the content of some specific files.

As you can see in the example above, we’re using those in Ubuntu to seed cloud-init and to turn off some init scripts.

Creating your own images

LXD being focused on running full Linux systems means that we expect most users to just use clean distribution images and not spin their own image.

However there are a few cases where having your own images is useful. Such as having pre-configured images of your production servers or building your own images for a distribution or architecture that we don’t build images for.

Turning a container into an image

The easiest way by far to build an image with LXD is to just turn a container into an image.

This can be done with:

lxc launch ubuntu:14.04 my-container
lxc exec my-container bash
<do whatever change you want>
lxc publish my-container --alias my-new-image

You can even turn a past container snapshot into a new image:

lxc publish my-container/some-snapshot --alias some-image

Manually building an image

Building your own image is also pretty simple.

  1. Generate a container filesystem. This entirely depends on the distribution you’re using. For Ubuntu and Debian, it would be by using debootstrap.
  2. Configure anything that’s needed for the distribution to work properly in a container (if anything is needed).
  3. Make a tarball of that container filesystem, optionally compress it.
  4. Write a new metadata.yaml file based on the one described above.
  5. Create another tarball containing that metadata.yaml file.
  6. Import those two tarballs as a LXD image with: lxc image import <metadata tarball> <rootfs tarball> --alias some-name

You will probably need to go through this a few times before everything works, tweaking things here and there, possibly adding some templates and properties.
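
As a rough sketch of the steps above for an Ubuntu-based image (the suite, mirror and file names are illustrative):

sudo debootstrap xenial rootfs http://archive.ubuntu.com/ubuntu
# configure the rootfs as needed, then package it up
sudo tar -C rootfs -cJf rootfs.tar.xz .
tar -cJf metadata.tar.xz metadata.yaml templates/
lxc image import metadata.tar.xz rootfs.tar.xz --alias some-name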

Publishing your images

All LXD daemons act as image servers. Unless told otherwise, all images loaded in the image store are marked as private and so only trusted clients can retrieve those images, but should you want to make a public image server, all you have to do is tag a few images as public and make sure your LXD daemon is listening to the network.

Just running a public LXD server

The easiest way to share LXD images is to run a publicly visible LXD daemon.

You typically do that by running:

lxc config set core.https_address "[::]:8443"

Remote users can then add your server as a public image server with:

lxc remote add <some name> <IP or DNS> --public

They can then use it just as they would any of the default image servers. As the remote server was added with “--public”, no authentication is required and the client is restricted to images which have themselves been marked as public.

To change what images are public, just “lxc image edit” them and set the public flag to true.

Use a static web server

As mentioned above, “lxc image import” supports downloading from a static http server. The requirements are basically:

  • The server must support HTTPS with a valid certificate, TLS 1.2 and EC ciphers
  • When hitting the URL provided to “lxc image import”, the server must return an answer including the LXD-Image-Hash and LXD-Image-URL HTTP headers

If you want to make this dynamic, you can have your server look for the LXD-Server-Architectures and LXD-Server-Version HTTP headers which LXD will provide when fetching the image. This allows you to return the right image for the server’s architecture.
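
A quick way to check that your server answers as LXD expects is to look at the headers it returns, for example against the demo URL used earlier:

curl -sI https://dl.stgraber.org/lxd | grep -i '^LXD-Image-'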

Build a simplestreams server

The “ubuntu:” and “ubuntu-daily:” remotes aren’t using the LXD protocol (“images:” is); they are instead using a different protocol called simplestreams.

simplestreams is basically an image server description format, using JSON to describe a list of products and files related to those products.

It is used by a variety of tools like OpenStack, Juju, MAAS, … to find, download or mirror system images and LXD supports it as a native protocol for image retrieval.

While certainly not the easiest way to start providing LXD images, it may be worth considering if your images can also be used by some of those other tools.

More information can be found here.

Conclusion

I hope this gave you a good idea of how LXD manages its images and how to build and distribute your own. The ability to have the exact same image easily available bit for bit on a bunch of globally distributed systems is a big step up from the old LXC days and leads the way to more reproducible infrastructure.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you don’t want or can’t install LXD on your own machine, you can always try it online instead!

Xubuntu: My media manager: Clementine

Planet Ubuntu - Tue, 03/29/2016 - 15:10

Xubuntu 16.04 LTS will be the first Xubuntu release without a default media manager. To help those without a favorite one, we’ve put up this series where some of the Xubuntu team members talk about their favorite media managers. Later in the series we discuss some cloud services and other media manager options in the Ubuntu repositories. Enjoy!

This time we’re covering the favorite media manager for several people working with the Quality Assurance. We’ll leave for you to decide whether this is a coincidence or not!

How do you listen to music?

Kev: Sometimes quietly, sometimes annoyingly loud :) The media manager starts up when the PC does first thing in the morning, and it turns off when the PC does at night. I tend to run with half a dozen dynamic playlists.

My collection is about 1 terabyte, which converts to around 50,000 tracks and 4,000 albums. I like the collection to be organized by artist, filetype and album.

Daniel: Most of the time with headphones. I mostly listen to music when I want to focus and/or shut other people’s voices out.

Why is Clementine the best choice for you?

Kev: It’s hard to quantify why, but there are a few things I like – I’ve used it for ages and know its ins and outs now. In fact I’ve been using Clementine since more or less its beginning; I was using Amarok 1.6x till that went, then found this fork of Amarok.

The Android controller for Clementine is also a useful addition for me.

Clementine has Hypnotoad…

All in all Clementine does what I want with the minimum of fuss – it does add some Qt packages, but what application doesn’t add something?

Daniel: Because it is very stable and runs on all platforms on which I need it. Also, it has the rainbow cat! And I also use the Android remote sometimes.

Have you customized Clementine?

Kev: I’ve customised Clementine a bit. Things like moodbar and internet sources – all turned off.

Basic stuff I set up includes making sure it doesn’t do anything but add a track when adding to the queue – I’ll start that when I want. I also set up keyboard shortcuts for when the keyboard does not have media keys.

More specifically, I have some dynamic playlists which I use a lot; these use the genre tag. This does mean you have to make sure the genre is right in the tags. As an aside, I found easytag easiest to use here – given my file library is alphabetic, it was a fairly quick job to set tags for them all. For one-off tag edits I can do that within Clementine’s right-click track menu if I need to.

In addition I have a few very specific dynamic lists based on artists (or group of artists).

Daniel: No – I use it as it comes by default.

Have you used other media managers in the past?

Kev: I think that I have in the last 8 years tried just about everything I could find to try – I always find myself coming back to Clementine though.

Daniel: I have also tried just about every available player, but I like using the same player on each platform, and I could never find anything with the same usability as Clementine.

Is there something else you would like to share with us?

Kev: I often install Audacious with its library plugin alongside Clementine. I tend to run the development version of Xubuntu from the day after the rest of you find out about the newly released version. It can be useful in my world to have an alternative – Audacious is it. It’s a simple application but without the plugin, not really a media manager.

Finally, feel free to talk about your favorite artists!

Kev: If only I were allowed to – I was asked not to rabbit on interminably about the 70s… suffice it to say that my taste in music is pretty wide-ranging – except for Country/Western and that thing loosely called Pop…

Daniel: I listen to just about every kind of music. I am fairly open to most genres.

Oli Warner: "Your Policy Just Got Better"… Did it E&L? No, it just got more expensive.

Planet Ubuntu - Tue, 03/29/2016 - 04:57

Five words to chill the heart. Becki from E&L, the people who insure my cameras, wrote to me today to let me know the outstanding news that I get to pay more money for something I never asked for.

I'm not sure why I'm so surprised...

I have a couple of cameras. They're not super expensive, but expensive enough that if stolen or destroyed, I'd feel pretty bad. To offset this risk —and my anxiety— I pay E&L (The Equine and Livestock Insurance Company Ltd) about £6 every 4 weeks to insure it all.

Yeah, 4 weeks... Because with "lunar months" they get to bill 13 times a year, not 12. But that's not what this is about.

Redefinitions of "month" and "lunar month" aside, I was happy enough with the arrangement. I was insuring something with a high value to me for relatively little money.

But they exhausted my tolerance this morning.

Long story short, they're increasing my premium by 30% and adding three super-low risk policy "features" that I'll almost certainly never use.

And it all automatically kicks in unless I do something

I don't get the arrogance here. The language in their letter talks about these improvements like they're the second coming... Which is stupid. If I needed this sort of cover, I would have asked and paid for it pro-actively. I was more than adequately insured before this.

Of course I can see why they do this. People are, statistically speaking, lazy idiots. If you tell us you're doing something for our betterment, why wouldn't we believe you? You even said FREE OF CHARGE and 20% DISCOUNT in capitals. That's free stuff for me then, right?

Well thanks but no thanks.
I'm now looking for alternative insurance and will happily take recommendations here.

Just stepping back, you get an indication of the scale of this shake down by the simple fact that E&L has an upgrade-declination page. E&L does a lot of types of insurance and this page covers all of them. What else are they extending cover for without solicitation? Third wheels on bicycles? 50% more legs on horses?

Seems I'm not the only person unhappy with E&L. Only a little reading around and you'll find them nicknamed E&Hell. Searching for that should give you some idea of how bad insurance can be at times. If you're looking for a recommendation, I'd strongly suggest you find another insurer.

Harald Sitter: Zabbix IRC Notifications

Planet Ubuntu - Tue, 03/29/2016 - 03:52

Some months ago I rolled out the terrifyingly fancy monitoring platform Zabbix to monitor all Blue Systems servers conveniently. Ever since then I wanted IRC notifications but there didn’t seem to be anything compelling available, so I got quickly annoyed and moved on.

Eventually our very own Bhushan Shah poked me enough to figure out IRC notifications.

So, now we have zabbix-irc-pusher. It is an incredibly simple script connecting to IRC and sending messages to a channel. It does so without actually daemonizing, which some might argue makes the script simpler. It does however mean that the script will make numerous join/quit messages appear in the relevant IRC channels, so it is advisable to enable outside messages for that channel so the bot doesn’t actually need to join the channel.

Setting the notifications up is a bit meh though, so here’s how. This is talking about Zabbix 2.x, but all of this should largely be the same for the recently released Zabbix 3.x.

First things first. Zabbix has built-in script support that is meant to be a simple notification solution where a specific notification script is simply called with 3 arguments corresponding to an e-mail’s To, Subject and Body field. These notification scripts need to be placed into a directory your zabbix-server uses for alert scripts. You can check the zabbix_server.conf’s AlertScriptsPath variable to find or change the directory in question. By default it will be something like /usr/lib/zabbix/alertscripts/ so we are going to roll with this for now. The script in question needs to be in that directory and made executable. Once the script is working and in the correct directory all the rest of the configuration happens in Zabbix itself.
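
To make the calling convention concrete, here is a hypothetical stub (the file name, paths and log location are placeholders, not the actual zabbix-irc-pusher code), followed by the shell steps to put a script in place:

#!/bin/sh
# Hypothetical stub: Zabbix invokes alert scripts as <script> <to> <subject> <body>.
# A real script would relay the subject and body to IRC; this one only logs them
# so you can verify the wiring.
to="$1"; subject="$2"; body="$3"
printf '%s | %s | %s\n' "$to" "$subject" "$body" >> /tmp/zabbix-notify.log

Then, on the Zabbix server (adjust paths to your installation):

grep AlertScriptsPath /etc/zabbix/zabbix_server.conf
sudo cp irc-notify.sh /usr/lib/zabbix/alertscripts/
sudo chmod +x /usr/lib/zabbix/alertscripts/irc-notify.sh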

In Administration→Media types create a new media type, make it type Script and write the name of the script file.

Next you need to use the script as notification strategy for a specific user. Notifications will not be issued if your script is not actually used for notifications on any user!

Go to Administration→Users pick any enabled user and go to the Media tab. Add a new media, select your IRC notification media, set an IRC channel to send notifications to and pick the notifications that should be sent. Don’t forget to actually update the user, once you add the media.

At this point we have the notification method set up, but not the content. To do that we’ll have to configure an action. In Configuration→Actions create a new action and define content.

We use the following:

Name: Report problems to IRC
Default subject: {TRIGGER.STATUS}: {TRIGGER.NAME} - http://m.neon.kde.org/zabbix/tr_events.php?triggerid={TRIGGER.ID}&eventid={EVENT.ID}
Default message:
Recovery subject: {TRIGGER.STATUS}: {TRIGGER.NAME} - http://m.neon.kde.org/zabbix/tr_events.php?triggerid={TRIGGER.ID}&eventid={EVENT.ID}
Recovery message:

You can define a bunch of conditions in which to notify.

Last but not least, you need to associate the action with the notification method we set up. In the operations tab add a new operation and associate with the user for which you set up the notification method. Don’t forget to actually hit add for the operation and also for the action to save both.

Once you are done you should have working IRC notifications. To check simply cause an event (e.g. take an agent offline) and check the event info under Monitoring→Events. Events fitting the action conditions should now have a message actions entry with information about the message delivery and the notification should have arrived on IRC. That’s it!
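
For instance, on a monitored host whose agent runs under systemd, stopping the agent for a minute is an easy way to generate such an event (assuming the usual service name):

sudo systemctl stop zabbix-agent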

Naturally, all this applies to any script based notification, so whether your script forwards the information to IRC, Telegram or perhaps even an issue tracking system doesn’t really matter as far as the Zabbix side is concerned.

Unfortunately debugging script notifications is a bit of a crafty topic, so to make sure you don’t forget anything here’s a short list of things to do:

  1. Make sure Zabbix-Server has an alert scripts path set up
  2. Put script in alert scripts path
  3. Make script executable (chmod +x)
  4. In Zabbix add a media with type script and the relevant script’s file name
  5. Add a notification method to an enabled Zabbix user
  6. Add an action and associate it with the Zabbix user
  7. Check that new events have a message actions entry for the new action

 

The Fridge: Ubuntu Weekly Newsletter Issue 459

Planet Ubuntu - Mon, 03/28/2016 - 17:56

Welcome to the Ubuntu Weekly Newsletter. This is issue #459 for the weeks of: March 14 – 27, 2016, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Daniel Beck
  • Leonard Viator
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License.

Rhonda D'Vine: Ich bin was ich bin

Planet Ubuntu - Mon, 03/28/2016 - 15:09

As my readers probably are well aware, I wrote my transgender coming out poem Mermaids over 10 years ago, to make it clear to people how I define, what I am and how I would hope they could accept me. I did put it publicly into my blog so I could point people to it. And I still do so regularly. It still comes from the bottom of my heart. And I am very happy that I got the chance to present it in a Poetry Slam last year, it was even recorded and uploaded to YouTube.

There is just one thing that some people have told me every now and then over the years: they would have liked to understand what's going on, but asked, "Why is it in English? My English isn't that good." My usual response was along the lines of that the events that triggered me writing it were in an international context and I wanted to make sure that they understood what I wrote. At that time I didn't realize that I was cutting out a different group of people from being able to understand what's going on inside me.

So this year there was a similar event: the Flawless Poetry Slam, which touched the topics Feminist? Queer? Gender? Rolemodels? - Let's talk about it. I took that as motivation to finally write another text on the topic, this time in German. Unfortunately I wasn't able to present it that evening; I wasn't drawn for the lineup. But I was told that there was another slam going on just last Wednesday, so I went there ... and made it onto the stage! This is the text that I presented there. I am uncertain how well online translators work for you, but I hope you get the core points if you don't understand German:

Ich bin was ich bin
Fünf Worte mit wahrem Sinn:
Ich bin was ich bin

Du denkst: "Mann im Rock?
Das ist ja wohl lächerlich,
der ist sicher schwul."

"Fingernagellack?
Na da schau ich nicht mehr hin,
wer will das schon seh'n."

Jedoch liegst du falsch,
Mit all deinen Punkten, denn:
Ich bin was ich bin.

Ich bin Transgender
Und erlebe mich selber,
ich bin eine Frau.

"Haha, eine Frau?
Wem willst du das weismachen?
Heb mal den Rock hoch!"

Und wie ist's bei dir?
Was ist zwischen den Beinen?
Geht mich das nichts an?

Warum fragst du mich?
Da ist's dann in Ordnung?
Oder vielleicht nicht?

Ich bin was ich bin
Fünf Worte mit ernstem Sinn:
Ich bin was ich bin

Ich steh weiblich hier
Und das hier ist mein Körper
Mein Geschlecht ist's auch

Oberflächlichkeit
Das ist mein größtes Problem
Schlägt mir entgegen

Wenn ich mich öffne
Verständnis fast überall
Es wird akzeptiert

Doch gelegentlich
und das schmerzt mich am meisten
sagt doch mal wer "er"

Von Fremden? Egal
Doch hab ich mich geöffnet
Ist es eine Qual

"Ich seh dich als Mann"
Da ist, was es transportiert
Akzeptanz? Dahin

Meine Pronomen
Wenn ihr über mich redet
sind sie, ihr, ihres

Ich leb was ich leb
Fünf Worte mit tiefem Sinn:
Ich bin was ich bin

"Doch, wie der erst spricht!
Ich meinte, wie sie denn spricht!
Das ist nicht normal."

Ich schreib hier Haikus:
Japanische Gedichtsform
Mit fixem Versmars

Sind fünf, sieben, fünf
Silben in jeder Zeile
Haikus sind simpel

Probier es mal aus
Transportier eine Message
Es macht auch viel Spaß

Wortwahl ist wichtig
Ein guter Thesaurus hilft
Sei kurz und prägnant

Ich sag was ich sag
Fünf Worte mit klugem Sinn:
Ich bin was ich bin

Doch ich schweife ab
Verständnis fast überall?
Wird es akzeptiert?

Erstaunlicherweise
Doch ich bin auch was and'res
Und hier geht's bergab

Eine Sache gibt's
Die erwäh'n ich besser nicht
für die steck ich ein

"Deshalb bin ich hier"
So der Titel eines Lieds
verfasst von Thomas D

"Wenn ich erkläre
warum ich mich wie ernähr"
So weit komm ich nicht

Man erwähnt Vegan
Die Intoleranz ist da
Man ist unten durch

"Mangelerscheinung!"
"Das Essen meines Essens!"
Akzeptanz ade

Hab 'ne Theorie:
Vegan sein: 'ne Entscheidung
Transgender sein nicht

Mensch fühlt sich dann schlecht
dass bei sich selbst die Kraft fehlt
und greift damit an

"Ich könnte das nicht"
Ich verurteile dich nicht
Iss doch was du willst

Ich zwing es nicht auf
Aber Rücksicht wär schon fein
Statt nur Hohn und Schmäh

Ich ess was ich ess
Fünf Worte zum nachdenken:
Ich bin was ich bin

Hope you get the idea. The audience definitely liked it; the jury wasn't so much on board, but that's fine, it's five random people and it's mostly for fun anyway. Later that night though, some things happened that didn't make me feel so comfortable anymore. I went to the loo, waiting in line with the other ladies; a bit later the waitress came along telling me "the men's room is over there". I told her that I'm aware of that and thanked her, which confused her and she said something along the lines of "so you are both, or what?", but went away after that. Her tone and response weren't really giving me much comfort, though none of the other ladies in the line looked at me strangely.
But the most disturbing event after that was finding out that North Carolina signed the bathroom bill, making it illegal for trans people to use the bathroom matching their gender and insisting they use the one for the gender they were assigned at birth. So men like James Sheffield are now forced to go to the lady's restroom, or face getting arrested. Brave new world. :/

So, enjoy the text, don't get too wound up by stupid laws, and hope for time to fix people's discriminatory minds. These laws claim to fix issues that are already regulated: assaults are assaults and are already banned. Arguing that people might get assaulted, and discriminating against trans people on that basis, is totally missing the point, by miles.

/personal | permanent link | Comments: 2 |

Aurélien Gâteau: Yokadi 1.0.2

Planet Ubuntu - Mon, 03/28/2016 - 08:59

Today I released 1.0.2 of Yokadi, the command-line todo list, it comes with a few fixes:

  • Use a more portable way to get the terminal size. This makes it possible to use Yokadi inside Android terminal emulators like Termux
  • Sometimes the task lock used to prevent editing the same task description from multiple Yokadi instances was not correctly released
  • Deleting a keyword from the database caused a crash when a t_list returned tasks which previously contained this keyword

Download it or install it with pip3 install yokadi.

David Tomaschik: Another Milestone: Offensive Security Certified Expert

Planet Ubuntu - Mon, 03/28/2016 - 00:00

This weekend, I attempted what might possibly be my hardest academic feat ever: to pass the Offensive Security Certified Expert exam, the culmination of OffSec’s Cracking the Perimeter course. 48 hours of being pushed to my limits, followed by 24 hours of time to write a report detailing my exploits. I expected quite a challenge, but it really pushed me to my limits. The worst part of all, however, was the 50 hours or so that passed between the time I submitted my exam report and the time I got my response.

For obvious reasons (and to comply with their student code of conduct), I can’t reveal details of the exam nor the exact contents of the course, but I did want to review a few things about it.

The Course

The course covers a variety of topics ranging from web exploitation to bypassing anti-virus to custom shellcoding with egghunters and restricted character sets. The combination of different techniques to exploit services is also covered. While there are web topics that will obviously apply to all operating systems, all of the memory corruption exploits and anti-virus bypass are targeting Windows systems, though the techniques discussed mostly apply to any operating system. (There is discussion of SEH exploits, which is obviously Windows-specific.)

Compared to PWK, there are a number of differences. PWK focuses mostly on identifying, assessing, and exploiting publicly-known vulnerabilities in a network in the setting of a penetration test. CTP focuses on identifying and exploiting newly-discovered vulnerabilities (i.e., 0-days) as well as bypassing limited protections. While PWK has a massive lab environment for you to compromise, the CTP lab environment is much smaller and you have access to all the systems in there. The CTP lab, rather than being a target environment, is essentially a lab for creating proofs-of-concept.

My biggest disappointment with the CTP lab is the lack of novel targets or exercises compared to the material presented in the coursebook and videos. For the most part, you’re recreating the coursebook material and experiencing it for yourself, but I almost felt a bit spoonfed by the availability of information from the coursebook when performing the labs. I would have liked more exercises to practice the described techniques on targets that were not described in the course materials.

Depending on how many hours a day you can spend on it and your previous experience, you may only need 30 days of lab time. I bought 60, but I think I would’ve been fine with 30. (On the other hand, I appreciated having 60 days for the PWK lab.)

The Exam

If you’ve successfully completed (and understood) all of the lab material, you’ll be well-prepared for the exam. The course material prepares you well, and the exam focuses on the core concepts from the course.

The exam has a total of 90 points of challenges, and 75 points are required to pass. I don’t know if everyone’s exam has the same number and point value of challenges (though I suspect they do), but I’ll point out that more than one of my challenges on the exam was worth more than the 15 points you’re allowed to miss. Put another way, some of the challenges are mandatory to complete on the exam.

The exam is difficult, but not overly so if you’re well prepared. I began at about 0800 on Friday, and went until 0100 Saturday morning, then slept for about 5 hours, then put in another 3 or 4 hours of effort. At that point, I had managed the core of all of the objectives and felt I had refined my techniques and exploits as far as I could. Though there was a point or two where it could have gotten better, I wasn’t sure I could do that in even 24 hours, so I moved on to the report – I figured I’d rather get a good report and have access to the lab to get any last minute data, screenshots, or fix anything I realized I screwed up. About noon, I had my report done and emailed, and began waiting for results. The fact that my F5 key is now worn down is purely coincidence. :)

Tips:

  • Be well rested when you begin.
  • Don’t think you’ll power through the full 48 hours. At a certain energy level, you’ve hit a point of diminishing returns and will start even working backwards by making more mistakes than you can make headway.
  • You’ll want caffeine, but moderate your intake. Jumpy and jittery isn’t clear-headed either.
  • Take good notes. You’ll thank yourself when you write the report.
Conclusion

The Cracking the Perimeter class is totally worth the experience. Before this class, I’d never implemented an egghunter, and I’d barely even touched Win32 exploitation. Though some people have complained that the material is dated, I believe it’s a case of “you have to walk before you can run”, and I definitely feel the material is still relevant. (That’s not to say it couldn’t use a bit of an update, but it’s definitely useful.) Now I have to find my next course. (Too bad AWE and AWAE are always all full-up at Black Hat!)

Salih Emin: uCareSystem Core v3.0 released and available in PPA

Planet Ubuntu - Sun, 03/27/2016 - 15:03
uCareSystem Core is a small application that automates basic system maintenance processes. It is now also available to install via a PPA if you are interested in automatically receiving new features.

Stéphane Graber: LXD 2.0: Resource control [4/12]

Planet Ubuntu - Sat, 03/26/2016 - 15:43

This is the fourth blog post in this series about LXD 2.0.

Available resource limits

LXD offers a variety of resource limits. Some of those are tied to the container itself, like memory quotas, CPU limits and I/O priorities. Some are tied to a particular device instead, like I/O bandwidth or disk usage limits.

As with all LXD configuration, resource limits can be dynamically changed while the container is running. Some may fail to apply, for example if setting a memory value smaller than the current memory usage, but LXD will try anyway and report back on failure.

All limits can also be inherited through profiles in which case each affected container will be constrained by that limit. That is, if you set limits.memory=256MB in the default profile, every container using the default profile (typically all of them) will have a memory limit of 256MB.
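
For example, setting that profile-wide limit is a single command (the same syntax is covered again in the examples further down):

lxc profile set default limits.memory 256MB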

We don’t support resource limits pooling where a limit would be shared by a group of containers, there is simply no good way to implement something like that with the existing kernel APIs.

Disk

This is perhaps the most requested and obvious one. Simply setting a size limit on the container’s filesystem and have it enforced against the container.

And that’s exactly what LXD lets you do!
Unfortunately this is far more complicated than it sounds. Linux doesn’t have path-based quotas; instead, most filesystems only have user and group quotas, which are of little use to containers.

This means that right now LXD only supports disk limits if you’re using the ZFS or btrfs storage backend. It may be possible to implement this feature for LVM too but this depends on the filesystem being used with it and gets tricky when combined with live updates as not all filesystems allow online growth and pretty much none of them allow online shrink.

CPU

When it comes to CPU limits, we support 4 different things:

  • Just give me X CPUs
    In this mode, you let LXD pick a bunch of cores for you and then load-balance things as more containers and CPUs go online/offline.
    The container only sees that number of CPUs.
  • Give me a specific set of CPUs (say, core 1, 3 and 5)
    Similar to the first mode except that no load-balancing is happening, you’re stuck with those cores no matter how busy they may be.
  • Give me 20% of whatever you have
    In this mode, you get to see all the CPUs but the scheduler will restrict you to 20% of the CPU time but only when under load! So if the system isn’t busy, your container can have as much fun as it wants. When containers next to it start using the CPU, then it gets capped.
  • Out of every measured 200ms, give me 50ms (and no more than that)
    This mode is similar to the previous one in that you get to see all the CPUs but this time, you can only use as much CPU time as you set in the limit, no matter how idle the system may be. On a system without over-commit this lets you slice your CPU very neatly and guarantees constant performance to those containers.

It’s also possible to combine one of the first two with one of the last two, that is, request a set of CPUs and then further restrict how much CPU time you get on those.

On top of that, we also have a generic priority knob which is used to tell the scheduler who wins when you’re under load and two containers are fighting for the same resource.

Memory

Memory sounds pretty simple, just give me X MB of RAM!

And it absolutely can be that simple. We support that kind of limits as well as percentage based requests, just give me 10% of whatever the host has!

Then we support some extra stuff on top. For example, you can choose to turn swap on and off on a per-container basis and if it’s on, set a priority so you can choose what container will have their memory swapped out to disk first!

Oh and memory limits are “hard” by default. That is, when you run out of memory, the kernel out of memory killer will start having some fun with your processes.

Alternatively you can set the enforcement policy to “soft”, in which case you’ll be allowed to use as much memory as you want so long as nothing else is. As soon as something else wants that memory, you won’t be able to allocate anything until you’re back under your limit or until the host has memory to spare again.

Network I/O

Network I/O is probably our simplest looking limit, trust me, the implementation really isn’t simple though!

We support two things. The first is a basic bit/s limit on network interfaces. You can set a limit on ingress and egress, or just set the “max” limit which then applies to both. This is only supported for “bridged” and “p2p” type interfaces.

The second thing is a global network I/O priority which only applies when the network interface you’re trying to talk through is saturated.

Block I/O

I kept the weirdest for last. It may look straightforward and feel like that to the user but there are a bunch of cases where it won’t exactly do what you think it should.

What we support here is basically identical to what I described in Network I/O.

You can set IOps or byte/s read and write limits directly on a disk device entry and there is a global block I/O priority which tells the I/O scheduler who to prefer.

The weirdness comes from how and where those limits are applied. Unfortunately the underlying feature we use to implement those uses full block devices. That means we can’t set per-partition I/O limits let alone per-path.

It also means that when using ZFS or btrfs which can use multiple block devices to back a given path (with or without RAID), we effectively don’t know what block device is providing a given path.

This means that it’s entirely possible, in fact likely, that a container may have multiple disk entries (bind-mounts or straight mounts) which are coming from the same underlying disk.

And that’s where things get weird. To make things work, LXD has logic to guess what block devices back a given path; this includes interrogating the ZFS and btrfs tools and even figuring things out recursively when it finds a loop-mounted file backing a filesystem.

That logic, while not perfect, usually yields a set of block devices that should have a limit applied. LXD then records that and moves on to the next path. When it’s done looking at all the paths, it gets to the very weird part. It averages the limits you’ve set for every affected block device and then applies those.

That means that “on average” you’ll be getting the right speed in the container, but it also means that you can’t have a “/fast” and a “/slow” directory both coming from the same physical disk and with differing speed limits. LXD will let you set it up but in the end, they’ll both give you the average of the two values.

How does it all work?

Most of the limits described above are applied through the Linux kernel Cgroups API. That’s with the exception of the network limits which are applied through good old “tc”.

At startup time, LXD detects what cgroups are enabled in your kernel and will only apply the limits which your kernel supports. Should you be missing some cgroups, a warning will be printed by the daemon and then logged by your init system.

On Ubuntu 16.04, everything is enabled by default with the exception of swap memory accounting which requires you pass the “swapaccount=1” kernel boot parameter.
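
A sketch of how you might enable that on a GRUB-based Ubuntu system (edit /etc/default/grub as root, append the parameter, then regenerate the configuration and reboot):

# in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash swapaccount=1"
sudo update-grub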

Applying some limits

All the limits described above are applied directly to the container or to one of its profiles. Container-wide limits are applied with:

lxc config set CONTAINER KEY VALUE

or for a profile:

lxc profile set PROFILE KEY VALUE

while device-specific ones are applied with:

lxc config device set CONTAINER DEVICE KEY VALUE

or for a profile:

lxc profile device set PROFILE DEVICE KEY VALUE

The complete list of valid configuration keys, device types and device keys can be found here.

CPU

To just limit a container to any 2 CPUs, do:

lxc config set my-container limits.cpu 2

To pin to specific CPU cores, say the second and fourth:

lxc config set my-container limits.cpu 1,3

More complex pinning ranges like this works too:

lxc config set my-container limits.cpu 0-3,7-11

The limits are applied live, as can be seen in this example:

stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc config set zerotier limits.cpu 2
stgraber@dakara:~$ lxc exec zerotier -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1

Note that to avoid utterly confusing userspace, lxcfs arranges the /proc/cpuinfo entries so that there are no gaps.

As with just about everything in LXD, those settings can also be applied in profiles:

stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2
processor : 3
stgraber@dakara:~$ lxc profile set default limits.cpu 3
stgraber@dakara:~$ lxc exec snappy -- cat /proc/cpuinfo | grep ^proces
processor : 0
processor : 1
processor : 2

To limit the CPU time of a container to 10% of the total, set the CPU allowance:

lxc config set my-container limits.cpu.allowance 10%

Or to give it a fixed slice of CPU time:

lxc config set my-container limits.cpu.allowance 25ms/200ms

And lastly, to reduce the priority of a container to a minimum:

lxc config set my-container limits.cpu.priority 0

Memory

To apply a straightforward memory limit run:

lxc config set my-container limits.memory 256MB

(The supported suffixes are kB, MB, GB, TB, PB and EB)

To turn swap off for the container (defaults to enabled):

lxc config set my-container limits.memory.swap false

To tell the kernel to swap this container’s memory first:

lxc config set my-container limits.memory.swap.priority 0

And finally if you don’t want hard memory limit enforcement:

lxc config set my-container limits.memory.enforce soft

Disk and block I/O

Unlike CPU and memory, disk and I/O limits are applied to the actual device entry, so you either need to edit the original device or mask it with a more specific one.

To set a disk limit (requires btrfs or ZFS):

lxc config device set my-container root size 20GB

For example:

stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem                         Size  Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier  179G  542M  178G   1% /
stgraber@dakara:~$ lxc config device set zerotier root size 20GB
stgraber@dakara:~$ lxc exec zerotier -- df -h /
Filesystem                         Size  Used Avail Use% Mounted on
encrypted/lxd/containers/zerotier   20G  542M   20G   3% /

To restrict speed you can do the following:

lxc config device set my-container root limits.read 30MB
lxc config device set my-container root limits.write 10MB

Or to restrict IOps instead:

lxc config device set my-container root limits.read 20Iops
lxc config device set my-container root limits.write 10Iops

And lastly, if you’re on a busy system with over-commit, you may want to also do:

lxc config set my-container limits.disk.priority 10

To increase the I/O priority for that container to the maximum.

Network I/O

Network I/O is basically identical to block I/O as far as the available knobs go.

For example:

stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:34-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'

/dev/null                 100%[===================>] 100.00M  58.7MB/s   in 1.7s

2016-03-26 22:17:36 (58.7 MB/s) - '/dev/null' saved [104857600/104857600]

stgraber@dakara:~$ lxc profile device set default eth0 limits.ingress 100Mbit
stgraber@dakara:~$ lxc profile device set default eth0 limits.egress 100Mbit
stgraber@dakara:~$ lxc exec zerotier -- wget http://speedtest.newark.linode.com/100MB-newark.bin -O /dev/null
--2016-03-26 22:17:47-- http://speedtest.newark.linode.com/100MB-newark.bin
Resolving speedtest.newark.linode.com (speedtest.newark.linode.com)... 50.116.57.237, 2600:3c03::4b
Connecting to speedtest.newark.linode.com (speedtest.newark.linode.com)|50.116.57.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: '/dev/null'

/dev/null                 100%[===================>] 100.00M  11.4MB/s   in 8.8s

2016-03-26 22:17:56 (11.4 MB/s) - '/dev/null' saved [104857600/104857600]

And that’s how you throttle an otherwise nice gigabit connection to a mere 100Mbit/s one!

And as with block I/O, you can set an overall network priority with:

lxc config set my-container limits.network.priority 5

Getting the current resource usage

The LXD API exports quite a bit of information on current container resource usage, you can get:

  • Memory: current, peak, current swap and peak swap
  • Disk: current disk usage
  • Network: bytes and packets received and transferred for every interface

And now if you’re running a very recent LXD (only in git at the time of this writing), you can also get all of those in “lxc info”:

stgraber@dakara:~$ lxc info zerotier
Name: zerotier
Architecture: x86_64
Created: 2016/02/20 20:01 UTC
Status: Running
Type: persistent
Profiles: default
Pid: 29258
Ips:
  eth0:   inet    172.17.0.101
  eth0:   inet6   2607:f2c0:f00f:2700:216:3eff:feec:65a8
  eth0:   inet6   fe80::216:3eff:feec:65a8
  lo:     inet    127.0.0.1
  lo:     inet6   ::1
  lxcbr0: inet    10.0.3.1
  lxcbr0: inet6   fe80::f0bd:55ff:feee:97a2
  zt0:    inet    29.17.181.59
  zt0:    inet6   fd80:56c2:e21c:0:199:9379:e711:b3e1
  zt0:    inet6   fe80::79:e7ff:fe0d:5123
Resources:
  Processes: 33
  Disk usage:
    root: 808.07MB
  Memory usage:
    Memory (current): 106.79MB
    Memory (peak): 195.51MB
    Swap (current): 124.00kB
    Swap (peak): 124.00kB
  Network usage:
    lxcbr0:
      Bytes received: 0 bytes
      Bytes sent: 570 bytes
      Packets received: 0
      Packets sent: 0
    zt0:
      Bytes received: 1.10MB
      Bytes sent: 806 bytes
      Packets received: 10957
      Packets sent: 10957
    eth0:
      Bytes received: 99.35MB
      Bytes sent: 5.88MB
      Packets received: 64481
      Packets sent: 64481
    lo:
      Bytes received: 9.57kB
      Bytes sent: 9.57kB
      Packets received: 81
      Packets sent: 81
Snapshots:
  zerotier/blah (taken at 2016/03/08 23:55 UTC) (stateless)

Conclusion

The LXD team spent quite a few months iterating over the language we’re using for those limits. It’s meant to be as simple as it can get while remaining very powerful and specific when you want it to.

Live application of those limits and inheritance through profiles makes it a very powerful tool to live manage the load on your servers without impacting the running services.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net

And if you don’t want or can’t install LXD on your own machine, you can always try it online instead!

Ben Collins: Easy Rake-based Deployment for Git-hosted Rails Apps

Planet Ubuntu - Fri, 03/25/2016 - 10:57
I searched a lot of places for an easy way to automate my deployments for OpenStrokes, a website I've been working on. Some things were just too complex (capistrano) and some were way too simple (StackOverflow answers that didn't do everything, or didn't check for errors).

So, as most people do, I wrote my own. Hopefully this short rake task can help you as well. This assumes that your application server has your app checked out as a clone of some git repo you push changes to and that you are running under passenger. When I want to deploy, I log in to my production server, cd to my app repo, and then run:

rake myapp:deploy
For strictly view-only updates, it completes in 3 seconds or less. There are several things it does, in addition to checking for errors:

  • Checks to make sure the app's git checkout isn't dirty from any local edits.
  • Fetches the remote branch and checks if there are any new commits, exits if not.
  • Tags the current production code base before pulling the changes.
  • Does a git pull with fast-forward only (to avoid unexpected merging).
  • Checks if there are any new gems to install via bundle (checks for changes in Gemfile and Gemfile.lock).
  • Checks if there are any database migrations that need to be done (checks for changes to db/schema.db db/migrations/*).
  • Checks for possible changes to assets and precompiles if needed (checks Gemfile.lock and app/assets/*).
  • Restarts passenger to pick up the changes.
  • Does a HEAD request on / to make sure it gets an expected 200 showing the server is running without errors.

The script can also take a few arguments:

  • :branch Git branch, defaults to master
  • :remote Git remote, defaults to origin
  • :server_url URL for HEAD request to check server after completion

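Arguments use the standard rake bracket syntax; the values below are purely illustrative (zsh users will need to quote the whole thing):

rake myapp:deploy[production,origin,https://www.example.com/]
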
Note, if the task encounters an error, you have to manually complete the deploy. You should not rerun the task.

And finally, here is the task itself. You can save it to lib/tasks/myapp.rake so Rails picks it up automatically.

# We can't use Rake::Task because it can fail when things are mid
# upgrade

require "net/http"

def do_at_exit(start_time)
  puts "Time: #{(Time.now - start_time).round(3)} secs"
end

def start_timer
  start_time = Time.now
  at_exit { do_at_exit(start_time) }
end

namespace :myapp do
  desc 'Deployment automation'
  task :deploy, [:branch, :remote, :server_url] do |t, args|
    start_timer

    # Arg supersedes env, which supersedes default
    branch = args[:branch] || ENV['DEPLOY_BRANCH'] || 'master'
    remote = args[:remote] || ENV['DEPLOY_REMOTE'] || 'origin'
    server_url = args[:server_url] || ENV['DEPLOY_SERVER_URL'] || 'http://localhost/'

    puts "II: Starting deployment..."

    # Check for dirty repo
    unless system("git diff --quiet")
      puts "WW: Refusing to deploy on a dirty repo, exiting."
      exit 1
    end

    # Update from remote so we can check for what to do
    system("git fetch -n #{remote}")

    # See if there's anything new at all
    if system("git diff --quiet HEAD..#{remote}/#{branch} --")
      puts "II: Nothing new, exiting"
      exit
    end

    # Tag this revision...
    tag = "prev-#{DateTime.now.strftime("%Y%m%dT%H%M%S")}"
    system("git tag -f #{tag}")

    # Pull in the changes
    if ! system("git pull --ff-only #{remote} #{branch}")
      puts "EE: Failed to fast-forward to #{branch}"
      exit 1
    end

    # Base command to check for differences
    cmd = "git diff --quiet #{tag}..HEAD"

    if system("#{cmd} Gemfile Gemfile.lock")
      puts "II: No updates to bundled gems"
    else
      puts "II: Running bundler..."
      Bundler.with_clean_env do
        if ! system("bundle install")
          puts "EE: Error running bundle install"
          exit 1
        end
      end
    end

    if system("#{cmd} db/schema.rb db/migrate/")
      puts "II: No db changes"
    else
      puts "II: Running db migrations..."
      # We run this as a sub process to avoid errors
      if ! system("rake db:migrate")
        puts "EE: Error running db migrations"
        exit 1
      end
    end

    if system("#{cmd} Gemfile.lock app/assets/")
      puts "II: No changes to assets"
    else
      puts "II: Running asset updates..."
      if ! system("rake assets:precompile")
        puts "EE: Error precompiling assets"
        exit 1
      end
      system("rake assets:clean")
    end

    puts "II: Restarting Passenger..."
    FileUtils.touch("tmp/restart.txt")

    puts "II: Checking HTTP response code..."

    uri = URI.parse(server_url)
    res = nil

    Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
      req = Net::HTTP::Head.new(uri, {'User-Agent' => 'deploy/net-check'})
      res = http.request req
    end

    if res.code != "200"
      puts "EE: Server returned #{res.code}!!!"
      exit 1
    else
      puts "II: Everything appears to be ok"
    end
  end
end
Here's an example of the command output:

$ rake myapp:deploy
II: Starting deployment...
remote: Counting objects: 15, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 6), reused 0 (delta 0)
Unpacking objects: 100% (8/8), done.
From /home/user/myapp
efee45c..e5468c1 master -> origin/master
From /home/user/myapp
* branch master -> FETCH_HEAD
Updating efee45c..e5468c1
Fast-forward
app/views/users/_display.html.erb | 7 +++++--
public/svg/badges/caretakers-club.svg | 1 -
2 files changed, 5 insertions(+), 3 deletions(-)
delete mode 100644 public/svg/badges/caretakers-club.svg
II: No updates to bundled gems
II: No db changes
II: No changes to assets
II: Restarting Passenger...
II: Checking HTTP response code...
II: Everything appears to be ok
Time: 3.031 secs

Daniel Pocock: With Facebook, everybody can betray Jesus

Planet Ubuntu - Fri, 03/25/2016 - 10:43

It's Easter time again and many of those who are Christian will be familiar with the story of the Last Supper and the subsequent betrayal of Jesus by his friend Judas.

If Jesus was around today and didn't immediately die from a heart attack after hearing about the Bishop of Bling (who spent $500,000 just renovating his closet and flew first class to visit the poor in India), how many more of his disciples would betray him and each other by tagging him in selfies on Facebook? Why do people put the short term indulgence of social media ahead of protecting privacy in the long term? Is this how you treat your friends?
