Feed aggregator

The Fridge: Ubuntu Weekly Newsletter Issue 485

Planet Ubuntu - Mon, 10/31/2016 - 17:30

Ubuntu Weekly Newsletter Issue 485

The Fridge - Mon, 10/31/2016 - 17:30

Ubuntu Insights: Webinar: Managing diverse workloads with Juju

Planet Ubuntu - Mon, 10/31/2016 - 04:00


Watch On Demand 

In this webinar we explore the common challenges organisations face in deploying and managing Windows workloads at scale on OpenStack. We look at how Windows Charms can integrate seamlessly with existing Juju Charms to provide a powerful way to reduce workload deployments from days to minutes on both bare metal and public or private clouds. We also cover the benefits of Microsoft on OpenStack, Hyper-V integration and Microsoft’s support of Hyper-V in OpenStack.

Canonical’s award-winning model-driven operations system Juju enables reusable, open source operations across hybrid cloud and physical infrastructure. Integration and operations are encoded in “charms” by vendors and the community of experts familiar with an app. These charms are reused by ops teams as standardised operations code that evolves along with the software itself. Our partner Cloudbase has created many Windows charms to take advantage of the benefits Juju has to offer.

Webinar Details

Title: Deploying Microsoft technologies on OpenStack
Presented By: Alessandro Pilotti, CEO of Cloudbase Solutions, and Mark Baker, Ubuntu Server and Cloud Product Manager at Canonical.
Time/Date: available now on-demand

Watch this on-demand webinar to learn…
  • How to use Windows and Juju
  • The value of integrating these technologies into OpenStack
  • The types of Windows workloads available with Juju

Watch on Demand

Eric Hammond: AWS Lambda Static Site Generator Plugins

Planet Ubuntu - Mon, 10/31/2016 - 02:41

starting with Hugo!

A week ago, I presented a CloudFormation template for an AWS Git-backed Static Website stack. If you are not familiar with it, please review the features of this complete Git + static website CloudFormation stack.

This weekend, I extended the stack to support a plugin architecture to run the static site generator of your choosing against your CodeCommit Git repository content. You specify the AWS Lambda function at stack launch time using CloudFormation parameters (ZIP location in S3).
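
For orientation, launching the stack with a generator plugin from the AWS CLI looks roughly like this; treat it as a sketch, since the parameter names are hypothetical stand-ins for whatever the template actually defines:

# Parameter names below are hypothetical; use the ones defined by the template.
aws cloudformation create-stack \
  --stack-name my-static-site \
  --template-url https://s3.amazonaws.com/<bucket>/<template>.yml \
  --capabilities CAPABILITY_IAM \
  --parameters \
      ParameterKey=GeneratorLambdaFunctionS3Bucket,ParameterValue=<bucket> \
      ParameterKey=GeneratorLambdaFunctionS3Key,ParameterValue=<plugin-zip-key>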

The first serious static site generator plugin is for Hugo, but others can be added with or without my involvement and used with the same unmodified CloudFormation template.

The Git-backed static website stack automatically invokes the static site generator whenever the site source is updated in the CodeCommit Git repository. It then syncs the generated static website content to the S3 bucket where the stack serves it over a CDN using https with DNS served by Route 53.

I have written three AWS Lambda static site generator plugins to demonstrate the concept and to serve as templates for new plugins:

  1. Identity transformation plugin - This copies the entire Git repository content to the static website with no modifications. This is currently the default plugin for the static website CloudFormation template.

  2. Subdirectory plugin - This plugin is useful if your Git repository has files that should not be included as part of the static site. It publishes a specified subdirectory (e.g., “htdocs” or “public-html”) as the static website, keeping the rest of your repository private.

  3. Hugo plugin - This plugin runs the popular Hugo static site generator. The Git repository should include all source templates, content, theme, and config.

You are welcome to use any of these plugins when running an AWS Git-backed Static Website stack. The documentation in each of the above plugin repositories describes how to set the CloudFormation template parameters on stack create.

You may also write your own AWS Lambda function static site generator plugin using one of the above as a starting point. Let me know if you write plugins; I may add new ones to the list above.

The sample AWS Lambda handler plugin code takes care of downloading the source and uploading the resulting site, and can be copied as is. All you have to do is fill in the “generate_static_site” code to generate the site from the source.

The plugin code for Hugo is basically this:

import os

def generate_static_site(source_dir, site_dir, user_parameters):
    # Build the Hugo command, pointing it at the checked-out source and the output directory
    command = "./hugo --source=" + source_dir + " --destination=" + site_dir
    # Pass through any extra user-supplied flags
    if user_parameters.startswith("-"):
        command += " " + user_parameters
    # Run Hugo and send its output to the Lambda log
    print(os.popen(command).read())

I have provided build scripts so that you can build the sample AWS Lambda functions yourself, because you shouldn’t trust other people’s blackbox code if you can help it. That said, I have also made it easy to use pre-built AWS Lambda function ZIP files to try this out.

This CloudFormation template and these AWS Lambda functions are very new and somewhat experimental. Please let me know where you run into issues using them and I’ll update the documentation. I also welcome pull requests, especially if you work with me in advance to make sure the proposed changes fit the vision for this stack.

Original article and comments: https://alestic.com/2016/10/aws-static-site-generator-plugins/

Ubuntu Insights: Dirty COW was Livepatched in Ubuntu within Hours of Publication

Planet Ubuntu - Mon, 10/31/2016 - 02:00

If you haven’t heard about last week’s Dirty COW vulnerability, I hope all of your Linux systems are automatically patching themselves…

Why? Because every single Linux-based phone, router, modem, tablet, desktop, PC, server, virtual machine, and absolutely everything in between — including all versions of Ubuntu since 2007 — was vulnerable to this face-palming critical security vulnerability.

Any non-root local user of a vulnerable system can easily exploit the vulnerability and become the root user in a matter of a few seconds. Watch…

Coincidentally, just before the vulnerability was published, we released the Canonical Livepatch Service for Ubuntu 16.04 LTS. The thousands of users who enabled canonical-livepatch on their Ubuntu 16.04 LTS systems within those first few hours received and applied the fix to Dirty COW, automatically, in the background, and without rebooting!

If you haven’t already enabled the Canonical Livepatch Service on your Ubuntu 16.04 LTS systems, you should really consider doing so, with 3 easy steps:

  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap

    $ sudo snap install canonical-livepatch

  3. Enable the service with your token

    $ sudo canonical-livepatch enable [TOKEN]

And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Let’s retry that same vulnerability, on the same system, but this time, having been livepatched…

Aha! Thwarted!

So that’s the Ubuntu 16.04 LTS kernel space… What about userspace? Most of the other recent, branded vulnerabilities (Heartbleed, ShellShock, CRIME, BEAST) have been critical vulnerabilities in userspace packages.

As of Ubuntu 16.04 LTS, the unattended-upgrades package is now part of the default package set, so you should already have it installed on your Ubuntu desktops and servers. If you don’t already have it installed, you can install it with:

$ sudo apt install unattended-upgrades

And moreover, as of Ubuntu 16.04 LTS, the unattended-upgrades package automatically downloads and installs important security updates once per day, automatically patching critical security vulnerabilities and keeping your Ubuntu systems safe by default. Older versions of Ubuntu (or Ubuntu systems that upgraded to 16.04) might need to enable this behavior using:

$ sudo dpkg-reconfigure unattended-upgrades
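
For reference, the daily schedule ends up in /etc/apt/apt.conf.d/20auto-upgrades; with automatic security updates enabled it should contain something like:

$ cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";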

With that combination enabled — (1) automatic livepatches to your kernel, plus (2) automatic application of security package updates — Ubuntu 16.04 LTS is the most secure Linux distribution to date. Period.

If you want to enable the Canonical Livepatch Service on more than three machines, please purchase an Ubuntu Advantage support package from buy.ubuntu.com or get in touch.

Cesar Sevilla: 1er Festival Universitario de Tecnologías Libres

Planet Ubuntu - Fri, 10/28/2016 - 15:09

By popular demand, and through the Unidad Territorial – Fundacite Zulia (UTZ), we bring you the “Festival Universitario de Tecnologías Libres” (University Festival of Free Technologies). Its goal is to drive the growth and development of Free and Open Source Software within institutional education in the western region, in order to achieve scientific and technological independence in software, building on the national decrees and laws upheld by the Venezuelan state.

The festival will take place in the auditorium of the Universidad José Gregorio Hernández on Friday 11/11/16 from 8:00 am to 12:00 pm and from 2:00 pm to 5:00 pm, and on Saturday 12/11/16 from 9:00 am to 12:00 pm.

On Friday we will hold 5 technical talks, and on Saturday 2 talks plus an install fest for free operating systems and/or free software tools.

It is worth highlighting that we will have several representatives at the national level, among them representatives of the Cámara Venezolana de la Industria Tecnológica, the Laboratorio Vivencial, the Ubuntu-ve community, and other organizations, with the aim of presenting the various technology projects built with free tools.

If you would like to take part in this activity, remember that you must register at the following link: https://www.eventbrite.es/e/entradas-festival-universitario-de-tecnologias-libres-2016-27283438499

If you would like more information, you can contact me by email at csevilla@fundacite-zulia.gob.ve


Salih Emin: How to recover deleted photos and files from your smartphone’s SD card

Planet Ubuntu - Fri, 10/28/2016 - 09:06
In this tutorial, let's assume that we have accidentally deleted files from an SD card or a USB thumb drive. We will then try to recover them using the PhotoRec app.
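
For reference, a minimal PhotoRec session looks roughly like this (PhotoRec ships in the testdisk package; the device name /dev/sdb is only an assumption, so check yours with lsblk first):

sudo apt install testdisk
lsblk                   # identify the SD card or USB stick, e.g. /dev/sdb
sudo photorec /dev/sdb  # interactive; recovered files end up in recup_dir.* folders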

Costales: A new uWriter for Ubuntu Phone

Planet Ubuntu - Fri, 10/28/2016 - 08:57
uWriter, an offline text editor for our Ubuntu Phone/Tablet.

#productivity
In the new release, all your new documents will be stored in: ~/.local/share/uwp.costales/*.html
And it will have full OS integration!

  • More nice UI
  • Load/Save on local files
  • Menu
Enjoy it from the Ubuntu Store |o/

Alessio Treglia: The logical contradictions of the Universe

Planet Ubuntu - Fri, 10/28/2016 - 05:16

Ouroboros

Is Erwin Schrödinger’s wave function – which did for the atomic and subatomic world something very similar to what Newton did for the macroscopic world – an objective reality or just subjective knowledge? Physicists, philosophers and epistemologists have debated this matter at length. In 1960, theoretical physicist Eugene Wigner proposed that the observer’s consciousness is the dividing line that triggers the collapse of the wave function[1], a theory that was later taken up and developed in recent years. “The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse”[2].

The English mathematical physicist and philosopher of science Roger Penrose developed the hypothesis called Orch-OR (Orchestrated objective reduction) according to which consciousness originates from processes within neurons, rather than from the connections between neurons (the conventional view). The mechanism is believed to be a quantum physical process called objective reduction which is orchestrated by the molecular structures of the microtubules of brain cells (which constitute the cytoskeleton of the cells themselves). Together with the physician Stuart Hameroff, Penrose has suggested a direct relationship between the quantum vibrations of microtubules and the formation of consciousness.

<Read More…[by Fabio Marzocca]>

Stéphane Graber: Network management with LXD (2.3+)

Planet Ubuntu - Thu, 10/27/2016 - 20:53

Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that “lxdbr0” bridge that “lxd init” would have you configure, provide your own or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.address: 10.150.19.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:474b:622d:259d::1/64
  ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.address: 10.0.3.1/24
  ipv4.nat: "true"
  ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container eth0

Now, let’s say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge; just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME | STATE   | IPV4              | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
| c1   | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And same goes for IPv6 but with the “ipv6.address” property instead.

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (such as nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
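
For example, switching the bridge to dynamic registration and giving it a custom DNS domain uses the same “lxc network set” mechanism as above (the domain value is just an example):

stgraber@castiana:~$ lxc network set testbr0 dns.domain lxd.example
stgraber@castiana:~$ lxc network set testbr0 dns.mode dynamic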

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

An LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is by using a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join it. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS, … the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.id 10
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.

And that’s how easy cross-host networking is with recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Ubuntu Podcast from the UK LoCo: S09E35 – Red Nun - Ubuntu Podcast

Planet Ubuntu - Thu, 10/27/2016 - 07:00

It’s Episode Thirty-Five of Season-Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Paul Tansom are connected and speaking to your brain.

We are four, made whole by a new guest presenter.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Ubuntu Insights: Travel-friendly Lemur Ubuntu Laptop Updated to Kaby Lake

Planet Ubuntu - Thu, 10/27/2016 - 05:11

This is a guest post by Ryan Sipes, community manager at System76. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

We would like to introduce you to the newest version of the extremely portable Lemur laptop. Like all System76 laptops the Lemur ships with Ubuntu, and you can choose between 16.04 LTS or the newest 16.10 release.

About System76

System76 is based out of Denver, Colorado and has been making Ubuntu computers for ten years. Creating great machines born to run Linux is our sole purpose. Members of our team are contributors to many different open source projects and we send our work enabling hardware on our computers upstream, to the benefit of everyone running our favorite operating system.

Our products have been praised as the best machines born to run Linux by fans including Chris Fisher of The Linux Action Show and Leo Laporte of This Week in Tech. We pride ourselves on offering fantastic products and providing first-class support to our users. Our support staff are themselves Linux/Ubuntu users and open source contributors, like Emma Marshall, who is a host on the Ubuntu podcast.

About the Lemur

This is our 7th generation release of the Lemur, and it’s now 10% faster with the 7th gen Intel processor (Kaby Lake). Loaded with the newest Intel graphics, up to 32GB of DDR4 memory, and a USB Type-C port, this Lemur enables more powerful multitasking on the go.

Weighing in at 3.6 lbs, this beauty is light enough to carry from meeting to meeting, or across campus. The Lemur design is thin, built with a handle grip at the back of the laptop, allowing you to easily grasp your Lemur and rush off to your next location.

The Lemur retains its reputation as the perfect option for those who want a high-quality portable Linux laptop at an affordable price (starting at only $699 USD).

You can see the full tech specs and other details about the Lemur here.

About the author
Ryan Sipes is the Community Manager at System76. He is a regular guest on podcasts over at Jupiter Broadcasting, like The Linux Action Show and Linux Unplugged. He helped organize the first Kansas Linux Fest and the Lawrence Linux User Group. Ryan is also a longtime Ubuntu user (since Warty Warthog), and an enthusiastic open source evangelist.

Ubuntu Insights: Snapping Cuberite

Planet Ubuntu - Thu, 10/27/2016 - 04:17

This is a guest post by James Tait, Software Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

I’m a father of two pre-teens, and like many kids their age (and many adults, for that matter) they got caught up in the craze that is Minecraft. In our house we adopted Minetest as a Free alternative to begin with, and had lots of fun and lots of arguments! Somewhere along the way, they decided they’d like to run their own server and share it with their friends. But most of those friends were using Windows and there was no Windows client for Minetest at the time. And so it came to pass that I would trawl the internet looking for Free Minecraft server software, and eventually stumble upon Cuberite (formerly MCServer), “a lightweight, fast and extensible game server for Minecraft”.

Cuberite is an actively developed project. At the time of writing, there are 16 open pull requests against the server itself, of which five are from the last week. Support for protocol version 1.10 has recently been added, along with spectator view and a steady stream of bug fixes. It is automatically built by Jenkins on each commit to master, and the resulting artefacts are made available on the website as .tar.gz and .zip files. The server itself runs in-place; that is to say that you just unpack the archive and run the Cuberite binary and the data files are created alongside it, so everything is self-contained. This has the nice side-effect that you can download the server once, copy or symlink a few files into a new directory and run a separate instance of Cuberite on a different port, say for testing.
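
As a rough sketch of that run-in-place trick (the directory layout is an assumption, and settings.ini is assumed to be where the listen port is configured), a second test instance can be as simple as:

$ mkdir ~/cuberite-test && cd ~/cuberite-test
$ # Share the binary and static assets; per-instance config and world data stay local
$ ln -s ~/cuberite/Server/{Cuberite,Plugins,Prefabs,lang,webadmin} .
$ cp ~/cuberite/Server/{brewing.txt,crafting.txt,furnace.txt,items.ini,monsters.ini} .
$ ./Cuberite              # first run generates settings.ini and friends in this directory
$ $EDITOR settings.ini    # give this instance its own port, then start it again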

All of this sounds great, and mostly it is. But there are a few wrinkles that just made it a bit of a chore:

  • No formal releases. OK, while there are official build artifacts, there are no milestones, no version numbers
  • No package management! No version numbers means no managed package. We just get an archive with a self-contained build directory
  • No init scripts. When I restart my server, I want the Minecraft server to be ready to play, so I need init scripts

Now none of these problems is insurmountable. We can put the work in to build distro packages for each distribution from git HEAD. We can contribute upstart and systemd and sysvinit scripts. We can run a cron job to poll for new releases. But, frankly, it just seems like a lot of work.

In truth I’d done a lot of manual work already to build Cuberite from source, create a couple of independent instances, and write init scripts. I’d become somewhat familiar with the build process, which basically amounted to something like:

$ cd src/cuberite
$ git pull
$ git submodule update --init
$ cd Release
$ cmake -DCMAKE_BUILD_TYPE=RELEASE -DNO_NATIVE_OPTIMIZATION=ON ..
$ make

This builds the release binaries and copies the plugins and base data files into the Server subdirectory, which is what the Jenkins builds then compress and make available as artifacts. I’d then do a bit of extra work: I’ve been running this in a dedicated lxc container, and keeping a production and a test instance running so we could experiment with custom plugins, so I would:

$ cd ../Server
$ sudo cp Cuberite /var/lib/lxc/miners/rootfs/usr/games/Cuberite
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/production
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/testing
$ sudo cp -r favicon.png lang Plugins Prefabs webadmin /var/lib/lxc/miners/rootfs/usr/share/games/cuberite

Then in the container, /srv/cuberite/production and /srv/cuberite/testing contain symlinks to everything we just copied, and some runtime data files under /var/lib/cuberite/production and /var/lib/cuberite/testing, and we have init scripts to chdir to each of those directories and run Cuberite.
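
For illustration, a stripped-down init script along those lines might look like the following; the paths and the “cuberite” user are assumptions rather than the exact scripts in use here:

#!/bin/sh
# Minimal sketch: run the production Cuberite instance from its data directory.
INSTANCE_DIR=/srv/cuberite/production

case "$1" in
  start)
    # Cuberite writes its data files to the current directory, so chdir first;
    # -d runs the server detached as a daemon.
    start-stop-daemon --start --chdir "$INSTANCE_DIR" --chuid cuberite \
        --exec "$INSTANCE_DIR/Cuberite" -- -d
    ;;
  stop)
    start-stop-daemon --stop --exec "$INSTANCE_DIR/Cuberite"
    ;;
  *)
    echo "Usage: $0 {start|stop}" >&2
    exit 1
    ;;
esac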

All this is fine and could no doubt be moulded into packages for the various distros with a bit of effort. But wouldn’t it be nice if we could do all of that for all the most popular distros in one fell swoop? Enter snaps and snapcraft. Cuberite is statically linked and already distributed as a run-in-place archive, so it’s inherently relocatable, which means it lends itself perfectly to distribution as a snap.
This is the part where I confess to working on the Ubuntu Store and being more than a little curious as to what things looked like coming from the opposite direction. So in the interests of eating my own dogfood, I jumped right in.
Now snapcraft makes getting started pretty easy:

$ mkdir cuberite
$ cd cuberite
$ snapcraft init

And you have a template snapcraft.yaml with comments to instruct you. Most of this is straightforward, but for the version here I just used the current date. With the basic metadata filled in, I moved onto the snapcraft “parts”.

Parts in snapcraft are the basic building blocks for your package. They might be libraries or apps or glue, and they can come from a variety of sources. The obvious starting point for Cuberite was the git source, and as you may have noticed above, it uses CMake as its build system. The snapcraft part is pretty straightforward:

parts:
  cuberite:
    plugin: cmake
    source: https://github.com/cuberite/cuberite.git
    configflags:
      - -DCMAKE_BUILD_TYPE=RELEASE
      - -DNO_NATIVE_OPTIMIZATION=ON
    build-packages:
      - gcc
      - g++
    snap:
      - -include
      - -lib

That last section warrants some explanation. When I built Cuberite at first, it included some library files and header files from some of the bundled libraries that are statically linked. Since we’re not interested in shipping these files, they just add bloat to the final package, so we specify that they are excluded.

That gives us our distributable Server directory, but it’s tucked away under the snapcraft parts hierarchy. So I added a release part to just copy the full contents of that directory and locate them at the root of the snap:

release:
  after: [cuberite]
  plugin: dump
  source: parts/cuberite/src/Server
  filesets:
    "*": "."

Some projects let you specify the output directory with a --prefix flag to a configure script or similar methods, and won’t need this little packaging hack, but it seems to be necessary here.

At this stage I thought I was done with the parts and could just define the Cuberite app – the executable that gets run as a daemon. So I went ahead and did the simplest thing that could work:
apps:
  cuberite:
    command: Cuberite
    daemon: forking
    plugs:
      - network
      - network-bind

But I hit a snag. Although this would work with a traditional package, the snap is mounted read-only, and Cuberite writes its data files to the current directory. So instead I needed to write a wrapper script to switch to a writable directory, copy the base data files there, and then run the server:

#!/bin/bash
for file in brewing.txt crafting.txt favicon.png furnace.txt items.ini \
            monsters.ini README.txt; do
    if [ ! -f "$SNAP_USER_DATA/$file" ]; then
        cp --preserve=mode "$SNAP/$file" "$SNAP_USER_DATA"
    fi
done

for dir in lang Plugins Prefabs webadmin; do
    if [ ! -d "$SNAP_USER_DATA/$dir" ]; then
        cp -r --preserve=mode "$SNAP/$dir" "$SNAP_USER_DATA"
    fi
done

cd "$SNAP_USER_DATA"
exec "$SNAP"/Cuberite -d

Then add the wrapper as a part:

wrapper:
  plugin: dump
  source: .
  organize:
    Cuberite.wrapper: bin/Cuberite.wrapper

And update the snapcraft app:

cuberite:
  command: bin/Cuberite.wrapper
  daemon: forking
  plugs:
    - network
    - network-bind

And with that we’re done! Right? Well, not quite…. While this works in snap’s devmode, in strict mode it results in the server being killed. A little digging in the output from snappy-debug.security scanlog showed that seccomp was taking exception to Cuberite using the fchown system call. Applying some Google-fu turned up a bug with a suggested workaround, which was applied to the two places (both in sqlite submodules) that used the offending system call and the snap rebuilt – et voilà! Our Cuberite server now happily runs in strict mode, and can be released in the stable channel.

My build process now looks like this:

$ vim snapcraft.yaml
$ # Update version
$ snapcraft pull cuberite
$ # Patch two fchown calls
$ snapcraft

I can then push it to the edge channel:

$ snapcraft push cuberite_20161023_amd64.snap --release edge
Revision 1 of cuberite created.

And when people have had a chance to test and verify, promote it to stable:

$ snapcraft release cuberite 1 stable

There are a couple of things I’d like to see improved in the process:

  • It would be nice not to have to edit the snapcraft.yaml on each build to change the version. Some kind of template might work for this
  • It would be nice to be able to apply patches as part of the pull phase of a part

With those two wishlist items fixed, I could fully automate the Cuberite builds and have a fresh snap released to the edge channel on each commit to git master! I’d also like to make the wrapper a little more advanced and add another command so that I can easily manage multiple instances of Cuberite. But for now, this works – my boys have never had it so good!

Download the Cuberite Snap

Chris Glass: Making LXD fly on Ubuntu!

Planet Ubuntu - Wed, 10/26/2016 - 23:00

Since my last article, lots of things happened in the container world! Instead of using LXC, I find myself using the next great thing much much more now, namely LXC's big brother, LXD.

As some people asked me, here's my trick to make containers use my host as an apt proxy, significantly speeding up deployment times for both manual and juju-based workloads.

Setting up a cache on the host

First off, we'll want to set up an apt cache on the host. As is usually the case in the Ubuntu world, it all starts with an apt-get:

sudo apt-get install squid-deb-proxy

This will set up a squid caching proxy on your host, with an apt-specific configuration listening on port 8000.
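
As a quick sanity check (just a sketch), you can confirm the proxy is listening on port 8000 and that apt can fetch through it:

sudo ss -lntp | grep 8000
sudo apt-get -o Acquire::http::Proxy=http://127.0.0.1:8000/ update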

Since it is tuned for larger machines by default, I find myself wanting to make it use a slightly smaller disk cache; using 2GB instead of the default 40GB is way more reasonable on my laptop.

Simply editing the config file takes care of that:

$EDITOR /etc/squid-deb-proxy
# Look for the "cache_dir aufs" line and replace with:
cache_dir aufs /var/cache/squid-deb-proxy 2000 16 256 # 2 gb

Of course you'll need to restart the service after that:

sudo service squid-deb-proxy restart

Setting up LXD

Compared to the similar procedure on LXC, setting up LXD is a breeze! LXD comes with configuration templates, and so we can conveniently either create a new template if we want to use the proxy selectively, or simply add the configuration to the "default" template, and all our containers will use the proxy, always!

In the default template

Since I never turn the proxy off on my laptop I saw no reason to apply the proxy selectively, and simply added it to the default profile:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set default user.user-data -

Of course the first part of the first command line automates the discovery of your IP address, conveniently, as long as your LXD bridge is called "lxdbr0".

Once set in the default template, all LXD containers you start now have an apt proxy pointing to your host set up!
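
To convince yourself it works (a quick sketch; the container name is arbitrary and the log path assumes the squid-deb-proxy defaults), launch a container, run an update inside it, and watch the cache log on the host:

lxc launch ubuntu:xenial proxy-test
lxc exec proxy-test -- apt-get update
sudo tail /var/log/squid-deb-proxy/access.log   # container requests should show up here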

In a new template

Should you not want to alter the default template, you can easily create a new one:

export PROFILE_NAME=proxy
lxc profile create $PROFILE_NAME

Then substitute the newly created profile in the previous command line. It becomes:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set $PROFILE_NAME user.user-data -

When launching a new container, you need to add this configuration profile so that the container benefits from the proxy configuration:

lxc launch ubuntu:xenial -p $PROFILE_NAME -p default

Reverting

If for some reason you don't want to use your host as a proxy anymore, it is quite easy to revert the changes to the template:

lxc profile set <template> user.user-data

That's it!

As you can see, it is trivial to point LXD containers at an apt proxy, and squid-deb-proxy makes the host-side setup just as simple.

Hope this helps!

Stéphane Graber: LXD 2.0: LXD and OpenStack [11/12]

Planet Ubuntu - Wed, 10/26/2016 - 18:10

This is the eleventh blog post in this series about LXD 2.0.

Introduction

First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn’t able to get networking going properly.

I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we’re running a full OpenStack here, this thing isn’t exactly light!

Setting up the container

OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our lives easier, we’ll use a privileged container.

We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it would attempt to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6 but until then, this can be worked around with:

lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get OpenStack going.

lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

lxc exec openstack -- lxd init

And that’s it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy OpenStack.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up
  • Select “OpenStack with NovaLXD”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can’t just hit it from your web browser.
The easiest way around this is to set up a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where “<ip>” is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!

You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.

Once it’s running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.

Conclusion

OpenStack is a pretty complex piece of software, and it’s not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scene to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It’s also one of the very few cases where multiple level of container nesting actually makes sense!

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Matthew Helmke: Ubuntu Unleashed 2017

Planet Ubuntu - Wed, 10/26/2016 - 15:51

I was the sole editor and contributor of new content for Ubuntu Unleashed 2017 Edition. This book is intended for intermediate to advanced users.

Daniel Pocock: FOSDEM 2017 Real-Time Communications Call for Participation

Planet Ubuntu - Tue, 10/25/2016 - 23:39

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs
Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge is about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. It is encouraged to apply to more than one dev-room and also consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming, volunteers are needed to assist in this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), contact planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), contact ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), contact planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

Julian Andres Klode: Introducing DNS66, a host blocker for Android

Planet Ubuntu - Tue, 10/25/2016 - 09:20

I’m proud (yes, really) to announce DNS66, my host/ad blocker for Android 5.0 and newer. It’s been around since last Thursday on F-Droid, but it never really got a formal announcement.

DNS66 creates a local VPN service on your Android device, and diverts all DNS traffic to it, possibly adding new DNS servers you can configure in its UI. It can use hosts files for blocking whole sets of hosts or you can just give it a domain name to block (or multiple hosts files/hosts). You can also whitelist individual hosts or entire files by adding them to the end of the list. When a host name is looked up, the query goes to the VPN which looks at the packet and responds with NXDOMAIN (non-existing domain) for hosts that are blocked.

You can find DNS66 here:

F-Droid is the recommended source to install from. DNS66 is licensed under the GNU GPL 3, or (mostly) any later version.

Implementation Notes

DNS66’s core logic is based on another project,  dbrodie/AdBuster, which arguably has the cooler name. I translated that from Kotlin to Java, and cleaned up the implementation a bit:

All work is done in a single thread by using poll() to detect when to read/write stuff. Each DNS request is sent via a new UDP socket, and poll() polls over all UDP sockets, a Device Socket (for the VPN’s tun device) and a pipe (so we can interrupt the poll at any time by closing the pipe).

We literally redirect your DNS servers. Meaning if your DNS server is 1.2.3.4, all traffic to 1.2.3.4 is routed to the VPN. The VPN only understands DNS traffic, though, so you might have trouble if your DNS server also happens to serve something else. I plan to change that at some point to emulate multiple DNS servers with fake IPs, but this was a first step to get it working with fallback: Android can now transparently fallback to other DNS servers without having to be aware that they are routed via the VPN.

We also need to deal with timing out queries that we received no answer for: DNS66 stores the query into a LinkedHashMap and overrides the removeEldestEntry() method to remove the eldest entry if it is older than 10 seconds or there are more than 1024 pending queries. This means that it only times out up to one request per new request, but it eventually cleans up fine.

 


Filed under: Android, Uncategorized
