Feed aggregator

Sergio Schvezov: Making your snaps available to the store using snapcraft

Planet Ubuntu - Fri, 11/04/2016 - 04:58

Now that Ubuntu Core has been officially released, it might be a good time to get your snaps into the Store.

Delivery and Store Concepts

So let’s start with a refresher on what we have available on the Store side to manage your snaps.

Every time you push a snap to the store, the store assigns it a revision. This revision is unique in the store for that particular snap.

However, before you can push a snap for the first time, its name needs to be registered, which is easy to do provided the name is not already taken.

Any revision on the store can be released to a number of channels, which exist to give your users an idea of the stability or risk level they are opting into. The channel names are:

  • stable
  • candidate
  • beta
  • edge

Ideally anyone with a CI/CD process would push daily or on every source update to the edge channel. During this process there are two things to take into account.

The first is that at the beginning of the snapping process you will likely start with an unconfined snap, since adapting to this new paradigm is where the bulk of the work needs to happen. With that in mind, your project starts with confinement set to devmode. This makes it possible to keep moving during the early phases of development and still get your snap into the store. Once everything works fully within the security model snaps run under, this confinement entry can be switched to strict. Given the devmode confinement, such a snap is only releasable on the edge and beta channels, which hints to your users how much risk they are taking by going there.

Now let’s say you are good to go on the confinement side and you start a CI/CD process against edge, but you also want to make sure that early releases of a new iteration against master never make it to stable or candidate. For this we have the grade entry: if the grade of the snap is set to devel, the store will never allow you to release it to the most stable channels (stable and candidate).
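For reference, both of these knobs live at the top level of snapcraft.yaml. Here is a minimal sketch; the name, version and part are placeholders for illustration only:

name: awesome-database
version: '0.1'
summary: A fantasy database
description: Example snap used in this post.
grade: devel          # 'devel' blocks releases to stable/candidate; switch to 'stable' when ready
confinement: devmode  # switch to 'strict' once the snap works fully confined

parts:
  awesome-database:
    plugin: nil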

Somewhere along the way we might want to release a revision into beta, which some users are more likely to track on their side (and which, given a good release management process, should be somewhat more usable than a random daily build). When that stage of the process is over but we want people to keep getting updates, we can choose to close the beta channel, since from a certain point in time we only plan to release to candidate and stable. Closing the beta channel makes it track the next open channel up the stability list, in this case candidate; and if candidate is itself tracking stable, whatever is in stable is what users on beta will get.

Enter Snapcraft

So given all these concepts, how do we get going with snapcraft? First of all, we need to log in:

$ snapcraft login
Enter your Ubuntu One SSO credentials.
Email: sxxxxx.sxxxxxx@canonical.com
Password: **************
Second-factor auth: 123456

After logging in, we are ready to get our snap registered. For example’s sake, let’s say we want to register awesome-database, a fantasy snap we are getting started with:

$ snapcraft register awesome-database
We always want to ensure that users get the software they expect for a particular name.
If needed, we will rename snaps to ensure that a particular name reflects the software most widely expected by our community.
For example, most people would expect ‘thunderbird’ to be published by Mozilla. They would also expect to be able to get other snaps of Thunderbird as 'thunderbird-sergiusens'.
Would you say that MOST users will expect 'awesome-database' to come from you, and be the software you intend to publish there? [y/N]: y
You are now the publisher for 'awesome-database'.

So assuming we have the snap built already, all we have to do is push it to the store. Let’s take advantage of a shortcut and use --release in the same command:

$ snapcraft push awesome-database_0.1_amd64.snap --release edge
Uploading awesome-database_0.1_amd64.snap [=================] 100%
Processing....
Revision 1 of 'awesome-database' created.
Channel    Version    Revision
stable     -          -
candidate  -          -
beta       -          -
edge       0.1        1
The edge channel is now open.

If we try to release this to stable the store will block us:

$ snapcraft release awesome-database 1 stable
Revision 1 (devmode) cannot target a stable channel (stable, grade: devel)

We are safe from messing up and making this available to our faithful users. Now eventually we will push a revision worthy of releasing to the stable channel:

$ snapcraft push awesome-database_0.1_amd64.snap
Uploading awesome-database_0.1_amd64.snap [=================] 100%
Processing....
Revision 10 of 'awesome-database' created.

Notice that the version is just a friendly identifier and what really matters is the revision number the store generates for us. Now let’s go ahead and release this to stable:

$ snapcraft release awesome-database 10 stable
Channel    Version    Revision
stable     0.1        10
candidate  ^          ^
beta       ^          ^
edge       0.1        10
The 'stable' channel is now open.

In this last channel map view for the architecture we are working with, we can see that edge is going to be stuck on revision 10, and that beta and candidate will be following stable, which is also on revision 10. Now suppose that we decide to focus on stability and make our CI/CD push to beta instead. This means that our edge channel will fall slightly out of date; to avoid things like this, we can decide to close the channel:

$ snapcraft close awesome-database edge
Arch     Channel    Version    Revision
amd64    stable     0.1        10
         candidate  ^          ^
         beta       ^          ^
         edge       ^          ^
The edge channel is now closed.

In this current state, all channels are following the stable channel, so people subscribed to candidate, beta and edge will be tracking changes to that channel. If revision 11 is ever released to stable only, people on the other channels will see it as well.

This listing also provides us with a full architecture view, in this case we have only been working with amd64.

Getting more information

So some time has passed and we want to know the history and status of our snap in the store. There are two commands for this; the straightforward one is status, which gives us a familiar result:

$ snapcraft status awesome-database
Arch     Channel    Version    Revision
amd64    stable     0.1        10
         candidate  ^          ^
         beta       ^          ^
         edge       ^          ^

We can also get the full history:

$ snapcraft history awesome-database
Rev.    Uploaded                Arch     Version    Channels
3       2016-09-30T12:46:21Z    amd64    0.1        stable*
...     ...                     ...
2       2016-09-30T12:38:20Z    amd64    0.1        -
1       2016-09-30T12:33:55Z    amd64    0.1        -

Closing remarks

I hope this gives a good overview of the things you can do with the store, and that more people start taking advantage of it!

Stuart Langridge: OnePlus are great at customer service

Planet Ubuntu - Fri, 11/04/2016 - 04:06

Around a year ago, I bought a OnePlus X phone. I was given a whole bunch of warnings by a whole bunch of people that their customer support was terrible and if the phone broke, I stood no chance of getting it repaired.

Well, it broke. Specifically, it stopped charging. I had to wiggle the connector, charge it by holding the lead at different angles, the works. That was really annoying. So I thought: they should fix this. Here’s what happened.

Tuesday evening, at eleven pm, I used the “Live Chat” thing on OnePlus’s website, and was put in touch with a service engineer named “Irish”. I explained the problem. Irish said: OK, we’ll fix that; if it turns out you broke it you’ll be charged, if not we’ll fix it. A pleasant conversation, in which he talked me through setting up an RMA request on their support site. Shortly thereafter, I received by email a DHL dispatch note, which I printed out and glued on the outside of a jiffy bag with my phone in it. DHL rang me up the next morning to arrange pickup; I arranged that for this Monday. A chap arrived and picked up the phone. Today, four days later, I’ve just got the phone back, delivered back to me by DHL again, and it works.

It is hard to imagine how customer service could be any better than this. My thing broke, they picked it up from my flat, repaired it, and delivered it back to my flat at their expense, with no complaints, in four days. This is excellent. Thank you OnePlus, thank you Regenersis the OnePlus service centre, thank you Irish. Maybe I’m unusual; maybe OnePlus have upped their game in the last year; maybe this is just luck. But this is a sterling example of how interaction with a company should go. OnePlus: exemplars of customer service. I am impressed.

Kubuntu: Kubuntu 16.04.1 LTS Update Out

Planet Ubuntu - Thu, 11/03/2016 - 15:24

The first point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes.

Warning: 14.04 LTS to 16.04 LTS upgrades are problematic, and should not be attempted by the average user. Please install a fresh copy of 16.04.1 instead. To prevent messages about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file.
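For example, after the change the relevant part of /etc/update-manager/release-upgrades would look roughly like this (the file as shipped also carries explanatory comments):

[DEFAULT]
Prompt=normal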

See the 16.04.1 release announcement.

Download 16.04.1 images.

Ubuntu Online Summit: Call for sessions

The Fridge - Thu, 11/03/2016 - 06:48

As announced a few weeks ago, Ubuntu Online Summit is going to happen

15-16 November 2016

and all details are going to be up on http://summit.ubuntu.com/

Now is a good time to register and add your sessions.

If you have any questions, reach out to Daniel Holbach, Michael Hall or Alan Pope.

Thanks a lot in advance and see you soon at UOS!

Originally posted to the community-announce mailing list on Wed Nov 2 16:21:11 UTC 2016 by Daniel Holbach

Raphaël Hertzog: My Free Software Activities in October 2016

Planet Ubuntu - Wed, 11/02/2016 - 04:09

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

Last month I started to work on tiff3 but did not have enough time to complete an update; it turns out the issues were hairy enough that nobody else picked up the package. So this month I started again with tiff3 and tiff and I ended up spending my 13h on those two packages.

I filed bugs for issues that were not yet reported to the BTS (#842361 for CVE-2016-5652, #842046 for CVE-2016-5319/CVE-2016-3633/CVE-2015-8668). I marked many CVEs as not affecting tiff3, as this source package does not ship the tools (the “tiff” source package does).

Since upstream decided to drop many tools instead of fixing the corresponding security issues, I opted to remove the tools as well. Before doing this, I looked up reverse dependencies of libtiff-tools to ensure that none of the tools removed are used by other packages (the maintainer seems to agree too).

I backported upstream patches for CVE-2016-6223 and CVE-2016-5652.

But the bulk of the time, I spent on CVE-2014-8128, CVE-2015-7554 and CVE-2016-5318. I believe they are all variants of the same problem and upstream seems to agree since he opened a sort of meta-bug to track them. I took inspiration from a patch suggested in ticket #2499 and generalized it a bit by trying to add the tag data for all tags manipulated by the various tools. It was a tiresome process as there are many tags used in multiple places. But in the end, it works as expected. I can no longer reproduce any of the segfaults with the problematic files.

I asked for review/test on the mailing list but did not get much feedback. I’m going to upload the updated packages soon.

Distro Tracker

I noticed a sudden rise in the number of email addresses being automatically unsubscribed from the Debian Package Tracker, and I received a few reports about bounces. It turns out the BTS has been relaying lots of spam with executable files attached, and those are bounced by Google (and not silently discarded). This is all very unfortunate… the spam flood is unlikely to stop soon and I can’t expect Google to change either, so I had little choice except to make the bounce handler smarter. That’s what I did: I now have a list of regular expressions that will discard a bounce. In other words, once a bounce matches, it won’t count towards the limit that triggers the automatic unsubscription.
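As a generic illustration of that kind of guard (this is not the actual distro-tracker code; the patterns and function names are made up for the example):

import re

# Bounces whose body matches any of these patterns are ignored rather than
# counted towards the automatic-unsubscription limit.
IGNORE_PATTERNS = [
    re.compile(r"executable file", re.I),
    re.compile(r"malware|virus", re.I),
]

def counts_towards_limit(bounce_body):
    return not any(p.search(bounce_body) for p in IGNORE_PATTERNS)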

Misc Debian work

Bugs filed. In #839403, I suggested the possibility of setting the default pin priority for a source directly in the sources.list file. In #840436 I asked the selenium-firefoxdriver maintainer to do what is required to get this non-free package auto-built.

Packaging. I sponsored puppet-lint 2.0.2-0.1 and reviewed the rozofs package (which I just sponsored into experimental for a start).

Publicity. I’m maintaining the Debian account on Twitter and Facebook. I have been using twitterfeed.com up to now, but it’s closing down, so I followed their recommendations and switched to dlvr.it to automatically post entries from the micronews.debian.org feed.

In #841165, I reported that the chroots created by sbuild-createchroot are lacking the usual IPv6 entries created by netbase. In #841503, I reported a very common cryptsetup upgrade failure that I saw multiple times (both in Debian and in Kali).

Thanks

See you next month for a new summary of my activities.


Ubuntu Insights: Static code analyzer for Ubuntu

Planet Ubuntu - Wed, 11/02/2016 - 03:37

This is a guest post by Ekaterina Milovidova, from viva64. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Static code analysis is the process of detecting errors and flaws in the source code of programs. There are many static analysis tools for various programming languages; they make it possible to detect a large number of errors at the development stage, reducing the cost of developing the whole project.

The developers of PVS-Studio, a tool intended for bug detection in C, C++, and C# projects in Windows environments, have released a new version for Linux systems that supports the C/C++ compilers from the GNU Compiler Collection (GCC) and LLVM Clang.

PVS-Studio detects potential errors in three main groups: general analysis, optimizations and 64-bit issues. The general analysis diagnostics detect logic errors, typos, code fragments causing access violations, incorrect usage of algorithms from the STL libraries, and a lot more.

In addition to the native analyzer, the new version of PVS-Studio for GNU/Linux offers convenient ways to integrate it into projects using CMake and QMake, and to display the analysis results in the QtCreator and CLion IDEs. There is now also the possibility to check any project that uses one of the compilers supported by the analyzer with the help of the universal compilation tracking system.
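For projects without CMake/QMake integration, a compilation-tracking run looks roughly like the following sketch (the command names are those of the Linux analyzer packages; exact flags and report formats may vary between versions):

$ pvs-studio-analyzer trace -- make          # build the project while tracing compiler invocations
$ pvs-studio-analyzer analyze -o PVS-Studio.log
$ plog-converter -t tasklist -o project.tasks PVS-Studio.log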

For Ubuntu users, the PVS-Studio developers have created deb packages to install the analyzer and have also deployed a custom repository for easy installation and updates. The analyzer can also be downloaded as a deb/rpm package or a .tgz archive via the link here.

The code can be checked both on the build server and on the developer’s machine, and the developers recommend using both. When a bug is found on the programmer’s machine before it ever reaches the version control system, the analyzer is a real helper. You also shouldn’t forget about the check on the build server: developers sometimes “miss” analyzer warnings for various reasons, and regular overnight checks help to make sure that no bugs remain in the code.

Xubuntu: Presenting the Xubuntu status tracker

Planet Ubuntu - Tue, 11/01/2016 - 03:43

Status tracking is useful for all kinds of projects, including Xubuntu. Amongst other things, it allows contributors to quickly see what’s left to do and what others are working on. When the tracking data is kept up to date, the resulting data can be immensely helpful.

Until 2015, the Xubuntu team had been using the common status tracker for Ubuntu teams. For one reason or another, it suddenly stopped working, as tracking data from Launchpad didn’t make it into the tracker database. That was unfortunate, but on the other hand it helped the team make an important decision which had been floating around for quite some time already: we needed our own status tracker, ideally better than the common one used thus far.

Today, we want to present the Xubuntu status tracker to you. For the impatient, head down to dev.xubuntu.org to see what it looks like.

For the rest of us (and the impatient when they come back), continue reading to get an idea of what the tracker can do and how it can help the Xubuntu team – and, potentially, motivate people to start contributing to Xubuntu.

The views and the benefits

In the current state, the status tracker has four main views. The first one of them is the overview, which lets contributors see how different specifications are coming along. This view also allows the visitor to look at the whiteboards of each specification easily and quickly without visiting Launchpad.

The second view is all about the work items and their details. In this view, you can filter the work items with various filters, as well as sort them by assignee, work item status or specification. The filtering is a new feature built specifically for the Xubuntu status tracker and has already proven useful for the team members.

For example, if you wanted to see all work items related to GTK, you could simply type gtk in the Text filter – the results are shown to you immediately. If you further wanted to filter the results, you could select any assignees, specifications and statuses. Yes, that’s right, you can select multiple values for all of the filters.

As another, more useful real-world example, you could show only the open work items by selecting To Do, In Progress and Blocked from the Status dropdown. Finally, you can create a handy shortcut for this view by dragging the Permalink into your bookmarks; following this link will always get you back to the same filter state.

The third view is the burndown chart. This view shows the history for the work item statuses. In the Xubuntu status tracker, the burndown chart also shows events that the team considers important during the cycle, mostly different freezes.

In addition to showing the team whether we are on pace to finish all work items in time, it can also point out useful and interesting information for testers and end users; for example, the number of work items closed between Beta 1 and Beta 2 was huge. While this means the quality of the product should have gone up, it also means that tests run against Beta 1 have no validity during the Beta 2 testing – it all needs to be run again to make sure the fixes are actually working and that there are no regressions.

The final view – the timeline – simply shows which work items have been completed and when. This is also a new feature of the new tracker. The timeline is useful for testers – when they can see what has been changed and when, they know what they need to test. It also helps contributors gather the release notes for releases, especially the milestones, which has previously meant laborious work digging through changelogs and much more.

Finally, it serves as an automatic team updates chart for the team itself as well as other teams. This way we can let everybody know what we have been working on without actively needing to take extra effort.

In addition to the main views, the tracker has a menu that is integrated with the Xubuntu wiki and additionally links to the team calendar, IRC channel, mailing list and the new contributor documentation.

The future

Like the common Ubuntu status tracker, the Xubuntu status tracker gets most of its data from Launchpad blueprints. While this means we don’t have to take on some of the maintenance burden, it has its problems. It’s possible that we will start storing the work items internally to avoid a lot of API calls and the caching issues related to them.

There are also plans to start pulling more information into the tracker from other tools, like the QA trackers and Jenkins, to get an overview of how quality assurance is going.

If you are interested in contributing to the tracker, be in touch with us via the developer IRC channel (#xubuntu-devel on Freenode) or the Xubuntu developer mailing list.

Costales: How to print a document created with uWriter

Planet Ubuntu - Tue, 11/01/2016 - 03:42
New update :) And now you can print your documents created in uWriter from your PC (uWriter 0.18+).

0.18: Print documents!

Enable access to your documents from your PC in an easy way. In your phone's Terminal, run this command:
ln -s /home/phablet/.local/share/uwp.costales/ /home/phablet/Documents/uWriter


(Run this on the phone.) This will create a link at ~/Documents/uWriter pointing to /home/phablet/.local/share/uwp.costales.
You'll do this step just one time ;) Now it is easy to navigate to that folder from your PC.


Print a document

Connect your phone to the PC by USB and navigate to the uWriter folder (~/Documents/uWriter):

Your documents are the *.html files. Open a document with a web browser on the PC and print it. Nothing more :) Enjoy it!

The Fridge: Ubuntu Weekly Newsletter Issue 485

Planet Ubuntu - Mon, 10/31/2016 - 17:30

Ubuntu Insights: Webinar: Managing diverse workloads with Juju

Planet Ubuntu - Mon, 10/31/2016 - 04:00



In this webinar we explore the common challenges organisations face in deploying and managing Windows workloads at scale on OpenStack. We look at how Windows Charms can integrate seamlessly with existing Juju Charms to provide a powerful way to reduce workload deployments from days to minutes on both bare metal and public or private clouds. We also cover the benefits of Microsoft on OpenStack, Hyper-V integration and Microsoft’s support of Hyper-V in OpenStack.

Canonical’s award-winning model-driven operations system Juju enables reusable, open source operations across hybrid cloud and physical infrastructure. Integration and operations are encoded in “charms” by vendors and the community of experts familiar with an app. These charms are reused by ops teams as standardised operations code that evolves along with the software itself. Our partner Cloudbase has created many Windows charms to take advantage of the benefits Juju has to offer.

Webinar Details

Title: Deploying Microsoft technologies on OpenStack
Presented By: Alessandro Pilotti, CEO Cloudbase Solution and Mark Baker, Ubuntu Server and Cloud Product Manager at Canonical.
Time/Date: available now on-demand

Watch this on-demand webinar to learn…
  • How to use Windows workloads with Juju
  • The value of integrating these technologies into OpenStack
  • The types of Windows workloads available with Juju

Watch on Demand

Eric Hammond: AWS Lambda Static Site Generator Plugins

Planet Ubuntu - Mon, 10/31/2016 - 02:41

starting with Hugo!

A week ago, I presented a CloudFormation template for an AWS Git-backed Static Website stack. If you are not familiar with it, please review the features of this complete Git + static website CloudFormation stack.

This weekend, I extended the stack to support a plugin architecture to run the static site generator of your choosing against your CodeCommit Git repository content. You specify the AWS Lambda function at stack launch time using CloudFormation parameters (ZIP location in S3).

The first serious static site generator plugin is for Hugo, but others can be added with or without my involvement and used with the same unmodified CloudFormation template.

The Git-backed static website stack automatically invokes the static site generator whenever the site source is updated in the CodeCommit Git repository. It then syncs the generated static website content to the S3 bucket where the stack serves it over a CDN using https with DNS served by Route 53.

I have written three AWS Lambda static site generator plugins to demonstrate the concept and to serve as templates for new plugins:

  1. Identity transformation plugin - This copies the entire Git repository content to the static website with no modifications. This is currently the default plugin for the static website CloudFormation template.

  2. Subdirectory plugin - This plugin is useful if your Git repository has files that should not be included as part of the static site. It publishes a specified subdirectory (e.g., “htdocs” or “public-html”) as the static website, keeping the rest of your repository private.

  3. Hugo plugin - This plugin runs the popular Hugo static site generator. The Git repository should include all source templates, content, theme, and config.

You are welcome to use any of these plugins when running an AWS Git-backed Static Website stack. The documentation in each of the above plugin repositories describes how to set the CloudFormation template parameters on stack create.

You may also write your own AWS Lambda function static site generator plugin using one of the above as a starting point. Let me know if you write plugins; I may add new ones to the list above.

The sample AWS Lambda handler plugin code takes care of downloading the source and uploading the resulting site, and can be copied as is. All you have to do is fill in the “generate_static_site” code to generate the site from the source.

The plugin code for Hugo is basically this:

def generate_static_site(source_dir, site_dir, user_parameters):
    command = "./hugo --source=" + source_dir + " --destination=" + site_dir
    if user_parameters.startswith("-"):
        command += " " + user_parameters
    print(os.popen(command).read())
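For comparison, a plugin in the spirit of the subdirectory plugin described above could fill in the same hook with a plain copy. This is only a sketch under the handler contract described here; the "htdocs" default and the helper logic are illustrative, not the actual plugin code:

import os
import shutil

def generate_static_site(source_dir, site_dir, user_parameters):
    # Publish a single subdirectory of the repository as the site.
    # The subdirectory name comes from user_parameters, defaulting to "htdocs" (hypothetical).
    subdir = user_parameters.strip() or "htdocs"
    source = os.path.join(source_dir, subdir)
    if os.path.isdir(site_dir):
        shutil.rmtree(site_dir)          # start from a clean destination
    shutil.copytree(source, site_dir)    # copy the subdirectory as the generated site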

I have provided build scripts so that you can build the sample AWS Lambda functions yourself, because you shouldn’t trust other people’s blackbox code if you can help it. That said, I have also made it easy to use pre-built AWS Lambda function ZIP files to try this out.

These CloudFormation template and AWS Lambda functions are very new and somewhat experimental. Please let me know where you run into issues using them and I’ll update documentation. I also welcome pull requests, especially if you work with me in advance to make sure the proposed changes fit the vision for this stack.

Original article and comments: https://alestic.com/2016/10/aws-static-site-generator-plugins/

Ubuntu Insights: Dirty COW was Livepatched in Ubuntu within Hours of Publication

Planet Ubuntu - Mon, 10/31/2016 - 02:00

If you haven’t heard about last week’s Dirty COW vulnerability, I hope all of your Linux systems are automatically patching themselves…

Why? Because every single Linux-based phone, router, modem, tablet, desktop, PC, server, virtual machine, and absolutely everything in between — including all versions of Ubuntu since 2007 — was vulnerable to this face-palming critical security vulnerability.

Any non-root local user of a vulnerable system can easily exploit the vulnerability and become the root user in a matter of a few seconds. Watch…

Coincidentally, just before the vulnerability was published, we released the Canonical Livepatch Service for Ubuntu 16.04 LTS. The thousands of users who enabled canonical-livepatch on their Ubuntu 16.04 LTS systems within those first few hours received and applied the fix to Dirty COW, automatically, in the background, and without rebooting!

If you haven’t already enabled the Canonical Livepatch Service on your Ubuntu 16.04 LTS systems, you should really consider doing so, with 3 easy steps:

  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap

     $ sudo snap install canonical-livepatch

  3. Enable the service with your token

     $ sudo canonical-livepatch enable [TOKEN]

And you’re done! You can check the status at any time using:

$ canonical-livepatch status --verbose

Let’s retry that same vulnerability, on the same system, but this time, having been livepatched…

Aha! Thwarted!

So that’s the Ubuntu 16.04 LTS kernel space… What about userspace? Most of the other recent, branded vulnerabilities (Heartbleed, ShellShock, CRIME, BEAST) have been critical vulnerabilities in userspace packages.

As of Ubuntu 16.04 LTS, the unattended-upgrades package is now part of the default package set, so you should already have it installed on your Ubuntu desktops and servers. If you don’t already have it installed, you can install it with:

$ sudo apt install unattended-upgrades

And moreover, as of Ubuntu 16.04 LTS, the unattended-upgrades package automatically downloads and installs important security updates once per day, automatically patching critical security vulnerabilities and keeping your Ubuntu systems safe by default. Older versions of Ubuntu (or Ubuntu systems that upgraded to 16.04) might need to enable this behavior using:

$ sudo dpkg-reconfigure unattended-upgrades
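If you want to verify what this enables, the setting typically ends up in an APT configuration snippet such as /etc/apt/apt.conf.d/20auto-upgrades (the exact filename may vary), containing roughly:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";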

With that combination enabled — (1) automatic livepatches to your kernel, plus (2) automatic application of security package updates — Ubuntu 16.04 LTS is the most secure Linux distribution to date. Period.

If you want to enable the Canonical Livepatch Service on more than three machines, please purchase an Ubuntu Advantage support package from buy.ubuntu.com or get in touch.

Cesar Sevilla: 1er Festival Universitario de Tecnologías Libres

Planet Ubuntu - Fri, 10/28/2016 - 15:09

By popular demand, and through the Unidad Territorial – Fundacite Zulia (UTZ), we bring you the “Festival Universitario de Tecnologías Libres” (University Festival of Free Technologies). Its goal is to drive the growth and development of Free Software and Open Source within institutional education in the western region, in order to achieve scientific and technological independence in software, building on the national decrees and laws upheld by the Venezuelan state.

The festival will take place in the auditorium of the Universidad José Gregorio Hernández on Friday 11/11/16 from 8:00 am to 12:00 noon and from 2:00 pm to 5:00 pm, and on Saturday 12/11/16 from 9:00 am to 12:00 noon.

On Friday we will hold 5 technical talks, and on Saturday 2 talks plus an install fest for free operating systems and/or free tools.

It is worth highlighting that we will have several representatives at the national level, including representatives of the Cámara Venezolana de la Industria Tecnológica, the Laboratorio Vivencial, the Ubuntu-ve community, and other organizations, with the aim of presenting the various technology projects built with free tools.

If you want to take part in this activity, remember that you must register through the following link: https://www.eventbrite.es/e/entradas-festival-universitario-de-tecnologias-libres-2016-27283438499

If you would like more information, you can contact me at csevilla@fundacite-zulia.gob.ve


Salih Emin: How to recover deleted photos and files from your smartphone’s SD card

Planet Ubuntu - Fri, 10/28/2016 - 09:06
In this tutorial, let's assume that we have accidentally deleted our files from an SD card or a USB thumb drive. Then we will try to recover them using the photorec app.
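As a rough sketch of the commands involved on Ubuntu (assuming the card shows up as /dev/sdb, which is only an example device name; photorec then walks you through an interactive recovery menu):

$ sudo apt install testdisk    # photorec ships in the testdisk package
$ sudo photorec /dev/sdb       # pick the partition, file types and destination in the menu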

Costales: A new uWriter for Ubuntu Phone

Planet Ubuntu - Fri, 10/28/2016 - 08:57
uWriter, an offline text editor for our Ubuntu Phone/Tablet.

#productivity
In the new release, all your new documents will be stored in: ~/.local/share/uwp.costales/*.html
And it will have full OS integration!

[Screenshots: a nicer UI, loading/saving local files, and the menu]
Enjoy it from the Ubuntu Store |o/

Alessio Treglia: The logical contradictions of the Universe

Planet Ubuntu - Fri, 10/28/2016 - 05:16

Ouroboros

Is Erwin Schrödinger’s wave function – which did in the atomic and subatomic world an operation altogether similar to the one performed by Newton in the macroscopic world – an objective reality or just subjective knowledge? Physicists, philosophers and epistemologists have debated this matter at length. In 1960, theoretical physicist Eugene Wigner proposed that the observer’s consciousness is the dividing line that triggers the collapse of the wave function[1], and this theory was later taken up and developed in recent years. “The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.”[2]

The English mathematical physicist and philosopher of science Roger Penrose developed the hypothesis called Orch-OR (Orchestrated objective reduction), according to which consciousness originates from processes within neurons, rather than from the connections between neurons (the conventional view). The mechanism is believed to be a quantum physical process called objective reduction which is orchestrated by the molecular structures of the microtubules of brain cells (which constitute the cytoskeleton of the cells themselves). Together with the physician Stuart Hameroff, Penrose has suggested a direct relationship between the quantum vibrations of microtubules and the formation of consciousness.

<Read More…[by Fabio Marzocca]>

Stéphane Graber: Network management with LXD (2.3+)

Planet Ubuntu - Thu, 10/27/2016 - 20:53

Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that “lxdbr0” bridge that “lxd init” would have you configure, provide your own or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.address: 10.150.19.1/24
  ipv4.nat: "true"
  ipv6.address: fd42:474b:622d:259d::1/64
  ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.address: 10.0.3.1/24
  ipv4.nat: "true"
  ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container eth0

Now, let’s say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge, just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME | STATE   | IPV4              | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
| c1   | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And the same goes for IPv6, but with the “ipv6.address” property instead.
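For example, on a bridge that does have an IPv6 subnet configured (the specific address here is purely illustrative):

root@yak:~# lxc config device set c1 eth0 ipv6.address fd42:474b:622d:259d::123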

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (such as nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
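For example, to set the bridge’s DNS domain and switch it to the dynamic mode using the two properties named above (the domain name here is just an illustration):

root@yak:~# lxc network set testbr0 dns.domain lxd.example
root@yak:~# lxc network set testbr0 dns.mode dynamic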

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

An LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo use, with production environments usually preferring VLANs for that kind of segmentation.

So say, you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is by using a multicast VXLAN tunnel. This type of tunnels only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join it. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS, … the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.edfu.id 10
root@edfu:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.

And that’s how you make cross-host networking easily with recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it
