Feed aggregator

Ubuntu Weekly Newsletter Issue 426

The Fridge - Mon, 07/20/2015 - 18:11

Welcome to the Ubuntu Weekly Newsletter. This is issue #426 for the week July 13 – 19, 2015, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Charles Profitt
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Zygmunt Krynicki: Using Snappy Ubuntu Core to test-boot anything

Planet Ubuntu - Mon, 07/20/2015 - 09:56
Lab-As-A-Service Inception OS

This morning I realized that it is quite possible to use chain-loading to boot into a test OS from within a Snappy Ubuntu Core system. I've decided to try to implement that, and you can give it a try now: LAAS inception 0.1 [snap].

Update: I had uploaded an older, broken version; please make sure you have this version installed:
a1bc56b7bc114daf2bfac3f8f5345b84  laas-inception_0.1_all.snap

Inception OS is a small Snappy Ubuntu Core-based system with one more snap for programmatic control over the boot process. Using the Inception OS, any x86 or amd64 system (in both UEFI and legacy BIOS mode) can be converted into a remotely controlled web API that lets anyone reflash the OS and reboot into the fresh image.

The system always reboots into the Inception OS, so it can be used to run unreliable software as long as the machine can be power-cycled remotely (which seems to be a solved problem with off-the-shelf networked power strips).

Installing the Inception OS
  1. Get a laptop, desktop or server that can run Snappy Ubuntu Core well enough that you have working networking (including wifi, if that is what you wish to use) and working storage (so that you can see the primary disk). In general, the pre-built Snappy Ubuntu Core image for amd64 can be used on many machines without any modification.
  2. Copy the image to a separate boot device. This can be a USB hard drive or flash memory of some sort. In my case I just dd'ed the uncompressed image to an 8GB flash drive (the image was 4GB but that's okay).
  3. Plug the USB device into your test device.
  4. In the boot loader, set the device to always boot from the USB device you just plugged in. Save the setting and reboot to test this.
  5. Use snappy-remote --url=ssh:// install-remote laas-inception*.snap to install the Inception snap.
  6. SSH to the test system and ensure that laas-inception.laas-inception command exists. Run it with the install argument. This will modify the boot configuration to make sure that the inception features are available.
  7. At this stage, the device can be power-cycled, rebooted, etc. Thanks to Snappy Ubuntu Core it should be resilient to many types of damage that can be caused by rebooting at the wrong moment.

Installing Test OSes

To install a new OS image for testing, follow these steps.
  1. Get a pre-installed system image. This is perfect for snappy (snappy-device-install core can be used to create one) and many traditional operating systems can be converted to pre-installed images one can just copy to the hard drive directly.
  2. Use ssh to connect to the Inception OS. From there you can download and copy the OS image onto the primary storage (hard drive or SSD) of the machine. Currently this is not implemented but later versions of the Inception OS will simply offer this as a web API with simple tools for remote administration from any platform.
  3. Use the laas-inception.laas-inception boot command to reboot into the test OS. This will restart the machine and boot from the primary storage exactly once. As soon as the machine restarts or is power cycled you will regain control as inception OS will boot again.
How it works

Inception OS is pretty simple. It uses GRUB chain-loading to boot whatever is installed on the primary storage. It uses a few tricks to set everything in motion, but the general idea is simple enough that it should work on any machine that can be booted with GRUB. The target OS can even be non-Linux (Inception can boot Windows, for example, though reboots will kick back into the Inception OS).
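The chain-loading idea can be pictured with a minimal GRUB fragment. This is my own illustrative sketch, not the actual laas-inception configuration (the menu entry name and partition are assumptions, and a UEFI system would chain-load an .efi binary instead):

```
# Hypothetical grub.cfg fragment: chain-load whatever bootloader sits on
# the first partition of the primary disk (works for Windows as well).
menuentry "test-os" {
    set root=(hd0,1)
    chainloader +1
}
```

Paired with a one-shot default (in the spirit of `grub-reboot test-os`), the machine boots the test OS exactly once and falls back to the Inception OS on the next restart or power cycle.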

Eric Hammond: TimerCheck.io - Countdown Timer Microservice Built On Amazon API Gateway and AWS Lambda

Planet Ubuntu - Mon, 07/20/2015 - 04:26

deceptively simple web service with super powers

TimerCheck.io is a fully functional, fully scalable microservice built on the just-released Amazon API Gateway and increasingly popular AWS Lambda platforms.

TimerCheck.io is a public web service that maintains a practically unlimited number of countdown timers with one second resolution and no practical limit to the number of seconds each timer can run.

New timers can be created on a whim and each timer can be reset at any time to any number of seconds desired, whether it is still running or has already expired.


Let’s begin with an example to demonstrate the elegant simplicity of the TimerCheck.io interface.

1. Set timer - Any request of the following URL sets a timer named “YOURTIMERNAME” to start counting down immediately from 60 seconds:

https://timercheck.io/YOURTIMERNAME/60

You may click on that link now, or hit a URL of the same format with your own timer name and your chosen number of seconds. You may use a browser, a command like curl, or your favorite programming language.

2. Poll timer - The following URL requests the status of the above timer. Note that the only difference in the URL is that we have dropped the seconds count:

https://timercheck.io/YOURTIMERNAME

If the named timer is still running, TimerCheck.io will return HTTP Status code 200 OK, along with a JSON structure containing information like how many seconds are left.

If the timer has expired, TimerCheck.io will return an HTTP status code 504 Timeout.

That’s it!

No, really. That’s the entire API.

And the whole service is implemented in about 60 lines of code, on top of a handful of powerful infrastructure services managed, protected, maintained, and scaled by Amazon.
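The two calls above are simple enough to wrap in a few lines of shell. A hedged sketch, assuming only the URL format described in this article (the function names and status handling are my own illustration):

```shell
# Build a TimerCheck.io URL: with SECONDS it is the set-timer call,
# without SECONDS it is the poll call.
timer_url() {
  if [ -n "${2:-}" ]; then
    printf 'https://timercheck.io/%s/%s\n' "$1" "$2"
  else
    printf 'https://timercheck.io/%s\n' "$1"
  fi
}

# Interpret the HTTP status code: 200 = still running, anything else
# (notably 504) = expired.
timer_is_running() {
  [ "$1" = "200" ]
}

# Usage (the network call is commented out to keep the sketch offline):
# code=$(curl -s -o /dev/null -w '%{http_code}' "$(timer_url mytimer)")
timer_url mytimer 60
timer_url mytimer
```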

Not Included

The TimerCheck.io service does not perform any action when a timer expires. The timer should be polled to find out if it has expired.

On first thought, this may cause you to wonder if this service might, in fact, be completely useless. Instead of polling TimerCheck.io, why not just have your code keep its own timer records or look at a clock and see if it’s time yet?

The answer is that TimerCheck.io is not created for situations where you can depend on your own code to be running and keeping track of things.

TimerCheck.io is designed for integration with existing third party software packages and services that already support a polling mechanism, but do not implement timers.

For example…

Event Monitoring

There are many types of monitoring software packages and free/commercial services that poll resources to see if they are healthy and alert you if there is a problem, but they have no way to alert you if an expected event does not occur. For example, you may want to ensure that a batch job runs every hour, or a message is posted to an SNS topic at least every 15 minutes.

The TimerCheck.io service can be the glue between the existing events you wish to monitor and your existing monitoring system. Here’s how it works:

1. Set timer - When your event runs, trigger a ping of TimerCheck.io to reset the timer. In the URL, specify the name of the timer and the number of seconds when your monitoring system should consider it a problem if no further event has run.

2. Poll timer - Add the TimerCheck.io polling URL for the same timer to your monitoring software, configuring it to alert you if the web request returns anything but success.

If your events keep resetting the timer before the timer expires, your monitoring system will stay happy and quiet, as the polling URL will always return success.

If the monitoring system polls the timer when no event has run in the specified number of seconds, then alarms sound, you will be woken up, and you can investigate why your batch job did not run on its expected schedule.

This is all possible using your existing monitoring system’s standard web check service, without any additional plugins or feature development.


TimerCheck.io has no registration, no authentication, and no authorization. If you don’t want somebody else resetting your timer accidentally or on purpose, you should pick a timer name that is unguessable even with brute force.

For example:

# A sensible timer name with some unguessable random bits
timer=https://timercheck.io/sample-timer-$(pwgen -s 22 1)
echo $timer

# (OR)
timer=https://timercheck.io/sample-timer-$(uuid -v4 -FSIV)
echo $timer

# Set the timer to 1 hour
seconds=3600
curl -s $timer/$seconds | jq .

# Check the timer
curl -s $timer | jq .

Cron Jobs

Say I have a cron job that runs once an hour. I don’t mind if it fails to complete successfully once, but if it fails to check in twice in a row, I want to be alerted.

This example will use a random number for the timer name. You should generate your own unique timer names (see previous section).

Here’s a sample crontab entry that runs my job, then resets the countdown timer using TimerCheck.io:

0 * * * * $HOME/bin/create-snapshots && curl -s https://timercheck.io/sample-cron-4/8100 >/dev/null

The timer is reset at the end of each job to 8100 seconds, which is two hours plus 15 minutes. The extra minutes give the hourly cron job some extra time to complete before we start sounding alarms.
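As a quick sanity check on that arithmetic:

```shell
# Two hours plus fifteen minutes, expressed in seconds.
seconds=$(( 2 * 3600 + 15 * 60 ))
echo "$seconds"   # 8100
```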

All that’s left is to add the monitor poll URL to my monitoring service:

https://timercheck.io/sample-cron-4

Though you can ignore the response content from the TimerCheck.io web service, here are samples of what it returns.

If the timer has not yet expired because your events are running on schedule and resetting the countdown, then the monitoring URL returns a 200 success code along with the current state of the timer. This includes things like when the timer set URL was last requested, and how many seconds remain before the timer goes into an error state.

{
  "timer": "YOURTIMERNAME",
  "request_id": "501abe10-2dad-11e5-80c1-35cdcb449e41",
  "status": "ok",
  "now": 1437265810,
  "start_time": 1437265767,
  "start_seconds": 60,
  "seconds_elapsed": 43,
  "seconds_remaining": 17,
  "message": "Timer still running"
}

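For scripting against these responses without jq, a field can be pulled out with plain POSIX tools. A rough sketch (the sed expression is mine, and it assumes the response arrives on a single line, as delivered over HTTP):

```shell
# Illustrative only: extract seconds_remaining from a sample response
# using sed rather than a real JSON parser.
response='{"status": "ok", "seconds_remaining": 17, "message": "Timer still running"}'
remaining=$(printf '%s' "$response" | sed 's/.*"seconds_remaining": *\(-*[0-9][0-9]*\).*/\1/')
echo "$remaining"
```

A negative value here (as in the countdown-update example below) means the timer had already expired when it was read.
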
If the timer has expired and no event has been run to reset it, then the monitor URL returns a 504 timeout error code and an error message. Once I figure out how to get the API Manager to return both an error code and some JSON content, I will expand this to include more details about when the timer expired.

{
  "errorMessage": "504: timer timed out"
}

When you call the event URL, passing in the number of seconds for resetting the timer, the API returns the previous state of the timer (as in the first example above) along with a note that it has set the new values.

{
  "timer": "YOURTIMERNAME",
  "request_id": "36a764b6-2dad-11e5-9318-f3b076dd2a3a",
  "status": "ok",
  "now": 1437265767,
  "start_time": 1437263674,
  "start_seconds": 60,
  "seconds_elapsed": 2093,
  "seconds_remaining": -2033,
  "message": "Timer countdown updated",
  "new_start_time": 1437265767,
  "new_start_seconds": 60
}

If this is the first time you have set the particular timer, the previous state keys will be missing.


Guarantees

There are none.

TimerCheck.io is a free public service intended, but not guaranteed, to be useful. It may return unexpected results. At any time and with no warning, it may become unavailable for short periods or forever.

Terms of Use

Don’t use TimerCheck.io in an abusive manner. If you are unsure if your use case might be considered abusive, ask.


I am not aware of any services that operate the same way as TimerCheck.io, with the ability to add dead man switch features to existing polling-based monitoring services, but here are a few services that are targeted specifically at event monitoring.

What do you use for monitoring and alerting? Are you using monitoring to make sure scheduled events are not missed?

Original article and comments: https://alestic.com/2015/07/timercheck-scheduled-events-monitoring/

Jonathan Riddell: I'm going to Akademy-ES

Planet Ubuntu - Mon, 07/20/2015 - 02:10

I'm going to Akademy-ES on Thursday to give a talk called “Plugfest Conferencia Protocolos”. It is a review of that conference from March and a short version of my talk “Linux desktop interoperability”.


Aaron Honeycutt: My first Ubuntu Hour!

Planet Ubuntu - Sun, 07/19/2015 - 12:22

Hello Internet! I’m very excited to say that the first Ubuntu Hour in South Florida went off very well! Here are some pictures for proof

Ubuntu Hour 7-18

We had 5 people (including myself) show up, have amazing donuts and coffee, and talk about Ubuntu. We also discussed how to handle future Ubuntu Hour events, concluding that we will hold them on the 3rd Saturday of the month, which means the next event is August 15; more details can be found here. We also talked about the upcoming Ubuntu Global Jam and what we want to do during it. Alan and I have thought of doing a bug triage, but I'd like to put a spin on that by doing it for LibreOffice QA rather than Ubuntu itself.

LibreOffice Bug List


Charles Profitt: Dell XPS 13 Developer Edition On Pause

Planet Ubuntu - Sun, 07/19/2015 - 09:06

I noticed a thread on Reddit today where people were speculating about why the Dell XPS 13 Developer Edition was no longer available on the store. There are some rather wild guesses, but I wanted to try and find out the real reason behind the change. I communicated with Barton George via Twitter and received the response below.

Barton George Response

It would appear that Dell is pausing production to incorporate some of the fixes listed in their knowledge base. No need to panic, folks.

Charles Profitt: Canonical’s Revised Intellectual Property Policy

Planet Ubuntu - Sat, 07/18/2015 - 13:48

I would like to add my own, personal, thoughts on the new IP policy released by Canonical on July 15th, 2015. The first thing I keep in mind is that Canonical is trying to balance the needs of a for-profit company with the ideals of free software. Given that Mark Shuttleworth is contributing large amounts of capital to Canonical and the Ubuntu Project, I do not question that this effort is anything less than genuine. I will concede that there is reason to ensure that license, copyright and trademark language would secure open source ideals if the ownership of Canonical ever changes hands.

What is Canonical trying to protect?

Ubuntu is a trusted open source platform. To maintain that trust we need to manage the use of Ubuntu and the components within it very carefully. This way, when people use Ubuntu, or anything bearing the Ubuntu brand, they can be assured that it will meet the standards they expect.

This passage was unmodified in the new release, but describes the core of what Canonical is trying to protect. They are trying to protect the Ubuntu brands (Ubuntu, Kubuntu, Edubuntu, Xubuntu, JuJu, Landscape).

What Canonical is not trying to do.

Canonical is not trying to change the licenses of any existing software it distributes. While I thought this was clear in the original policy, Canonical has modified the language to make it clearer. The original policy dealt with this under the Your use of copyright, patent and design materials and Your use of Ubuntu sections.

Your use of copyright, patent and design materials:

The disk, CD, installer and system images, together with Ubuntu packages and binary files, are in many cases copyright of Canonical (which copyright may be distinct from the copyright in the individual components therein) and can only be used in accordance with the copyright licences therein and this IPRights Policy.

My interpretation of this policy was, and still is, that Canonical claims copyright over the disk, installer, system images, Ubuntu packages and binary files. They make special note that this copyright may be distinct from the copyright in the individual components. Canonical specifies that the use of these must be in accordance with the copyright licences therein. In other words, binary blob A with a GPLv2 copyright must still be used in compliance with GPLv2. I took this to mean that no Canonical copyright would override or supersede the GPLv2 copyright.

Your Use of Ubuntu:

Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries. This does not affect your rights under any open source licence applicable to any of the components of Ubuntu.

I want to stress that this text is the same in both the new and original versions. It specifically talks about trademarks and only calls for recompiling the binaries if you want to distribute a modified version of Ubuntu that you do not want to associate with the trademark. My interpretation is that the recompile is necessary because the compiled binary contains the protected trademark. This policy specifically calls out that it does not affect rights under any open source license applicable to the components.

You can redistribute Ubuntu in its unmodified form, complete with the installer images and packages provided by Canonical (this includes the publication or launch of virtual machine images).

This language is the same in both the current and previous versions. It specifically addresses redistributing unmodified virtual machine images. This is the one section that I feel is a bit unclear: I am not sure what would happen if a company wanted to use an unmodified version of Ubuntu with a proprietary component added on top. The real-world example I have seen is the Aruba Wireless Airwave appliance, which runs on top of CentOS. Would running this on top of Ubuntu be allowed? To be fair, Aruba chose not to use Red Hat, most likely due to the restrictions Red Hat places on redistribution of its binaries.

To further clarify what this language was intended to mean Canonical has added the following:

A bullet point in the summary section that reads:

Ubuntu is an aggregate work; this policy does not modify or reduce rights granted under licences which apply to specific works in Ubuntu.

An entire section immediately following the summary.

Ubuntu is an aggregate work of many works, each covered by their own licence(s). For the purposes of determining what you can do with specific works in Ubuntu, this policy should be read together with the licence(s) of the relevant packages. For the avoidance of doubt, where any other licence grants rights, this policy does not modify or reduce those rights under those licences.

I think both of these sections clarify what Canonical’s intent is. It is apparent that the FSF agrees with this as well.

This update now makes Canonical’s policy unequivocally comply with the terms of the GNU General Public License (GPL) and other free software licenses.

However, I do not see this clarification addressing the one concern I noted above about virtual machine appliances or containers that use unmodified Ubuntu with proprietary bits added on top, as is the case with the Aruba Airwave appliance. I see the same concern being raised by Matthew Garrett.

The apparent aim here is to avoid situations where people take Ubuntu, modify it and continue to pass it off as Ubuntu. But it reaches far further than that. Cases where this may apply include (but are not limited to):

  • Anyone producing a device that runs an operating system based on Ubuntu, even if it’s entirely invisible to the user (eg, an embedded ARM device using Ubuntu as its base OS)
  • Anyone producing containers based on Ubuntu
  • Anyone producing cloud images (such as AMIs) based on Ubuntu

Garrett goes on to make a claim that, for me, is unclear. He could be correct with his interpretation, but I am not positive.

In each of these cases, a strict reading of the policy indicates that you are distributing a modified version of Ubuntu and therefore must either get it approved by Canonical or remove the trademarks and rebuild everything. The strange thing is that this doesn’t limit itself to rebuilding packages that include Canonical’s trademarks – there’s a requirement that you rebuild all binaries.

The IP Policy states:

Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries.

This does not specify all binaries, nor does it specify only the affected binaries. I see this language as unclear about which binaries need to be recompiled. I also agree with Garrett on the issue of confusion over what constitutes a trademark: does this include the word ‘ubuntu’ in version strings or in maintainers’ email addresses?

Frustrating Process

For many this has been a long, drawn-out and frustrating process, but I would like to direct your attention to some comments by Bradley M. Kuhn to keep this in perspective.

First of all, I think it’s important to note the timeline: it took two years of work by two charities to get this change done. The scary thing is that compared to their peers who have also violated the GPL, Canonical, Ltd. acted rather quickly. As Conservancy pointed out regarding the VMware lawsuit, it’s not uncommon for these negotiations to take even four years before we all give up and have to file a lawsuit. So, Canonical, Ltd. resolved the matter at least twice as fast as VMware, and they deserve some credit for that — even if other GPL violators have set the bar quite low.

It should be noted that not only did Canonical take less time to comply than VMware, but the VMware case is about VMware actually changing the license terms on code taken from the Linux kernel for use in their own kernel. From what I can see, in the case of Canonical it was about the wording and possible interpretations of the old policy. The only situation I am aware of that might rise to this level is Canonical requiring Mint to obtain a license. Since I am not privy to the details of that license, I do not know if it was related to the trademark or is similar to the VMware violation. Based on the details I do have access to, it is my belief that it was related to trademarks and was not an attempt to take GPL-licensed code and violate the terms of the GPL license.

Moving Forward

It should also be noted that the FSF statement on the negotiations observed that Canonical repeatedly expressed that their intention was to liberally allow use of their trademarks and patents by community projects.

Canonical, in our conversations, repeatedly expressed that it is their full intention to liberally allow use of their trademarks and patents by community projects, and not to interfere with the exercise of rights under any copyleft license covering works within Ubuntu.

I also agree with the FSF statement about the need for clarity, so that users know their rights in advance.

While we appreciate today’s development and do see it as a big step in that direction, we hope they will further revise the policy so that users, to the greatest extent possible, know their rights in advance rather than having to inquire about them or negotiate them.

The inclusion of the wording ‘greatest extent possible’ highlights what I perceive to be the difficulty of balancing the needs of a for-profit company with the ideals of free software. While Red Hat uses a subscription model that restricts access to binaries and updates, and that model has been found to be in compliance with the GPL, I am glad that Canonical is trying to find a different model to monetize its efforts. I appreciate being able to use the same distribution in production as I use at home, instead of having to use Fedora/CentOS versus Red Hat unless I want to pay to be a subscriber. It is also interesting to note that, despite the difference in models, Red Hat has been seen by some to violate the spirit of GPL licensing, and that some vendors moved their appliances from CentOS to Scientific Linux when Red Hat acquired CentOS.

While the topic is charged and can lead to heated debate all members of the Ubuntu Community should act with humanity towards others while discussing the policy and the changes. Progress is not made while belittling or berating others just because they do not agree with your position. While I normally do not moderate comments on my blog beyond removing spam I will do so on this post with regard to any comments which I find abrasive and rude.

As a member of the Ubuntu Community and the larger Free Software Movement I urge people to avoid using sensationalized language like landing punches or slap downs. Leave such phrases to websites that are looking to generate traffic. Bombast is not a basis for working collaboratively to improve the current wording.

I also want to stress, again, that this is my opinion and interpretation of both versions of Canonical’s policy. I am not a legal expert and my opinions should not be used as legal advice.

edit: Clarified that my interpretation of this policy was, and still is, that Canonical claims copyright over the disk, installer, system images, Ubuntu packages and binary files. I am not making the assertion that this is a legally valid claim.

Ubuntu Podcast from the UK LoCo: S08E19 – The Creeping Terror

Planet Ubuntu - Thu, 07/16/2015 - 12:18

It’s Episode Nineteen of Season Eight of the Ubuntu Podcast! Mark Johnson, Laura Cowen, and Martin Wimpress are together with guest presenter Joe Ressington and speaking to your brain.

In this week’s show:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Elizabeth K. Joseph: Ubuntu at the upcoming Community Leadership Summit

Planet Ubuntu - Thu, 07/16/2015 - 11:59

This weekend I have the opportunity to attend the Community Leadership Summit. While there, I’ll be able to take advantage of an opportunity that’s rare these days: meeting up with my fellow Ubuntu Community Council members Laura Czajkowski and Michael Hall, along with David Planella of the community team at Canonical. At the Community Council meeting today, I worked with David to narrow down a few topics that impact us, that we think would be of interest to other communities, and that we’ll propose for discussion at CLS:

  1. Declining participation
  2. Community cohesion
  3. Barriers related to [the perception of] company-driven control and development
  4. Lack of a new generation of leaders

Since CLS is an unconference, we’ll be submitting these ideas for discussion, and we’ll see how many of them gain the interest of enough people to prompt a session.


Since we’ll all be together, we also managed to arrange some time together on Monday afternoon and Tuesday to talk about how these challenges impact Ubuntu specifically and get to any of the topics mentioned above that weren’t selected for discussion at CLS itself. By the end of this in person gathering we hope to have some action items, or at least some solidified talking points and ideas to bring to the ubuntu-community-team mailing list. I’ll also be doing a follow-up blog post where I share some of my takeaways.

What I need from you:

If you’re attending CLS join us for the discussions! If you just happen to be in the area for OSCON in general, feel free to reach out to me (email: lyz@ubuntu.com) to have a chat while I’m in town. I fly home Wednesday afternoon.

If you can’t attend CLS but are interested in these discussions, chime in on the ubuntu-community-team thread or send a message to the Community Council at community-council at lists.ubuntu.com with your feedback and we’ll work to incorporate it into the sessions. You’re also welcome to contact me directly and I’ll pass things along (anonymously if you’d like, just let me know).

Finally, a reminder that this time together is not a panacea. These are complicated concerns in our community that will not be solved over a weekend and a few members of the Ubuntu Community Council won’t be able to solve them alone. Like many of you, I’m a volunteer who cares about the Ubuntu community and am doing my best to find the best way forward. Please keep this in mind as you bring concerns to us. We’re all on the same team here.

Canonical Design Team: The monochromatic makeover

Planet Ubuntu - Thu, 07/16/2015 - 05:35

We have given our monochromatic icons a small facelift to make them more elegant, lighter and consistent across the platform by incorporating our Suru language and font style.

The rationale behind the new designs is similar to that of our old guidelines: we have kept to our recurring font patterns but made them more streamlined and legible, with lighter strokes, negative space, and minimal solid shapes.

What we have changed:
  • Reduced and standardized the stroke width from 6 or 8 pixels to 4.
  • Fewer solid shapes and more outlines.
  • The curvature radius of rectangles and squares has been slightly reduced (e.g. the message icon) to make them less ‘clumsy’.
  • A few outlines are ‘broken’ (e.g. bookmark, slideshow, contact, copy, paste, delete) for more personality. This negative space can also represent a cast shadow.


Less solid shapes

Lighter strokes

Negative spaces

Font patterns
  • Oblique lines are slightly curved
  • Arcs are not perfectly rounded but rather curved
  • Uppercase letters use right or sharp angles
  • Vertical lines have oblique upper terminations
  • Nice soft curves


Benjamin Kerensa: What the Ubuntu IP Announcement means

Planet Ubuntu - Wed, 07/15/2015 - 21:01

The announcement by the FSF and Software Freedom Conservancy has a lot of jargon in it, so to help people better understand it I am going to do an analysis. Mind you, back in 2012 I reached out to the FSF about these very licensing concerns, which, combined with contacts from other developers, no doubt set these discussions in motion.


In July 2013, the FSF, after receiving numerous complaints from the free software community, brought serious problems with the policy to Canonical’s attention. Since then, on behalf of the FSF, the GNU Project, and a coalition of other concerned free software activists, we have engaged in many conversations with Canonical’s management and legal team proposing and analyzing significant revisions of the overall text. We have worked closely throughout this process with the Software Freedom Conservancy, who provides their expert analysis in a statement published today.


So this is about a year after I exchanged emails with Dr. Richard Stallman, not only about privacy issues that Canonical was trying to wave off but also about these licensing issues. We (myself and other Ubuntu Developers) had been hearing that other distros had essentially been bullied into signing contracts and licenses pursuant to Canonical’s IP Policy for Ubuntu at the time.


While the FSF acknowledges that the first update emerging from that process solves the most pressing issue with the policy — its interference with users’ rights under the GNU GPL and potentially other copyleft licenses covering individual works within Ubuntu — the policy remains problematic in ways that prevent us from endorsing it as a model for others. The FSF will continue to provide feedback to Canonical in the days ahead, and urge them to make additional changes.


In a nutshell, the FSF is making clear that, while some progress was made, the Ubuntu IP Policy is still not a good example of a policy that protects the freedoms you have, under their licenses, to use the code that Ubuntu bundles into the distro we use and love. This is concerning because Canonical has essentially made some concessions but put its foot down and not made as much change as it needs to.


Today’s “trump clause” makes clear that, for example, Canonical’s requirement that users recompile Ubuntu packages from source code before redistributing them is not intended to and does not override the GPL’s explicit permission for users to redistribute covered packages in binary form (with no recompilation requirement) as long as they also provide the corresponding source.


As an example, Canonical, through its legal team, was telling some distros, including Mint, that they needed a license to redistribute Ubuntu. But this is not true, because the underlying licenses already set out the rights individuals and groups have to redistribute the code.


While this change handles the situation for works covered by the GPL, it does not help works covered by lax permissive licenses (such as the X11 license) that do allow such additional restrictions. With that in mind, the FSF has urged Canonical to not only respect the GPL but to also change its terms to remove restrictions on any of the free works it distributes, no matter which license covers that software. In the meantime, this is a useful reminder that developers are nearly always better off choosing copyleft licenses like the GPL in order to prevent others from imposing arbitrary restrictions on users.


It is clear that the FSF, with its ally the Software Freedom Conservancy in tow, was only able to achieve some success on the GPL front. The FSF, being a good steward of the greater open source community, realizes this and notes that the policy still restricts freedoms that other licenses entitle you to. As such, the FSF is calling on Canonical to do more and do the right thing: not just make concessions, but follow all the licenses of the software it uses.


Further, the patent language in the current policy should be replaced with a real pledge to only make defensive use of patents and to not initiate litigation against other free software developers. The trademark policy should be revised to provide better guidance to downstream distributors so that they can be confident they know exactly where and when trademarks need to be removed in order to comply with the policy.


This is a very important bit because it protects open source developers. Ironically, the IP Policy contains a foolish statement like “Canonical has made a significant investment in the Open Invention Network, defending Linux, for the benefit of the open source ecosystem,” which is laughable because here the FSF and Software Freedom Conservancy are having to ask Canonical to respect the licenses of not only Linux but thousands of other pieces of open source software it claims it invests in defending.


Canonical, in our conversations, repeatedly expressed that it is their full intention to liberally allow use of their trademarks and patents by community projects, and not to interfere with the exercise of rights under any copyleft license covering works within Ubuntu. While we appreciate today’s development and do see it as a big step in that direction, we hope they will further revise the policy so that users, to the greatest extent possible, know their rights in advance rather than having to inquire about them or negotiate them. To this end, it will be important to choose language and terms that emphasize freedom over power and avoid terms like intellectual property, which spread bias and confusion.


This is perhaps the most important part, because the FSF is basically making it clear that the IP Policy still confuses some users, and that confusion may chill users into not exercising the freedoms they have to use software that is freely licensed. It is also concerning because the IP Policy, as it stands, violates the community values of the Ubuntu project.

In closing, Canonical should be thanked for making some concessions after so many years, but should also, by the same token, be encouraged to fix the document entirely, protect the rights and freedoms of users, and respect the licenses of the software Ubuntu ships. Additionally, this makes it clear that Jonathan Riddell, another Ubuntu community member who advocated time and time again on this matter and was shut down by the Ubuntu Community Council, really deserves at the very least a formal apology from the Ubuntu Community Council. When individuals’ ability to speak freely on important issues of advocacy is chilled in open source projects, it creates an unwelcoming environment. Jonathan Riddell is by no means the first person to be shut down by leaders in the community or by Canonical itself. Over the past few years there has been a trickle of departures because of people being silenced. In fact, Ubuntu contributor and LoCo participation is at an all-time low, as is participation in the Ubuntu Developer Summit, which can only be linked to these attacks on advocates over the years.

FSF Statement / SFC Statement / Jonathan Riddell Blog Post / Matthew Garrett’s Blog Post

Canonical has yet to release any statement in their press centre, and neither has the Ubuntu Community Council, which said it would wait until it learned the outcome of the FSF and SFC asking Canonical to adjust its infringing IP Policy.


Jonathan Riddell: Ubuntu Policy Complies With GPL But Fails To Address Other Important Software Freedom Issues

Planet Ubuntu - Wed, 07/15/2015 - 09:53

Today Canonical published an update to their IP Policy by adding in a “trump clause” paragraph saying that “where any other licence grants rights, this policy does not modify or reduce those rights under those licences”.

I’ve been going on about this IP Policy (which is Canonical’s but confusingly appears on the Ubuntu website) and how it is incompatible with Ubuntu policy for years.  I’ve been given a lot of grief for querying it, having been called a “fucking idiot”, “aggressive”, a “swearer of oaths” and “disingenuous, dishonest, untrustworthy and unappreciative”.  It really shows Ubuntu at its worst, and it is really amazing that such insults should come from the body which should be there to prevent them. And I’ve heard from numerous other people who have left the project over the years because of similar treatment.  So it’s nice to see both the FSF and the SFC put out statements today saying there were indeed problems, but sad to see they say there still are.

“Canonical, Ltd.’s original policy required that redistributors needed to recompile the source code to create [their] own binaries,” says the SFC, and “the FSF, after receiving numerous complaints from the free software community, brought serious problems with the policy to Canonical’s attention.”  Adding the trump clause makes any danger of outright violation go away.

But, as they both say, there are still dangers of it being non-free by restricting non-GPL code and using patents and trademarks.  The good news is that doesn’t happen: the Ubuntu policy forbids it, and there’s a team of crack archive admins to make sure everything in the archive can be freely shared, copied and modified.  But the worry still exists for people who trust corporate sayings over community policy.  It’s why the SFC still says “Therefore, Conservancy encourages Canonical, Ltd. to make the many changes and improvements to their policy recommended during the FSF-led negotiations with them” and the FSF says “we hope they will further revise the policy so that users, to the greatest extent possible, know their rights in advance rather than having to inquire about them or negotiate them”.  Well, we can but hope, but if it took two years and a lot of insults to get a simple clarifying paragraph added, and things like this happen: “After a few months working on this matter, Conservancy discovered that the FSF was also working on the issue” (did nobody think to tell them?), I don’t see much progress happening in future.

Meanwhile the Ubuntu Developer Membership Board wonders why nobody wants to become a developer any more and refuses to put two and two together.  I hope Ubuntu can re-find its community focus, but from today’s announcement all I can take is that the issues I spoke about were real concerns, even if no more than that, and they haven’t gone away.


Charles Butler: Continuous Integration with Juju and Drone CI

Planet Ubuntu - Wed, 07/15/2015 - 07:33

Delivering your charms to the community can seem like an uphill climb when you have minimal, manual testing around your project. The ~charmer review process is pretty rigorous and, as anyone who has run the gauntlet to attain ~recommended status can attest, we really stress the service before approval. One of the ways to have your review expedited is by including a full suite of tests that deploy, configure, and stress the workload being deployed.

A little known fact about Juju charms is that each charm is a software project, often involving many developers from different focus groups committing against different features. The fact that you can write a prototype charm in about 20 minutes is a great byproduct of our language freedom - but when it comes to delivering a consistently high quality experience, you really need to adopt a delivery pipeline that rigorously tests your charm quality through fast-running individual unit tests, and you should also be testing a full deployment scenario on the cloud.

I had a few requirements of my CI service: it had to be rapid to deploy, always configured for all my projects (i.e. version-controlled CI config, like Travis-CI), hosted on premise so I can take my toys with me without worrying about data ownership, and flexible enough for me to extend via an API.

Introducing Drone

Drone is a continuous integration server written in Go by a team of dedicated individuals centered around this open source project. It's lightweight, built on top of Docker to provide test isolation, and supports delivery of artifacts to several providers, as well as deployment options for modern PaaS providers and git-based delivery mechanisms.

It comes in two flavors: a hosted version over at http://drone.io, and a deployable on-premise edition, which is what this post covers.


In order to follow along, you will need to gather a few things.

  • Cloud credentials for your integration server (per-project credentials are preferable)
  • A Juju environment
  • A charm with Unit Tests and Amulet tests
Standing up the CI Service

I'll be keeping the versions of Drone published in lock step with all the tags present in the Github Repository.

juju deploy cs:~lazypower/drone

This charm will pull and configure the latest Docker daemon, install the Drone CI binaries, and expose the Drone CI service on port 80.

What you will be greeted with when loading the Drone application URL is a single button link to Documentation.

Configuring Authorization

Drone requires an Authorization Provider in order to 'activate' itself. Drone fully integrates with the API's as a consumer leveraging the service you login from.

Setting up GitHub Generate Client and Secret

You must register your application with GitHub in order to generate a Client and Secret. Navigate to your account settings and choose Applications from the menu, and click Register new application.

Please use /api/auth/github.com as the Authorization callback URL path.

Once you have your application configured in GitHub, set these API credentials on the charm

juju set drone github_client=XXX github_secret=XXX github_enabled=true

Config Helper

The charm ships with a script to assist in configuring jobs; it is best run locally. Note that the script is beta, has very little error checking, and may or may not work given the input you feed it - please use it with caution.

git clone https://github.com/chuckbutler/drone-ci-charm drone
cd drone/scripts
./config -e {{environment}} -r {{repository https clone url}} -c {{charm name}}

You will receive output that is copy/pasteable into both the Drone CI repository configuration and the .drone.yml to be embedded in the git repository.

Run the config helper

Populate the project secrets

Populate .drone.yml in your repository

As an example, I'll paste in some pseudo-configuration. This is not 1:1, and I gave a fuller breakdown of this job's format in the explainer video.

To investigate further on your own, consult the format doc for the .drone.yml in the upstream docs

image: jujusolutions/charmbox
env:
  - JUJU_TEST_CHARM='{{charm}}'
  - JUJU_REPOSITORY='/var/cache/drone/src/{{provider}}/{{username}}/'
git:
  path: {{provider}}
script:
  - sudo apt-get update
  - juju init
  - echo $ENVYAML | base64 --decode > ~/.juju/environments.yaml
  - juju switch $CIENV
  - mkdir ../trusty
  - cd .. && mv {{repo}} trusty/{{charm}} && cd trusty/{{charm}}
  - bundletester -F -l DEBUG -v

Profit!

Simply make a commit and enjoy the results of using the cloud to run your project's integration suites against the configured cloud provider.

Juju + Drone sitting in a tree :)


This is still very much beta quality in terms of exploring what we can do here. For example: how do you whitelist contributors so that the full integration suite only runs for certain PRs or the master branch, with only unit tests running otherwise?

Matrix builds against multiple cloud providers (pending v0.4, which will enable this)

If you've got any feedback, questions, comments about this - I'm happy to talk with you about leveraging this setup for your own projects. You can find me in #system-zoo on irc.freenode.net, as well as #juju.

All issues found should be filed against the project on GitHub.

Deploy Happy!

Unity Team: What’s new in OTA-5

Planet Ubuntu - Wed, 07/15/2015 - 06:06


With OTA-5 being around the corner, I thought I'd give a glimpse of what to expect from it. We've been very busy adding some key features this cycle that will improve the user experience on the phone and also bring us closer to the overall goal: a converged device.

The first thing you will notice after upgrading to OTA-5 is that the complete shell will start rotating when you rotate your device. Finally the bottom edge of apps will be usable in landscape, and rotating the device won't mess up your gestures any more. The right edge spread will now always be at the right edge, and the launcher always on the left.

I’d like to take the opportunity to remind app developers to test their apps with the new rotation feature. There are some that don’t look as good as they could in landscape. Please make sure to fix them up. If you can’t do that for whatever reason, you can still lock them to some orientation by adding this entry to your desktop file:


The possible values are “portrait”, “landscape” and “primary”. Please note that portrait really does mean portrait: if your app runs on, let's say, the Nexus 7, which is a native-landscape device, it will rotate to portrait when launching your app. To simply restrict your app from rotating at all, you might want to use “primary” instead, which instructs Unity to stay locked to the device's native orientation.
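For reference, the desktop file entry would look something like the following. (The X-Ubuntu-Supported-Orientations key name is my assumption based on the Ubuntu Touch desktop file conventions of the time; check the current developer documentation for the exact key.)

```ini
[Desktop Entry]
Name=My App
Exec=my-app
# Lock the app to the device's native orientation:
X-Ubuntu-Supported-Orientations=primary
```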

Shell rotation, without a doubt, has been the biggest thing we landed into Unity during this cycle. Obviously, apart from that there was the usual amount of fixes and smaller updates behind the scenes. I’ll leave it to you to go hunting them, except for one, which I personally have been waiting on for quite a while and now it’s there. You can now edit reviews in the app store! So if you’ve left a review for an app that used to be bad but now it’s good, this is the time to go and update your review!

Finally there’s another cool feature that landed and many of you probably won’t notice it for a while, if ever on the current phone. On our way towards convergence we’ve added a sense for input devices to Unity. That means, Unity will now keep track of all the attached input devices like mice and keyboards. This results in a pretty cool feature which in its whole is not ready for prime time yet, but will offer you a preview into the converged world: Try connecting a Bluetooth mouse to your phone and see what happens!

Currently there's not much real use you can get out of this: no cursor is painted yet, and parts of the user interface do not work properly with mouse input (as opposed to touch input) yet. However, as I said, this should give you an impression of our current and future roadmap.

Enjoy OTA-5!

Colin King: stress-ng adds more features

Planet Ubuntu - Wed, 07/15/2015 - 05:30
Since I last wrote about perf being added to stress-ng at the end of May, I have been busy in my spare time adding more features to it.

New stressors include:
  • ptrace - traces a child process performing many simple system calls
  • sigsuspend - sends SIGUSR1 signals to multiple children waiting on sigsuspend(2)
  • sigpending - checks if SIGUSR1 signals are pending on a process that alternately masks and unmasks this signal
  • mmapfork - rapidly spawn multiple child processes that try to allocate a chunk of free memory (while trying to avoid swapping). Each process then uses madvise(2) to give hints before and after the memory is memset, and then the child dies.
  • quota - exercise various quotactl(2) Q_GET* commands
  • sockpair - client/server socket I/O using socket pair and random sized I/O
  • getrandom - exercise the new getrandom(2) system call
  • numa - migrates a memory-mapped buffer and processes around NUMA nodes, exercising migrate_pages(2), mbind(2) and move_pages(2).
  • wcs - exercises libc wide character string functions (thanks to Christian Ehrhardt for this contribution).
 ..and I have added some improvements too:
  • --yaml option to dump stats from --metrics, --perf, --tz into a YAML structured log.
  • made the --aggressive option more aggressive by forcing more CPU migrations and context switches.
I have also added a thermal zone stats gathering option --tz to see how warm the machine is getting when running a test.  For example:
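An invocation along these lines will report the thermal zone stats at the end of a run (a sketch of typical usage, not the author's exact command):

```shell
# Run 4 CPU stressor processes for 60 seconds and report thermal zone temperatures
stress-ng --cpu 4 --tz -t 60
```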

In the resulting report, x86_pkg_temp is the CPU package temperature and the acpitz entries are the two ACPI thermal zones on my desktop.

Stress-ng is being used to stress test various kernels across a range of Ubuntu devices, such as phones, desktops and servers. Thrashing a system with hundreds of processes and a lot of low-memory pressure is just one method of checking that the kernel and daemons can handle a mix of demanding workloads.

stress-ng 0.04.12 is now available in Ubuntu Wily.   See the stress-ng project page for more details.

Eric Hammond: Simple New Web Service: Testers Requested

Planet Ubuntu - Tue, 07/14/2015 - 21:54

Interested in adding scheduled job monitoring (dead man’s switch) to the existing monitoring and alerting framework you are already using (Nagios, Sensu, Zenoss, Zabbix, Monit, Pingdom, Montastic, Ruxit, and the like)?

Last month I wrote about how I use Cronitor.io to monitor scheduled events with an example using an SNS Topic and AWS Lambda.

This week I spent a few hours building a simple web service that enables any polling-based monitoring software or service to automatically support alerting when a target event has not occurred within a desired timeframe.

The new web service is built on infrastructure technologies that are reliably maintained and scaled by Amazon:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • CloudFront
  • Route53
  • CloudWatch

The source code is about a page long and the web service API is as trivial as it gets; but the functionality it adds to monitoring services is quite powerful and hugely scalable.

Integration requires these simple steps:

Step 1: There is no step one! There is no registration, no setup, and no configuration of the new web service for your use.

Step 2: Hit one URL when your target event occurs.

Step 3: Tell your existing monitoring system to poll another URL and to alert you when it fails.
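To make the two-step integration concrete, here is a tiny in-memory Python sketch of the dead man's switch behaviour those steps describe. The class and method names are my own illustration, not the actual web service API: "set" stands in for the URL you hit when the event occurs, and "check" for the URL your monitoring system polls.

```python
import time

class TimerCheck:
    """In-memory sketch of a dead man's switch: a timer is (re)set each
    time the target event occurs, and a poll fails once it has expired."""

    def __init__(self):
        self.expiry = {}  # timer id -> absolute expiry timestamp

    def set(self, timer_id, seconds):
        # Step 2: hit this when your target event occurs.
        self.expiry[timer_id] = time.time() + seconds

    def check(self, timer_id):
        # Step 3: your monitoring system polls this; OK while time remains.
        return "OK" if time.time() < self.expiry.get(timer_id, 0) else "ALARM"

tc = TimerCheck()
tc.set("nightly-backup", 0.2)      # the event happened; expect the next within 0.2 s
print(tc.check("nightly-backup"))  # → OK
time.sleep(0.3)                    # the event failed to recur in time
print(tc.check("nightly-backup"))  # → ALARM
```

The real service simply replaces the dictionary with durable shared state behind two URLs, which is why no registration or setup is needed.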


I’m still working on the blog post to introduce the web service, but would love to have some folks test it out this week and give feedback.

If you are interested, drop me an email and mention:

  • The monitoring/alerting frameworks you currently use

  • The type of scheduled activities you would like to monitor (cron job, SNS topic, Lambda function, web page view, email receipt, …)

  • The frequency of the target events (every 10 seconds, every 10 years, …)

Even if you don’t want to do testing this week, I’d love to hear your answers to the above three points, through email or in the comments below.

Original article and comments: https://alestic.com/2015/07/timercheck-testers-requested/

Sebastian Kügler: thoughts on being merciful binary gods

Planet Ubuntu - Tue, 07/14/2015 - 17:28

“Since when has the world of computer software design been about what people want? This is a simple question of evolution. The day is quickly coming when every knee will bow down to a silicon fist, and you will all beg your binary gods for mercy.” Bill Gates

For the sake of the users, let’s assume Bill was either wrong or (||) sarcastic.

Let’s say that we want to deliver Freedom and privacy to the users and that we want to be more effective at that. We plan to do that through quality software products and communication — that’s how we reach new users and keep them loving our software.

We can’t get away with half-assed software that more or less always shows clear signs of “in progress”, we need to think our software through from a users point of view and then build the software accordingly. We need to present our work at eye-level with commercial software vendors, it needs to be clear that we’re producing software fully reliable on a professional level. Our planning, implementation, quality and deployment processes need to be geared towards this same goal.

We need processes that allow us to deliver fixes to users within days, if not hours. Currently, in most end-user scenarios, it often takes months, and perhaps even a dist-upgrade, to get a fix for a functional problem with our software.

The fun of all this lies in a more rewarding experience of making successful software, and learning to work together across the whole stack (including communication) to work together on this goal.

So, with these objectives in mind, where do we go from here? The answer is of course that we’re already underway, not at a very fast speed, but many of us have good understanding of many of the above structural goals and found solutions that work well.

Take tighter and more complete quality control, being at the heart of the implementation, as an example. We have adopted better review processes, more unit testing, more real-world testing and better feedback cycles with the community, especially the KDE Frameworks and Plasma stacks are well maintained and stabilized at high speeds. We can clearly say that the Frameworks idea worked very well technically but also from an organizational point of view, we have spread the maintainership over many more shoulders, and have been able to vastly simplify the deployment model (away from x.y.z releases). This works out because we test especially the Frameworks automatically and rather thoroughly through our CI systems. Within one year of Frameworks 5, our core software layer has settled into a nice pace of stable incremental development.

On the user interaction side, over the past years our interaction designers have been joined by visual artists. This is clearly visible when comparing Plasma 4 to Plasma 5. We have had help from a very active group of visual designers for about one and a half years now, but have also adopted stricter visual guidelines in our development process and forward-thinking UI and user interaction design. These improvements in our processes have not just popped up; they are the result of a cultural shift towards opening KDE also to non-coding contributors, and creating an atmosphere where designers feel welcome and where they can work productively in tandem with developers on a common goal. Again, this shows in many big and small usability, workflow and consistency improvements all over our software.

To strengthen the above processes and plug the missing holes in the big picture to make great products, we have to ask ourselves the right questions and then come up with solutions. Many of them will not be rocket science; some may take a lot of effort by many. This should not hold us back, as a commonly shared direction and goal is needed anyway, regardless of the ability to move. We need to be more flexible, and we need to be able to move swiftly on different fronts. Long-standing communities such as KDE can sometimes feel like they have the momentum of an ocean liner, which may be comfortable but takes ages to change course, while they really should have the velocity, speed and navigational capabilities of a Zodiac.

By design, Free Culture communities such as ours can operate more efficiently (through sharing and common ownership) than commercial players (who are restricted, but also boosted by market demands), so in principle, we should be able to offer competitive solutions promoting Freedom and privacy.

Our users need merciful binary source code gods and deserve top-notch silicon fists.

Benjamin Kerensa: 10 things I want Firefox OS to do for me

Planet Ubuntu - Tue, 07/14/2015 - 16:00

I’ve dogfooded Firefox OS since its early beginnings and have some of the early hardware (hamachi, unagi, One Touch Fire, ZTE Open, Geeksphone Keon, Flame and ZTE Open C). It was good to hear some of the plans for Firefox OS 2.5 that were discussed at Whistler, but I wanted to take the model of this post and remix it for Firefox OS. Firefox OS, you are great and free, but you are not perfect - and you can be the mobile OS that I need.

#1  Voice Control

Just like Apple has Siri and Google has Ok Google, Firefox OS too needs a voice command system that will let me search the web, send a text, open apps, navigate to places. Not only is this good for a smartphone, but when I buy a TV running Firefox OS, voice commands will be very useful.

#2 Notifications

Let’s face it: notifications on Firefox OS are not a world-class experience. Most of the big apps (Facebook, Twitter, etc.) do not integrate with Firefox OS, so when someone messages you or tags you in a photo, you won’t know unless you open the app. There is a bug to fix this in the Facebook app, but the developer left Facebook so it was abandoned. There was never any progress on this for Twitter. For Firefox OS to be sustainable and see good adoption, people will need notifications - this is not negotiable.

#3 LTE

While Firefox OS has never shipped in the U.S., plenty of Firefox OS developers do live here, and so do a good portion of Mozilla developers. LTE needs to be supported in the stack, and it also needs to be a requirement for reference devices going forward in the Foxfooding program.

#4  App Ecosystem

There is much talk about how Mozilla is going to invest big in Firefox OS, and that is great and very exciting, but one of the biggest things Mozilla could invest in to increase adoption is expanding the app ecosystem. Without apps, a platform fails; this is obvious. Right now, even Ubuntu Phone is ahead of Firefox OS in the app ecosystem race. If Mozilla has to pay companies to port their apps to Firefox OS, that would be a good investment, because random low-quality apps are not going to fill the gap.

#5 U2F (Universal 2nd Factor)

I believe the FIDO Alliance’s U2F is the future of strong authentication on the desktop and mobile, so it would be nice to see support for it.

#6 Local / Contextual Results

Firefox OS needs a foot in the game of producing local results, since it has no equivalent of Google Now or a Yelp app. I need something to help me find local businesses, places and ratings. This should be a smart feature that uses my actual location.

#7 Weather

We need a WeatherUnderground App or something really slick that delivers the most accurate weather forecasting available.

#8 Transit

We need a transit app - not a bunch of local ones - that can use my location and tell me the available transit options, like when trains and buses arrive. The data is out there and most of it is open, so let’s build this into the OS, or maybe Mozilla should make an app for it.

#9 Better OEM Update Expectations

The updates offered by OEM partners have mostly been deplorable, with many devices left behind on old versions, leaving users with bugs and stability issues. Mozilla should set the bar high and take OEMs out of the updates equation, much like Ubuntu has done with its mobile OS. OEMs cannot be trusted to deliver regular OS updates, and when they don’t, the platform’s reputation is blamed, not the OEM.

#10 Uber or Lyft

Firefox OS will need an Uber or Lyft app to get any kind of non-niche foothold in more westernized countries. I don’t really care which, as both would work. Uber already allows booking through their website, so perhaps a little nudge could get them to package that into an app.

This summarizes ten things I would love to see happen for Firefox OS; not all are hard requirements for me, so consider this a wish list. Do you have a wish list of 10 things you want in Firefox OS? If so, I encourage you to blog about it and dream big!

Xubuntu: Xubuntu at Colegio Hispano Americano

Planet Ubuntu - Tue, 07/14/2015 - 12:35

The Xubuntu team hears stories about how it is used in organizations all over the world. In this “Xubuntu at..” series of interviews, we seek to interview organizations who wish to share their stories. If your organization is using Xubuntu and you want to share what you’re doing with us please contact Elizabeth K. Joseph at lyz@ubuntu.com to discuss details about your organization.

As we’ve covered in other articles in this series, Xubuntu is being used worldwide to serve a variety of communities, with users of all ages. In this interview we spoke with Jose M. Torres Ortiz about how he uses Xubuntu where he teaches, at Colegio Hispano Americano in Puerto Rico.

Can you tell us a bit about your role at the Private School Colegio Hispano Americano and the students/community you serve?

My role at Colegio Hispano Americano is teacher, but I am also in charge of the computer lab. This private school has Pre-K to 12th grade students.

What influenced your decision to use Open Source Software at the school?

With open source software I can freely use it and share it with anyone, so that was a big part of my decision. Now my students can have the same software without being charged for it, and they can practice at home. There is also a lot of great educational software for the learning process.

What made you select Xubuntu specifically for your lab?

I had been an Ubuntu user before. My brother has been an Ubuntu user for many years! He taught me the benefits of this OS. But now I needed an OS that was lighter, faster and user-friendly for the teachers and students - something easier to adapt to, coming from Windows. Also, because Xubuntu is an Ubuntu flavor, it has all the administration, education and office software that I need at this time, for example: LibreOffice, Tux Paint, GCompris, TuxMath, Epoptes, etc.

Can you tell us a bit about your Xubuntu setup – customizations made, how installs are done, etc.

I installed Xubuntu on 20 used Intel Core 2 Duo computers with 2GB RAM. I used a router and 3 switches to connect them to the network and to my main computer, from which I monitor and administer each one using the Epoptes administration software. With it I can see, control, or command each student's computer to open any application. (Epoptes is in the Ubuntu Software Center.) We painted our classroom blue and white (the Xubuntu colors), and we drew a Linux penguin logo on the wall! We still need to draw the mouse from the Xubuntu logo. It's a Xubuntu lab!
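For anyone wanting to replicate this lab, the Epoptes setup boils down to a couple of commands (a sketch based on the epoptes and epoptes-client packages in the Ubuntu archive; adjust the user and network details to your own setup):

```shell
# On the teacher's (master) machine:
sudo apt-get install epoptes
sudo gpasswd -a $USER epoptes      # allow this user to run the Epoptes console

# On each student machine:
sudo apt-get install epoptes-client
sudo epoptes-client -c             # fetch the teacher server's certificate
```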

Is there anything else you wish to share with us about your organization or how you use Xubuntu?

Colegio Hispano Americano is a private school that uses the Cueto Method to teach children.

The school is scientifically arranged for the child to learn the basic skills of reading and writing in a reasoned manner, by composition. By using this method, lawyer Jose M. Cueto noticed that discouraged, sullen, self-conscious children suddenly began to respond spontaneously, smiled, lost their shyness, and wanted to show what they knew. By being actively involved in their own learning, children ultimately learn to reason and deduce.

Furthermore, we use Python Whiteboard, available in the software repositories. We use it to connect a Nintendo Wiimote to the PC via Bluetooth. This has allowed us to create a digital whiteboard without buying one.

Materials needed:

  • Xubuntu on a Bluetooth capable computer
  • Python Whiteboard software to setup the Wiimote via Bluetooth (package python-whiteboard)
  • Open Sankoré as the whiteboard software
  • Infrared pen (buy one for $20.00 or less, or make one yourself)
  • Nintendo Wiimote
  • Projector
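A rough sketch of the software side of that list, assuming the Ubuntu repositories of the period: `python-whiteboard` is the packaged Wiimote tool named above, while Open-Sankoré was distributed as a .deb from its own project site. As before, the commands are echoed rather than executed so the sketch runs safely.

```shell
#!/bin/sh
# Software side of the Wiimote whiteboard, shown as echoed steps.
#
#   sudo apt-get install python-whiteboard      # pairs and calibrates the Wiimote
#   # Open-Sankore: download the .deb from the project site, then:
#   #   sudo dpkg -i open-sankore_*.deb && sudo apt-get install -f
steps="apt-get install python-whiteboard
dpkg -i open-sankore_*.deb"
echo "$steps"
```

The hardware (Wiimote, infrared pen, projector) then works with any Bluetooth-capable machine running this stack.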

Last year I used the above setup with Xubuntu in my class. It is excellent for the learning process; on the whiteboard we can write, use pictures, play games, and more. The following is a video of it in use: https://www.youtube.com/watch?v=N1XzGk43njc (in Spanish).

If anyone from Puerto Rico wants to know more about the Cueto Method or how to build a low cost Xubuntu computer lab, feel free to contact us by phone: 1-787-884-0276 (school phone number) or me at 1-787-201-1750. I am also available via email at torres_jose10 at hotmail dot com.

Ubuntu Kernel Team: Kernel Team Meeting Minutes – July 14, 2015

Planet Ubuntu - Tue, 07/14/2015 - 10:14
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20150714 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt

Status: Wily Development Kernel

We have rebased the master-next branch of our Wily kernel to 4.1 and are
working through fixing up kernel test failures as well as failing DKMS
packages with this newer kernel. We continue to track the 4.2 kernel in
our unstable repo.
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs July 30 – Alpha 2 (~2 weeks away)
    Thurs Aug 6 – 14.04.3 (~3 weeks away)
    Thurs Aug 20 – Feature Freeze (~5 weeks away)
    Thurs Aug 27 – Beta 1 (~6 weeks away)

Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html

Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html


    cycle: 04-Jul through 25-Jul
    03-Jul Last day for kernel commits for this cycle
    05-Jul – 11-Jul Kernel prep week.
    12-Jul – 25-Jul Bug verification; Regression testing; Release
    ** NOTE: This cycle produces the kernel that will be in the 14.04.3
    point release.

Open Discussion or Questions? Raise your hand to be recognized

No open discussions.


Subscribe to Ubuntu Arizona LoCo Team aggregator