Feed aggregator

Ronnie Tucker: Ubuntu Touch Music App Is Proof That Total Ubuntu Convergence Is Getting Closer – Gallery

Planet Ubuntu - Mon, 11/17/2014 - 00:17

While other platforms like Windows or iOS are still working towards their convergence goal, Canonical is already there and the developers now have applications that work both on the mobile and on the desktop platform without any major modifications. One such example is the Ubuntu Touch Music App, which looks and feels native on both operating systems.

For now, Canonical is working on Ubuntu for phones and Ubuntu for desktop as separate projects. Eventually, however, the two will be folded into one, probably within a couple of years. Until then, the biggest change we’re seeing due to this convergence policy is the fact that applications for Ubuntu Touch don’t really have a problem running on the desktop.

The Ubuntu Touch Music App 2.0 is the same as the one you can find on the mobile platform, but there are some perks if you run it on the desktop. Users can resize it and work much more easily with the playlist, which is a nice thing to have. In any case, it only runs on Ubuntu 14.10 (Utopic Unicorn), so that’s the only way to test it.



Submitted by: Silviu Stahie

Benjamin Kerensa: LoCo stands for Local Community

Planet Ubuntu - Sun, 11/16/2014 - 22:05

The other day there was a trivial blog post that came across Planet Ubuntu which proclaimed that a certain LoCo in the Ubuntu Community was no longer going to use the LoCo term because they felt it was offensive in Spanish.

I want to point out, if there is any confusion around what LoCo means, that LoCo means Local Community and is not a Spanish word. There is no Ubuntu ENTERLOCALEHERE Loco or loco, but only Ubuntu ENTERLOCALEHERE LoCo. If you somehow missed the meaning of this abbreviation, you now know that LoCo is a positive abbreviation and one that has been used by our Local Communities since the inception of the Local Community Program.

That being said, I would encourage people not to get so hung up on words, because despite what you may think, Users, Distros, Linux for Human Beings, and Debian are all excellent words to use, and the old Ubuntu Community (the roots of where this project came from) still means a lot to people.

Let’s not forget why Ubuntu exists and its roots!

Paul Tagliamonte: BOS -> DC

Planet Ubuntu - Sun, 11/16/2014 - 19:06

Hello, World

Been a while since my last blog post - things have been a bit hectic lately, and I’ve not really had the time.

Now that things have settled down a bit — I’m in DC! I’ve moved down south to join the rest of my colleagues at Sunlight to head up our State & Local team.

Leaving behind the brilliant Free Software community in Boston won’t be easy, but I’m hoping to find a similar community here in DC.

Stuart Langridge: Ubuntu Component Store, redux

Planet Ubuntu - Sun, 11/16/2014 - 08:44

A while back I proposed an “Ubuntu Component Store” and built a noddy implementation of the command line utility for it. Recently, Nekhelesh Ramanathan revived the idea and did a bunch of work to implement it, and we had a discussion session at the Ubuntu Online Summit about it. This was most interesting. In it, I believe we concluded two things:

Thing 1: It is a good idea to have a place where developers can publish components for Ubuntu apps and an easy way for other developers to get and use those components.

We can argue in detail about exactly what “an easy way” entails, and there is no more bikesheddable project in existence, but I think we all agreed that the basic idea is this: there is a ucs command line utility, roughly like pip or npm or apt; you can type ucs install SomeComponentName, and that component will be downloaded and installed into the project in your current directory (not into some central repository; this is project-specific). This will be great; there are obviously a bunch of other implied commands here, such as ucs list, ucs search, and ucs update to update components to the latest upstream versions. Everyone agrees on this part.
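As a sketch of what that command set might look like, here is a hypothetical skeleton of the ucs tool using Python’s argparse. Only the subcommand names (install, list, search, update) come from the discussion above; every other detail (argument names, help strings, the bare-install behaviour) is invented for illustration.

```python
# Hypothetical skeleton of the proposed "ucs" command line tool.
# Only the subcommand names come from the proposal; the rest is a sketch.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(
        prog="ucs",
        description="Ubuntu Component Store client (proposal sketch)")
    sub = parser.add_subparsers(dest="command", required=True)

    install = sub.add_parser(
        "install", help="fetch a component into the current project")
    install.add_argument(
        "component", nargs="?",
        help="component name; omit to install everything listed "
             "in ubuntu_component_store.json")

    sub.add_parser("list", help="list installed components")

    search = sub.add_parser("search", help="search the store")
    search.add_argument("query")

    sub.add_parser(
        "update", help="update components to the latest upstream versions")
    return parser


# Example: simulate "ucs install SomeComponentName"
args = build_parser().parse_args(["install", "SomeComponentName"])
print(args.command, args.component)
```

This is only a dispatcher; the actual download-and-install logic is the part the session left open.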

Thing 2: Some people think that a component store should be composed of basically-complete excellently-written well-documented components which are essentially candidates for direct inclusion in the core SDK. Some people think that a component store should be composed of whatever anyone wants to add, and developers can use or not use components as they choose.

Specifically, Nekhelesh is in camp 1 (only good components, like the Ubuntu SDK) and I’m in camp 2 (anyone can publish anything and market forces sort things out, like PyPI or NPM). Both of these camps have some pros and cons, which I will attempt to lay out below without bias; I am, as noted, in camp 2, but I’ll try to be objective here, and please excuse me if I misrepresent camp 1.

First, a brief description of how each of these approaches would work, which will help make sense of the benefits and deficits list.

Nekhelesh proposes that components are part of one single Launchpad branch; to add a component to the store, propose a merge to that source tree with your component source, tests, and documentation. This will be manually reviewed and if suitable, included; a developer can also apply for membership of the component store team, who own the branch and can make changes to it and accept merges. The component is then available from that Launchpad branch. Documentation from the branch is then published to a central location as a manual, similar to the core Ubuntu SDK documentation.

I propose that a UCS web API server is set up which accepts submissions of component metadata (name, version, remote URL, and similar), and that server provides this info via an API. Components themselves are not stored on the UCS server; one puts a component wherever one likes (github, launchpad, one’s own website) and just tells the UCS server about the component and where it is. Providing good documentation and tests is obviously encouraged and will make your component more attractive than alternatives, but is not required; developers can choose to not publish such things, and other developers can choose to use or not use an ill-documented component as a result.

Curated Component Store: benefits
  • Developers can be sure that every component in the store is decent.
  • Components will have good documentation and be well-integrated.
  • If a component maintainer disappears, others can take over the component.
  • The store is a good “breeding ground” for new entrants to the SDK; indeed, the ultimate destiny of any component in the curated store is to be elevated into the core SDK eventually.
Curated Component Store: deficits
  • Developers publishing components are required to have a Launchpad account.
  • Manual review of proposed changes is slow and annoying.
  • A component author will have their component manually reviewed for every change, unless they join the component store team.
  • Any member of the component store team (i.e., any component author who does not want to be blocked by manual review) can edit any component, meaning that they must be trusted.
Community Component Store: benefits
  • Anyone can publish a component instantly without bureaucracy getting in the way.
  • Publishing updates to a component is also instant.
  • This is how everyone else manages library packages etc.
Community Component Store: deficits
  • Developers have no guarantee that components in the store are high-quality or have documentation or tests.
  • A server has to be built and run.

So, there are upsides and downsides to both proposals.

I think there’s a progression here. There are community components, which might become curated components, which themselves might become core components. That is: it’s perfectly reasonable to have both approaches. Here’s my updated proposal.

We have those three levels of components: Core, Curated, and Community.

Core components are those that are already in the Ubuntu SDK: you can use them automatically in every Ubuntu SDK application.

Curated components are in the Launchpad branch, exactly as Nekhelesh proposes. To add a curated component, you propose a merge to the LP branch, and it’s manually reviewed; your component needs to have good test coverage, good documentation which matches the style and format of the other curated components, and provide a stable API. Curated components have a component name: BottomEdgeMenu, or WelcomeWizard. They are installed with ucs, as ucs install BottomEdgeMenu.

Community components are hosted wherever you want, and you tell the UCS Community Server about the URL where they can be obtained. They do not require manual review and approval, and they do not require great test coverage or documentation, although obviously you should provide those things because it makes your component better! Community components have an owner and a name: sil/GenericPodcastWidget. They are installed with ucs, as ucs install sil/ColourSlider, and are published by uploading them somewhere and then using ucs submit username/ComponentName.

One can tell the difference between a curated component and a community component because community components have the developer’s name right there in the component name; it shows you that you’re using something from a particular developer, and so if a component name has a slash in it, it’s a community component. Components without a developer name are from the curated store, and the curated store team stand behind them and guarantee their quality.

All components, both curated and community, contain an ubuntu_component_store.json file which lists metadata about the component: its name, licence, description, version number, dependencies, and so on.
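As an illustration, such a metadata file might look like the following; the field names come from the list above, while every value (and the exact JSON layout, which the proposal does not fix) is invented.

```python
# Writing and reading a hypothetical ubuntu_component_store.json.
# Field names are from the proposal; the values and layout are invented.
import json

metadata = {
    "name": "BottomEdgeMenu",
    "licence": "MIT",
    "description": "A reusable bottom-edge menu component",
    "version": "1.2.0",
    "dependencies": [],
}

# Write the component's metadata file:
with open("ubuntu_component_store.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Read it back, as the ucs tool or the community server would:
with open("ubuntu_component_store.json") as f:
    loaded = json.load(f)
print(loaded["name"], loaded["version"])
```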

How UCS works: technical

The UCS system, under this proposal, knows about two “repositories” of components: the curated store, and the community store. Saying ucs install ComponentName goes to the curated store and downloads and installs the component; ucs install developer/ComponentName goes to the community store. It will be useful to have two separate “fetch” back ends here, one which can resolve a curated ComponentName to a download URL (essentially by saying downloadurl = CURATED_LP_BRANCH_URL + COMPONENT_NAME), and one which can resolve a community ComponentName to a download URL (essentially by requesting http://communityserver/api/get/developer/ComponentName and reading the download URL out of the returned JSON).
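A minimal sketch of those two back ends, with the HTTP round trip stubbed out; the base URLs and the canned response below are placeholders, not real services.

```python
# Sketch of the two "fetch" back ends described above. The curated resolver
# derives a download URL from the component name alone; the community
# resolver asks the community server's /api/get endpoint and reads the
# download URL out of the returned JSON. The HTTP request is stubbed here.
import json

CURATED_LP_BRANCH_URL = "http://example.org/ucs-curated/"  # placeholder


def resolve_curated(component_name):
    """Curated components: downloadurl = CURATED_LP_BRANCH_URL + COMPONENT_NAME."""
    return CURATED_LP_BRANCH_URL + component_name


def resolve_community(developer, component_name, fetch):
    """Community components: query the server API, then read the URL from JSON."""
    api_url = "http://communityserver/api/get/%s/%s" % (developer, component_name)
    return json.loads(fetch(api_url))["download_url"]


# Stub standing in for the HTTP request, for illustration only:
def fake_fetch(url):
    return json.dumps(
        {"download_url": "http://example.org/ColourSlider-1.0.tar.gz"})


print(resolve_curated("BottomEdgeMenu"))
print(resolve_community("sil", "ColourSlider", fake_fetch))
```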

ucs install some-component should write to ubuntu_component_store.json in the project folder, updating it to contain the name and version number of any installed component. It is up to developers whether to commit the downloaded component code to their source tree or not; if they do not, components can be installed at the correct version number with ucs install with no parameters, which reads ubuntu_component_store.json and installs those things mentioned therein.
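The bookkeeping just described can be sketched as two small helpers. The manifest layout used here (a {"components": {name: version}} mapping inside ubuntu_component_store.json) is an assumption; the proposal does not fix an exact schema.

```python
# Sketch of the project-level bookkeeping: "ucs install <name>" records the
# component and version in the manifest, and a bare "ucs install" reads the
# manifest back to know what to fetch. The schema here is an assumption.
import json
import os

MANIFEST = "ubuntu_component_store.json"


def record_install(name, version):
    """Add or update one component entry in the project manifest."""
    manifest = {"components": {}}
    if os.path.exists(MANIFEST):
        with open(MANIFEST) as f:
            manifest = json.load(f)
    manifest.setdefault("components", {})[name] = version
    with open(MANIFEST, "w") as f:
        json.dump(manifest, f, indent=2)


def components_to_install():
    """What a bare 'ucs install' would read back and then fetch."""
    with open(MANIFEST) as f:
        return json.load(f).get("components", {})


record_install("BottomEdgeMenu", "1.2.0")
record_install("sil/ColourSlider", "0.3")
print(components_to_install())
```

Whether the downloaded component code itself is committed to the source tree remains the developer’s choice, as noted above.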

The community server needs to provide a collection of API endpoints that the ucs tool can talk to:

/api/add
  input: { download_url: URL to download the whole component, metadata_url: URL to download ubuntu_component_store.json }; requires auth
  output: { success: bool }
  Adds a component to the store. The server will fetch ubuntu_component_store.json and parse it to index the metadata. A future enhancement is allowing "special" URLs such as "lp:~user/project/branch" or "github:username/project" when the component is hosted on a well-known service. Adding a component which already exists requires you to be adding a higher version of it, and for it to be your component.

/api/get/developerName/componentName
  input: none
  output: JSON metadata for the component (not yet specified)
  Returns metadata for the most recent version of that specific component, including a download URL; ucs install will fetch this API entry and then use the contained download URL to fetch the component itself.

/api/get/list
  input: none
  output: JSON list of (some) metadata for all components
  Returns a list of all components in the community store.

/api/register
  input: { username: string, password: string }
  output: { success: bool }
  Registers an account. (Note: this is not definite; account registration may be done with email addresses, auth may be OAuth 2, etc. Needs discussion.)

/api/get/search?queryfields
  input: none
  output: JSON list of (some) metadata for all components matching the search
  For searching. How searching works, which fields are searchable, etc., is not yet specified, and until there are many components, /api/get/list may be enough.
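As an illustration only, the add/get rules above (a re-added component must come from the same owner and carry a higher version) can be modelled in a few lines of Python. Auth, URL fetching, and persistence are all omitted, and comparing version strings lexically is a deliberate simplification.

```python
# In-memory model of the /api/add and /api/get semantics described above.
# Ownership is modelled by keying the store on (developer, component name);
# real auth, metadata fetching and persistence are omitted.
store = {}  # (developer, component_name) -> metadata dict


def api_add(developer, metadata):
    """Add a component; re-adding requires a higher version from the owner."""
    key = (developer, metadata["name"])
    if key in store:
        # Naive lexical version comparison, a simplification for the sketch.
        if metadata["version"] <= store[key]["version"]:
            return {"success": False, "error": "must add a higher version"}
    store[key] = metadata
    return {"success": True}


def api_get(developer, component_name):
    """Return metadata (including download_url) or None if unknown."""
    entry = store.get((developer, component_name))
    return dict(entry) if entry is not None else None


api_add("sil", {"name": "ColourSlider", "version": "1.0",
                "download_url": "http://example.org/ColourSlider-1.0.tar.gz"})
print(api_get("sil", "ColourSlider")["download_url"])
```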

There is a trivial example of a server (which is not complete) at lp:~sil/+junk/ucs-server; it is in Python and Django, mostly because we expect that many components will be hosted on Launchpad, interacting with Launchpad code hosting is easiest with bzr, and Python has bzrlib. Finding hosting for this should not be too difficult; at least initially traffic will not be high and so one of the existing Django hosting services should be able to easily cope for minimal or zero outlay.

Randall Ross: We Are Not Loco: Ubuntu Vancouver Loco's Last Day

Planet Ubuntu - Sat, 11/15/2014 - 12:53

Greetings Ubuntu Vancouver, and friends from around the world. It is with no regret that I make this announcement today, an announcement that has been in the works for months, and on many Ubuntu Vancouverites' minds for much longer.

Today will be the last day of the Ubuntu Vancouver Loco.

November 15th seems a fitting day to pull the trigger on this. It's election day in Vancouver, which means new possibilities and hopefully a brand new mandate for our fantastic Mayor Gregor. You see, Mayor Gregor has challenged the status quo in Vancouver, and continues to do so every single day. With his vision, Vancouver is on its way to become the "World's Greenest City" and is steadfastly committed to ending homelessness once and for all in our city. You would think that all people would like those goals. You'd be wrong.

It's also the day after Jono Bacon's fantastic post on why Ubuntu governance needs a reboot. Jono is an eloquent writer, and it's really an amazing read. I can tell that it is very heart-felt. His thesis: The Ubuntu governing bodies (Community Council and Loco Council) are out-of-step with what Ubuntu is today. He offers a "reboot" proposition as a means to help reinvigorate the community. You'd be tempted to think that all Ubuntu people would like that goal. You'd be wrong.

It's been five and a half years since "Ubuntu Vancouver Loco" hit the scene. What started a year earlier as a small handful of Ubuntu enthusiasts (humans really) that loved to get together to celebrate Ubuntu grew into a proverbial tour de force. I am still amazed at what we have done. When I look back at all the blood, sweat, and tears and the sacrifices that that I and the other core members of the group have made to get us where we are today, I am truly amazed. And, I am thankful that such a lovely group of people exists in this world. My friends. My Vancouver.

But, before we toss the thing that was called "Ubuntu Vancouver Loco" into the Georgia Strait at English Bay (what a fitting location for a ceremony!), let's recap our history:

    Ubuntu Vancouver Loco
  • Founded: March 18, 2009
  • Members: 541 ubuntuvancouver-ites
  • Events: 145 (or 2 events/month, on average)
  • We have never been "Approved" (whatever that means), and have never sought or wanted to be.

You would think that we got to this size and activity level by following the path (rules) set for us by Ubuntu's governance bodies and with their assistance. You'd be wrong.

We got this way by chasing our own dream: To make everyone in this city aware of Ubuntu, to create the largest group of Ubuntu enthusiasts in the world, and to make Ubuntu and Vancouver synonymous. We got this way by choosing our own path. And ever so occasionally, we reached out gently to our friends at Canonical, and guess what? They helped. So much for the conspiracy.

In the past several years, I've been thinking *hard* about ways to spread Ubuntu in our city. Eliminating the problems that are introduced by legacy terminology seems an easy thing to fix.

  • Loco has a bad connotation in Spanish. (Yes, words do set perceptions.)
  • Using the term Loco carries with it a bureaucracy burden.
  • The inclusion of the word Loco confuses people (outside the group).

None of these are helpful.

So, on this day of November 15th, 2014, I hereby announce, with the support of our members, that the "Ubuntu Vancouver Loco" is no longer.

May you rest in peace. Vancouver, we are *NOT* loco.

It's time for a change.

Charles Profitt: Ubuntu Community Health

Planet Ubuntu - Sat, 11/15/2014 - 10:01

Recently Jono Bacon, Senior Director of Community at the XPRIZE Foundation, talked about an Ubuntu Governance reboot. In his blog post he questioned the “purpose and effectiveness” of the governance structure; specifically the Community Council and Technical Board.

Ubuntu governance has, as a general rule, been fairly reactive. In other words, items are added to a governance meeting by members of the community and the boards sit, review the topic, discuss it, and in some cases vote. In this regard I consider this method of governance not really leadership, but instead idea, policy, and conflict arbitration.

Let us look at the word governance:

1. government; exercise of authority; control.
2. a method or system of government or management.

What Jono described fits the definition. The Ubuntu Governance structure is exercising authority and control, and trying to manage a community. Jono notices that ‘leadership’ is missing, but by definition that is not part of governance.

What saddens me is that when I see some of these meetings, much of the discussion seems to focus on paperwork and administrivia, and many of the same topics pop up over and over again. With no offense meant at the members of these boards, these meetings are neither inspirational and rarely challenge the status quo of the community. In fact, from my experience, challenging the status quo with some of these boards has invariably been met with reluctance to explore, experiment, and try new ideas, and to instead continue to enforce and protect existing procedures. Sadly, the result of this is more bureaucracy than I feel comfortable with.

I can understand what Jono is saying in this quote, as I have experienced putting forth ideas that I thought would provide transformational change leading to a better community. Oddly, Jono was one of the people who resisted the idea and showed a reluctance to ‘explore, experiment and try new ideas’. My purpose here is not to challenge Jono’s observations, but to point out that with the presentation of any ‘great idea’ there are two perspectives. If you believe the idea is a poor one and will not help the community, you are not being reluctant, but prudent.

As a person who has both challenged the Ubuntu Governance structure and been a member of two councils, I can tell you that my perspective changed once I was sitting on a council. The vast majority of ‘disputes’ I helped resolve involved two parties that had not come to a fundamental agreement that there was a problem to be fixed. Every potential change was painfully examined to ensure that it had a high chance of improving the community and a low chance of causing damage. Often, effects were explored that no one had envisioned when the change was first proposed. As a member of the Community Council I am much more cautious, because I know the decisions that I help to make can have unintended consequences. I feel it is my duty to consider things carefully and not ‘leap to conclusions’.

I also appreciate the cultural differences involved: many people from Europe do not truly appreciate how large Texas is or how spread out Alaska is, and on the flip side, not many Americans understand some of the regional issues in Europe, such as the independence votes in Catalonia or Scotland. These differences, and others, make it challenging for the people who sit on Ubuntu Governance boards. A suggested change that solves a problem for one group may create more problems for another.

I believe we need to transform and empower these governance boards to be inspirational vessels that our wider community look to for guidance and leadership, not for paper-shuffling and administrivia.

I agree with Jono that there is a need for leadership and inspiration. I felt a malaise slip over a large portion of the existing Ubuntu community when Canonical’s focus shifted to the Ubuntu phone. I think a significant portion of the community feels at odds with Canonical’s direction, as evidenced by some of the recent tension with Kubuntu and the discussions about copyright and trademark.

I think part of the issue is that the community has primarily looked to Canonical employees (Jono and Mark) for inspiration and leadership. Another issue is that the current Ubuntu Governance depends on Canonical to provide answers to a great many questions. For example, Mark promised that Canonical was going to publish clarifications on trademark, copyright, and patent agreements. In June the Community Council was asked for an update and sent a quick message to Canonical asking for one; we received confirmation that Canonical was working on it. Each month since, the Community Council has reached out to the same contact, and the only information we have is that they are working on it and have no estimate as to completion. It is difficult to provide leadership or to inspire when there is no way to get better information than ‘trust us, we are working on it’. This particular issue has great importance to the community, and while I understand that the current Community Council does not have the legal background to craft an official statement, I do think it is reasonable that we should be able to see the work in progress and be involved in crafting the clarifications.

We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success.

This statement raises a few questions for me:

  • Do the Community Council and Technical Board require change, or should there be a different structure for leadership and inspiration?
  • Is the current environment, produced by the relationship between Canonical and the community, conducive to fostering inspirational leaders?
  • Are there issues with the way Ubuntu events are taking place that inhibit or discourage the community?
  • Does the press cover the community or just Canonical?

Change in Structure:
Governance is not leadership. I do not think the need for governance and arbitration will go away, so one should consider whether a single group should lead, inspire, and judge. As an example, think of government structures where there is a separation of powers (executive, judiciary, legislative). I do not have an answer, but I think it should be considered and discussed.

Inspirational Leadership:
Do Ubuntu community members have the ability to make inspirational statements that exert leadership? When Mark announces something ‘big and exciting’ it has often been planned and worked on over an extended period of time. The current community leadership is often finding out about these announcements at the same time as the rest of the community. The community is also focused on items that are less glamorous, but no less important, like documentation and end user support. (Let us not get hung up on the use of the word user; OK Randall?)

Ubuntu Events:
UDS used to take a great deal of planning and effort when it was both physical and virtual. Now that it is virtual it seems to be less organized and people have less time to plan for the event. Most members of the community would benefit from having more time to plan for involvement with vUDS. Events like the Ubuntu Global Jam need to be designed to be more beneficial and more accessible to local teams. LoCo teams that are comprised of people with school work, jobs and families need time to secure a venue, advertise the event and ensure they have the necessary support to hold a quality event.

Examples of Press Coverage:

Headline: Canonical Drops Ubuntu 14.10 Dedicated Images for Apple Hardware
Body: The Ubuntu devs marked this interesting evolution in the official announcement for Ubuntu 14.10, but it went largely unnoticed.

Was the community involved in this decision? Was there technical leadership from the community involved? I do not know the answer to that question, but this does illustrate how press coverage can impact how people perceive things.

Moving Forward:
The first step is to agree there is an issue, and then, once there is agreement, work towards a solution. You cannot jump to a solution without agreeing on the issue first. If you would like to help lead change in the Ubuntu Community, please add your thoughts to the ongoing discussion on the Ubuntu Community Team email list. Let us all focus on positive outcomes, and on actions over words.

Jo Shields: mono-project.com Linux packages – an update

Planet Ubuntu - Sat, 11/15/2014 - 09:21

It’s been pointed out to me that many people aren’t aware of the current status of Linux packages on mono-project.com, so here’s a summary:

Stable packages

Mono 3.10.0, MonoDevelop, NuGet 2.8.1 and F# packages are available. Plus related bits. MonoDevelop on Linux does not currently include the F# addin (there are a lot of pieces to get in place for this to work).

These are built for x86-64 CentOS 7, and should be compatible with RHEL 7, openSUSE 12.3, and derivatives. I haven’t set up a SUSE 1-click install file yet, but I’ll do it next week if someone reminds me.

They are also built for Debian 7 – on i386, x86-64, and IBM zSeries processors. The same packages ought to work on Ubuntu 12.04 and above, and any derivatives of Debian or Ubuntu. Due to ABI changes, you need to add a second compatibility extension repository for Ubuntu 12.04 or 12.10 to get anything to work, and a different compatibility extension repository for Debian derivatives with Apache 2.4 if you want the mod-mono ASP.NET Apache module (Debian 8+, Ubuntu 13.10+, and derivatives, will need this).

MonoDevelop 5.5 on Ubuntu 14.04

In general, see the install guide to get these going.


You may have seen Microsoft recently posting a guide to using ASP.NET 5 on Docker. Close inspection would show that this Docker image is based on our shiny new Xamarin Mono Docker image, which is based on Debian 7. The full details are on Docker Hub, but the short version is “docker pull mono:latest” gets you an image with the very latest Mono.

directhex@desire:~$ docker pull mono:latest
Pulling repository mono
9da8fc8d2ff5: Download complete
511136ea3c5a: Download complete
f10807909bc5: Download complete
f6fab3b798be: Download complete
3c43ebb7883b: Download complete
7a1f8e485667: Download complete
a342319da8ea: Download complete
3774d7ea06a6: Download complete
directhex@desire:~$ docker run -i -t mono:latest mono --version
Mono JIT compiler version 3.10.0 (tarball Wed Nov 5 12:50:04 UTC 2014)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
	TLS:           __thread
	SIGSEGV:       altstack
	Notifications: epoll
	Architecture:  amd64
	Disabled:      none
	Misc:          softdebug
	LLVM:          supported, not enabled.
	GC:            sgen

The Dockerfiles are on GitHub.

Ronnie Tucker: Canonical Drops Ubuntu 14.10 Dedicated Images for Apple Hardware

Planet Ubuntu - Sat, 11/15/2014 - 00:16

Ubuntu 14.10 (Utopic Unicorn) has been available for a couple of weeks and the reception has been positive for the most part, but there is one small piece of interesting information that went largely unannounced. It looks like the Ubuntu devs no longer need to build specific images for Apple hardware.

Many Ubuntu users will remember that, until the launch of Ubuntu 14.10, there was an image of the OS available labeled amd64+mac, which was technically aimed at Apple hardware.

The Ubuntu devs marked this interesting evolution in the official announcement for Ubuntu 14.10, but it went largely unnoticed.



Submitted by: Silviu Stahie

Benjamin Kerensa: Ubuntu Governance: Empower It

Planet Ubuntu - Fri, 11/14/2014 - 21:25

I was really saddened to see Jono Bacon’s post today because it really seems like he still doesn’t get the Ubuntu Community that he managed for years. In fact, the things he is talking about are problems that the Community Council and Governance Boards really have no influence over because Canonical and Mark Shuttleworth limit the Community’s ability to participate in those kind of issues.

As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.

We need for Canonical to start caring about Community again and investing in things like a physical Ubuntu Developer Summit for contributors to come together and have a really valuable event where they can do work and build relationships that really cannot be built over Google Hangout or IRC alone.

We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone.

If this is what we need, then Canonical and Mark need to make it so Community Members and Ubuntu Governance have some real say in the project. Sure, right now the Governance Boards can give advice to Canonical or Mark but it should be more than advice. There should be a scenario where the Contributors and Governance are stakeholders.

I will add that one Ubuntu Community Council member’s remark to Jono on IRC about his blog post really made the most sense:

the board have no power to be inspirational and forging new directions, Canonical does

I really like that this council member spoke up on this and I agree with that assessment of things.

I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing; it is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter. We need to change that charter, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success. This, I believe, will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.

Honestly, if this is the way Jono felt, then I think he should have been going to bat for the Community and Ubuntu Governance when he was Community Manager. Right now the Community and Governance cannot be inspirational leaders, because Canonical controls the future of Ubuntu, and the Community Council, Governance Boards, and Ubuntu Members have very little say in the direction of the project.

I encourage folks to go read Jono’s post and share your thoughts with him, but also read the comments on his blog post from current and former members of Ubuntu’s Governance and contributors to Ubuntu. In closing, I would also like to applaud the work of the current and former Community Councils and Governance Boards: you all do great work!

Randall Ross: On Building Intentional Culture, With Words - A Small Refinement

Planet Ubuntu - Fri, 11/14/2014 - 18:17

Earlier, I wrote about how words shape our thoughts, and our culture. (You can read the original post here: http://randall.executiv.es/words-build-culture)

In that post, I introduced a graphic that I have since needed to revise. After further thought, I realized that there are not only words that originated in the "Dark Ages of Computing" but also ones that are rooted in the "Good Old Days" of Ubuntu. Those days of yore when the project was smaller, simpler, and less diverse.

Here it is:

Words from the "Good Old Days of Ubuntu" are also worthy of a firewall. Those words (or phrases) have either lost their original meaning, have become irrelevant, or have been subverted over time. In some cases they were just bad choices in the first place. So, let's leave them in the past too.

Here are some examples:

  • loco
  • ubuntah
  • linux for humans
  • distro
  • newbies

Do you have suggestions for others? I'm happy to add to the list.

Svetlana Belkin: UOS 14.11

Planet Ubuntu - Fri, 11/14/2014 - 12:26

The Ubuntu Online Summit (UOS) 14.11 took place during November 12 – 14, 2014. I didn’t go to as many sessions as I did for the last one, which made it less tiring, and I also had classes that I had to attend. The only session that I really focused on was the one that I led myself, the Ubuntu Women Vivid Goals session. I posted the summary HERE.

The track summaries video:

I have learned two lessons during this one:

  • Test hardware before the first session if you want to be in a Hangout.  My netbook wasn’t ready for doing Hangouts.
  • If no one can do the Hangout or host it, doing the session in IRC only is allowed.

Randall Ross: On Building Intentional Culture, With Words

Planet Ubuntu - Fri, 11/14/2014 - 12:20

Our languages and the words they contain define us.

You don't have to believe me. You can go and convince yourself first. Here's an excerpt:

New cognitive research suggests that language profoundly influences the way people see the world...

  • Russian speakers, who have more words for light and dark blues, are better able to visually discriminate shades of blue.
  • Some indigenous tribes say north, south, east and west, rather than left and right, and as a consequence have great spatial orientation.
  • The Piraha, whose language eschews number words in favor of terms like few and many, are not able to keep track of exact quantities.
  • In one study, Spanish and Japanese speakers couldn't remember the agents of accidental events as adeptly as English speakers could. Why? In Spanish and Japanese, the agent of causality is dropped: "The vase broke itself," rather than "John broke the vase."

So where are you going with this, Randall?...

I blogged about my strong distaste for the term "user" a few days back, and it generated a lively discussion (see the comments). It also triggered some further thinking and I have now realized that my initial post was just the tip of a large iceberg. Please allow me to describe what lurks beneath the water line.

We're building something new with Ubuntu. We're building a participatory culture adjacent to a place (the computer industry) that has been the antithesis of participatory. Think parched desert: a place where inclusiveness is forbidden. If that industry were to include all humans, it would break their business model. You see, the old model requires that more than 90% of humans be "obedient subjects" and "consumers". I call this the "Dark Ages of Computing".

Remember Mark's question and answer session this week at the Ubuntu Online Summit (UOS)? He opened with and emphasized these points:

  • We are a project for human beings, and that's a strong part of our ethos.
  • Ubuntu benefits our communities.
  • People care about helping humanity get over its challenges and griefs.

That's exactly what I admire about Ubuntu, and about Mark.

Yet, as we try to build this new world some of us are bringing elements of a language that forbids, or at least inhibits the realization of a dream. Words leak in.

So you might be asking, "What's to be done?" Here is my proposal:

The above diagram is meant to represent a flow (or transition) from the old to the new. See that block in the middle? That's a wall, a firewall to be precise. Imagine the language (words) from the "Dark Ages of Computing" (the cloud on the left) trying to get to the world we are trying to build, with Ubuntu (the cloud on the right). Think of the wall as the thing that keeps the language of the past firmly in the past: words that at best are no longer useful, and at worst actively harmful. Think of that wall as one that can help you select words that help build Ubuntu.

So, what words are part of the language of the past? Here is my initial list:

  • user
  • consumer
  • permission
  • unapproved
  • linux (in certain contexts)

(Don't worry, I have many, many more... I'll share them soon. I may even pick on a few of them.)

As you talk about or write about Ubuntu, I hope that you will always remember my drawing. Are the words that you are using today helping or hurting the world that Ubuntu is trying to build?

Did you come from the cloud on the left? Don't feel bad. Many of us did.

But please, for the love of humanity, it's time to leave that world and those words behind. We are not there any more. Let's let words from the dark ages remain there.

Ted Gould: Tracking Usage

Planet Ubuntu - Fri, 11/14/2014 - 11:35

One of the long standing goals of Unity has been to provide an application focused presentation of the desktop. Under X11 this proves tricky as anyone can connect into X and doesn't necessarily have to give information on what applications they're associated with. So we wrote BAMF, which does a pretty good job of matching windows to applications, but it could never be perfect because there simply wasn't enough information available. When we started to rethink the world assuming a non-X11 display server we knew there was one thing we really wanted, to never ever have something like BAMF again.

This meant designing, from startup to shutdown, complete tracking of an application before it started creating windows in the display server. We were then able to use the same mechanisms to create a consistent and secure environment for the applications. This is good for both developers and users, as applications start in a predictable way each and every time they're started. We also set up the per-application AppArmor confinement that the application lives in.

Enough backstory. What's really important to this blog post is that we also get a reliable event when an application starts and stops. So I wrote a little tool that takes those events out of the log and presents them as usage data. It is cleverly called:

$ ubuntu-app-usage

And it presents a list of all the applications that you've used on the system along with how long you've used them. How long do you spend messing around on the web? Now you know. You're welcome.

It's not perfect in that it covers all the time that you've used the device; it'd be nice to query just the last week or the last year as well, perhaps even as a percentage of time. I might add those little things in the future; if you're interested, you can beat me to it.
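For illustration, the per-app totals and the percentage idea could be computed along these lines (hypothetical data and function names, not the tool's actual implementation; the real tool reads start/stop events from the system log):

```python
from collections import defaultdict

def usage_percentages(events):
    """Aggregate (app, start, stop) events into total seconds and
    percentage of overall tracked time per application."""
    totals = defaultdict(float)
    for app, start, stop in events:
        totals[app] += stop - start
    grand = sum(totals.values())
    return {app: (secs, 100.0 * secs / grand) for app, secs in totals.items()}

# Hypothetical start/stop timestamps, in seconds
events = [
    ("webbrowser-app", 0, 600),
    ("music-app", 600, 900),
    ("webbrowser-app", 900, 1000),
]
print(usage_percentages(events))
```

Restricting the query to the last week or year would then just mean filtering `events` by timestamp before aggregating.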

Svetlana Belkin: Tip: Inviting People on Google Hangouts

Planet Ubuntu - Fri, 11/14/2014 - 11:22

There are three main ways to invite people into a Google Hangout or Google Hangout on Air: an e-mail invite, an invite via link, or an invite from within the Hangout. I will be talking about the third one. Some people only have a phone or a tablet, and in my experience the other two ways don’t really work for them. But the third way works!

It’s easy to do when you are in a Hangout, and it can be done by anyone in it, not just the host. There is an “add person” button on the same bar where you can (un)mute the mic, turn the cam on or off, etc., as in the screenshot below:


Jono Bacon: Ubuntu Governance: Reboot?

Planet Ubuntu - Fri, 11/14/2014 - 11:16

For many years Ubuntu has had a comprehensive governance structure. At the top of the tree are the Community Council (community policy) and the Technical Board (technical policy).

Below those boards are sub-councils such as the IRC, Forum, and LoCo councils, and developer assessment boards.

The vast majority of these boards are populated by predominantly non-Canonical folks. I think this is a true testament to the openness and accessibility of governance in Ubuntu. There are no “Canonical needs to have people on half the board” shenanigans: if you are a good leader in the Ubuntu community and work hard, you can be on these boards.

So, no-one is denying the openness of these boards, and I don’t question the intentions or focus of the people who join and operate them. They are good people who act in the best interests of Ubuntu.

What I do question is the purpose and effectiveness of these boards.

Let me explain.

From my experience, the charter and role of these boards has remained largely unchanged. The Community Council, for example, is largely doing much of the same work it did back in 2006, albeit with some responsibility delegated elsewhere.

Over the years, though, Ubuntu has changed, not just in terms of the product but also the community. Ubuntu is no longer just platform contributors: there are app and charm developers, a delicate balance between Canonical and community strategic direction, and a different market and world in which we operate.

Ubuntu governance has, as a general rule, been fairly reactive. In other words, items are added to a governance meeting by members of the community and the boards sit, review the topic, discuss it, and in some cases vote. In this regard I consider this method of governance not really leadership, but instead idea, policy, and conflict arbitration.

What saddens me is that when I see some of these meetings, much of the discussion seems to focus on paperwork and administrivia, and many of the same topics pop up over and over again. With no offense meant at the members of these boards, these meetings are rarely inspirational and rarely challenge the status quo of the community. In fact, from my experience, challenging the status quo with some of these boards has invariably been met with reluctance to explore, experiment, and try new ideas, and an insistence on continuing to enforce and protect existing procedures. Sadly, the result of this is more bureaucracy than I feel comfortable with.

Ubuntu is at a critical point in its history. Just look at the opportunity: we have a convergent platform that will run across phones, tablets, desktops and elsewhere, with a powerful SDK, secure application isolation, and an incredible developer community forming. We have a stunning cloud orchestration platform that spans all the major clouds, making the ability to spin up large or small scale services a cinch. In every part of this the code is open and accessible, with a strong focus on quality.

This is fucking awesome.

The opportunity is stunning, not just for Ubuntu but also for technology freedom.

Just think of how many millions of people can be empowered with this work. Kids can educate themselves, businesses can prosper, communities can form, all on a strong, accessible base of open technology.

Ubuntu is innovating on multiple fronts, and we have one of the greatest communities in the world at the core. The passion and motivation in the community is there, but it is untapped.

Our inspirational leader has typically been Mark Shuttleworth, but he is busy flying around the world working hard to move the needle forward. He doesn’t always have the time to inspire our community on a regular basis, and it is sorely missing.

As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.

I believe we need to transform and empower these governance boards to be inspirational vessels that our wider community look to for guidance and leadership, not for paper-shuffling and administrivia.

We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone. This will make our community stronger, more empowered, and have that important dose of inspiration that is so critical to focus our family on the most important reasons why we do this: to build a world of technology freedom across the client and the cloud, underlined by a passionate community.

To achieve this will require awkward and uncomfortable change. It will require a discussion to happen to modify the charter and purpose of these boards. It will mean that some people on the current boards will not be the right people for the new charter.

I do though think this is important and responsible work for the Ubuntu community to be successful: if we don’t do this, I worry that the community will slowly degrade from lack of inspiration and empowerment, and our wider mission and opportunity will be harmed.

I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing; this is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter.

We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success.

This, I believe, will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.

I have kicked off a discussion on ubuntu-community-team where we can discuss this. Please share your thoughts and solutions there!

Eric Hammond: AWS Lambda Walkthrough Command Line Companion

Planet Ubuntu - Fri, 11/14/2014 - 11:15

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -OHappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that will be used by the Lambda function when it runs:

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {"Service": "lambda.amazonaws.com"},
          "Action": "sts:AssumeRole"
      }]
  }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Define what the Lambda function is allowed to do and access. This is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": ["logs:*"],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::'$source_bucket'/*"
      }, {
          "Effect": "Allow",
          "Action": ["s3:PutObject"],
          "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{
  "Records": [{
    "eventVersion": "2.0",
    "eventSource": "aws:s3",
    "awsRegion": "us-east-1",
    "eventTime": "1970-01-01T00:00:00.000Z",
    "eventName": "ObjectCreated:Put",
    "userIdentity": {"principalId": "AIDAJDPLRKLG7UEXAMPLE"},
    "requestParameters": {"sourceIPAddress": ""},
    "responseElements": {
      "x-amz-request-id": "C3D13FE58DE4C810",
      "x-amz-id-2": "FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
    },
    "s3": {
      "s3SchemaVersion": "1.0",
      "configurationId": "testConfigRule",
      "bucket": {
        "name": "$source_bucket",
        "ownerIdentity": {"principalId": "A3NL1KOZZKExample"},
        "arn": "arn:aws:s3:::$source_bucket"
      },
      "object": {
        "key": "HappyFace.jpg",
        "size": 1024,
        "eTag": "d41d8cd98f00b204e9800998ecf8427e",
        "versionId": "096fKKXTRTtl3on89fVO.nfljtsv6qko"
      }
    }
  }]
}
EOM
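The Lambda function pulls the source bucket name and object key out of this event structure. A minimal sketch of that extraction, written in Python rather than the walkthrough's JavaScript, using the field layout shown above:

```python
def extract_source(event):
    """Return (bucket, key) from the first record of an S3 ObjectCreated event."""
    record = event["Records"][0]
    return record["s3"]["bucket"]["name"], record["s3"]["object"]["key"]

# Trimmed-down version of the fake event data above
event = {
    "Records": [{
        "eventName": "ObjectCreated:Put",
        "s3": {
            "bucket": {"name": "alestic-lambda-example"},
            "object": {"key": "HappyFace.jpg"},
        },
    }]
}
print(extract_source(event))  # → ('alestic-lambda-example', 'HappyFace.jpg')
```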

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"

for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

Create the IAM role that may be assumed by S3:

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {"Service": "s3.amazonaws.com"},
          "Action": "sts:AssumeRole",
          "Condition": {
              "StringLike": {"sts:ExternalId": "arn:aws:s3:::*"}
          }
      }]
  }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

Grant S3 permission to invoke the Lambda function:

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": ["lambda:InvokeFunction"],
          "Resource": ["*"]
      }]
  }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
      "CloudFunctionConfiguration": {
          "CloudFunction": "'$lambda_function_arn'",
          "InvocationRole": "'$lambda_invocation_role_arn'",
          "Event": "s3:ObjectCreated:*"
      }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

Original article: http://alestic.com/2014/11/aws-lambda-cli

Chuck Short: nova-compute-flex: Introduction and getting started

Planet Ubuntu - Fri, 11/14/2014 - 09:45
What is nova-compute-flex?

For the past couple of months I have been working on the OpenStack PoC called nova-compute-flex. Nova-compute-flex allows you to run native LXC containers using the python-lxc calls to liblxc. It creates small, fast, and reliable LXC containers on OpenStack. The main features of nova-compute-flex are the following:

  • Secure by default (unprivileged containers, apparmor, etc)
  • LXC 1.0.x
  • python-lxc (python2 version)
  • Uses btrfs for instance creation.

Nova-compute-flex (n-c-flex) is a new way of running native LXC containers on OpenStack. It is currently designed with Juno in mind, since Juno is the latest release of OpenStack. This tutorial to get nova-compute-flex up and running assumes that you are using the Ubuntu 14.04 release and running devstack on it.

How does n-c-flex work?

N-c-flex works the same way as the other virt drivers in OpenStack: it will stop and start containers, use neutron for networking, etc. However, it does not use qcow2 or raw images; it uses an image format that we call “root-tar”.

“Root-tar” images are simply a tarball of the container, similar to the ubuntu-cloud templates in LXC. They are relatively small and contain just enough to get an LXC container running. These images are published by Ubuntu as well, and they can be found here. If you wish to use other distros, you can simply tar up the directories found on a given qcow2 image, or use the templates found in LXC. It's just that simple.
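Since a “root-tar” image is nothing more than a tarball of a root filesystem, building one from an unpacked directory tree can be sketched in a few lines (a hypothetical helper, not part of n-c-flex; gzip compression is assumed to match the -root.tar.gz cloud images):

```python
import os
import tarfile

def make_root_tar(rootfs_dir, out_path):
    """Pack a root filesystem directory into a root-tar image."""
    with tarfile.open(out_path, "w:gz") as tar:
        # Archive the *contents* of rootfs_dir at the root of the tarball,
        # so it unpacks directly into /etc, /usr, etc.
        for name in os.listdir(rootfs_dir):
            tar.add(os.path.join(rootfs_dir, name), arcname=name)
    return out_path
```

The resulting tarball could then be uploaded to glance the same way as the published Ubuntu cloud images.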

The way that nova-compute-flex works is the following:

  1. Download the tar ball from the glance server.
  2. Create a btrfs snapshot.
  3. Use lxc-usernsexec to un-tar the tar ball into the snapshot.
  4. When the instance starts create a copy of the snapshot.
  5. Create the LXC configuration files.
  6. Create the network for the container.
  7. Start the container.

It takes just seconds to create a new instance, since instance creation is only a copy of the btrfs snapshot made when the image was downloaded from the glance server.

When the instance is created, the container is an unprivileged LXC container. This means that nova-compute-flex uses user namespaces, with AppArmor built in (if you are using Ubuntu). The instance behaves like a container, but it looks and feels like a normal OpenStack instance.

Getting Started with n-c-flex

Assuming that you already have btrfs-tools installed but don’t have a free partition, you will need to create the instances directory where your n-c-flex instances are going to live. To do that, simply do the following:

dd if=/dev/zero of=<name of your large file> bs=1024k count=2000
sudo mkfs.btrfs <name of your large file>
sudo mount -t btrfs -o user_subvol_rm_allowed <name of your large file> <mount point>

To make the changes permanent, modify your /etc/fstab accordingly.
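For example, an fstab entry along these lines (illustrative paths; substitute your own backing file and mount point):

```
/opt/stack/data/btrfs.img  /opt/stack/data  btrfs  loop,user_subvol_rm_allowed  0  0
```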

Installing devstack and n-c-flex

In your “/opt” directory,  run the following commands:

mkdir -p /opt/stack
git clone https://github.com/zulcss/nova-compute-flex
git clone https://github.com/openstack-dev/devstack

This will prepare your devstack to install software like LXC that has been backported to the Ubuntu Cloud Archive. The reason for the backport is that some of the features needed by nova-compute-flex are not found in the trusty version of LXC.

After running the above commands you will have the following in your localrc:


To make your devstack more useful you should have the following in your localrc as well:

disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

DATA_DIR=<mount point>



This will allow you to use the stable branches of juno with neutron support. After modifying your localrc, you can proceed to install by running the “./stack.sh” script.

Running your first instance

As said before, nova-compute-flex uses a different kind of image than regular nova. To upload the image to the glance server, do the following:

source openrc
wget http://cloud-images.ubuntu.com/utopic/current/utopic-server-cloudimg-amd64-root.tar.gz
glance image-create --name='lxc' --container-format=root-tar --disk-format=root-tar < utopic-server-cloudimg-amd64-root.tar.gz

After uploading the image you can run the regular way of creating instances by either using the python-novaclient tools or the euca2ools.

Looking forward

At the OpenStack Developers Summit last week, Mark Shuttleworth announced LXD (lex-dee). LXD is a container “hypervisor” built on top of the LXC project. LXD is meant for system containers, rather than application containers like Docker.

I will be using the knowledge that we have gained from working on nova-compute-flex and applying it to nova-compute-lxd. LXD will have a REST API to interact with LXD containers, so nova-compute-lxd will use the LXD API to stop/start containers and provide the other functions one expects to find in Nova. More discussion will be happening on the lxc-devel mailing list over the next couple of months.

However, if you want to use nova-compute-flex now, go for it! If you wish to submit patches, the GitHub project can be found at https://github.com/zulcss/nova-compute-flex, and the work will be fed back into the nova-compute-lxd project as well. It also has an issue tracker where you can submit bugs.

If you run into road blocks please let me know, and I will be happy to help.

Oli Warner: Python isn't always slower than C

Planet Ubuntu - Fri, 11/14/2014 - 09:08

If you ask developers to describe Python in a few words you'll probably hear easy, dynamic and slow but a recent impromptu game of Code Golf showed me that Python can actually be pretty competitive, even against compiled languages like C and C++ when you use the right interpreter: Pypy.

I use Python for its libraries. Django and friends make building powerful websites really very simple, but I've never considered Python operationally fast. It's not really a requirement for me; as long as I can generate and return a page within 300ms of the request, it's fast enough. That's common of most modern server-side languages.

But yesterday a Unix.SE text-processing question popped up. The problem was fairly simple. Read a file with variable length, numbered lines of DNA sequences:


And write the DNA part from each line to a file named using the first number plus a .seq extension.

All the usual suspects (awk, sed and bash loop) were already there making trouble so I decided to add some non-conventional implementations and a benchmark. The hypothesis being that when you're chunking through thousands of lines and making just as many write operations, it helps to stick to one environment and fork out less. Amongst my implementations was one for C and one for Python.

Python: It's compact and fairly self explanatory. Open the file, iterate the lines, split the line on whitespace and write accordingly.

with open("infile", "r") as f:
    for line in f:
        id, chunk = line.split()
        with open(id + ".seq", "w") as fw:
            fw.write(chunk)
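To try the Python version out, here's a self-contained sketch that generates a tiny sample input in the assumed two-column format (a numeric id, whitespace, then the sequence; the exact format is inferred from the description) and runs the same splitting logic:

```python
import os
import tempfile

# Work in a scratch directory so the .seq files don't litter the CWD
workdir = tempfile.mkdtemp()
os.chdir(workdir)

# Generate a tiny sample input file (hypothetical data)
with open("infile", "w") as f:
    f.write("1\tACGTACGT\n")
    f.write("2\tTTGACC\n")

# Same splitting logic as the snippet above
with open("infile", "r") as f:
    for line in f:
        id, chunk = line.split()
        with open(id + ".seq", "w") as fw:
            fw.write(chunk)

print(open("1.seq").read())  # → ACGTACGT
```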

C: It's been about a decade since I wrote any serious amount of C in anger. And even then it was silly University level coding. This opens and loops but instead of being able to split, we're using strtok to grab tokens from the line. And because C is C, just appending .seq becomes its own little memory reallocation nightmare.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

FILE *fp;
FILE *fpout;
char line[100];
char *id;
char *token;
char *fnout;

int main() {
    fp = fopen("infile", "r");
    while (fgets(line, 100, fp) != NULL) {
        id = strtok(line, "\t");
        token = strtok(NULL, "\t");
        /* Room for id + ".seq" + NUL */
        fnout = malloc(strlen(id) + 5);
        strcpy(fnout, id);
        strcat(fnout, ".seq");
        fpout = fopen(fnout, "w");
        fprintf(fpout, "%s", token);
        fclose(fpout);
        free(fnout);
    }
    fclose(fp);
    return 0;
}

So is Python or C faster?

C obviously. Over a 100,000 line input file, C was 1.3x faster than Python...
And by Python, I mean CPython.

It was after I'd written all the other implementations that I remembered CPython wasn't the only Python I have installed on my desktop. I have Pypy. This is a highly optimised reimplementation of almost everything in the Python specs. For most people this means it's a drop-in alternative. Back to the benchmark...

Pypy was 1.03x faster than C

For all intents and purposes, the same speed. But this is simple-to-write, easy-to-debug Python running at the speed of C. It's amazing.

I did go on to write a nice C++ option that was slightly faster again and wasn't too bad on the eye but it's still a lot more involved than the Python is. I know I'm obviously biased toward Python but it's something well worth going to as a first choice, especially for simple, scrappy text-processing jobs like this.

And if you already have Python code that you're considering switching to C (modules or full-on), give Pypy a shot first. Worst thing that'll happen is it won't work or it's still not fast enough.

Ronnie Tucker: Dropbox 2.11.34 Experimental Features a Rewritten UI for Linux Client

Planet Ubuntu - Fri, 11/14/2014 - 00:15

Dropbox, a client for an online service that lets you bring all your photos, docs, and videos anywhere, has been promoted to version 2.11.34 for the experimental branch.

The Dropbox developers don’t usually provide too many changes for the Linux platform and the latest update is not all that promising either. In fact, there is nothing specific for Linux, but the branch is an entirely different discussion. This will be a very interesting release when it becomes stable, but until then we can take a closer look at what’s coming.



Submitted by: Silviu Stahie

Bryan Quigley: Adobe Flash on Firefox/Linux EOL – Summary

Planet Ubuntu - Thu, 11/13/2014 - 20:51
We just ran a session [1] on what to do about the upcoming EOL for Flash on Firefox/Linux in 2017. In short, we're not planning to diverge from Mozilla's direction. The goal is to have Flash work today and to become irrelevant over time, hopefully reaching that point by 2017. There are ways for you to help! See below.

Distributing Firefox and Chrom/ium plugins now possible

A deal was reached with Adobe to distribute NPAPI and PPAPI Flash in the Canonical Partners repo! (No more grabbing PPAPI from Chrome to get it to work in Chromium. No more "downloader" packages necessary for Firefox either.) This should help make things easier.

How can you help make Flash go away?

On any browser, any platform (that has Flash, of course):

Use less Flash. See if you can do step 1. If you can, proceed to step 2, and so on.

  1. Make Flash Click to Play.
  2. Disable Flash.
  3. Uninstall Flash.

To do these in Chrome, browse to chrome://plugins/. In Firefox, go to Add-ons -> Plugins.

If there is a site that doesn't work without Flash, see if you can load it on a mobile device. Either way, contact them and ask nicely about removing the Flash content to get more hits, or at least about enabling the mobile site for non-Flash users.

Run a Beta browser

Generally both Firefox and Chrome will push new web technologies in their Beta browser.  Many of them have the potential to help make Flash less relevant.   Help make them more stable by testing them!



Run Firefox Nightly

Try running Firefox Nightly. We could always use more testers. Specifically, we might get a more aggressive Mozilla once MSE is done being implemented (which should make YouTube even more HTML5-video friendly).

Of course, there are a bunch of other useful features Mozilla is working on to make browsing better. Help would be welcome there too! Report bugs on issues you have.



Other options considered.
  • We default to Chromium  – nope, let’s specifically NOT switch browsers over Flash. 
    • Outcome: That would send the completely wrong message.
  • We default to a compatible Flash alternative (Shumway, Gnash, Lightspark)
    • Outcome: That would just be a stopgap measure, and we'll always be playing catch-up.
  • We add PPAPI support to Firefox ourselves / Hack it in
    • Outcome:  Non-starter.  Unless Mozilla adds it we don’t want the maintenance burden.
My Todo List
  • Investigate why Youtube Live videos sometimes don’t work without Flash. (Even in Chromium).
  • Figure out why my Nightly install doesn't have working H264.
    UPDATE – because it’s not designed to yet!  See here – http://andreasgal.com/2014/10/14/openh264-now-in-firefox/
    If you have H264 working in Firefox it’s likely due to GStreamer support included in the Ubuntu (and many other distros) builds.  Upstream Gst1.0 support is waiting on infrastructure [3].

Hopefully I captured everything right, but if I didn't, please let me know!

[1] http://summit.ubuntu.com/uos-1411/meeting/22354/adobe-flash-on-firefoxlinux-eol/
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1083588 – testers wanted; run Firefox Nightly
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=973274 – have RHEL 6.2 experience? Might be useful there.
[4] https://groups.google.com/forum/#!topic/mozilla.dev.tech.plugins/PK237Yk1oWM – thread discussing how Firefox can be more aggressive against Flash.


Subscribe to Ubuntu Arizona LoCo Team aggregator