Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Mark Shuttleworth: What Western media and politicians fail to mention about Iraq and Ukraine

Thu, 09/25/2014 - 01:01

Be careful of headlines, they appeal to our sense of the obvious and the familiar, they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.

Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both Iraq and the Ukraine aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.

Reading the Western media one would think it’s clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar it’s because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State’s attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.

This is in no way to say that the behaviour of Islamic State or Russia is acceptable in modern society. It is not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.

In the case of the Ukraine, it’s important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine, who happen to speak Russian at home, at work, in church and at social events, have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speakers in the Ukraine. “You may not speak in your home language in this school”. “This market can only be conducted in Ukrainian, not Russian”. It’s important to appreciate that being a Russian speaker in Ukraine doesn’t necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a diverse cultural nation and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish Macedonian language spoken on the playground.

What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS – who would have thought!) based on marriages of convenience and hot button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies – making sure you’re hanging with the cool kids on the playground even when they are thugs and bullies – is stupid and shameful short-sightedness.

In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.

The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.

What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries who have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states who take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.

The Irish Troubles were a war nobody could win. They were resolved through dialogue. South African terrorism in the ’80s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used by secure, distant seats of power to describe fractious movements, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.

Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence against the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that’s false. Remote control warfare will come home to fester on our streets.

 

Julian Andres Klode: APT 1.1~exp3 released to experimental: First step to sandboxed fetcher methods

Wed, 09/24/2014 - 14:06

Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.

There are three things that we looked at:

  1. Reducing privileges by setting a new user and group
  2. chroot()
  3. seccomp-bpf sandbox

Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.

If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.
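The sequence is the classic “drop privileges or die” pattern. APT’s methods are written in C++, but a minimal sketch of the same steps, in Python for brevity (the “_apt” username is the real one from 1.1~exp3; everything else is illustrative), could look like this:

import os
import pwd
import sys

def drop_privileges(username="_apt"):
    """Switch this process to an unprivileged user, or refuse to run."""
    try:
        pw = pwd.getpwnam(username)
        # Shed supplementary groups first, keeping only the primary gid
        # (matching the known issue below), then drop the gid and the uid.
        # Order matters: once the uid is dropped, groups can't be changed.
        os.setgroups([pw.pw_gid])
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)
    except (KeyError, OSError) as error:
        # Mirror the behaviour described above: if privileges cannot be
        # dropped, fail to start rather than fetch with elevated rights.
        sys.exit("cannot drop privileges to %r: %s" % (username, error))
    # Paranoia: confirm we are really no longer root.
    if os.getuid() == 0 or os.geteuid() == 0:
        sys.exit("still root after privilege drop; aborting")

(The process must start as root for the drop to succeed; an unprivileged caller lands in the OSError branch and exits.)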

Known issues:

  • We drop all groups except the primary gid of the user
  • copy breaks if that group has no read access to the files

We plan to also add chroot() and seccomp sandboxing later on, to reduce the attack surface for untrusted file handling and protocol parsing.



Stephen Kelly: Grantlee 5.0.0 (codename Umstraßen) now available

Wed, 09/24/2014 - 09:23

The Grantlee community is pleased to announce the release of Grantlee version 5.0 (Mirror). Grantlee contains an implementation of the Django template system in Qt.

I invented the word ‘umstraßen’ about 5 years ago while walking to Mauerpark with a friend. We needed to cross the road, so I said ‘wollen wir umstraßen?’ (‘shall we cross over the street?’), because, well, ‘umsteigen’ can be a word. Of course it means ‘die Straßenseite wechseln’ (‘to switch sides of the street’) in common German, but one word is better than three, right? This one is generally popular with German native speakers, so let’s see if we can get it into the Duden :).

This is a source and binary compatibility break from the 0.x.y series of Grantlee releases. The major version number has been bumped to 5 in order to match the Qt major version requirement, and to reflect the maturity of the Grantlee libraries. The compatibility breaks are all minor, with the biggest impact being in the buildsystem, which now follows the patterns of modern CMake.

The biggest change to the C++ code was the removal of a lot of code which became obsolete in Qt 5, thanks to QSequentialIterable and the type-erased iteration features.


Didier Roche: Ubuntu Developer Tools Center: how do we run tests?

Wed, 09/24/2014 - 04:30

We are starting to see multiple awesome code contributions and suggestions on our Ubuntu Loves Developers effort and we are eagerly waiting on yours! As a consequence, the spectrum of supported tools is going to expand quickly, and we need to ensure that all those different targeted developers are well supported, on multiple releases, always delivering the latest version of those environments, at any time.

A huge task that we can only support thanks to a large suite of tests! Here are some details on what we currently have in place to achieve and ensure this level of quality.

Different kinds of tests

pep8 test

The pep8 test is there to ensure code quality and consistency. Test results are trivial to interpret.

This test runs on every commit to master, on each release during package build, as well as every couple of hours on jenkins.
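For illustration, such a check is often wired directly into the test suite itself, so it runs anywhere the unit tests run. A minimal sketch using the pep8 module (the directory names are assumptions, not necessarily UDTC's actual layout):

import pep8

def test_pep8_conformance():
    # Scan the assumed source and test directories for style violations.
    style = pep8.StyleGuide()
    report = style.check_files(["udtc", "tests"])
    assert report.total_errors == 0, "pep8 reported style violations"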

small tests

Those are basically unit tests. They enable us to quickly see if we've broken anything with a change, or if the distribution itself broke us. In particular, we try to cover multiple corner cases that are easy to test that way.

They run on every commit to master, on each release during package build, every time a dependency changes in Ubuntu (thanks to autopkgtests), and every couple of hours on jenkins.

large tests

Large tests are real user-based testing. We execute udtc and feed various scenarios to its stdin (like installing, reinstalling, removing, installing with a different path, aborting, ensuring the IDE can start…) and check that the resulting behavior is the one we are expecting.

Those tests enable us to know if something in the distribution broke us, or if a website changed its layout or its download links, or if a newer version of a framework can't be launched on a particular Ubuntu version or configuration. That way we are aware, ideally most of the time even before the user, that something is broken, and can act on it.

Those tests are running every couple of hours on jenkins, using real virtual machines running an Ubuntu Desktop install.
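To make the shape of a large test concrete, here is a hedged sketch; the command line, the prompt answers and the success message are invented for illustration and are not UDTC's real interface:

import subprocess

def run_scenario(answers):
    """Feed a scripted scenario to the tool's stdin and capture its output."""
    process = subprocess.Popen(
        ["udtc", "android"],  # hypothetical invocation
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        universal_newlines=True)
    out, _ = process.communicate("".join(answer + "\n" for answer in answers))
    return process.returncode, out

def test_install_with_default_path():
    # Accept the default installation path, then accept the licence.
    returncode, out = run_scenario(["", "a"])
    assert returncode == 0
    assert "Installation done" in out  # hypothetical success message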

medium tests

Finally, the medium tests inherit from the large tests. They run exactly the same suite of tests, but in a Docker containerized environment, with mocks and small assets, not relying on the network or any archives. This means that we ship and emulate a webserver delivering web pages to the container, pretending we are, for instance, https://developer.android.com. We then deliver fake requirements packages and mock tarballs to udtc, and run those.

Implementing a medium test is generally really easy; for instance:

class BasicCLIInContainer(ContainerTests, test_basics_cli.BasicCLI):

    """This will test the basic cli command class inside a container"""

is enough. That means "take all the BasicCLI large tests, and run them inside a container". All the hard work – wrapping, sshing and running the tests – is done for you. Simply implement your large tests and they will be able to run inside the container with this inheritance!

We added as well more complex use cases, like emulating a corrupted download with an md5 checksum mismatch. We generate this controlled environment and share it using trusted containers from Docker Hub that we generate from the Ubuntu Developer Tools Center Dockerfile.
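The heart of that corrupted-download scenario is just a checksum comparison. A tiny sketch of the idea (the function and test names are ours for illustration, not UDTC's):

import hashlib

def md5_matches(data, expected_md5):
    """Accept a downloaded asset only if its md5 matches the published one."""
    return hashlib.md5(data).hexdigest() == expected_md5

def test_corrupted_tarball_is_rejected():
    good = b"mock tarball contents"
    published = hashlib.md5(good).hexdigest()
    corrupted = good[:-1] + b"X"  # the controlled environment serves a tampered asset
    assert not md5_matches(corrupted, published)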

Those tests are running as well every couple of hours on jenkins.

By comparing medium and large tests – the first being in a completely controlled environment – we can decipher whether we or the distribution broke us, or whether a third party impacted us by changing their website or requiring newer versions (in which case the failure will only occur in the large tests and not in the medium ones).

Running all tests, continuously!

As some of the tests can show the impact of external parts, be it the distribution or even websites (as we parse some download links), we need to run all those tests regularly[1]. Note as well that we can experience different results on various configurations. That's why we are running all those tests every couple of hours, once using the system-installed tests, and then with the tip of master. Those run on various virtual machines (like here, 14.04 LTS on i386 and amd64).

By comparing all this data, we know if a new commit introduced regressions, or if a third party broke something and we need to fix or adapt to it. Each testsuite has a bunch of artifacts attached, to be able to inspect the dependencies installed and the exact version of UDTC tested, and to ensure we don't corner ourselves with subtleties like "it works in trunk, but is broken once installed".

You can see on that graph that trunk has more tests (and features… just wait for some days before we tell more about them ;)) than the latest released version.

As metrics are key, we collect code coverage and line metrics on each configuration to ensure we are not regressing on our target of keeping high coverage. This tracks as well various stats like the number of lines of code.

Conclusion

Thanks to all this, we'll probably know even before any of you if anything is suddenly broken, and we can put actions in place to quickly deliver a fix. With each new kind of breakage, we plan to back it up with a new suite of tests to ensure we never see the same regression again.

As you can see, we are pretty hardcore on tests and believe it's the only way to keep quality and a sustainable system. With all that in place, as a developer, you should just have to enjoy your productive environment and not have to bother about the operating system itself. We have you covered!

As always, you can reach me on G+, #ubuntu-desktop (didrocks) on IRC (freenode), or sending any issue or even pull requests against the Ubuntu Developer Tools Center project!

Note

[1] if tests are not running regularly, you can consider them broken anyway

Robert Collins: what-poles-for-the-tent

Tue, 09/23/2014 - 22:11

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available, working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that?”

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode, and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that makes the guaranteed-available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available – leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition able to replace something entirely if they implement the existing API [and win the hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then suggest:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly, Sean has also pointed out that with a large N we have N^2 communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.


Valorie Zimmerman: Candor and trust

Tue, 09/23/2014 - 19:13
Catmull uses the term candor in his book Creativity, Inc., because honesty is overloaded with moral overtones. It means forthrightness, frankness, and also indicates a lack of reserve. Of course reserve is sometimes needed, but we want to create a space where complete candor is invited, even if it means scrapping difficult work and starting over. [p. 86] Catmull discusses measures he put into place to institutionalize candor, by explicitly asking for it in some processes. He goes on to discuss the Braintrust, which Pixar relies on "to push us towards excellence and to root out mediocrity....[It] is our primary delivery system for straight talk.... Put smart, passionate people in a room together, charge them with identifying and solving problems, and encourage them to be candid with one another." [86-7] Does this sound at all familiar?

Naturally the focus is on constructive feedback. The members of such a group must not only trust one another, but see each other as peers. Catmull observes that it is difficult to be candid if you are thinking about not looking like an idiot! [89] He also says that this is crucial at Pixar because, in the beginning, "all of our movies suck." [90] I'm not sure this is true of KDE software, but maybe it is. Not until the code is exposed to others – to testing, to accessibility teams, HIG, designers – can it begin to not suck.

I think that we do some of this process in our sprints, on the lists, maybe in IRC and on Reviewboard, but perhaps we can be even more explicit in our calls for review and testing. The key of course is to criticize the product or the process, not the person writing the code or documentation. And on the other side, it can be very difficult to accept criticism of your work even when you trust and admire those giving you that criticism. It is something we must continually learn, in my experience.

Catmull says,
People who take on complicated creative projects become lost at some point in the process....How do you get a director to address a problem he or she cannot see? ...The process of coming to clarity takes patience and candor. [91] We try to create an environment where people want to hear one another's notes [feedback] even where those notes are challenging, and where everyone has a vested interest in one another's success. [92] Let me repeat that, because to me, that is the key of a working, creative community: "where everyone has a vested interest in one another's success." I think we in KDE feel that but perhaps do not always live it. So let us ask one another for feedback and criticism, strive to pay attention to it, and evaluate criticism dispassionately. I think we have missed this bit sometimes in the past in KDE, and it has come back to bite us. We need to get better.

Catmull makes the point that the Braintrust has no authority, and says this is crucial:
the director does not have to follow any of the specific suggestions given. .... It is up to him or her to figure out how to address the feedback....While problems in a movie are fairly easy to identify, the sources of these problems are often extraordinarily difficult to assess. [93] He continues,
We do not want the Braintrust to solve a director's problem because we believe that...our solution won't be as good....We believe that ideas....only become great when they are challenged and tested. [93] More than once, he discusses instances where big problems led to Pixar's greatest successes, because grappling with these problems brought out their greatest creativity. "While problems ... are fairly easy to identify, the sources of these problems are often extraordinarily difficult to assess." [93] How familiar does this sound to us working in software!? So, at Pixar,
the Braintrust's notes ...are intended to bring the true causes of the problems to the surface--not to demand a specific remedy. Moreover, we don't want the Braintrust to solve a director's problem because we believe that, in all likelihood, our solution won't be as good as the one the director and his or her creative team comes up with. We believe that ideas--and thus films--only become great when they are challenged and tested. [93] I've seen that often this last bit is a sticking point. People are willing to criticize a piece of code, or even the design, but want their own solution instead. Naturally, this way of working encounters pushback.

"Frank talk, spirited debate, laughter, and love" [99] is how Catmull sums up Braintrust meetings. Sound familiar? I've just come from Akademy, which I can sum up the same way. Let's keep doing this in all our meetings, whether they take place in IRC, on the lists, or face to face. Let's remember not to hold back; when we see a problem, have the courage to raise the issue. We can handle problems, and facing them is the only way to solve them and get better.

Lubuntu Blog: Lubuntu 14.10 Utopic Unicorn Final Beta

Tue, 09/23/2014 - 12:41
Testing has begun for the Final Beta (Beta 2) of Lubuntu 14.10 Utopic Unicorn. Head on over to the ISO tracker to download images, view testcases, and report results. If you're new to testing, don't worry: anyone can join, and you don't have to be a Linux Jedi or anything. You can find all the information you need to get started here.

Please note that we especially need testers for PPC chips and Intel Macs. We have a special section discussing it here. In particular, if you have an Intel Mac, I have a few questions for you that might help us trim down the workload of the testing team.

Also, if you have a PPC chip, we're about the only distro actively supporting this architecture. However, we are community supported, so without formal testing, the arch will lose more support. So please, join in testing!

Nicholas Skaggs: Final Beta testing for Utopic

Tue, 09/23/2014 - 10:24
Can you believe final beta is here for utopic already? Where has the summer gone? The milestone and images are already prepared for the final beta testing. This is the first round of image testing for Ubuntu this cycle. A final image will also be tested next month, but now is the time to try out the image on your system. Be sure to report any bugs you may find. This will help ensure there is time to fix them before the release images.

To help make sure the final utopic image is in good shape, we need your help and test results! Please, head over to the milestone on the isotracker, select your favorite flavor and perform the needed tests against the images.

If you've never submitted test results for the iso tracker, check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or myself for help. Happy Testing!

Ubuntu Kernel Team: Kernel Team Meeting Minutes – September 23, 2014

Tue, 09/23/2014 - 10:13
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140923 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the v3.16.3 upstream stable kernel
and uploaded to the archive, i.e. 3.16.0-17.23. Please test and let us
know your results.
Also, we’re approximately 2 weeks away from Utopic Kernel Freeze on
Thurs Oct 9. Any patches submitted after kernel freeze are subject to
our Ubuntu kernel SRU policy.
—–
Important upcoming dates:
Thurs Sep 25 – Utopic Final Beta (~2 days away)
Thurs Oct 9 – Utopic Kernel Freeze (~2 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~3 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 23):

  • Lucid – Kernel prep
  • Precise – Kernel prep
  • Trusty – Kernel prep

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 11-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Dustin Kirkland: An Elegant Weapon, for a More Civilized Age...

Tue, 09/23/2014 - 07:20

Before Greedo shot first...
Before a troubled young Darth Vader braided his hair...
Before midichlorians offered to explain the inexplicably perfect and perfectly inexplicable...
And before Jar Jar Binks burped and farted away the last remnants of dear Obi-Wan's "more civilized age"...

...I created something, of which I was very, very proud at the time.  Remarkably, I came across that creation, somewhat randomly, as I was recently throwing away some old floppy disks.

Twenty years ago, it was 1994.  I was 15 years old, just learning to program (mostly on my own), and I created a "trivia game" based around Star Wars.  1,700 lines of Turbo Pascal.  And I made every mistake in the book.
Of course I'm embarrassed by all of that!  But then, I take a look at what the program did do, and wow -- it's still at least a little bit fun today :-)

Welcome to swline.pas.  Almost unbelievably, I was able to compile it tonight on an Ubuntu 14.04 LTS 64-bit Linux desktop, using fpc, after three very minor changes:
  1. Running fromdos to remove the trailing ^M endemic to many DOS-era text files
  2. Replacing the (80MHz) CPU clock based sleep function with Delay()
  3. Running iconv to convert the embedded 437 code page ASCII art to UTF-8
Here's a short screen cast of the game in action :-)


Would you look at that!
  • 8-bit color!
  • Hand drawn ANSI art!
  • Scrolling text of the iconic Star Wars, Empire Strikes Back, and Return of the Jedi logos! 
  • Random stars and galaxies drawn on the splash screen!
  • No graphic interface framework (a la Newt or Ncurses) -- just a whole bunch of GotoXY().
  • An option for sound (which, unfortunately, doesn't seem to work -- I'm sure it was just 8-bits of bleeps and bloops).
  • 300 hand typed quotes (and answers) spanning all 3 movies!
  • An Easter Egg, and a Cheat Code!
  • Timers!
  • User input!
  • And an option at the very end to start all over again!
You can't make this stuff up :-)

But watching a video is boring...  Why don't you try it for yourself!?!

I thought this would be a perfect use case for Docker.  Just a little Docker image, based on Ubuntu, which includes a statically built swline.pas, and is set to run that one binary (and only that one binary) when launched.  As simple as it gets: Makefile and Dockerfile.

$ cat Makefile
all:
	fpc -k--static swline.pas

$ cat Dockerfile
FROM ubuntu
MAINTAINER Dustin Kirkland
ADD swline /swline
ENTRYPOINT /swline

I've pushed a Docker image containing the game to the Docker Hub registry.
Quick note...  You're going to want a terminal that's 25 lines high and 160 characters wide (sounds weird, yes, I know -- the ANSI art characters are double-byte wide and do some weird things to smaller terminals, and my interest in debugging this is pretty much non-existent -- send a patch!).  I launched gnome-terminal and pressed Ctrl+minus to shrink the font size on my laptop.

On an Ubuntu 14.04 LTS machine:

$ sudo apt-get install docker.io
$ sudo docker pull kirkland/swline
$ sudo docker run -t -i kirkland/swline
Of course you can find, build, run, and modify the original (horrible!) source code on Launchpad and GitHub.




Now how about that for a throwback Tuesday ;-)

May the Source be with you!  Always!
Dustin

p.s.  Is this the only gem I found on those 17 floppy disks?  Nope :-)  Not by a long shot.

The Fridge: Ubuntu Weekly Newsletter Issue 384

Mon, 09/22/2014 - 22:19

Welcome to the Ubuntu Weekly Newsletter. This is issue #384 for the week September 15 – 21, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Diego Turcios
  • John Mahoney
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Jono Bacon: Bringing Literacy to Millions of Kids With Open Source

Mon, 09/22/2014 - 22:00

Today we are launching the Global Learning XPRIZE, complete with an Indiegogo crowdfunding campaign.

This is a $15 million competition in which teams are challenged to create Open Source software that will teach a child to read, write, and perform arithmetic in 18 months without the aid of a teacher. This is not designed to replace teachers but to instead provide an educational solution where little or none exists.

There are 57 million children aged 5 – 12 in the world today who have no access to education. There are 250 million children below basic literacy levels, even after several years of school. You may think the solution to this is to build more schools, but we would need an extra 1.6 million teachers by next year to provide universal primary education.

This is a tragedy.

This new XPRIZE is designed to help fix this.

Every child should have a right to the core ingredient that is literacy. It unlocks their potential and opens up opportunity. Just think of all the resources we depend on today for growth and education…the Internet, books, wikipedia, collaborative tools…without literacy all of these are inaccessible. It is time to change this. Too many suffer from a lack of literacy, and sadly girls bear much of the brunt of this too.

This prize is open to anyone to participate in. Professional developers, hobbyists, students, scientists, teachers…everyone is welcome to join in and compete. While the $15 million purse is attractive in itself, just think of the impact of potentially changing the lives of hundreds of millions of kids.

Coopetition For a Great Cause

What really excites me about this new XPRIZE is that it is the first Open Source XPRIZE. The winning team and the four runner-up teams will be expected to Open Source their entire code-base, assets, and material. This will create a solid foundation of education technology that can live on…long past the conclusion of this XPRIZE.

That isn’t the only reason why this excites me though. The Open Source nature of this prize provides an incredible opportunity for coopetition; where teams can collaborate around common areas of interest and problem-solving, while keeping their secret sauce to themselves. The impact of this could be profound.

I will be working hard to build an environment in which we encourage this kind of collaboration. It makes no sense for 100 teams to all solve the same problems privately in their own silo. Let’s get everyone up and running in GitHub, collaborating around these common challenges, so all the teams benefit from that pooling of resources.

Let’s also open this up so everyone can help us be successful. Let’s invite designers, translators, testers, teachers, scientists, musicians, artists and more…everyone has something they can bring to solve one of our grandest challenges, and help create a more literate and peaceful world.

Everyone Can Play a Part

As part of this new XPRIZE we are also launching a crowdfunding campaign that is designed to raise additional resources so we can do even more as part of the prize. We have already funded the $15 million prize purse and some field-testing, but this crowdfunding campaign will provide the resources for us to do so much more.

This will help us broaden the field-testing in more countries, with more kids, to grow a global community around solving these grand challenges, build a collaborative environment for teams to work together on common problems, and optimize this new XPRIZE to be as successful as possible. Every penny contributed helps us to do more and ultimately squeeze the most value out of this important XPRIZE.

There are ten things you can do to help:

  1. Contribute! - a great place to start is to buy one of our awesome perks from the crowdfunding campaign. Find out more here.
  2. Join the Community - come and meet the new XPRIZE community at http://forum.xprize.org and share ideas, brainstorm, and collaborate around new projects.
  3. Refer Friends and Win Prizes - want to win an expenses-paid trip to our Visioneering event where we create new XPRIZEs while also helping spread the word? To find out more click here.
  4. Download the Street Team Kit - head over to our Get Involved page and download a free kit with avatars, banners, posters, presentations, FAQs and more. The page also includes videos for how to get started!
  5. Create and Share Digital Content - we are encouraging authors, developers, content-creators and more to create content that will spread the word about literacy, the Global Learning XPRIZE, and more!
  6. Share and Tag Your Fave Children’s Book - which children’s books have been the most memorable for you? Share your favorite (and preferably post a picture of the cover), complete with a link to http://igg.me/at/learningxprize, and tag 5 friends to share theirs too! When using social media, be sure to use the #learningprize hashtag.

  7. Show Your Pride -  go and download the Street Team Kit and use the images and avatars in there to change your profile picture and banner on your favorite social media networks (e.g. Twitter, Facebook, Google+).
  8. Create Your ‘Learning Moment’ Video - record a video about how learning has really impacted your life and share it on a video website (such as YouTube). Give the video the title “Global Learning XPRIZE: My Learning Moment“. Be sure to share your video on social media too, with the #learningprize hashtag!
  9. Put Posters up in Your Community - go and download the Street Team Kit, print the posters out and put them up in your local coffee shops, universities, colleges, schools, and elsewhere!
  10. Organize a Local Event - create a local event to share the Global Learning XPRIZE. Fortunately we have a video on our Get Involved page that explains how you can do this, and we have a presentation deck with notes ready for you to use!

I know a lot of people who read my blog are Open Source folks, and I believe this prize offers an incredible opportunity for us to come together to have a very real profound impact on the world. Come and join the community, support the crowdfunding campaign, and help us to all be successful in bringing literacy to millions.

Luis de Bethencourt: Now Samsung @ London

Mon, 09/22/2014 - 09:18


I just moved back to Europe, this time to foggy London town, to join the Open Source Group at Samsung, where I will be contributing upstream to GStreamer and WebKit/Blink during the day and ironically mocking the local hipsters at night.

After 4 years with Collabora it is sad to leave behind the talented and enjoyable people I've grown fond of there, but it's time to move on to the next chapter in my life. The Open Source Group is a perfect fit: contribute upstream, participate in innovative projects and be active in the Open Source community. I am very excited for this new job opportunity and to explore new levels of involvement in Open Source.

I am going to miss Montreal. Its very particular joie de vivre. I will miss the poutine, not the winter.

For all of those in London: I will be joining the next GNOME Beers event, or let me know if you want to meet up for a coffee/pint.

Charles Butler: Juju + Digital Ocean = Awesome!

Mon, 09/22/2014 - 08:35

Syndicators, there is a video above that may not have made it into syndication. Visit the source link to view the video.

Juju on Digital Ocean, WOW! That's all I have to say. Digital Ocean is one of the fastest cloud hosts around with their SSD-backed virtual machines. To top it off, their billing is a no-nonsense, straightforward model: $5/mo for their lowest-end server, with 1TB of included traffic. That's enough to scratch just about any itch you might have with the cloud.

Speaking of scratching itches, if you haven't checked out Juju yet, you now have a prime, low-cost cloud provider with which to test the waters. Spinning up droplets with Juju is very straightforward, and offers you a hands-on approach to service orchestration that's affordable enough for a weekend hacker to whet their appetite. Not to mention, Juju is currently the #1 project on their API Integration listing!

In about 11 minutes, we will go from zero to deployed infrastructure for a scale-out blog (much like the one you're reading right now).

Text Instructions Below:

Pre-Requisites:

  • A Recent Ubuntu Installation (12.04 +)
  • A credit card (for DO)
Install Juju

sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju

Install Juju-Docean Plugin

sudo apt-get install python-pip
sudo pip install juju-docean
juju generate-config

Generate an SSH Key

ssh-keygen
cat ~/.ssh/id_rsa.pub

Setup DO API Credentials in Environment

vim ~/.bashrc

You'll want the following exports in $HOME/.bashrc

export DO_CLIENT_ID="XXXXXX"
export DO_API_KEY="XXXXXX"

Then source the file so it's in our current, active session.

source ~/.bashrc

Setup Environment and Bootstrap

vim ~/.juju/environments.yaml

Place the following lines in the environments.yaml, under the environments: key (indented 4 spaces) - ENSURE you use 4 spaces per indentation block, NOT a TAB key.

digitalocean:
    type: manual
    bootstrap-host: null
    bootstrap-user: root

Switch to the DigitalOcean environment, and bootstrap:

juju switch digitalocean
juju docean bootstrap

Now you're free to add machines with constraints.

juju docean add-machine -n 3 --constraints="mem=2g region=nyc3" --series=precise

And deploy our infrastructure:

juju deploy ghost
juju deploy mysql
juju deploy haproxy
juju add-relation ghost mysql
juju add-relation ghost haproxy
juju expose haproxy

From here, pull the status off the HAProxy node, copy/paste the public-address into your browser and revel in your brand new Ghost blog deployed on Digital Ocean's blazing fast SSD servers.

Caveats to Juju DigitalOcean as of Sept. 2014:

These are important things to keep in mind as you move forward. This is a beta project. Evaluate the following passages for quick fixes to known issues, and warnings.

Not all charms have been tested on DO, and you may find missing libraries – most notably python-yaml on charms that require it. Most "install failed" errors are due to missing python-yaml.

A quick hotseat fix is:

juju ssh service/#
sudo apt-get install python-yaml
exit
juju resolved -r service/#

And then file a bug against the culprit charm that it's missing a dependency for Digital Ocean.

While this setup is amazingly cheap and works really well, the Docean plugin provider should be considered beta software, as Hazmat is still actively working on it.

All in all, this is a great place to get started if you're willing to invest a bit of time working with a manual environment. Juju's capable orchestration will certainly make most if not all of your deployments painless, and bring you to scaling nirvana.

Happy Orchestrating!

Stephen Kelly: Grantlee 5.0.0 release candidate available for testing

Mon, 09/22/2014 - 04:44

I have tagged, pushed and tarball‘d a release candidate for Grantlee 5.0.0, based on Qt 5. Please go ahead and test it, and port your software to it, because some things have changed.

Also, following from the release last week, I’ve made a new 0.5.1 release with some build fixes for examples with Qt 5 and a change to how plugins are handled. Grantlee will no longer attempt to unload plugins at any point.


Stuart Langridge: Reconnecting

Mon, 09/22/2014 - 03:38

After I took issue with some thoughts of Aaron Gustafson regarding JavaScript, Aaron has clarified his thoughts elegantly.

His key issue here is summed up by one sentence from his post: “The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.” Now, I could nit-pick various details about the argument he provides (I’ve had as many buggy modules from npm or PyPI as I’ve had from the Google jQuery CDN, and if you specify exact version numbers that’s less of a problem; writing software specifically for one client might allow you to run a carbon copy of their server, but server software for wide distribution, especially open source, doesn’t and shouldn’t have that luxury) but those sorts of pettifogging nit-picks are, I hope, beneath us. In short, I’ll say this: I agree with Aaron. He is right. However, this discussion uncovers some wider issues.

Now, I’ve built pure client-side apps. The WebUtils are a suite of small apps which one would expect to find on a particular platform (a calculator, a compass, a currency converter, that sort of thing), but built to be pure web apps in order that someone deciding to use the web as their platform has access to these things. (If you’re interested in this idea, get involved in the project.) I built two of them; a calculator and a currency converter. Both are pure-client-side, JavaScript-requiring, Angular-using apps. I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work. (And here by “without JavaScript” we must also include being with JavaScript but JavaScript which gets screwed up by your mobile carrier injecting extra scripts, or your ISP blocking CDNs, or your social media buttons throwing errors, or your ads stomping on your variables. All of these are real problems which go unacknowledged and Aaron is entirely right to bring them up.) However, the policy that apps should be robust to JS not working, by being server apps which are progressively enhanced, does ignore an elephant in the room.

It is this. I should not need an internet connection in order to make my calculator add 4 and 5.

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment. It doesn’t work when you’re in and out of train tunnels, or at a conference with bad wifi, or on a metered data connection. The offline first concept, which should be informing how we build apps, is right about this.

So, what to do? It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps. They might share a bit of code (the server being node.js and using JavaScript might help here, but that’s not what I was talking about last time), but in practice you’re building the same app twice in two different environments and then delivering both through one URL.

This is a problem. I am not sure how to square this circle. Aaron is right that you can’t build a robust Web experience that relies solely on client-side JavaScript, but the word robust is doing an awful lot of work in that sentence. In particular, it means “you can’t build a thing which you can be sure will work for everybody”. Most of the time, for most of the people, your experience can be robust — fine, it’ll break if your ad JavaScript sods you up, or if your ISP blocks your jQuery download, and these are problems. But going the other direction — explicitly not relying on your client-side JS — means that you’ll be robust until you’re somewhere without a good internet connection. Both of these approaches end up breaking in certain situations. And making something which handles both these camps is a serious amount of work, and we don’t really know how to do it. (I’d be interested in hearing examples of a web app which will work if I run it in an environment with a flaky internet connection and having redefined window.Array to be something else on the client. I bet there isn’t one.)

This needs to be a larger conversation. It needs discussion from both JavaScript people and server people. These two camps should not be separate, not be balkanised; there is an attitude still among “real programmers” that JavaScript is not a real language. Now, I will be abundantly clear here: Aaron is not one of the people who thinks this. But I worry that perpetuating the idea that JavaScript is an unstable environment will cause JS people to wall themselves off and stop listening to reasoned debate, which won’t help. As I say, I don’t know how to square this circle. The Web is an environment unlike any other, and as Aaron says, “building the Web requires more of us than traditional software development”. We get huge benefits from doing so — you build for the Web, and do it right, and your thing is available to everybody everywhere — but to “do it right” is a pretty big task, and it gets bigger every day. We need to be talking about that more so we can work out how to do it.

Robert Ancell: Using EGL with GTK+

Mon, 09/22/2014 - 02:58
I recently needed to port some GTK+ OpenGL code from GLX to EGL and I couldn't find any examples of how to do this. So, to seed the Internet, here is what I found out.

This is the simplest example I could make to show how to do this. In real life you probably want to do a lot more error checking. This will only work with X11; for other systems you will need to use equivalent methods in gdk/gdkwayland.h etc. For anything modern you should probably use OpenGL ES instead of OpenGL - to do this you'll need to change the attributes to eglChooseConfig and use EGL_OPENGL_ES_API in eglBindAPI.

Compile with:
gcc -g -Wall egl.c -o egl `pkg-config --cflags --libs gtk+-3.0 gdk-x11-3.0` -lEGL -lGL

#include <gtk/gtk.h>
#include <gdk/gdkx.h>
#include <EGL/egl.h>
#include <GL/gl.h>

static EGLDisplay *egl_display;
static EGLSurface *egl_surface;
static EGLContext *egl_context;

static void realize_cb (GtkWidget *widget)
{
    EGLConfig egl_config;
    EGLint n_config;
    EGLint attributes[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
                            EGL_NONE };

    /* Create an EGL display, surface and context from the widget's X window.
     * For OpenGL ES, change the attributes passed to eglChooseConfig and use
     * EGL_OPENGL_ES_API in eglBindAPI, as noted above. */
    egl_display = eglGetDisplay ((EGLNativeDisplayType) gdk_x11_display_get_xdisplay (gtk_widget_get_display (widget)));
    eglInitialize (egl_display, NULL, NULL);
    eglChooseConfig (egl_display, attributes, &egl_config, 1, &n_config);
    eglBindAPI (EGL_OPENGL_API);
    egl_surface = eglCreateWindowSurface (egl_display, egl_config, gdk_x11_window_get_xid (gtk_widget_get_window (widget)), NULL);
    egl_context = eglCreateContext (egl_display, egl_config, EGL_NO_CONTEXT, NULL);
}

static gboolean draw_cb (GtkWidget *widget)
{
    eglMakeCurrent (egl_display, egl_surface, egl_surface, egl_context);

    glViewport (0, 0, gtk_widget_get_allocated_width (widget), gtk_widget_get_allocated_height (widget));

    glClearColor (0, 0, 0, 1);
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (0, 100, 0, 100, 0, 1);

    /* Draw a coloured triangle, then present it with eglSwapBuffers. */
    glBegin (GL_TRIANGLES);
    glColor3f (1, 0, 0);
    glVertex2f (50, 10);
    glColor3f (0, 1, 0);
    glVertex2f (90, 90);
    glColor3f (0, 0, 1);
    glVertex2f (10, 90);
    glEnd ();

    eglSwapBuffers (egl_display, egl_surface);

    return TRUE;
}

int main (int argc, char **argv)
{
    GtkWidget *w;

    gtk_init (&argc, &argv);

    /* Disable double buffering so GTK+ doesn't blit over our GL output. */
    w = gtk_window_new (GTK_WINDOW_TOPLEVEL);
    gtk_widget_set_double_buffered (GTK_WIDGET (w), FALSE);
    g_signal_connect (G_OBJECT (w), "realize", G_CALLBACK (realize_cb), NULL);
    g_signal_connect (G_OBJECT (w), "draw", G_CALLBACK (draw_cb), NULL);

    gtk_widget_show (w);

    gtk_main ();

    return 0;
}

Zygmunt Krynicki: Announcing Morris 1.0

Sun, 09/21/2014 - 12:07
Earlier today I've released the first standalone version of Morris (source, documentation). Morris is named after Gabriel Morris, the inventor of Colonne Morris aka the advertisement column. Morris is a simple and proven Python event/signaling library (not for watching sockets or for doing IO but for generic, in-process broadcast messages).

Morris is the first part of the Plainbox project that I've released as a standalone, small library. We've been using that code for two years now. Morris is simple, well-defined and I'd dare to say, complete. Hence the 1.0 version, unlike the traditional 0.1 that many free software projects start with.
Morris works on Python 2.7+, PyPy and Python 3.2+. It comes with tests, examples and extensive docstrings. Currently you can install it from PyPI, but a Debian package is in the works and should be ready for review later today.
Here's a simple example on how to use the library in practice:
from __future__ import print_function
from morris import signal


class Processor(object):

    def process(self, thing):
        self.on_processing(thing)

    @signal
    def on_processing(self, thing):
        pass


def on_processing(thing):
    print("Processing {}".format(thing))


proc = Processor()
proc.on_processing.connect(on_processing)
proc.process("foo")
proc.process("bar")

For more information check out morris.readthedocs.org

Ronnie Tucker: Stephen Hawking Talks About the Linux-Based Intel Connected Wheelchair Project

Sat, 09/20/2014 - 22:07

Intel has revealed a new, interesting concept called the Connected Wheelchair, which takes data from users, allows people to share that info with the community, and is powered by Linux.

When people say Intel, they usually think about processors, but the company also makes a host of other products, including very cool or useful concepts that might have some very important applications in everyday life.

The latest initiative is called the Connected Wheelchair and the guys from Intel even convinced the famous Stephen Hawking to help them spread the word about this amazing project. It’s still in the testing phases and it’s one of those products that might show a lot of promise but never go anywhere because there is no one to produce and sell it.

Source:

http://news.softpedia.com/news/Stephen-Hawking-Talks-About-the-Linux-Based-Intel-Connected-Wheelchair-Project-458539.shtml

Submitted by: Silviu Stahie

Ubuntu Podcast from the UK LoCo: S07E25 – The One Where the Monkey Gets Away

Sat, 09/20/2014 - 06:30

Just Laura Cowen and Alan Pope are in Studio L for Season Seven, Episode Twenty-Five of the Ubuntu Podcast!

Apologies for the terrible audio quality in this episode. It turns out one of the channels on the compressor is broken and we didn’t realise until much later on.

Download Ogg  Download MP3

In this week’s show:

We’ll be back next week, when we’ll have some interviews from JISC, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
