Feed aggregator

Nekhelesh Ramananthan: Clock App Reboot Backstory

Planet Ubuntu - Sat, 10/11/2014 - 05:28

This week, while reading Michael Hall's blog post about 1.0 being the deadliest milestone, I couldn't help but grin when I read:

1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

This was exactly the thought that crossed my mind when I first heard about the Clock Reboot at the Malta Sprint.

Let me share how the Clock Reboot came to be :). At the start of the Malta sprint I was told that the Clock app would receive some new designs that would need to be implemented for the Release-To-Manufacture (RTM) milestone, which at that time was 4 months away. On the eve of the meeting, we (the Canonical Community team and the Core Apps team) rushed into the meeting room to find Giorgio Venturi, John Lea, Benjamin Keyser, Mark Shuttleworth, and other department heads looking over the designs.

Giorgio went over the presentation and explained how the designs were supposed to work. At the end of the presentation, I was asked if this was feasible within the given timeframe, and everyone started to look at me. Honestly, at that moment I felt a shiver run down my spine. I was uncertain, given that it had taken the clock app almost a year to buckle down and get to a point where it was usable, and I wondered whether 4 months would be sufficient.

Strangely enough, during the presentation I recollected a conversation I had had with Kyle Nitzsche a few days before that meeting, where he asked if the clock app code changes had started to stabilize (no invasive code changes), considering we were that close to RTM.

Fast-forwarding to today, I think that the Clock Reboot was a blessing in disguise. I have been working on the clock app since the beginning, which was somewhere in the first quarter of 2013, and I can confidently say that I made a ton of mistakes in the so-called 1.0 milestone. The Clock Reboot gave me the opportunity to start from scratch and carefully iterate on every piece of code going into trunk while avoiding the mistakes of the past.

And I like to think that it has resulted in a much cleaner, leaner and more manageable codebase, and in a more reliable clock experience for the user.

The Music App is going through that same transition, and I wish them the best; I think it will make them stronger and better.

I'd like to end this post with a blast from the past, since it is the one-year anniversary of the clock app! ;) It was first released to the Ubuntu Touch store on the 10th of October 2013.

Diego Turcios: Central America is discovering BeagleBoard!

Planet Ubuntu - Sat, 10/11/2014 - 00:32
The title is clear enough! I was at the VI Encuentro Centroamericano de Software Libre and gave a presentation about what BeagleBoard.org is, showing some small demos of the BeagleBone Black.

Important references about Central America and open hardware/single-board computers & microcontrollers

All of this data is according to what people said in Panama.
  • Arduino is used in 6 countries of Central America as part of their computer science or electrical engineering studies
  • Raspberry Pi is also used in all 7 countries of Central America
  • Only 2 countries of Central America have heard about the BeagleBone Black (Costa Rica & Honduras)
  • Only 1 country of Central America has worked with a BeagleBone Black (Honduras)

What was my talk about?

Basically, my talk was about what the BeagleBoard.org community and the BeagleBone Black are.

  • I did a presentation in HTML explaining the BeagleBoard.org history and goals. Check the presentation here
  • I showed some slides from Jason Kridner's presentation.
  • I ran a small demo on how to build a traffic light with LEDs, showing how to use the bonescript library. Here
What did people say?
They loved it!

  • I will say it again: people loved it!
  • They were surprised by the bonescript library.
  • That's basically one of the major problems of other platforms: how to connect the web with a hardware platform.
  • The University of Costa Rica will buy BeagleBone Blacks for next year. (They had heard about the project and were interested in it, but didn't know about all the capabilities it had.)
  • Some professors from Panamanian universities were quite happy to see that they could use this tool in their courses. Thanks to the bonescript library, it is now easy to assign projects that combine web applications and hardware on the BeagleBone Black.

Some images of the event:

Diego Turcios: In the Encuentro Centroamericano de Software Libre!

Planet Ubuntu - Fri, 10/10/2014 - 12:05
Currently I'm in the city of Chitré, Panama, for the VI Encuentro Centroamericano de Software Libre.

This has been an excellent experience. I met several friends and community members whom I first met at the Primer Encuentro Centroamericano in the city of Estelí, Nicaragua, and of course new people from Central America.

Something new at this ECSL is the presence of several recognized people from the open source community. We had the presence of Ramón Ramón, a famous blogger in Latin America; Guillermo Movia, Community Manager for Latin America at Mozilla; and other people from the open source community in Central America.

I'm going to write in future posts about my BeagleBoard.org presentation and other talks I had with the Mozilla people. By the way, I got the opportunity to meet Jorge Aguilar, founder of the Mozilla community of Honduras.

Some images.
If you want to read this presentation in Spanish, click here.

Michael Hall: 1.0: The deadliest milestone

Planet Ubuntu - Fri, 10/10/2014 - 11:49

So it’s finally happened, one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.

It’s a well-established mantra in software development, codified by Fred Brooks, that you will end up throwing away the first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, or the language you are learning. And learn I did: I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.

Oftentimes projects spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release. This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:

  • A full Javascript library for interacting with the Reddit API
  • User account support, which additionally will allow:
    • Posting articles & comments
    • Reading messages in your inbox
    • Upvoting and downvoting articles and comments
  • Convergence from the start, so it’s usable on the desktop as well
  • Re-introduce link sharing via Content-Hub
  • Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems

Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.

Ubuntu Podcast from the UK LoCo: S07E28 – The One with the List

Planet Ubuntu - Fri, 10/10/2014 - 03:59

We’re back with Season Seven, Episode Twenty-Eight of the Ubuntu Podcast, featuring Alan Pope, Laura Cowen, Mark Johnson, and Tony Whitmore! We ate this carrot cake from the Co-op. It’s very tasty.


In this week’s show:

  • We share some Command Line Lurve which saves you valuable keystrokes:

        tar xvf archive.tar.bz2
        tar xvf foo.tar.gz

    Tar now auto-detects the compression algorithm used!

  • And we read your feedback. Thanks for sending it in!
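The auto-detection above is easy to check for yourself; here is a minimal sketch (the file names and contents are just placeholders):

```shell
# Create a bzip2 and a gzip archive of the same file (placeholder names)
workdir=$(mktemp -d)
cd "$workdir"
echo "hello" > notes.txt
tar cjf archive.tar.bz2 notes.txt   # -j: bzip2 compression
tar czf foo.tar.gz notes.txt        # -z: gzip compression
rm notes.txt

# Extraction needs no -j or -z: tar sniffs the compression format itself
tar xvf archive.tar.bz2
tar xvf foo.tar.gz
```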

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Martin Pitt: Running autopkgtests in the cloud

Planet Ubuntu - Fri, 10/10/2014 - 00:25

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power!. This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
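For reference, preparing such a controller node amounts to something like the following (a sketch: the package names come from the text above, while the credential values are placeholders you would take from your own cloud's RC file):

```shell
# On the "autopkgtest controller" cloud instance (Ubuntu/Debian assumed)
sudo apt-get install autopkgtest python-novaclient   # autopkgtest >= 3.5.5

# The usual OpenStack environment variables; values below are placeholders
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
export OS_TENANT_NAME=my-tenant
export OS_USERNAME=my-user
export OS_PASSWORD=secret
```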

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: Essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter that it gets is the name of the pristine image to base on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.
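The steps just described could be sketched roughly like this (a simplified outline rather than the actual create-nova-adt-image.sh, and not runnable outside a configured OpenStack environment; the flavor, instance name and IP lookup are placeholder assumptions, and error handling is omitted):

```shell
#!/bin/sh -e
# Rough outline of building an adt-specific daily image from a pristine one
PRISTINE="$1"        # e.g. the name of an official Ubuntu cloud image

# 1. Boot a throwaway instance from the pristine image
nova boot --flavor m1.small --image "$PRISTINE" --poll adt-prepare

# 2. Turn the instance into an autopkgtest environment over ssh
#    (the IP extraction below is a placeholder; adjust for your cloud)
IP=$(nova show adt-prepare | awk '/network/ { print $5 }')
ssh ubuntu@"$IP" sudo /usr/share/autopkgtest/adt-setup-vm

# 3. Snapshot it as the new daily adt image and clean up
nova image-create --poll adt-prepare adt-utopic-amd64
nova delete adt-prepare
```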

Thus something like this should be run daily (pick the base images from nova image-list):

$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

$ cat pkglist
apport apt aptdaemon apache2 autopilot-gtk autopkgtest binutils
chromium-browser cups dbus gem2deb glib-networking glib2.0 gvfs kcalc
keystone libnih libreoffice lintian lxc mysql-5.5 network-manager nut
ofono-phonesim php5 postgresql-9.4 python3.4 sbuild shotwell
systemd-shim ubiquity ubuntu-drivers-common udisks2 upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above using ten VMs in parallel:

parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing there are now only two failures left, both due to flaky tests; the infrastructure now seems to hold up fairly well.

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Forums Council: Happy Birthday Ubuntu Forums

Planet Ubuntu - Thu, 10/09/2014 - 22:53

Somewhere roundabout 10 years ago Ryan Troy started what we all know and love – Ubuntu Forums.

The look and feel has gone through several iterations, matching the Ubuntu colour scheme evolutions. Each look & feel has had its own crowd saying the previous one was better, but the one constant has been the members who in their thousands log on to share knowledge.


As part of the forum wide celebration – there’s a custom avatar you can use if you wish, and if you’ve less than 10 posts, you too can use it now that we’ve changed the 10 post rule to allow you to use custom avatars. We’re also making some changes to how we deal with other operating systems – soon.

For now we’ll be checking the posts that get made roundabout midnight of the 9th – if we pick you – expect a PM from one of us.

Thanks for your participation in helping the forum become what it is – as we passed through 2 million threads and rapidly approach 2 million users.


Scarlett Clark: Kubuntu: KDE Frameworks 5.3.0 Now released to archive.

Planet Ubuntu - Thu, 10/09/2014 - 18:07

Frameworks 5.3.0 has finished uploading to the archive! An apt-get update is all that is required to upgrade.
We are currently finishing up Plasma 5.1.0! The problems encountered during the beta have been resolved.

Kubuntu Wire: Weta Uses Kubuntu for Hobbit

Planet Ubuntu - Thu, 10/09/2014 - 03:30

Top open source news website TheMukt has an article headlined KDE’s Plasma used in Hobbit movies.  Long-time fans of Kubuntu will know of previous boasts that they use a 35,000-core Ubuntu farm to render films like Avatar and The Hobbit, supported by a Kubuntu desktop.  Great to see they’re making use of Plasma 4 and its desktop capabilities.

Marcin Juszkiewicz: 2 years of AArch64 work

Planet Ubuntu - Wed, 10/08/2014 - 10:55

I do not remember exactly when I started working on ARMv8 stuff. I checked old emails from my Linaro times and found that we discussed an AArch64 bootstrap using OpenEmbedded during Linaro Connect Asia (June 2012). But it had to wait a bit…

First we took OpenEmbedded and created all the tasks/images we needed, but built them for 32-bit ARM. But during September we got all the toolchain parts: binutils was public, gcc was public, glibc was on its way to being released. I remember the moment when I built my first “helloworld” — probably as one of the first people outside ARM and their hardware partners.

In the first week of October we had an ARMv8 sprint in Cambridge, UK (in the Linaro and ARM offices). When I arrived and took a seat I got the information that glibc had just gone public. I fetched it, rebased my OpenEmbedded tree to drop traces of “private” patches and started a build. Once it finished, everything went public in the git.linaro.org repository.

But we still lacked hardware… The only thing available was the Versatile Express emulator, which required a license from ARM Ltd. But then a free (though proprietary) emulator was released, so everyone was able to boot our images. OMG, it was so slow…

Then the fun porting work started. I patched this, patched that, sent patches to OpenEmbedded and to upstream projects, and time went on. And on.

In January 2013 I got X11 running on emulated AArch64. It took a few months before other distributions got to that point.

February 2013 was a nice moment, as the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development than Debian's. All work was merged back, so some time later Debian also had an AArch64 port.

It was March or April when OpenSUSE did a mass build of the whole distribution for AArch64. They had the biggest number of packages built for quite a long time. But I did not track their progress too much.

And then 31st May came: the day when I left Linaro. But I had already had a call with Red Hat, so the future looked quite bright ;D

June was the month when the first silicon was publicly presented. I do not know what Jon Masters was showing, but it was probably some prototype from Applied Micro.

On 1st August I was officially hired by Red Hat and started a month later. My wife was joking that the next step would be Retired Software Engineer ;D

So I moved my AArch64 work from OpenEmbedded to Fedora. A lot of work here was already done, as Fedora developers had been planning a 64-bit ARM port a few years before, when it was still in the design phase. So when Fedora 15 was bootstrapped for “armhf”, it was done as preparation for AArch64. The 64-bit ARM port was started in October 2012 with Fedora 17 packages (and switched to Fedora 19 during the work).

My first task at Red Hat was getting Qt4 to work properly. That beast took a few days in the foundation model… Good thing we got the first hardware then, so it went faster. 1-2 months later I had a remote APM Mustang available for my porting work.

In January 2014 QEMU got AArch64 system emulation, and people started migrating away from the foundation model.

The next months were full of hardware announcements: AMD, Cavium, Freescale, Marvell, MediaTek, NVIDIA, Qualcomm and others.

In the meantime I decided to do a crazy experiment with OpenEmbedded. I had been the first to use it to build for AArch64, so why not be the first to build OE on 64-bit ARM?

And then June came, with an APM Mustang for use at home. Finally X11 forwarding started to be useful. One of the first things I did was run Firefox on AArch64, just for the fun of running the piece of software whose porting/upstreaming took me the most time.

It did not take me long to get the idea of transforming the APM Mustang (which I named “pinkiepie”, as all machines at my home are named after cartoon characters) into an ARMv8 desktop. I am still waiting for a PCI Express riser and USB host support.

Now we have October. Soon it will be 2 years since people got the foundation model. And there are rumors about AArch64 development boards in production with prices below 100 USD. I will do what is needed to get one of them onto my desk ;)

All rights reserved © Marcin Juszkiewicz
2 years of AArch64 work was originally posted on Marcin Juszkiewicz website

Related posts:

  1. AArch64 for everyone
  2. AArch64 porting update
  3. ARM 64-bit porting for OpenEmbedded

Scott Kitterman: Thanks Canonical (really)

Planet Ubuntu - Wed, 10/08/2014 - 08:49

Some of you might recall seeing this insights article about Ubuntu and the City of Munich.  What you may not know is that the desktop in question is the Kubuntu flavor of Ubuntu.  This wasn’t clear in the original article and I really appreciate Canonical being willing to change it to make that clear.

Ubuntu Kernel Team: Kernel Team Meeting Minutes – October 07, 2014

Planet Ubuntu - Tue, 10/07/2014 - 10:15
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141007 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the v3.16.4 upstream stable
kernel. This is available for testing as of the 3.16.0-21.28 upload to
the archive. Please test and let us know your results.
Also, Utopic Kernel Freeze is this Thurs Oct 9. Any patches submitted
after kernel freeze are subject to our Ubuntu kernel SRU policy. I sent
a friendly reminder about this to the Ubuntu kernel-team mailing list
yesterday as well.
—–
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~2 days away)
Thurs Oct 16 – Utopic Final Freeze (~1 week away)
Thurs Oct 23 – Utopic 14.10 Release (~2 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Testing
  • Precise – Testing
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 08-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Ubuntu Podcast from the UK LoCo: S07E27 – The One with Five Steaks and an Eggplant

Planet Ubuntu - Tue, 10/07/2014 - 10:09

The full team assembles (that’s Laura Cowen, Mark Johnson and Alan Pope, joined by a returning Tony Whitmore) in Studio L for Season Seven, Episode Twenty-Seven of the Ubuntu Podcast!


In this week’s show:

We’ll be back next week, when we’ll have some mystery content and your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Michael Hall: The Open Source community is wonderful

Planet Ubuntu - Tue, 10/07/2014 - 09:25

But it isn’t perfect.  And that, in my opinion, is okay.  I’m not perfect, and neither are you, but you are still wonderful too.

I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.

You reap what you sow

People will usually respond in kind to the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive kinds. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute: nice people will have some unhappy followers, and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.

Don’t get even, get better

An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead use that negativity as fuel to drive you forward in a positive way, respond with coolness, thoughtfulness and introspection and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience

Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ berating of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or as necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier and in my opinion better to have a large number of positive people doing it.

Monoculture is dangerous

I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, “monoculture is bad; embrace diversity or die along with everyone else.” But it’s not just dangerous for software and agriculture, it’s dangerous to communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed, because they didn’t affect anyone else, but a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen it, and adding a minority contributor will ultimately benefit a project more than adding another to the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you, but instead treat it like you would a normal bug report from somebody on different hardware than you.

Good people are human too

The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens to, for whatever reason, explode and try to kill the rest of the body. People in a community do this too.  I’ve seen a number of people that were good or even great contributors who, for whatever reason, had to explode and they threatened to take down anything they were a part of when it happened. But these people were no more malevolent than your appendix is, they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts

When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.

Andrea Veri: The GNOME Infrastructure is now powered by FreeIPA!

Planet Ubuntu - Tue, 10/07/2014 - 02:21

As preannounced here the GNOME Infrastructure switched to a new Account Management System which is reachable at https://account.gnome.org. All the details will follow.

Introduction

It’s been a while since anyone actually touched the underlying authentication infrastructure that powers the GNOME machines. The very first setup was originally configured by Jonathan Blandford (jrb), who set up an OpenLDAP instance with several customized schemas (pServer fields in the old CVS days, pubAuthorizedKeys and GNOME-modules-related fields in recent times).

While the OpenLDAP server was living on the GNOME machine called clipboard (aka ldap.gnome.org), the clients were configured to synchronize users, groups and passwords through the nslcd daemon. After several years Jeff Schroeder joined the Sysadmin Team and during one cold evening (Tuesday, February 1st, 2011) spent some time configuring SSSD to replace the nslcd daemon, which was missing one of the most important SSSD features: caching. What surely convinced Jeff to adopt SSSD (a very new but promising piece of software at that time, as its first release happened right before 2010’s Christmas) was its caching feature, as the commit log also states: “New sssd module for ldap information caching”.

It was enough for a given user to log in once and the ‘/var/lib/sss/db’ directory was populated with their login information, so the daemon in charge of picking up login details no longer had to query the LDAP server every single time a request was made. This feature has definitely helped on many occasions, especially when the LDAP server was down for some reason and sysadmins needed to access a specific machine or service: without SSSD this was never going to work, and sysadmins would probably have been locked out of the machines they were used to managing (unless you still had ‘/etc/passwd’, ‘/etc/group’ and ‘/etc/shadow’ entries as a fallback).

Things were working just fine except for a few downsides that appeared later on:

  1. the web interface (view) of our LDAP user database was managed by Mango, an outdated tool which many wanted to rewrite in Django but which slowly became a huge dinosaur nobody wanted to look into again
  2. the Foundation membership information was managed through a MySQL database, meaning two databases and two sets of users unrelated to each other
  3. users were not able to modify their own account information themselves; even a single e-mail change required them to mail the GNOME Accounts Team, which would then authenticate their request and finally update the account.

Today’s infrastructure changes finally fix the issues outlined in (1), (2) and (3).

What has changed?

The GNOME Infrastructure is now powered by Red Hat’s FreeIPA, which bundles several FOSS components behind an easy and intuitive web UI that lets users update their account information on their own, without the need for the Accounts Team or any other administrative entity. Users will also find two custom fields on their “Overview” page: “Foundation Member since” and “Last Renewed on date”. As you may have guessed, we finally managed to migrate the Foundation membership database into LDAP itself, so that information is now stored once and for all. As a side note, some users who were Foundation members in the past may not find any details in the Foundation fields outlined above. That is expected, as we were only able to migrate current and former Foundation members who had an LDAP account registered at the time of the migration. If that’s your case and you would still like the information to be stored on the new setup, please get in touch with the Membership Committee stating so.

Where can I get my first login credentials?

Let’s make a little distinction between users who previously had access to Mango (usually maintainers) and users who didn’t. If you had access to Mango before, you should be able to log in to the new Account Management System by entering your GNOME username and the password you used for logging in to Mango. (After logging in for the very first time you will be prompted to update your password; please choose a strong one, as this account will be unique across the entire GNOME Infrastructure.)

If you never had access to Mango, if you lost your password, or if the first time you read the word Mango in this post you thought “why is he talking about a fruit now?”, you should be able to reset it using the following command:

ssh -l yourgnomeuserid account.gnome.org

The command starts an SSH connection to account.gnome.org; once you authenticate (with the SSH key you previously registered on our Infrastructure), it triggers a command that sends a brand new password to the e-mail address registered for your account. From my tests it seems Gmail marks the e-mail as a phishing attempt, probably because the body contains the word “password” twice. If the e-mail doesn’t appear in your inbox, please double-check your Spam folder.

Now that Mango is gone how can I request a new account?

With Mango we used to have a form that automatically e-mailed the maintainer of the selected GNOME module, who would then approve or reject the request. In the case of a positive vote from the maintainer, the Accounts Team would then create the account itself.

With the recent introduction of a commit robot directly on l10n.gnome.org, the number of account requests has dropped. In addition, users will now be able to perform pretty much all of the needed maintenance on their accounts themselves. So, while we will probably build a form in the future, we feel that requesting accounts can be handled by mailing the Accounts Team directly, which will then mail the maintainer of the respective module and create the account. As just said, the number of account creations has become very low and the queue is currently clear. The documentation has been updated to reflect these changes at:

https://wiki.gnome.org/AccountsTeam
https://wiki.gnome.org/AccountsTeam/NewAccounts

The migration of all the user data and ACLs has been massive, and I’ve spent a lot of time reviewing the existing HBAC rules trying to spot possible errors or misconfigurations. If you find that you can no longer access a certain service you were used to reaching in the past, please get in touch with the Sysadmin Team. All the ways to contact us are listed at https://wiki.gnome.org/Sysadmin/Contact.

What is missing still?

Now that the Foundation membership information has moved to LDAP, I’ll be looking at porting some of the existing membership scripts to it. What I have already ported are the welcome e-mails for new and renewing members.

The next step will be generating a membership page from LDAP (to populate http://www.gnome.org/foundation/membership) along with the your-membership-is-going-to-lapse e-mails that have been sent until today.

Other news – /home/users mount on master.gnome.org

You will notice that logging in to master.gnome.org results in your home directory being empty. Don’t worry, you did not lose any of your files; master.gnome.org is now hosting your home directories itself. As you may be aware, adding files to the public_html directory on master used to make them appear in your people.gnome.org/~userid space. That was an unfortunate side effect of both master and webapps2 (the machine serving people.gnome.org’s webspaces) mounting the same GlusterFS share.

We wanted to stop that behaviour so that we know who has access to which resource, and where. From today, master’s home directories are there just as a temporary spot for your tarballs: scp them over and run ftpadmin against them, and that should be all you need from master. If you are interested in receiving or keeping your people.gnome.org webspace, please mail us stating so.
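For illustration, a typical release upload in this workflow might look like the following transcript (the module name, version and your local path are placeholders; this is a sketch of the usual scp-then-ftpadmin sequence, not an exact prescription):

```
# Copy the release tarball to your home directory on master (placeholder names)
scp gnome-foo-3.14.1.tar.xz yourgnomeuserid@master.gnome.org:

# Then, on master, hand the tarball to ftpadmin for installation
ssh yourgnomeuserid@master.gnome.org
ftpadmin install gnome-foo-3.14.1.tar.xz
```

Since the home directory on master is only a temporary drop-off point, nothing placed there should be treated as permanent storage.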

Other news – a shiny and new error 500 page has been deployed

Thanks to Magdalen Berns (magpie), a new error 500 web page has been deployed on all the Apache instances we host. The page contains an iframe of status.gnome.org and will appear every time the web server behind the service you are trying to reach is down for maintenance or other reasons. While I hope you won’t see the page that often, you can still enjoy it at https://static.gnome.org/error-500/500.html. Make sure to whitelist status.gnome.org in your browser, as the page currently loads it without HTTPS (the service is hosted on OpenShift, which provides us with a *.rhcloud.com wildcard certificate that differs from the CN the browser would expect).

Randall Ross: Discount Offer for Ubuntu Members - Get Certified!

Planet Ubuntu - Tue, 10/07/2014 - 01:33

Are you an Ubuntu Member? Have you ever wanted to get a technical certification?

My buddy Jorge Castro has an offer for you! Please take a look at this page over on Ubuntu Discourse:

http://discourse.ubuntu.com/t/100-off-linux-foundation-certification-for-ubuntu-members/1915

In Jorge's words, "Go rock that exam!"

Randall Ross: I'm Back!

Planet Ubuntu - Tue, 10/07/2014 - 01:05

Greetings Planet! First, I'd like to apologize for not posting in a long while. Life has been, shall we say, interesting!

Up until the end of August, my focus has been on (non-Ubuntu-related) client work as part of my IT cyber-security consulting practice. This has meant that I've been traveling back and forth between San Francisco and Vancouver BC, living and working in both of these beautiful cities. This has also meant that I've been somewhat time-starved to do some of the things I've historically enjoyed doing in the Ubuntu world, blogging being one of those things.

So, what happened at the end of August? That's a bit of *great* news that I'll save that for an upcoming post. ;)

The Fridge: Ubuntu Weekly Newsletter Issue 386

Planet Ubuntu - Mon, 10/06/2014 - 16:04

Welcome to the Ubuntu Weekly Newsletter. This is issue #386 for the week September 29 – October 5, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Aaron Honeycutt
  • John Mahoney
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Svetlana Belkin: Start Planning For 14.11 UOS

Planet Ubuntu - Mon, 10/06/2014 - 13:53

As stated some months ago, the next Ubuntu Online Summit (UOS) is (almost) a month away, running November 12th to 14th. There will be five (5) tracks: app development, cloud development, community, Ubuntu development, and users. I will be one of the Community track leads. Since UOS is (again, almost) a month away, we should start planning sessions. For sessions that don’t need a blueprint, you are welcome to use the “propose a session” button on the UOS homepage. For those that require a blueprint, please use this Google Spreadsheet to add your session idea. Once we know how to name our UOS blueprints, it will be easy to keep track of which sessions still need to be proposed.

