Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Ross Gammon: Happy New Year – My Free Software activities in December 2016

Mon, 01/02/2017 - 15:58

So that was 2016! Here’s a summary of what I got up to on my computer(s) in December, a check of how I went against my plan, and the TODO list for the next month or so.

With a short holiday to Oslo, Christmas holidays, Christmas parties (at work and with Alexander at school, football etc.), travelling to Brussels with work, birthdays (Alexander & Antje), I missed a lot of deadlines, and failed to reach most of my Free Software goals (including my goals for new & updated packages in Debian Stretch – the soft freeze is in a couple of days). To top it all off, I lost my grandmother at the ripe old age of 93. Rest in peace Nana. I wish I could have made it to the funeral, but it is sometimes tough living on the other side of the world to your family.

Debian

Ubuntu
  • Added the Ubuntu Studio testsuites to the package tracker, and blogged about running the Manual Tests.
Other

Plan status & update for next month

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

For the Debian Stretch release:

Ubuntu
  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages. – Done
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Still to do
  • Reapply to become a Contributing Developer. – Still to do
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Still to do
  • Start testing & bug triaging Ubuntu Studio packages.
  • Test Len’s work on ubuntustudio-controls
Other
  • Continue working to convert my Family History website to Jekyll – Done
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.
  • Give JMRI a good try out and look at what it would take to package it.

Dimitri John Ledkov: Ubuntu Archive and CD/USB images complete migration to 4096 RSA signing keys

Mon, 01/02/2017 - 06:54

Enigma machine photo by Alessandro Nassiri [CC BY-SA 4.0], via Wikimedia Commons
The Ubuntu Archive and CD/USB images use OpenPGP cryptography for verification and integrity protection. In 2012, a new archive signing key was created and we started to dual-sign everything with both the old and new keys.

In April 2017, Ubuntu 12.04 LTS (Precise Pangolin) will go end of life. Precise was the last release that was signed with just the old signing key. Thus, when Zesty Zapus is released as Ubuntu 17.04, there will no longer be any supported Ubuntu release that requires the 2004 signing keys for validation.

The Zesty Zapus release is now signed with just the 2012 signing key, which is a 4096-bit RSA key. The old 2004 signing keys, which were 1024-bit DSA keys, have been removed from the default keyring and are no longer trusted by default in Zesty and up. The old keys are available in the removed-keys keyring in the ubuntu-keyring package, for example in case one wants to verify things from old-releases.ubuntu.com.

Thus the signing key transition is coming to an end. Looking forward, I hope that by the 18.04 LTS time-frame the SHA-3 algorithm will make its way into the OpenPGP spec and that we will possibly start a transition to 8192-bit RSA keys. But this is just wishful thinking, as the current key strength, algorithm, and hash functions are deemed to be sufficient.

Xubuntu: Introducing the Xubuntu Council

Sun, 01/01/2017 - 09:43

At the beginning of 2016 the Xubuntu team started a process to transition the project to become council-run rather than having a single project leader. After careful planning, writing and approving the general direction, the team was ready to vote for the first three members of the council for the project.

In this article we explain what the new Xubuntu Council is and who the council members are.

What is the Xubuntu Council about?

The purpose of the council is very similar to the purpose of the former Xubuntu Project Leader (XPL): to make sure the direction of the project stays stable and in adherence to the Strategy Document, and to be responsible for making long-term plans and decisions where needed.

The two main differences between a council and the XPL, both favoring the council approach, are:

  • The administrative and bureaucratic work of managing the project is split between several people. This means more reliability and faster response times.
  • A council, with a diversity of views, can more fairly evaluate and arbitrate disputes.

Additionally, the council will stay more in the background in terms of daily decisions; the council does not have a casting or veto vote in the way that the XPL had. We believe this lets us embrace the expertise in the team even more than we did before. The council also acts as a fallback to avoid deadlocks that a single point of failure like “an XPL gone missing” could produce.

If you wish to learn more about the council, you can read about it in the Xubuntu Council section of our contributor documentation.

Who is in the Council?

On August 31st, Simon Steinbeiß announced the results of the vote by Xubuntu project members. The first Xubuntu Council consists of the following members:

  • Sean Davis (bluesabre), the council chair and the Xubuntu Technical Lead
  • Simon Steinbeiß (ochosi), the Xubuntu Artwork Lead and a former XPL
  • Pasi Lallinaho (knome), the Xubuntu Website Lead and a former XPL and former Xubuntu Marketing Lead

As the titles alone can tell you, the three council members all have a strong history with the Xubuntu project. Today we want to go a bit deeper than just these titles, which is why we asked the council members a few quick questions so you can start to get to know them.

Interviewing the Council

What inspired you to get involved with the Xubuntu project?

Sean: I started using Xubuntu in 2006 (when it was first released) and used it all throughout college and into my career. I started reporting bugs to the project in 2012 and contributing to the Ubuntu community later that year. My (selfish) inspiration was that I wanted to make my preferred operating system even better!

Simon: When Dapper Drake saw the light of day 10 years ago (I know, it’s incredible – it’s been a decade!) and I started using Linux, my first choice was – and this has never changed – Xfce and Ubuntu. At first I never thought I would be fit to contribute, but the warm welcome from the amazing community around these projects pulled me in.

Pasi: When I converted from Windows to Linux for good in 2006, I started contributing to the Amarok project, my media player of choice back then. A few years later my contributions there slowed down and it felt like a natural step to start working with the operating system I was using.

Can you share some thoughts about the future of Xubuntu?

Sean: Xubuntu has always taken a conservative approach to the desktop. It includes simple, effective applications on top of a traditional desktop. That said, the technologies that Xubuntu is built on (GTK+, GStreamer, Xfce, and many many others) are undergoing significant changes and we’re always looking to improve. I think we’ll continue to see improvements that will welcome new users and please our longtime fans.

Simon: Change is hard for many people; however, based on a recent psych test I am “surprisingly optimistic” :) While Xubuntu – and this is heritage from Xfce – has what many would call a “conservative” approach, I believe we can still improve the current experience by quite a bit. I don’t mean this change has to be radical, but it should be more than just “repainting the walls”. This is why I personally welcome the changes in GTK+ and why I believe our future is bright.

Pasi: As Sean mentioned, we will be seeing changes in Xubuntu as a consequence of the underlying technologies and components – whether we like them or not. To be part of the decision making, and so that Xubuntu can and will feel as integrated and polished as it does now, it’s important to stay involved with the migration work. While this will mean fewer resources to put into Xubuntu-specific work in the near future, I believe it leads us to a better place later.

So that people can get to know you a bit better, is there an interesting fact about yourself that you wish to share?

Sean: Two unrelated things: I’m also an Xfce developer and one of my current life goals is to visit Japan (and maybe one day live there).

Simon: My background is a bit atypical: my two majors at University were Philosophy and Comparative Religious Studies.

Pasi: In addition to contributing to open source, I use my free time to play modern board games. I have about 75 of them in my office closet.

Further questions?

If you have any questions about the council, please don’t hesitate to ask! You can contact us by joining the IRC channel #xubuntu-devel on freenode or by joining the Xubuntu-devel mailing list.

Additionally, if this sparked your interest in getting involved, get in touch with anybody from the Xubuntu team. There are a lot of things to do and all kinds of skills are useful. Maybe someday you might even become a Xubuntu Council member!

Stephen Michael Kellat: Staring Ahead At 2017

Sat, 12/31/2016 - 21:21

2016 was not the best of years. While my parents have told me that it wasn't a bad year, my "log line" for the year was that this was the year I was under investigation, threat assessment, and who knows what other official review. These things kinda happen when you work in the civil service under a president who sometimes thinks he is a Stuart monarch and, even worse, acts like one from time to time.

Tonight I spent some time playing with a software-defined radio. A project in 2017 is to set up an automated recorder out in the garage to monitor the CBC Radio One outlet that is audible from the other side of Lake Erie in southwest Ontario. Right now there is a bit of a noise problem to overcome with some antenna construction; as the waterfall display below shows, I can barely even hear the local outlet of NOAA Weather Radio (KEC58) out in Erie, Pennsylvania amidst some broad-spectrum noise shown in yellow:

Even though it isn't funded, I'm still looking at the Outernet research project. By way of Joey Hess over in the pump.io spaces, I see I'm not the only one thinking about them either, as there was a presentation at 33C3. Eventually I'll need to watch that.

I will note contra David Tomaschik that disclosure of employee information that is available under the Freedom of Information Act isn't really a hack. In general you can request that directory information from any federal agency including DHS and FBI. The FOIA micro-site created by the US Department of Justice can help in drafting your own inquiries.

The folks at the Ubuntu Podcast had an opportunity to prognosticate about the future. With the storm and stress of my civil service post, frankly I forgot to chip in. This happens increasingly. Since I used to be an Ubuntu-related podcaster I can offer some prognostication.

My guesses for 2017 include:

  • I may not be a federal civil servant by the end of 2017. It probably won't be by my choice based upon the views of the incoming administration.
  • 2017 will be the Year of Xubuntu.
  • Laura Cowen will finish her PhD.
  • Lubuntu will be subsumed into the Kubuntu project as a light version of Kubuntu.
  • There will be a steep contraction in the number of Ubuntu derivatives.
  • James Cameron will retcon the Terminator franchise once again, this time renaming Skynet to Mirai.
  • The United States will lose a significant portion of its consumer broadband access. The rest of the world won't notice.
  • I may celebrate New Year's Eve 2017 well outside the Continental United States and quite possibly outside US jurisdiction.

To all a happy new year. We have work to do.

Colin King: Kernel printk statements

Sat, 12/31/2016 - 11:20
The kernel contains tens of thousands of statements that may print various errors, warnings and debug/information messages to the kernel log. Unsurprisingly, as the kernel grows in size, so does the quantity of these messages. I've been scraping the kernel source for various printk-style statements and macros and scanning them for typos and spelling mistakes. To make this easier, I hacked up kernelscan (a quick and dirty parser) that helps me extract the literal strings from the kernel for spell checking.

Using kernelscan, I've gathered some statistics for the number of kernel print statements for various kernel releases:


As one can see, we have over 200,000 messages in the 4.9 kernel(!). Given the kernel's growth, this seems to roughly correlate with the kernel source size:



So how many lines of code does the kernel have per printk message over time?


…showing that the trend is towards more lines of code per printk statement over time. I didn't differentiate between the different types of printk message, so it is hard to see any deeper trends in what kinds of messages are being logged more or less frequently in each release; for example, perhaps fewer debug messages are landing in the kernel nowadays.

I find it quite amazing that the kernel contains quite so many printk messages; it would be useful to see just how many of these actually end up in a production kernel. I suspect quite a large number are for driver debugging and may be conditionally omitted at build time.
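To make the idea concrete, here is a toy sketch of the kind of pass kernelscan performs. The real kernelscan is a proper C parser written in C; this regex version, the invented `PRINTK_RE`/`printk_strings` names, and the made-up C fragment are purely illustrative:

```python
import re

# Match printk() and the pr_*() macro family, capturing the literal
# format string so it can be handed to a spell checker.
PRINTK_RE = re.compile(
    r'\b(?:printk|pr_(?:emerg|alert|crit|err|warn|notice|info|debug|cont))'
    r'\s*\(\s*(?:KERN_[A-Z]+\s+)?"([^"]*)"'
)

# Small made-up C fragment with two deliberate typos to find.
SAMPLE = r'''
static int foo_probe(struct platform_device *pdev)
{
        pr_info("foo: device initalized\n");
        if (err)
                printk(KERN_ERR "foo: failed to registr irq %d\n", irq);
        return 0;
}
'''

def printk_strings(source):
    """Return the literal strings found in printk-style calls."""
    return PRINTK_RE.findall(source)

strings = printk_strings(SAMPLE)
lines_of_code = len(SAMPLE.splitlines())
print(strings)                       # the strings a spell checker would see
print(lines_of_code / len(strings))  # rough lines-of-code-per-message ratio
```

The same counting, run over a full kernel tree per release, is what produces the messages-per-release and lines-of-code-per-message figures discussed above.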

Sebastian Kügler: 33C3: Works for me

Fri, 12/30/2016 - 05:26

Rocket Science

The calm days between Christmas and New Year are best celebrated with your family (of choice), so I went to Hamburg, where the 33rd edition of the Chaos Computer Congress opened its doors to 12,000 hackers, civil rights activists, makers and people interested in privacy and computer security. The motto of this congress is “works for me”, which is meant as a critical nudge towards developers who stop once technology works for them, while it should work for everyone. A demand for a change in attitude.

33C3’s ballroom

The congress is a huge gathering of people to share information, hack, talk and party, and the past days have been a blast. This congress strikes an excellent balance between high quality talks, interesting hacks and electronics, and a laid back atmosphere, all almost around the clock. (Well, the official track stops around 2 a.m., but continues around half past eleven in the morning.) The schedule is really relaxed, which makes it possible to party at night and interrupt dancing for a quick presentation about colonizing intergalactic space — done by domain experts.

The conference also has a large unconference part, hacking spaces, and lounge areas, meaning that the setup is somewhere in between a technology conference, a large hack-fest and a techno party. Everything is filled to the brim with electronics and decorated nicely, and after a few days, the outside world simply starts to fade and “congress” becomes the new reality.

No Love for the U.S. Gov

I’ve attended a bunch of sessions on civil rights and cyber warfare, as well as more technical things. One presentation that touched me in particular was the story of Lauri Love, who is accused of stealing data from agencies including the Federal Reserve, NASA and the FBI. The talk was presented by a civil rights activist from the Courage Foundation and two hackers from Anonymous and LulzSec. While Love is a UK citizen, the US is demanding his extradition from the UK so they can prosecute him under US law (which is much stricter than the UK’s). This would create a precedent, making it much easier for the US to essentially prosecute citizens anywhere under US law.

What kind of technoparty^W congress is this?

This, combined with the US jail system, poses a serious threat to Love. He wouldn’t be the first person driven to suicide by the pressure put on him by US government agencies, who really seem to be playing hardball here. Chelsea Manning, the whistleblower behind the videos of the Baghdad airstrikes (in which the US Air Force carelessly killed innocent citizens, among others), suffered from mental health issues and was put into solitary confinement instead of receiving health care. Against that background, the UK would be sending one of its own citizens into a jail system that doesn’t even respect basic human rights. One particularly touching moment was when the brother of Aaron Swartz took the microphone and appealed to the people asking how they could prevent another Aaron: helping Lauri (and Chelsea) is the way to help out, and that’s where the energy should be put. Very moving.

The media team at this event is recording most of the sessions, so if you have some time to spare, head over to media.ccc.de and get your fix. See you at 34C3!

Jorge Castro: Unifi's new cheaper switches are great

Wed, 12/28/2016 - 17:00

I started switching to Ubiquiti’s Unifi equipment at home when one of my coworkers, Sean Sosik-Hamor, recommended them for prosumer use. A little while later Lee Hutchinson published Ubiquiti Unifi made me realise how terrible consumer Wi-Fi gear is when they launched their newer (and cheaper) line of 802.11ac access points. I’ve got one of those, as well as the USG for routing duties. The USG isn’t something to write home about, but it gets the job done; in some advanced cases you can always ssh to it, but generally speaking I just use it as intended: I set it up and forgot about it.

Unlike most routers, you don’t manage Unifi gear through a web UI on the device; you run controller software on a host, and the controller blasts out the config and updates to the devices. I recommend reading Dustin Kirkland’s blog post on running Unifi in LXD, as the controller is currently 14.04-only, and if you’re like me, you’re finding that it’s becoming much more manageable to keep server software nicely isolated in its own container instead of splatting all its dependencies onto the host OS. If you prefer things more old school, look for the “EdgeRouter” line of routers and switches.

At $99 for an access point and $149 for the USG you can come up with a nice business-grade combo, especially with the latest consumer routers starting to get close to $300(!) with utterly terrible software. The one thing that was always expensive, though, was the Unifi line of managed switches. It’s nice, but at $199 for 8 ports, just too much per port. Here’s a nice review from Lee of the Unifi Switch 8. Thanks to the wonder of their beta store, I was able to pick up the newer, slimmed-down 8-port model, the Unifi US-8:

There it is, with the unmanaged switch it replaced. They dropped the SFP ports, and you can see the port LEDs are on the top instead of in each port, probably for cost? And since it’s Unifi, it plops in nicely with the UI, giving me some nice per-port stats:

And it gets better: they’ve done a US-24 and US-48 as well. I put a US-24 in my basement. $215 all day for 24 ports, compared to the older model, which would go north of $500!

I’m still in the process of setting up the homelab VLAN, so I don’t have much to report on that, but having everything managed in one system is a really great feature. I didn’t really need SFP plugs or lots of PoE power for my use, so this new low-end line is perfect for me. If you find yourself wanting cheap-but-good equipment with decent software, then I recommend you check them out, and of course drop by /r/ubiquiti if you need anything.

See also Troy Hunt’s more in-depth blog post for more information.

Lucas Nussbaum: The Linux 2.5, Ruby 1.9 and Python 3 release management anti-pattern

Mon, 12/26/2016 - 06:32

There’s a pattern that comes up from time to time in the release management of free software projects.

To allow for big, disruptive changes, a new development branch is created. Most of the developers’ focus moves to the development branch. However at the same time, the users’ focus stays on the stable branch.

As a result:

  • The development branch lacks user testing, and tends to make slower progress towards stabilization.
  • Since users continue to use the stable branch, it is tempting for developers to spend time backporting new features to the stable branch instead of improving the development branch to get it stable.

This situation can grow into a quasi-deadlock, with people questioning whether it was a good idea to do such a massive fork in the first place, and whether it is a good idea to even spend time switching to the new branch.

To make things more unclear, the development branch is often declared “stable” by its developers, before most of the libraries or applications have been ported to it.

This has happened at least three times.

First, in the Linux 2.4 / 2.5 era. Wikipedia describes the situation like this:

Before the 2.6 series, there was a stable branch (2.4) where only relatively minor and safe changes were merged, and an unstable branch (2.5), where bigger changes and cleanups were allowed. Both of these branches had been maintained by the same set of people, led by Torvalds. This meant that users would always have a well-tested 2.4 version with the latest security and bug fixes to use, though they would have to wait for the features which went into the 2.5 branch. The downside of this was that the “stable” kernel ended up so far behind that it no longer supported recent hardware and lacked needed features. In the late 2.5 kernel series, some maintainers elected to try backporting of their changes to the stable kernel series, which resulted in bugs being introduced into the 2.4 kernel series. The 2.5 branch was then eventually declared stable and renamed to 2.6. But instead of opening an unstable 2.7 branch, the kernel developers decided to continue putting major changes into the 2.6 branch, which would then be released at a pace faster than 2.4.x but slower than 2.5.x. This had the desirable effect of making new features more quickly available and getting more testing of the new code, which was added in smaller batches and easier to test.

Then, in the Ruby community. In 2007, Ruby 1.8.6 was the stable version of Ruby. Ruby 1.9.0 was released on 2007-12-26, without being declared stable, as a snapshot from Ruby’s trunk branch, and most of the development’s attention moved to 1.9.x. On 2009-01-31, Ruby 1.9.1 was the first release of the 1.9 branch to be declared stable. But at the same time, the disruptive changes introduced in Ruby 1.9 made users stay with Ruby 1.8, as many libraries (gems) remained incompatible with Ruby 1.9.x. Debian provided packages for both branches of Ruby in Squeeze (2011) but only changed the default to 1.9 in 2012 (in a stable release with Wheezy – 2013).

Finally, in the Python community. Similarly to what happened with Ruby 1.9, Python 3.0 was released in December 2008. Releases from the 3.x branch have been shipped in Debian Squeeze (3.1), Wheezy (3.2), Jessie (3.4). But the ‘python’ command still points to 2.7 (I don’t think that there are plans to make it point to 3.x, making python 3.x essentially a different language), and there are talks about really getting rid of Python 2.7 in Buster (Stretch+1, Jessie+2).

In retrospect, and looking at what those projects have been doing in recent years, it is probably a better idea to break early, break often, and fix a constant stream of breakages, on a regular basis, even if that means temporarily exposing breakage to users, and spending more time seeking strategies to limit the damage caused by introducing breakage. What also changed since the time those branches were introduced is the increased popularity of automated testing and continuous integration, which makes it easier to measure breakage caused by disruptive changes. Distributions are in a good position to help here, by being able to provide early feedback to upstream projects about potentially disruptive changes. And distributions also have good motivations to help here, because it is usually not a great solution to ship two incompatible branches of the same project.

(I wonder if there are other occurrences of the same pattern?)

Update: There’s a discussion about this post on HN

Alessio Treglia: Creativity Draws on the Deep Well of the Past

Fri, 12/23/2016 - 10:15

 


Octagonal Well in the Cloister of Giuliano da Sangallo, Faculty of Engineering,
Via Eudossiana, Rome

In the tetralogy “Joseph and His Brothers“, Thomas Mann states, “Deep is the well of the past...”. Sometimes this well is bottomless; it may appear far away and long past, yet all of our actions and everyday decisions come to life from its contents. It is the fundamental substrate, the raw material from which to draw the basic connections of our creativity.

The image of the well, used by Thomas Mann, is very significant. In symbolism, the well is the place where you make contact with the deep self and where you draw the water that gives life. Ancient times remind us of the socializing role of the well, invested with an aura of sacredness, as a place where sharing with others took place. It was…

<Read More…[by Fabio Marzocca]>

Jono Bacon: Recommendations Requested: Building a Smart Home

Thu, 12/22/2016 - 09:00

Early next year Erica, the scamp, and I are likely to be moving house. As part of the move we would both love to turn this house into a smart home.

Now, when I say “smart home”, I don’t mean this:

We don’t need any holographic dogs. We are however interested in having cameras, lights, audio, screens, and other elements in the house connected and controlled in different ways. I really like the idea of the house being naturally responsive to us in different scenarios.

In other houses I have seen people with custom lighting patterns (e.g. work, party, romantic dinner), sensors on gates that trigger alarms/notifications, audio that follows you around the house, notifications on visible screens, and other features.

Obviously we will want all of this to be (a) secure, (b) reliable, and (c) simple to use. While we want a smart home, I don’t particularly want to have to learn a million details to set it up.

Can you help?

So, this is what we would like to explore.

Now, I would love to ask you folks two questions:

  1. What kind of smart-home functionality and features have you implemented in your house (in other words, what neat things can you do?)
  2. What hardware and software do you recommend for rigging a home up as a smart home? I would ideally like to keep re-wiring to a minimum. Assume I have nothing already, so recommendations for cameras, light switches, hubs, and anything else are much appreciated.

If you have something you would like to share, please plonk it into the comment box below. Thanks!

The post Recommendations Requested: Building a Smart Home appeared first on Jono Bacon.

Ubuntu Podcast from the UK LoCo: S09E43 – Talk to the Hand - Ubuntu Podcast

Thu, 12/22/2016 - 08:00

It’s Season Nine Episode Forty-Three of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Laura Cowen are connected and speaking to your brain.

We are four once more and Laura is back!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Xubuntu: Integrating releases to the website

Wed, 12/21/2016 - 06:00

During 2016, the Xubuntu website team has been working on integrating the releases better to the website. What this essentially means is that the maintenance is easier for the team in the future, but also that the release information is presented in a more concise and central way for each release.

Release pages

We finally have a one-stop page for all currently supported releases – the Releases landing page. You can find the page at any time under the About menu. From this page, you can easily access individual release pages.

The new individual release pages (for example, see the 16.04 release page) list the basic information for the release, appropriate release links (downloads and documentation), all articles on the release as well as the press links and finally, all screenshots related to the release.

We hope our users find the new release pages useful – we definitely do!

Automated links

In addition to the release pages, we’ve reworked how some of the common links are stored internally. With these changes, we’re able to build dynamic link lists for the downloads, the press links (including the press archive) as well as the documentation links for supported releases.
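The idea behind these dynamic link lists can be sketched in a few lines. This is a hypothetical illustration: the names `RELEASES`, `LINKS` and `links_for` are invented here, and the real site implements this inside WordPress rather than in Python:

```python
# Store every link exactly once, tagged with its release and its kind,
# then generate each page's list dynamically from that central store.
RELEASES = ["16.04", "16.10"]

LINKS = [
    {"release": "16.04", "kind": "download", "title": "64-bit image"},
    {"release": "16.04", "kind": "press",    "title": "Example review"},
    {"release": "16.10", "kind": "download", "title": "64-bit image"},
]

def links_for(release, kind):
    """Return every stored link of one kind for one release."""
    return [link for link in LINKS
            if link["release"] == release and link["kind"] == kind]

# Each release page and the press archive draw from the same store, so
# adding a link in one place updates every list that displays it.
for release in RELEASES:
    print(release, [l["title"] for l in links_for(release, "download")])
```

The maintenance win is that a new press link or download URL is entered once and every affected page picks it up automatically.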

These changes help with the maintenance and we hope to put the time freed from running the maintenance routines into finding more useful content for the press links and improving the site content otherwise.

The Fridge: Ubuntu Weekly Newsletter Issue 492

Mon, 12/19/2016 - 15:59

Welcome to the Ubuntu Weekly Newsletter. This is issue #492 for the week December 12 – 18, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Diego Turcios: Not an Ubuntu Topic, but a Humanity Towards Others

Mon, 12/19/2016 - 09:37
Hi, I'm very sorry to publish this article on Planet Ubuntu, as it's not related to any Ubuntu topic.

I'm writing to you to request your help for a friend and coworker, Laura Cano. She has been indirectly involved in our Ubuntu Honduras Team (she did the proofreading of our English articles).

She is currently fighting Acute Lymphoblastic Leukemia for the second time. In order to recover she needs a bone marrow transplant. Unfortunately, this procedure is not done in Honduras, meaning she would have to go abroad to get the transplant done, as well as find a donor, get the proper medical examinations, and cover other expenses. She cannot afford any of these because she is still recovering economically from the first episode.

If you would like to help Laura, please feel free to do so here: https://www.gofundme.com/laura-cano-medical-fund – sharing this link with your friends will also be of great help!

As I mentioned at the beginning, this is not an Ubuntu topic, but as the meaning of Ubuntu is Humanity Towards Others, we can all help this cause in any way we can.

Thanks
Diego



PS. If this post violates the policies of the Ubuntu Planet, please let me know, so we can remove it and prevent any issue.

Jono Bacon: Building Better Teams With Asynchronous Workflow

Mon, 12/19/2016 - 08:54

One of the core principles of open source and innersource communities is asynchronous workflow. That is, participants/employees should be able to collaborate with ubiquitous access, from anywhere, at any time.

As a practical example, at a previous company I worked at, pretty much everything lived in GitHub. Not just the code for the various products, but also material and discussions from the legal, sales, HR, business development, and other teams.

This offered a number of benefits for both employees and the company:

  • History – all projects, discussions, and collaboration were recorded. This provided a wealth of material for understanding prior decisions, work, and relationships.
  • Transparency – transparency is something most employees welcome and this was the case here where all employees felt a sense of connection to work across the company.
  • Communication – with everyone on the same platform, it was easier for people to communicate clearly and consistently, and to see the full scope of a discussion/project when pulled in.
  • Accountability – sunlight is the best disinfectant, and having all projects, discussions, and work items/commitments available in the platform ensured people were accountable in their work and commitments.
  • Collaboration – this platform made it easier for people to not just collaborate (e.g. issues and pull requests) but also to bring in other employees by referencing their username (e.g. @jonobacon).
  • Reduced Silos – the above factors reduced the silos in the company and resulted in wider cross-team collaboration.
  • Untethered Working – because everything was online rather than buried in private meetings and notes, employees could be productive at home, on the road, or outside of office hours (often when riddled with jetlag at 3am!)
  • Internationally Minded – this also made it easier to work with an international audience, crossing different timezones and geographical regions.

While asynchronous workflow is not perfect, it offers clear benefits for a company and is a core component for integrating open source methodology and workflows (also known as innersource) into an organization.

Asynchronous workflow is a common area in which I work with companies. As such, I thought I would write up some lessons learned that may be helpful for you folks.

Designing Asynchronous Workflow

Many of you reading this will likely want to bring in the above benefits to your own organization too. You likely have an existing workflow which will be a mixture of (a) in-person meetings, (b) remote conference/video calls, (c) various platforms for tracking tasks, and (d) various collaboration and communication tools.

As with any organizational change and management, culture lies at the core. Putting platforms in place is the easy bit: adapting those platforms to the needs, desires, and uncertainties that live in people is where the hard work lies.

In designing asynchronous workflow you will need to make the transition from your existing culture and workflow to a new way of working. Ultimately this is about designing a workflow that encourages the behaviors we want to see (e.g. collaboration, open discussion, efficient working) and deters the behaviors we don't (e.g. silos, land-grabbing, and power plays).

Influencing these behaviors will include platforms, processes, relationships, and more. You will need to take a gradual, thoughtful, and transparent approach in designing how these different pieces fit together and how you make the change in a way that teams are engaged in.

I recommend you manage this in the following way (in order):

  1. Survey the current culture – first, you need to understand your current environment. How technically savvy are your employees? How dependent on meetings are they? What are the natural connections between teams, and where are the divisions? With a mixture of (a) employee surveys, and (b) observational and quantitative data, summarize these dynamics into lists of “Behaviors to Improve” and “Behaviors to Preserve”. These lists will give us a sense of how we want to build a workflow that is mindful of these behaviors and adjusts them where we see fit.
  2. Design an asynchronous environment – based on this research, put together a proposed plan for the changes you want to make to be more asynchronous. This should cover platform choices, changes to processes/policies, and a roll-out plan. Divide the plan up in priority order so it is clear which pieces you want to deliver first.
  3. Get buy-in – next we need to build buy-in from senior management, team leads, and employees. Ideally this process should be as open as possible, with a final call for input from the wider employee base. This is a key part of making teams feel part of the process.
  4. Roll out in phases – now, based on your defined priorities in the design, gradually roll out the plan. As you do so, provide regular updates on this work across the company (you should include metrics of the value this work is driving in these updates).
  5. Regularly survey users – at regular check-points survey the users of the different systems you put in place. Give them express permission to be critical – we want this criticism to help us refine and make changes to the plan.

Of course, this is a simplification of the work that needs to happen, but it covers the key markers that need to be in place.

Asynchronous Principles

The specific choices in your asynchronous workflow plan will depend heavily on your organization. Every org is different, with different drivers, people, and focus, so it is impossible to make a generalized set of strategic, platform, and process recommendations. Of course, if you want to discuss your organization’s needs specifically, feel free to get in touch.

For the purposes of this piece though, and to serve as many of you as possible, I want to share the core asynchronous principles you should consider when designing your asynchronous workflow. These principles are pretty consistent across most organizations I have seen.

Be Explicitly Permissive

A fundamental principle of asynchronous working (and more broadly in innersource) is that employees have explicit permission to (a) contribute across different projects/teams, (b) explore new ideas and creative solutions to problems, and (c) challenge existing norms and strategy.

Now, this doesn’t mean it is a free-for-all. Employees will still have work assigned to them and milestones to accomplish, but being permissive about the above areas will crisply define the behavior the organization wants to see in employees.

In some organizations the senior management team spews forth said permission and expects it to stick. While this top-down permission and validation is key, it is also critical that team leads, middle managers, and others support this permissive principle in day-to-day work.

People change and cultures develop by others delivering behavioral patterns that become accepted in the current social structure. Thus, you need to encourage people to work across projects, explore new ideas, and challenge the norm, and validate that behavior publicly when it occurs. This is how we make culture stick.

Default to Open Access

Where possible, teams and users should default to open visibility for projects, communication, issues, and other material. Achieving this requires not just default access controls to be open, but also setting the cultural and organization expectation that material should be open for all employees.

Of course, you should trust your employees to use their judgement too. Some efforts will require private discussions and work (e.g. security issues). Also, some discussions may need to be confidential (e.g. HR). So, default to open, but be mindful of the exceptions.

Platforms Need to be Accessible, Rich, and Searchable

There are myriad platforms for asynchronous working. GitHub, GitLab, Slack, Mattermost, Asana, Phabricator, to name just a few.

When evaluating platforms it is key to ensure that they can be made (a) securely accessible from anywhere (e.g. desktop/mobile support, available outside the office), (b) a rich and efficient environment for collaboration (e.g. rich discussions with images/videos/links, project management, simple code collaboration and review), and (c) easily searchable (e.g. finding previous projects/discussions to learn from them, or finding new issues to focus on).

Always Maintain History and Never Delete, but Archive

You should maintain history in everything you do. This should include discussions, work/issue tracking, code (revision control), releases, and more.

On a related note, you should never, ever permanently delete material. Instead, that material should be archived. As an example, if you file an issue for a bug or problem that is no longer pertinent, archive the issue so it doesn’t come up in popular searches, but still make it accessible.
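As a rough illustration of the archive-don't-delete principle, here is a minimal sketch of a hypothetical issue tracker (the `Tracker` and `Issue` names are invented for this example), where archiving hides an issue from default searches without destroying the record:

```python
from dataclasses import dataclass


@dataclass
class Issue:
    id: int
    title: str
    archived: bool = False


class Tracker:
    """Toy tracker: issues are archived, never deleted, so history survives."""

    def __init__(self):
        self._issues = {}

    def file(self, issue_id, title):
        self._issues[issue_id] = Issue(issue_id, title)

    def archive(self, issue_id):
        # Flag the issue so default searches skip it; the record itself remains.
        self._issues[issue_id].archived = True

    def search(self, include_archived=False):
        return [i for i in self._issues.values()
                if include_archived or not i.archived]
```

Real platforms differ in the details, but the pattern is the same: an archived item drops out of everyday searches while staying fully accessible when history matters.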

Consolidate Identity and Authentication

Having a single identity for each employee on asynchronous infrastructure is important. We want to make it easy for people to reference individual employees, so a unique username/handle is key here. This is not just important technically, but also for building relationships – that username/handle will be a part of how people collaborate, build their reputations, and communicate.
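As a small sketch of why unique handles matter mechanically, here is how a platform might extract GitHub-style @-mentions from a comment (the exact username rules here are an assumption, modelled loosely on the letters/digits/internal-hyphens convention):

```python
import re

# Hypothetical mention syntax: '@' followed by letters/digits, with internal
# hyphens allowed; a mention must start the text or follow whitespace so that
# email addresses are not picked up.
MENTION = re.compile(r'(?:^|(?<=\s))@([A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?)')


def mentions(text):
    """Return the usernames referenced in a comment."""
    return MENTION.findall(text)
```

With one unique handle per employee, the output of `mentions()` maps directly to people for notifications, which is part of how reputations and relationships form around a handle.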

A complex challenge with deploying asynchronous infrastructure is with identity and authentication. You may have multiple different platforms that have different accounts and authentication providers.

Where possible, invest in Single Sign-On (SSO) and consolidated authentication. While it requires a heavier up-front lift, consolidating multiple accounts further down the line is a nightmare you want to avoid.

Validate, Incentivize, and Reward

Human beings need validation. We need to know we are on the right track, particularly when joining new teams and projects. As such, you need to ensure people can easily validate each other (e.g. likes and +1s, simple peer review processes) and encourage a culture of appreciation and thanking others (e.g. manager and leaders setting an example to always thank people for contributions).

Likewise, people often respond well to being incentivized and often enjoy the rewards of that work. Be sure to identify what a good contribution looks like (e.g. in software development, a merged pull request) and incentivize and reward great work via both baked-in features and specific campaigns.

Be Mindful of Uncertainty, so Train, Onboard, and Support

Moving to a more asynchronous way of working will cause uncertainty in some. Not only are people often reluctant to change, but operating in a very open and transparent manner can make people squeamish about looking stupid in front of their colleagues.

So, be sure to provide extensive training as part of the transition, onboard new staff members, and provide a helpdesk where people can always get help and their questions answered.

Of course, I am merely scratching the surface of how we build asynchronous workflow, but hopefully this will get you started and generate some ideas and thoughts about how you bring this to your organization.

Of course, feel free to get in touch if you want to discuss your organization’s needs in more detail. I would also love to hear additional ideas and approaches in the comments!

The post Building Better Teams With Asynchronous Workflow appeared first on Jono Bacon.

Stephen Michael Kellat: Heading Into Christmas

Sun, 12/18/2016 - 19:00

Nine months ago I had a blog post up about "Apologizing". This continues what was said there, as the matter I was apologizing about has finally come to a conclusion. You can read that post if you want.

Many folks who frequent a variety of "hacker" events worry about the so-called Five Eyes and, if they're in the US, the actions of "three letter agencies" towards them. I can nowadays say that I was under investigation for the vast majority of 2016 by "three letter agencies" as well as four letter agencies, five letter agencies, and bureaus you've probably never heard of. Those investigations have been successfully concluded, although they got really messy at points where I had to limit contact with "non-US persons" and verify whether people in the US that I came into contact with were US citizens.

When you're involved in Linux at large, or even the Ubuntu realm more specifically, that's incredibly restricting. I could end up apologizing until well beyond the heat death of the universe to the people I had to ask questions of so that I could be prepared for sit-down meetings with investigators. All of this came about simply from a transfer request to move from one federal civil service job to another. That wound up on hold initially for an investigation that ran 9 months overdue, and even now that it is finally done, the job transfer remains on hold for an additional fiscal year.

I had yet another co-worker literally carried out on a stretcher from the job last week. Our attrition rate at work has been phenomenal over the past several months. We're not BOFHs by any stretch but we're in a high pressure environment as civil servants handling the cases of citizens and beyond. The stress of our work gets bad enough at points that when we talk about "group rates" for counseling sessions it is more a form of dark humor than anything else. This is why the extraordinary measures were taken to even attempt the transfer request with all the restrictive hoops that cut me off from the community. The gamble was to get through processing things and then be able to re-join community activities. Sadly it didn't work and I've been watching crewmates get ground down.

As we've discussed in The Group, the plan for many of us is to leave the civil service. As I've been studying hard about mass care and the Incident Command System, as well as doing more to back up the ministry of West Avenue Church of Christ, that seems like something to slide into. I could also concurrently pick the almost-moribund, unfunded research project up off the floor as the Outernet satellite effort keeps taking new directions, since there is little external evaluation of it such as I propose to do. Getting out would allow me to contribute back to the Ubuntu realm at least somewhat, once the 4 hours a day spent commuting get reallocated to other uses.

For others looking at contributing, all I can say is not only am I not a lawyer I am also not your lawyer. For US tax guidance, Chapter 24 of Publication 17 Your Federal Income Tax may be quite useful to read as it talks about deducting contributions. If you look at Table 24-1 and decide that you want to support the "Domestic Mission Field Activity" at West Avenue Church of Christ that I already do work in, you can write to:

West Avenue Church of Christ
5901 West Avenue
Ashtabula, OH 44004
United States of America

I wish I could say the congregation has a website or an electronic presence but it doesn't so postal mail is it. The Field Activity has seen actions like me preaching in the woods as well as helping put together the church's outreach float in the Ashtabula City Christmas parade. When the regular pulpit minister is out, I'm the normal backup.

Unlike the "Holiday Hole" by the folks at Cards Against Humanity, I would say something productive would come out of funding my work in 2017. External evaluation of the Outernet satellite effort could finally get underway. I would be aiding in the care of souls from a different perspective compared to metaphorically swinging the sword of state in my current job. I could probably sleep far more easily than I currently do. Some publishing work would also get done, with papers making it into print.

Since various efforts from the Software Freedom Conservancy, Free Software Foundation, and the Electronic Frontier Foundation are also out there, I'll say frankly that there isn't a fixed dollar amount sought. I won't hazard a guess at what will come. Ohio will have an increase in its minimum hourly wage for 2017, and the Ohio Development Services Agency released an updated scorecard with the "median income" and other stats for Ashtabula County. For far, far less than was raised for the Holiday Hole (later filled back in by the Cards Against Humanity folks), I can walk away from the conundrums of what it would mean to be a civil servant in the time of President Donald Trump.

If I don't catch up with you otherwise, have a merry Christmas. I've been "disappeared" from IRC for quite a while. I hope to fix that one of these days...

Valorie Zimmerman: Merry KDEmas everyone!

Sun, 12/18/2016 - 15:29
Lookie what I got in the mail!


It is one of the cards you can get too -- if you help out KDE by the end of the year.

Your gift helps support KDE developers all year long, so head to https://www.kde.org/fundraisers/yearend2016/ and give big!

Colin King: A seasonal obfuscated C program for 2016

Sat, 12/17/2016 - 10:34
Another year passes and once more I have another seasonal obfuscated C program.  I was short on free time this year to heavily obfuscate the code, which is a shame. However, this year I worked a bit harder at animating the output, so hopefully that will make up for the lack of obfuscation.

The source is available on GitHub to eyeball.  I've had criticism in previous years that it is hard to figure out the structure of my obfuscated code, so this year I made sure that the if statements were easier to see, and hence the flow of the code easier to understand.

This year I've snapped up all my seasonal obfuscated C programs and put them into the snap store as the christmas-obfuscated-c snap.

Below is a video of the program running; it is all ASCII art and one can re-size the window while it is running.


Unlike previous years, I have the pre-obfuscated version of the code available in the git repository at commit c98376187908b2cf8c4d007445b023db67c68691 so hopefully you can see the original hacky C source.

Have a great Christmas and a most excellent New Year. 

Leo Arias: Ubuntu Testing Day wrap up - snapcraft and beers (20161216)

Fri, 12/16/2016 - 19:34

Today we had the last Ubuntu Testing Day of the year.

We invited Sergio Schvezov and Joe Talbott, to join Kyle and myself. Together we have been working on Snapcraft the whole year.

Sergio did a great introduction to snapcraft, and showed some of the new features that will land next week in Ubuntu. And because it was the last day of work for everybody (except Kyle), we threw some beers into the hangout and made it our team's end-of-year party.

You can watch the full recording by clicking the image below.

Snapcraft is one of the few projects that has an exception to land new features in released versions of Ubuntu, so every week we land new things in Xenial and Yakkety. This means we need to constantly test that we are not breaking anything for the people using stable Ubuntu releases, and it means we would love to have many more hands helping us with those tests.

If you would like to help, all you have to do is set up a virtual machine and enable the proposed pocket in there.
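A minimal sketch of that setup, following the standard Ubuntu SRU verification steps (the file paths and pin priority below are the usual convention; adjust RELEASE to match your VM):

```shell
# Enable the -proposed pocket inside the test VM.
RELEASE=xenial
echo "deb http://archive.ubuntu.com/ubuntu/ ${RELEASE}-proposed restricted main universe multiverse" | \
    sudo tee "/etc/apt/sources.list.d/${RELEASE}-proposed.list"

# Pin -proposed low so only packages you request explicitly come from it.
sudo tee /etc/apt/preferences.d/proposed-updates <<EOF
Package: *
Pin: release a=${RELEASE}-proposed
Pin-Priority: 400
EOF

sudo apt update
# Pull the candidate snapcraft from -proposed for verification.
sudo apt install "snapcraft/${RELEASE}-proposed"
```

With the pin in place the rest of the VM keeps tracking the stable pockets, so your day-to-day packages are not silently upgraded to unverified candidates.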

This is the active bug for the Stable Release Update of snapcraft 2.24: bug #1650632

Before I shut down my computer and start my holidays, I would like to thank all the Ubuntu community for one more year, it has been quite a ride. And I would like to thank Sergio, Kyle and Joe in particular. They are the best team a QA Engineer could ask for <3.

See you next year for more testing days.

Pages