Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Ubuntu Insights: Apps to Snaps

Wed, 06/01/2016 - 02:35

Distributing applications on Linux is not always easy. You have different packaging formats, base systems, available libraries, and distribution release cadences, all of which contribute to the headache. But now we have something much simpler: Snaps.

Snaps are a new way for developers to package their applications, bringing many advantages over more traditional package formats such as .deb, .rpm, and others. Snaps are secure and isolated from each other and from the host system using technologies such as AppArmor; they are cross-platform; and they are self-contained, allowing a developer to package the exact software their application needs. This sandboxed isolation also improves security and allows applications, and whole snap-based systems, to be rolled back should an issue occur. Snaps really are the future of Linux application packaging.

Creating a snap is not difficult. First, you need the snap runtime environment that understands and executes snaps on your desktop; this tool is named snapd and is installed by default on all Ubuntu 16.04 systems. Next you need the tool that creates snaps, Snapcraft, which can be installed simply with:

$ sudo apt-get install snapcraft

Once you have this environment available it is time to get snapping.

Snaps use a special YAML formatted file named snapcraft.yaml that defines how the application is packaged as well as any dependencies it may have. Taking a simple application to demonstrate this point, the following YAML file is a real example of how to snap the moon-buggy game, available from the Ubuntu archive.

name: moon-buggy
version: 1.0.51.11
summary: Drive a car across the moon
description: |
  A simple command-line game where you drive a buggy on the moon
apps:
  play:
    command: usr/games/moon-buggy
parts:
  moon-buggy:
    plugin: nil
    stage-packages: [moon-buggy]
    snap:
      - usr/games/moon-buggy

The above code demonstrates a few new concepts. The first section is all about making your application discoverable in the store: setting the packaging metadata name, version, summary, and description. The apps section implements the play command, which points to the location of the moon-buggy executable. The parts section tells Snapcraft about any plugins needed to build the application, along with any packages it depends on. In this simple example all we need is the moon-buggy application itself from the Ubuntu archive, and Snapcraft takes care of the rest.

Running snapcraft in the directory containing snapcraft.yaml will create moon-buggy_1.0.51.11_amd64.snap, which can be installed by running:

$ snap install moon-buggy_1.0.51.11_amd64.snap
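Once installed, the game can be launched from the terminal. Since the snap is named moon-buggy and its app is named play, the exposed command should be moon-buggy.play (assuming the usual snap.app naming convention; the original post does not show this step):

$ moon-buggy.play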

To see an example of snapping something a little more complex, like the Electron-based Simplenote application, there is a tutorial online and the corresponding code on GitHub. More examples can be found on the Ubuntu developer website.

Zygmunt Krynicki: Making your first contribution to snapd

Wed, 06/01/2016 - 00:20
Making your first contribution to snapd is quite easy, but if you are not familiar with how Go projects are structured it may be non-obvious where the code is or why your forked branch is not being used. Let's see how to do this step by step:


  1. Fork the project on github. You will need a github account.
  2. Set the GOPATH environment variable to something like ~/hacking or ~/fun or, like I do, ~/work (though I must say my work is full of fun hacking :-). You can set it directly with export GOPATH=~/hacking, but I would recommend adding it to your ~/.profile file. Note that it will only take effect on the next login, so you can also set it in the current shell or source the file with . ~/.profile after you've made the change.
  3. Set PATH to include $GOPATH/bin -- you will need this to run some of the programs you build and install (export PATH=$PATH:$GOPATH/bin) -- also make this persistent as described above
  4. Create $GOPATH/src/github.com/snapcore (mkdir -p $GOPATH/src/github.com/snapcore) and enter that directory (cd $GOPATH/src/github.com/snapcore)
  5. Clone your fork here (git clone git@github.com:zyga/snapd -- do replace zyga with your github account name)
  6. Add the upstream repository as a remote (git remote add upstream git@github.com:snapcore/snapd) - you will use this later to get the latest changes from everyone
  7. Run the ./get-deps.sh script. You will probably need a few extra things (sudo apt-get install golang-go bzr)
  8. You are now good to go :-) (a consolidated version of these steps is sketched below)
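Putting steps 2 through 7 together, a typical first-time session might look like the following (using zyga as a placeholder for your own GitHub username and ~/hacking as the workspace):

# Persist the Go workspace settings (they take effect on the next login)
echo 'export GOPATH=~/hacking' >> ~/.profile
echo 'export PATH=$PATH:$GOPATH/bin' >> ~/.profile
. ~/.profile

# Create the snapcore directory inside the workspace and clone your fork
mkdir -p $GOPATH/src/github.com/snapcore
cd $GOPATH/src/github.com/snapcore
git clone git@github.com:zyga/snapd   # replace zyga with your username
cd snapd
git remote add upstream git@github.com:snapcore/snapd

# Install the extra tools, then fetch the Go dependencies
sudo apt-get install golang-go bzr
./get-deps.sh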

At this stage you can go build and go test individual packages. I would also advise you to have a look at my devtools repository, where I've prepared useful scripts for various tasks. One thing you may want to use is the refresh-bits script. Having followed steps 1 through 8 above, let's use refresh-bits to run our locally built versions of snapd and snap.
  1. Clone devtools to anywhere you want (git clone git@github.com:zyga/devtools) and enter the cloned repository (cd devtools)
  2. Assuming your GOPATH is still set, run ./refresh-bits snap snapd setup run-snapd restore
  3. Look at what is printed on screen; at some point you will notice that the locally started snapd is running (you can always stop it by pressing ctrl+c), along with instructions on how to run your locally built snap executable to talk to it
  4. In another terminal, in the same directory, run sudo ./snap.amd64 --version
Voila :-)

If you have any questions about that, please use the comments section below. Thanks!

Jorge Castro: Zeppelin is now a top level Apache project

Tue, 05/31/2016 - 10:19

Apache Zeppelin has just graduated to become a top-level project at the Apache Foundation.

As always, our Big Data team has you covered, you can find all the goodness here:

But for most people you likely just want to be able to consume Zeppelin as part of your Spark cluster, check out these links below for some out-of-the-box clusters:

Happy Big-data-ing, and as always, you can join other big data enthusiasts on the mailing list: bigdata@lists.ubuntu.com

The Fridge: Ubuntu Weekly Newsletter Issue 467

Mon, 05/30/2016 - 19:51

Welcome to the Ubuntu Weekly Newsletter. This is issue #467 for the week May 23 – 29, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Sirrs
  • Simon Quigley
  • Chris Guiver
  • Seth Johnson
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Paul Tagliamonte: Iron Blogger DC

Mon, 05/30/2016 - 18:37

Back in 2014, Mako ran a Boston Iron Blogger chapter, where you had to blog once a week, or you owed $5 into the pot. A while later, I ran it (along with Molly and Johns), and things were great.

When I moved to DC, I had already talked with Tom Lee and Eric Mill about running a DC Iron Blogger chapter, but it hasn’t happened in the year and a half I’ve been in DC.

This week, I make good on that, with a fantastic group set up at dc.iron-blogger.com; with more to come (I’m sure!).

Looking forward to many parties and thought-provoking blog posts in my future. I’m also quite pleased I’ll be resuming my blogging. Hi again, planet Debian!

Sebastian Kügler: Multiscreen in Plasma 5.7 and beyond

Mon, 05/30/2016 - 15:56

Here’s a quick status update about where we currently stand with respect to multiscreen support in Plasma Desktop.

While multiscreen support in Plasma works nicely for many people, for some of our users it doesn’t. There are problems with restoring previously set up configurations, and around the primary display mechanism. We’re really unhappy about that, and we’re working on fixing it for all of our users. These kinds of bugs are the stuff nightmares are made of, so there’s no silver bullet to fix everything once and for all right away. Multiscreen support requires many different components to play in tune with each other, and they’re usually divided into separate processes communicating with each other via different channels. There’s X11 involved, XCB, Qt, libkscreen and of course the Plasma shell. I can easily count at least three different protocols in this game, Wayland being a fourth (but likely not used at the same time as X11). There’s quite some complexity involved, and the individual components involved are actually doing their jobs quite well and have their specific purposes. Let me give an overview.

Multiscreen components

Plasma Shell renders the desktop, places panels, etc. When a new screen is connected, it checks whether it has an existing configuration (wallpaper, widgets, panels etc.) and extends the desktop. Plasma shell gets its information from QScreen now (more on that later on!).

KWin is the compositor and window manager. KWin/X11 interacts with X11 and is responsible for window management, movement, etc. Under Wayland, it will also take over the graphical and display server work that X11 currently does, though mostly through Wayland and *GL APIs.

KScreen kded is a little daemon (actually a plugin) that keeps track of connected monitors and applies existing configs when they change.

KScreen is a module in systemsettings that allows you to set up the display hardware: positioning, resolution, etc.

Libkscreen is the library that backs the KScreen configuration. It offers an API abstraction over XRandR and Wayland. libkscreen sits pretty much at the heart of proper multiscreen support when it comes to configuring manually and loading the configuration.

Primary Display

The primary display mechanism is a bit of API (rooted in X11) to mark a display as primary. This is used to place the Panel in Plasma, and for example to show the login manager window on the correct monitor.

Libkscreen and Qt’s native QScreen are two different mechanisms to reflect screen information. QScreen is mainly used for querying info (and is of course used throughout QtGui to place windows, get information about resolution and DPI, etc.). Libkscreen has all this information as well, but also some more, such as write support. Libkscreen’s backends get this information directly from Xorg, not going through Qt’s QScreen API. For plasmashell, we ended up needing both, since it was not possible to find the primary display using Qt’s API. This caused quite some problems: X11 is async by its nature, so essentially we ended up having “unfixable” race conditions, also in plasmashell. These are likely the root cause of the bug you’re seeing here.

This API was added in Qt 5.6 (among a few other fixes) by Aleix Pol, one of the devs on our screen management team. We have now removed libkscreen from plasmashell and replaced it with “pure QScreen” code, since all the API we need for plasmashell is available in the Qt we depend on.

These changes should fix much of the panel placement grief that bug 356225 causes. It does need some good testing now that it’s merged. Therefore, we’d like as many people as possible, especially those reporting problems with multiscreen, to test against the latest Plasma git master (or the upcoming Plasma 5.7 Beta, which is slated for release on June 16th).

Remember the config

Another rough area under observation right now is remembering and picking the right configuration from a previous setup, for example when you return to your docking station which has another display connected. Bug 358011 is an example of that. Here, we get “spurious” events about hardware changes from X11, and I’m unsure where they come from. The problem is that it’s not easy to reproduce; it only happens for certain setups. This bug was likely introduced with the move to Qt 5 and Frameworks; it’s a regression compared to Plasma 4.
I’ve re-reviewed the existing code, added more autotests and made the code more robust in some places that seemed relevant, but I can’t say that we’ve found a sure solution to these problems. The code is now also better instrumented for debugging the areas at play here. Now we need some more testing of the upcoming beta. This is certainly not unfixable, but needs feedback from testing so we can apply further fixes if needed.

Code quality musings

From a software engineering point of view, we’re facing some annoying problems. It took us a long time to get upstream QScreen code to be robust and featureful enough to draw the lines between the components involved, especially QScreen and libkscreen more clearly, which certainly helps to reduce hard-to-debug race conditions involving hardware events. The fact that it’s almost impossible to properly unit test large parts of the stack (X11 and hardware events are especially difficult in that regard) means that it’s hard to control the quality. On the other hand, we’re lacking testers, especially those that face said problems and are able to test the latest versions of Qt and Plasma.
QA processes are something we’ve spent some serious work on. On the one hand, our review processes for new code and changes to current code are a lot stricter, so we catch more problems and potential side effects before code gets merged. For new code, especially the Wayland support, our QA story also looks a lot better. We’re aiming for near-100% autotest coverage, and in many cases the autotests are a lot more demanding than the real-world use cases. Still, it’s a lot of new code that needs some real-world exposure, which we hope to get more of when users test Plasma 5.7 using Wayland.

Costales: Ubucon Paris 16.04. Day 2

Mon, 05/30/2016 - 14:54
Last day of the Ubucon Paris for the xenial release!

Podcast in 3... 2... 1... This time we did an international podcast: Rudy, Quesh, Didier, Gonzalo and me from the Ubuntu Party, and Marius, Ilonka, Alfred and even Simon from their homes.
We spoke about the Ubucons, the tablet, and the news. Such a great experience!



A few hours passed until lunch time. Then I ate with Gonzalo, Rudy, Winael, Didier and Yoboy.

After lunch, I attended Gemma's talk.

Gemma's talk

Nicolas and I were drawing people into the event from the hall of the building. And it worked very well.

Nicolas did such great work!!

The Ubucon event was closed by Rudy, explaining things about convergence, the Ubucon, etc.

Last conference
We finished in a restaurant, with not as many people as yesterday, but enough :) Dinner and a few drinks together.

Cheers!
What an exciting and great event. The Ubuntu Paris team is doing great work, and this team is incredible.
Congrats!!

(Photo captions: Convergence; Lovely Mozilla!; Hall; Quesh; wow!; Indeed!; Future :))

Until the next!

Ubuntu App Developer Blog: Can I haz MainView in a Window?

Mon, 05/30/2016 - 06:06

When using Unity8 these days, connecting a Bluetooth mouse to a device enables windowed mode. Another option is to connect an external monitor via HDMI and, most recently on some devices, wireless displays. This raises a few questions on the API side of things.

Apps are currently advised to use a MainView as the root item, which can have a width and a height used as the window dimensions in a windowed environment - on phones and tablets, by default, all apps are always full screen. As soon as users can freely resize the window, some apps may not look great anymore - QtQuick.Window solves this by providing minimumWidth, minimumHeight, maximumWidth and maximumHeight properties. Another question is what title is used for the window - as soon as there is more than one Page that's no longer obvious, and it's actually somewhat redundant.

So what can we do now?

There are two ways to sort this that we’ll be discussing here. One way is to in fact go ahead and use MainView, which is just an Item, and put it inside a Window. That’s perfectly fine to do, and it’s a good stop-gap for any apps affected now. To the user the outcome is almost the same, except the title and sizing can be customized behind the scenes.

import QtQuick 2.4
import QtQuick.Window 2.2
import Ubuntu.Components 1.3
Window {
    title: "Hello World"
    minimumWidth: units.gu(30)
    minimumHeight: units.gu(50)
    maximumWidth: units.gu(90)
    maximumHeight: units.gu(120)
    MainView {
        applicationName: "Hello World"
    }
}

From here on out, things work exactly the same way they did before. And this is something that will continue to work in the future.

A challenger appears

That said, there’s another way under discussion. What if there was a new MainWindow component that could replace MainView and provide the missing features out of the box? Code would be simpler. Is it worth it, though, just to save some lines of code, you might wonder? Yes, actually. It is worth it when performance enters the picture.

As it is now, MainView does many different things. It displays a header for starters - that is, if you’re not using AdaptivePageLayout to implement convergence. It also has the automaticOrientation API, something the shell does a much better job of these days. And it handles actions, which, like the header, are part of each Page now. It’s still doing a good job at things we need, like setting up folders for confinement (config, cache, localization) and making space for the OSK (in the form of anchorsToKeyboard). So in short, there are several internals to reconsider if we had a chance to replace it.

Even more drastic would be the impact of implementing properties in MainWindow that right now are context properties. “units” and “theme” are very useful in so many ways, and at the same time by design super slow because of how QML processes them. A new toplevel component in C++ could provide regular object properties without that overhead, potentially speeding up every single use of those properties throughout the application, as well as the components using them behind the scenes.

Let’s be realistic, however: these are ideas that need discussion, API design and planning. None of this is going to be available tomorrow or next week. So by all means engage in the discussions; maybe there are more use cases to consider, or other approaches. It’s the one component virtually every app uses, so we had better do a good job coming up with a worthy successor.

Kubuntu: Plasma 5.6.4 available in 16.04 Backports

Mon, 05/30/2016 - 05:15

The Kubuntu Team announces the availability of Plasma 5.6.4 on Kubuntu 16.04 through our Backports PPA.

Plasma 5.6.4 Announcement:
https://www.kde.org/announcements/plasma-5.6.4.php

How to get the update (in the commandline):
1. sudo apt-add-repository ppa:kubuntu-ppa/backports
2. sudo apt update
3. sudo apt full-upgrade -y

Here is a great video demoing some of the new features in this release:
https://www.youtube.com/watch?v=v0TzoXhAbxg

Forums Council: New Ubuntu Member via forums contributions

Mon, 05/30/2016 - 03:24

The Forum Council is proud to announce a new Ubuntu membership obtained through forum contributions.

Please welcome our newest Member, Mark Phelps. You can see Mark’s application thread here.

Mark has been a long-time contributor and has always shown sustained and helpful contributions to the forums.

If you have been a contributor to the forums and wish to apply for Ubuntu Membership, all you have to do is put together wiki and Launchpad pages, sign the Ubuntu Code of Conduct, and follow the process outlined in the Ubuntu Membership via Forums contributions wiki page.


Svetlana Belkin: What Programs Do I Use: Mudlet

Sun, 05/29/2016 - 11:16

Like many people, I have various hobbies, and also like many, I play computer/video games. But not to the extreme, as some do. I do have Steam, and my username is senseopennes if you want to add me. I have played many graphical games, but I tend to get bored of them fast. I mean it. I think the longest I stuck with a game was one year, off and on, and that was an MMORPG (RO or AO, I think).

The only game that I played and am still playing is a text-based multi-user dungeon (MUD) called Armageddon MUD. It’s a 20-plus-year-old game that is roleplay enforced, meaning that while you have coded actions, you need to also roleplay them out. In short, it’s collaborative storytelling. I think I have played it for 7 years, off and on, and my longest-lived character (it’s a perma-death game) lived for close to two real-life years before I had to store them. One day, I will write a post dedicated to Armageddon MUD, something that I said I was going to do ages ago…

Anyhow, the MUD doesn’t have its own client that I can play on, so I use a client called Mudlet:

Mudlet's main screen, connected to Armageddon MUD.

I used three other clients before Mudlet: two for Windows back in 2008-2009 (MUSH and something else) and one on Ubuntu, which was KClient. KClient stopped working with Armageddon MUD after the staff of the MUD moved the server to the cloud. I did my research into what other players of the MUD were using and found that Mudlet was the most used. Like most Open Source and Free programs, Mudlet is very customizable, but I just use it out of the box. I have no triggers or keystrokes set up; I don’t need them. I type out everything. Someday, I might work on customizing it.

It’s a great program out of the box: you can have multiple profiles and games running at the same time, and they can all be saved. What I like most about Mudlet is the built-in notepad for notes. I use it a lot to keep track of things.

I plan to write about MyPaint next week. See you then!

Lubuntu Blog: Top Menu for Lubuntu

Sun, 05/29/2016 - 05:39
Thanks to the blog WebUpd8, there’s a new “trick” to add an app menu to the LXDE panel, just like the Unity interface has. Check out this nice tutorial on our Tips’n’Tricks page.

Costales: Ubucon Paris 16.04. Day 1

Sun, 05/29/2016 - 04:11
And the first day of the Ubucon Paris!

When I arrived, there was already a big crowd in all areas.

Install Party area

I attended Quesh's talk, an introduction to the community.

Quesh's talk

After that, Didier told us about Snappy packages. Looks great.

Didier's talk

Then I went to eat and saw Nicolas there. Nicolas is such a great guy. I spoke with him for a few hours.

Nicolas and me

And then I spoke a bit about uNav's 1st anniversary :) And there was a big, big, big surprise from the Ubuntu Party members :)) They came with uNav and Ubuntu presents and sang happy birthday :') Celebrating 10 years of Ubucon Paris and 1 year of uNav :)) (You guys are the best!)

:')))

And after that, it was dinner time. So many members in the same restaurant!

Dinner
Presents from Ubuntu Paris

This was a great first day of the event. Tomorrow will be the last day of the Ubucon Paris.

James Hunt: Procenv 0.46 - now with more platform goodness

Sat, 05/28/2016 - 12:44

I have just released procenv version 0.46. Although this is a very minor release for the existing platforms (essentially 1 bug fix), it introduces support for a new platform...

Darwin

Yup - OS X now joins the ranks of supported platforms.

Although adding support for Darwin was made significantly easier as a result of the recent internal restructure of the procenv code, it did present a problem: I don't own any Apple hardware. I could have borrowed a Macbook, but instead I decided to see this as a challenge:

  • Could I port procenv to Darwin without actually having a local Apple system?

Well, you've just read the answer, but how did I do this?

Stage 1: Docker
Whilst surfing around I came across this interesting docker image:


It provides a Darwin toolchain that I could run under Linux. It didn't take very long to follow my own instructions on porting procenv to a new platform. But although I ended up with a binary, I couldn't actually run it, partly because Darwin uses a different binary file format to Linux: rather than ELF, it uses the Mach-O format.
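(As a quick aside, the file utility makes the format difference easy to spot; the path below is illustrative, not from the original post:)

# On a Linux build, file reports an ELF executable;
# a Darwin cross-build reports a Mach-O executable instead.
file ./procenv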



Stage 2: Travis

The final piece of the puzzle was solved for me by Travis. I'd read the very good documentation on their site, but had initially assumed that you could only build Objective-C based projects on OS X with Travis. But a quick test proved my assumption incorrect: it didn't take much more than adding "osx" to the os list and "clang" to the compiler list in procenv's .travis.yml to have procenv building and running (it runs itself as part of its build) on OS X under Travis!

Essentially, the following YAML snippet from procenv's .travis.yml did most of the work:

language: c
compiler:
  - gcc
  - clang
os:
  - linux
  - osx


All that remained was to add the build-time dependencies to the same file with this additional snippet:

before_install:
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew install expat check perl; fi

(Note that it seems Travis is rather picky about before_install - all code must be on a single line, hence the rather awkward-to-read "if; then ....; fi" tests).


Summary
Although I've never personally run procenv under OS X, I have a good degree of confidence that it does actually work.

That said, it would be useful if someone could independently verify this claim on a real system! Feel free to raise bugs, send code (or even Apple hardware :-) my way!



Michael Lustfield: Long Term Secure Backups

Fri, 05/27/2016 - 22:00

Not that long ago, I managed to delete all of my physical HV hosts, my backup server, all external backups, and a bit more. The first question most people would ask is probably how that's even possible. That may become a post by itself; it probably won't, though. What really matters is: how can I keep this from ever happening again?

I sat down for some time to come up with some requirements, some ideas, and eventually rolled out a backup solution that I feel confident with.

Requirements

To build this backup solution, I first needed to define a set of requirements.

  • No server can see backups from other servers
  • The backup server can not access other servers
  • The backup server must create versioned backups (historical archives)
  • No server can access its own historical archive
  • All archives must be uploaded to an off-site location
  • All off-site backups must enforce data retention
  • The backup server must be unable to delete backups from an off-site location
  • All off-site backups must be retained for a minimum of three months
  • The backup server must keep two years worth of historical archives
  • The entire solution must be fully automated
  • Low budget
  • Can't impact quality of service

Some of these may sound like common sense, but most backup tools, including the big dollar options, don't meet all of them. In some (way too many) cases, the backup server is given access to root (or administrator) on most systems.

The Stack

Deciding how this stack should be constructed was definitely the most time-consuming part of this project. I'm going to attempt to lay out what I built in the order of the direction data flows. Wish me luck!

Server to Backup Server

The obvious choice is SSH. It's a standard, reasonably secure, and very easy.

When people do backups with SSH, the typical decision is to have the backup server initiate and control backups, which almost always means the backup server has the ability to log into other servers. This makes your backup server a substantially higher-value target for an attacker. Yes, it's horrible if any system gets compromised, but having each server push its own backups instead minimizes the impact and aids in recovery.

Scheduling

Every server has a backup script that runs on a pseudo-random schedule. Because the node name will always be the same, and checksums are worthless unless they produce the same value every time, I was able to use the node name to build the backup schedule.

This boils down to what is essentially:

snap:
  cron.present:
    - identifier: snap
    - name: /usr/local/sbin/snap
    - hour: 2,10,18
    - minute: {{ pillar['backup_minute'] }}

The 'backup_minute' is created with ext_pillar. Building the entire ext_pillar is left as a task for the reader; what matters is:

import zlib
return zlib.crc32(grains['hostname']) % 60

You may notice that using 60 doubles the chance of a backup running at the top of the hour. You can feel free to choose 59, but I like nice round numbers that are easy to identify.
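To see which minute a given host will land on, you can evaluate the same expression by hand (the hostname below is a placeholder):

python3 -c "import zlib; print(zlib.crc32(b'web01.example.com') % 60)"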

SSH Keys

I mentioned that I wanted something 100% automated. I'm a huge fan of Salt and use it in my home environment, so Salt was the only choice for the automation.

A feature of Salt is the Salt Mine. The mine is a way for minions (every server) to report bits of data back to the Salt master that can be shared with other systems. I utilized this feature to share root's public SSH key. I also used Salt to generate that key if it doesn't already exist.

Here's a mini-snippet for clarification:

root_sshkeygen:
  cmd.run:
    - name: 'ssh-keygen -f /root/.ssh/id_rsa -t rsa -N ""'
    - unless: 'test -f /root/.ssh/id_rsa.pub'

/etc/salt/minion.d/mine.conf:
  file.managed:
    - contents: |
        mine_functions:
          ssh.user_keys:
            user: root
            prvfile: False
            pubfile: /root/.ssh/id_rsa.pub

Overall, this is pretty simple, but amazingly effective.

User Accounts

At this point, all of the servers are ready to back up their data. They just aren't able to yet because the backup server is sitting there empty with no user accounts.

This part is surprisingly easy as well. I simply use Salt to create a separate jailed home directory for every server in the environment. The Salt master already has the public SSH key for every server, in addition to each server's hostname.

To keep things simple, this example does not include jails.

{% for server, keys in salt['mine.get']('*', 'ssh.user_keys').items() %}
{{ server }}:
  user.present:
    - name: {{ server }}
    - createhome: True
  ssh_auth.present:
    - user: {{ server }}
    - names: [ {{ keys['root']['id_rsa.pub'] }} ]

# Ensures the user directory is never readable by others
/home/{{ server }}:
  file.directory:
    - user: {{ server }}
    - group: {{ server }}
    - mode: '0700'
    - require:
      - user: {{ server }}
{% endfor %}

This will get user accounts created on the backup server, add each server's SSH public key to its user's trusted keys, and force the user's home directory to mode 700, which prevents other users/groups from accessing the data.

Backup Archives

Now that data is getting from all servers to the backup server, it's time to start keeping more than a single copy of the data. The best tool I could find for this job was rsnapshot. I simply point rsnapshot at /home (or /srv/jails) and keep the data stored where the existing servers can't access it. This means no compromised server can destroy any previous backups.
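As a rough sketch (the paths and retention values here are assumptions, not the author's actual configuration; note that rsnapshot.conf requires tabs between fields), the relevant rsnapshot.conf entries might look like:

snapshot_root	/srv/rsnapshot/
# Keep two years of history via monthly archives
retain	daily	7
retain	weekly	4
retain	monthly	24
# Pull everything under /home (or /srv/jails) into the archive
backup	/home/	localhost/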

I broke some of my own rules and have rsnapshot also backing up my pfSense device as well as my Cisco switch configurations. I'll get a better solution in place for those, but that is its own project.

Ice Ice Baby

At this point, we have a rather complete backup solution that meets nearly everything I care about. So far, we're at $0.00 to build this solution. However, off-site backups haven't been included.

Do you want to trust your buddy and arrange to share backups with each other? Hopefully the obvious answer to everyone is an emphatic NO.

The only two reasonable options I found were AWS Glacier and Google Nearline. Because we're talking about data that you should never need to actually access, the two options are very comparable. Google Nearline advertises the fastest time to first byte; however, the more you pull down, the slower your retrieval rate becomes. AWS Glacier advertises the cheapest storage, but the faster you want your data, the more you get to pay.

The important thing to remember is that you're dealing with an off-site backup. You are "putting it on ice." If nothing ever breaks, the only time you will ever access this data is to verify your backup process.

I wrote a relatively simple script, sketched below, that runs from cron (2x/mo) and:

  • Creates a squashfs image of the entire rsnapshot archive
  • Encrypts the squashfs image with a public GPG key
  • Uploads the encrypted image
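A minimal sketch of what such a script might look like, assuming hypothetical paths, a placeholder GPG recipient, and an upload to Google Nearline via gsutil (the original post does not specify the exact tooling):

#!/bin/sh
# Hypothetical off-site upload script; every path, recipient and
# bucket below is a placeholder, not the author's actual setup.
set -e

STAMP=$(date +%Y-%m-%d)
IMAGE=/var/tmp/rsnapshot-$STAMP.squashfs

# 1. Create a squashfs image of the entire rsnapshot archive
mksquashfs /srv/rsnapshot "$IMAGE" -noappend

# 2. Encrypt the image with the dedicated public GPG key
gpg --encrypt --recipient backup@example.com --output "$IMAGE.gpg" "$IMAGE"

# 3. Upload the encrypted image to the off-site bucket
gsutil cp "$IMAGE.gpg" gs://example-offsite-backups/

rm -f "$IMAGE" "$IMAGE.gpg"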

I created a GPG key pair for this single process, encrypted the private key with my personal key, moved multiple copies (including paper) to various locations, and removed the private key from the server.

Wrapping Up

There are a lot of backup options out there, and I have concerns about nearly every one of them, including most commercial/enterprise offerings. To have a backup solution that I considered reasonably secure, I had to spend a lot of time thinking through the process and researching many different tools.

I very much hope that what I've put here will prove useful to other people trying to address similar concerns. As always, I'm more than eager to answer questions.

Aurélien Gâteau: Mass edit your tasks with t_medit

Fri, 05/27/2016 - 16:02

If you are a Yokadi user, or if you have used other todo list systems, you might have encountered the situation where you want to quickly add a set of tasks to a project. Using Yokadi you would repeatedly write t_add <project> <task title>. History and auto-completion on command and project names make entering tasks faster, but it is still slower than the good old TODO file where you just write down one task per line.

t_medit is a command to get the best of both worlds. It takes the name of a project as an argument and starts the default editor with a text file containing a line for each task of the project.

Suppose you have a "birthday" project like this:

yokadi> t_list birthday
birthday
ID|Title               |U |S|Age |Due date
-----------------------------------------------------------------
1 |Buy food (grocery)  |0 |N|2m  |
2 |Buy drinks (grocery)|0 |N|2m  |
3 |Invite Bob (phone)  |0 |N|2m  |
4 |Invite Wendy (phone)|0 |N|2m  |
5 |Bake a yummy cake   |0 |N|2m  |
6 |Decorate living-room|0 |N|2m  |

Running t_medit birthday will start your editor with this content:

1 N @grocery Buy food
2 N @grocery Buy drinks
3 N @phone Invite Bob
4 N @phone Invite Wendy
5 N Bake a yummy cake
6 N Decorate living-room

By editing this file you can do a lot of things:

  • Change task titles, including adding or removing keywords
  • Change task status by changing the character in the second column to S (started) or D (done)
  • Remove tasks by removing their lines
  • Reorder tasks by reordering lines; this will change the task urgencies so that they are listed in the defined order
  • Add new tasks by entering them prefixed with -

Let's say you modify the text like this:

2 N @grocery Buy drinks
1 N @grocery Buy food
3 D @phone Invite Bob
4 N @phone Invite Wendy & David
- @phone Invite Charly
5 N Bake a yummy cake
- S Decorate table
- Decorate walls

Then Yokadi will:

  • Give the "Buy drinks" task a more important urgency because it moved to the first line
  • Mark the "Invite Bob" task as done because its status changed from N to D
  • Change the title of task 4 to "@phone Invite Wendy & David"
  • Add a new task titled: "@phone Invite Charly"
  • Remove task 6 "Decorate living-room"
  • Add a started task titled: "Decorate table" (note the S after -)
  • Add a new task titled: "Decorate walls"

You can even quickly create a project. For example, if you want to plan your holidays you can type t_medit holidays. This creates the "holidays" project and opens an empty editor. Just type new tasks, one per line, prefixed with -. When you save and quit, Yokadi creates the tasks you entered.
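For instance (the task content here is hypothetical), saving a buffer like this would create three new tasks in the fresh project:

- Book the flights
- Reserve a hotel
- @phone Ask Bob to water the plants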

One last bonus: if you use Vim, Yokadi ships with a syntax highlighting file for t_medit:

This should be in the upcoming 1.1.0 version, which I plan to release soon. If you want to play with it earlier, you can grab the code from the git repository. Hope you like it!

Kubuntu: Kubuntu Party 4 – The Gathering of Halflings

Fri, 05/27/2016 - 13:34

Come and join us for a most excellent Gathering of Halflings at Kubuntu Party 4, Friday 17th June 19:00 UTC.

The party theme is all about digging out those half-finished projects we’ve all got lying around our geekdoms and fetching them along for a Show ‘n’ Tell. As ever, there will be party fun and games, and an opportunity to kick back from all the contributing that we do, so join us and enjoy good company and laughter.

Our last party, Kubuntu Party 3, proved to be another success, with further improvement and refinement upon the previous Kubuntu Party.

New to the Kubuntu Party scene? Fear not, my intrepid guests: new friendships are merely a few clicks away. Check out our previous story trail.

The lessons learned from party 2 were implemented in party 3. Our main focus is on our guests and their topics of conversation. We didn’t try to incorporate too many things, but simply let things flow and develop, un-conference style. We kept to our plan of closing the party at 22:00 UTC, with a 30-minute over-run to allow people to finish up. This worked really well, and the feedback from the guests was really positive. For the next party we will tighten this over-run further, to 15 minutes.

We had fun discussing many aspects of computing, including of course lots about Kubuntu. As the party progressed we got into a keyboard geek war, with various gaming keyboards, bluetooth devices, and some amazing backlighting. However, there was simply nothing to compete with the bluetooth laser-projected keyboard and mouse that Jim produced; it was awesome!

We also had great fun playing with an IRC-controlled Sphero robot, a project that Rick Timmis has been working on. The party folks got the chance to issue various motion and lighting commands to the Sphero spherical robot, and were able to watch it respond via Rick’s webcam in Big Blue Button.

Rick said

“It was also awesome seeing that brightly coloured little ball dashing back and forth at the behest of the party revelers.”

It all got rather surreal when Marius broke out his VR headset, a sophisticated version of the Google Cardboard. The headset enabled Marius to place one of his many (and I mean bags full of) mobile devices in the headset aperture and vanish into an immersive 3D world.

What are you waiting for? Book the party in your diary now.

Friday 17th June 19:00 UTC.

Details of our conference server will be posted to #kubuntu-podcast on irc.freenode.net at 18:30 UTC. Or you can follow us on Google+ Kubuntu Podcast and check in on the events page.

 

Bryan Quigley: Ubuntu 16.04 LiveCD Memory Usage Compared

Fri, 05/27/2016 - 12:31

The latest Ubuntu LTS is out, so it’s time for an updated memory usage comparison.

“Boots” means it will boot to a fully loaded desktop where you can move the mouse, while “Browser and Smooth” means it can also load my website in a reasonable amount of time.

Takeaways

Lubuntu is super efficient

Lubuntu is amazing in how little memory it can boot with. I believe it is still the only flavor with ZRam enabled by default, which certainly helps a bit.

I actually measured the memory usage with ZRam to the nearest MB, just for fun.
The 32-bit version boots in 224 MB and is smooth with Firefox at only 240 MB! The 64-bit version boots at 251 MB (27 MB more), but it needs 384 MB to be smooth.

If you are memory limited, change flavors first; 32-bit won’t help that much

Looking just at “Browser and Smooth”, because that’s a clearer use case, there is no significant memory difference between the 32 and 64 bit variants of Xubuntu, Ubuntu GNOME, and Ubuntu (Unity).

Lubuntu, Kubuntu, and Ubuntu MATE do have significant deltas, so let’s explore those:
Kubuntu – If you are worried about memory requirements, do not use it.
Ubuntu MATE – It’s at most a 128 MB loss, likely less (we measured that to 128 MB accuracy).
Lubuntu – 64-bit is smooth at 384 MB; 32-bit saves almost 144 MB! If you are severely memory limited, 32-bit Lubuntu becomes your only choice.

Hard Memory Limit
The 32-bit hard memory requirement is 224 MB (below that, it panics).
The 64-bit hard memory requirement is 251 MB. Both of these were tested with Lubuntu.

Check out the 14.04 post. I used Virt-Manager/KVM instead of VirtualBox for the 16.04 test.
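As a rough illustration of this kind of testing (not necessarily the author's exact invocation; the ISO name and memory value are placeholders), a live image can be booted under QEMU/KVM with a hard memory cap like so:

# Boot the live ISO with exactly 224 MB of RAM under QEMU/KVM
qemu-system-x86_64 -enable-kvm -m 224 \
    -cdrom lubuntu-16.04-desktop-i386.iso -boot d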

Extras: Testing Notes, Spreadsheet

Ubuntu Podcast from the UK LoCo: S09E13 – Hollywood Nights

Thu, 05/26/2016 - 07:00

It’s Episode Thirteen of Season Nine of the Ubuntu Podcast! Alan Pope, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again, but one of us is not!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
