Feed aggregator

Ubuntu Insights: Intel® Joule™ board powered by Ubuntu Core

Planet Ubuntu - Tue, 08/16/2016 - 12:06

We’re happy to welcome a new development board to the Ubuntu family! The new Intel® Joule™ is a powerful board targeted at IoT and robotics makers, and it runs Ubuntu for a smooth development experience. It’s also affordable and compact enough to be used in deployment, so Ubuntu Core can be installed to keep any device it powers secure and up to date … wherever it is!

Check out this Robot Demo that was filmed pre-IDF – The Turtlebot runs ROS on Ubuntu using the Intel® Joule™ board and Realsense camera.

Ubuntu Core, also known as Snappy, is a stripped down version of Ubuntu, designed to run securely on autonomous machines, devices and other internet-connected digital things. From homes to drones, these devices are set to revolutionise many aspects of our lives, but they need an operating system that is different from that of traditional PCs. Learn more about Ubuntu Core here.

Get involved in contributing to Ubuntu Core here.

Ubuntu Insights: Advantech and Canonical Form Strategic Partnership for IoT Gateways

Planet Ubuntu - Tue, 08/16/2016 - 06:26

The Ubuntu Core operating system provides a production-ready platform for IoT gateways.

London, UK – August 16, 2016 – Canonical today announced it has formed a strategic partnership with Advantech to work together to certify the company’s Internet of Things (IoT) gateways for Ubuntu Core.

The partnership ensures that Advantech’s selected Intel x86-based IoT gateways are certified to run a fully functioning and supported Ubuntu image. Users will also have access to an Ubuntu image and developer tools to ready their devices for production, as well as a number of services to fully manage their device’s security and software.

“We are extremely pleased to be forming this strategic partnership with Advantech, one of the world’s leaders in providing trusted innovative embedded and automation products and solutions,” said Jon Melamut, vice president of commercial devices operations for Canonical. “This partnership confirms Ubuntu Core as the operating system of choice for IoT developers and systems integrators who want to deploy products to market quickly. Ubuntu Core is Ubuntu for IoT, and among other things it provides a production-ready operating system for IoT gateways.”

“Advantech aims to provide a full range of IoT gateways with pre-integrated software to fulfill the needs of diverse IoT applications. We are very pleased with our collaboration with Canonical and the expansion of our operating system offerings. This collaboration will enable us to satisfy even more customer requirements and deliver an integrated, pre-validated, and flexible open-computing gateway platform that allows fast solution development and deployment,” said Miller Chang, vice president of Advantech Embedded Computing Group.

Canonical is the company behind Ubuntu, the world’s most popular open-source platform, while Advantech is the leader in providing trusted innovative embedded and automation products and solutions.

For more information on Ubuntu Core, please visit here.

About Canonical
Canonical is the company behind Ubuntu, the leading Operating System for cloud and the Internet of Things. Most public cloud workloads are running Ubuntu, and most new smart gateways, self-driving cars and advanced humanoid robots are running Ubuntu as well. Canonical provides enterprise support and services for commercial users of Ubuntu.

Canonical leads the development of the snap universal Linux packaging system for secure, transactional device updates and app stores. Ubuntu Core is an all-snap OS, perfect for devices and appliances. Established in 2004, Canonical is a privately held company.

About Advantech
Founded in 1983, Advantech is a leader in providing trusted, innovative products, services, and solutions. Advantech offers comprehensive system integration, hardware, software, customer-centric design services, embedded systems, automation products, and global logistics support. We cooperate closely with our partners to help provide complete solutions for a wide array of applications across a diverse range of industries. Our mission is to enable an intelligent planet with Automation and Embedded Computing products and solutions that empower the development of smarter working and living. With Advantech, there is no limit to the applications and innovations our products make possible. Advantech is a premier member of the Intel® Internet of Things Solutions Alliance. From modular components to market-ready systems, Intel and the 350+ global member companies of the Alliance provide scalable, interoperable solutions that accelerate deployment of intelligent devices and end-to-end analytics. Close collaboration with Intel and each other enables Alliance members to innovate with the latest technologies, helping developers deliver first-in-market solutions. (Corporate Website: www.advantech.com).

The Fridge: Ubuntu Weekly Newsletter Issue 478

Planet Ubuntu - Mon, 08/15/2016 - 14:22

Welcome to the Ubuntu Weekly Newsletter. This is issue #478 for the week August 8 – 14, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Athul Muralidhar
  • Chris Sirrs
  • Paul White
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA 3.0).

Canonical Design Team: Convergent terminal

Planet Ubuntu - Mon, 08/15/2016 - 09:24

We have been looking at ways of making the Terminal app more pleasing, in terms of the user experience, as well as the visuals.

I would like to share the work so far, invite users of the app to comment on the new designs, and share ideas on what other new features would be desirable.

On the visual side, we have brought the app in line with our Suru visual language. We have also adopted the very nice Solarized palette as the default palette – though this will of course be completely customisable by the user.

On the functionality side we are proposing a number of improvements:

-Keyboard shortcuts
-Ability to completely customise touch/keyboard shortcuts
-Ability to split the screen horizontally/vertically (similar to Terminator)
-Ability to easily customise the palette colours, and window transparency (on desktop)
-Unlimited history/scrollback
-Adding a “find” action for searching the history

Tabs and split screen

On larger screens tabs will be visually persistent. In addition, it’s desirable to be able to split a panel horizontally and vertically, and to move between panels using keyboard shortcuts or by focusing with mouse/touch.

On mobile, the tabs will be accessed through the bottom edge, as on the browser app.

Quick mobile access to shortcuts and commands

We are discussing the option of having modifier keys (Ctrl, Alt etc.) work together with the on-screen keyboard on touch – which would be a very welcome addition. While this is possible in theory with our on-screen keyboard, it’s something that won’t land in the immediate future. In the interim, modifier key combinations will still be accessible on touch via the shortcuts at the bottom of the screen. We also want to make these shortcuts ordered by recency, and add the ability to define your own custom key shortcuts and commands.

We are also discussing with the on-screen keyboard devs the addition of an app-specific auto-correct dictionary – in this case terminal commands – which, together with a swipe keyboard, should make for a much nicer mobile terminal user experience.

More themability

We would like the user to be able to define their own custom themes more easily, either via in-app settings with colour picker and theme import, or by editing a JSON configuration file. We would also like to be able to choose the window transparency (in windowed mode), as some users want a see-through terminal.

We need your help!

These visuals are work in progress – we would love to hear what kind of features you would like to see in your favourite terminal app!

Also, as the Terminal app is a fully community-developed project, we are looking for one or two experienced Qt/QML developers with time to contribute to lead the implementation of these designs. Please reach out to alan.pope@canonical.com or jouni.helminen@canonical.com to discuss details!

EDIT: To clarify – these proposed visuals are improvements for the community developed terminal app currently available for the phone and tablet. We hope to improve it, but it is still not as mature as older terminal apps. You should still be able to run your current favourite terminal (like gnome-terminal, Terminator etc) in Unity8.

Cesar Sevilla: Scratch, an interesting piece of software

Planet Ubuntu - Mon, 08/15/2016 - 04:03

Hello everyone. After quite a long time, I've allowed myself to write this article to share a recent experience with a training programme presented by www.zuliatec.com called Procodi (www.procodi.com), a training programme for children that gives them, from a very early age, tools to develop skills in software development, graphic design, digital electronics, robotics, digital media and digital music.

Scratch (as its own website says) is designed especially for ages 8 to 16, but it is used by people of all ages. Millions of people are creating Scratch projects in a wide variety of settings, including homes, schools, museums, libraries and community centres.

The site also states: the ability to code computer programs is an important part of literacy in today's society. When people learn to program in Scratch, they learn important strategies for solving problems, designing projects and communicating ideas.

Since this tool is very useful for children and adults alike, I'll explain in a simple way how to get the program running offline on GNU/Linux.

If you use GNOME or a derivative, you need to have a library called Gnome-Keyring installed; if you use KDE, you need Kde-Wallet.

In this example I explain how to get Scratch working on Linux Mint, which should also work for Debian-derived operating systems.

  1. First, download the files to install Adobe Air and Scratch for Linux from the official Scratch website https://scratch.mit.edu/scratch2download/ (they are also available for Windows and Mac).
  2. Then install gnome-keyring: sudo aptitude install gnome-keyring
  3. Add two symbolic links to the /usr/lib/ folder as follows: sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0 && sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0
  4. Then, in a terminal, change to the directory containing the Adobe Air installer and run: chmod +x AdobeAIRInstaller.bin && ./AdobeAIRInstaller.bin
  5. Follow all the steps the installer shows you, and be a little patient.
  6. Once Adobe Air is installed, find the Scratch-448.air file in your file manager and open it with the Adobe AIR Application Installer. This also takes a little patience, but when it finishes it will create a shortcut on your desktop from which you can launch the program whenever you like.

With the above done we can now use Scratch offline, but remember that if you visited the project's official website you may have noticed that it can also be used online.

Happy Hacking.


Paul Tagliamonte: Minica - lightweight TLS for everyone!

Planet Ubuntu - Sun, 08/14/2016 - 17:40

A while back, I found myself in need of some TLS certificates set up and issued for a testing environment.

I remembered there was some code for issuing TLS certs in Docker, so I yanked some of that code and made a sensible CLI API over it.

Thus was born minica!

Something as simple as minica tag@domain.tld domain.tld will issue two TLS certs (one with a client EKU and one with a server EKU), both signed by a single CA.

Next time you’re in need of a few TLS keys (without having to worry about stuff like revocation or anything), this might be the quickest way out!

Simon Quigley: A look at Lubuntu's LXQt Transition

Planet Ubuntu - Sat, 08/13/2016 - 11:18

This blog post is not an announcement of any kind or even an official plan. This may even be outdated, so check the links I provide for additional info.

As you may have seen, the Lubuntu team (which I am a part of) has started the migration process to LXQt. It's going to be a long process, but I thought I might write about some of the things that go into it.

Step 1 - Getting a metapackage

This step is already done, and the last time I checked it was installable in a virtual machine. The metapackage is lubuntu-qt-desktop, but there's a lot to be desired.

While we already have this package, there's a lot to be tweaked. I've been running LXQt with the Lubuntu artwork as my daily driver for a few months now, and there's a lot missing that needs attention. So while you can install the package and play around with it, it needs to change a lot before it's really usable.

Also in this image are our candidates (not final yet) for the applications that will be included in Lubuntu.

An up-to-date listing of the software in this metapackage is available here.

Step 2 - Getting an image

The next step is getting a working image for the team to test. The two outstanding merge proposals adding this have been merged, and we're now waiting for the images to be spun up and added to the ISO QA Tracker for testers.

Having this image will help us gauge how much system resources are used, and gives us the ability to run some benchmarks on the desktop. This will come after the image is ready and spins up correctly.

Step 3 - Testing

An essential part of any good operating system is the testing. We need to create some LXQt-specific test cases and make sure the ISO QA test cases are working before we can release a reliable image to our users.

As mentioned before, we need test cases. We created a blueprint last cycle tracking our progress with test cases, and the sooner that those are done, the sooner Lubuntu can make the switch knowing that all of our selected applications work fine.

Step 4 - Picking applications

This is the toughest step of all. We need to pick the applications that best suit our users' use cases (a lot of our users run older hardware) and needs (LibreOffice, for example). Every application will most likely need a week or two of proper benchmarking and testing, but if you have a suggestion for an application that you would like to see in Lubuntu, share your feedback on the blueprints. This is the best way to let us know what you would like to see, and to give your feedback on the existing applications, before we make a final decision.

Final thoughts

I've been using LXQt for a while now, and it has a lot of advantages, not only in the applications but in the desktop itself. Depending on how notable some things are, I might do a blog post in the future; otherwise, see for yourself. :)

Here is our blueprint that will be updated a lot in the next week or so that will tell you more about the transition. If you have any questions, shoot me an email at tsimonq2@lubuntu.me or send an email to the Lubuntu-devel mailing list.

I'm really excited for this transition, and I hope you are too.

Ubuntu Podcast from the UK LoCo: S09E24 – Elementary Penguin - Ubuntu Podcast

Planet Ubuntu - Thu, 08/11/2016 - 07:00

It’s Episode Twenty-four of Season Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope, Laura Cowen, and Martin Wimpress are here again.

We’re here – all of us!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Stuart Langridge: Feeling old, depending on how old you are

Planet Ubuntu - Thu, 08/11/2016 - 06:14

I recently discovered Martin O’Leary‘s Feeling Old twitter bot[1], which has a big list of “things that have happened” and then constructs comparisons such as “Y2K was as close to the release of Return of the Jedi as to now”, to make you do that weird “wow, I am old!” double-take. As Martin says, it occasionally throws out a gem. But, looking through the list, I think that they’re probably all gems, but whether they hit home for you depends on how old you are.

Basically, the form of the sentence is: thing is closer to old thing than to today. Ideally, you want thing to be something that you think is recent, and old thing to be something that you think is ancient, and therefore you’ll be surprised that thing really isn’t actually recent and that’s because you’re a decrepit old codger.

My theory is this: old thing ought to be before you were born. By definition, anything that happens before you were born feels like a long time ago to you. And the gap between thing and old thing is the same as the gap between thing and now (because that’s what constructs the sentences). So thing has to happen in the first half of your life. Stuff that happens while you’re a young child also feels like a long time ago — you were a kid when it happened! — so we want something that happened once you started to feel like you in your head. Say, around 12 or so years of age. Thus, we take a big list of pop culture things, find an event which happened between the ages of 12 and half your current age, find a corresponding old event, display them to you, and have you be surprised and displeased. It’s a living.

Give it a try.

  1. because I was reading his excellent work on how to create accurate looking fantasy maps

Aaron Honeycutt: Plasma features – The endless search Pt.1

Planet Ubuntu - Wed, 08/10/2016 - 20:54

I’ll start this off with mentioning that I’m on Plasma 5.7.2 so you might not see these features (yet!).

Since I started working with a global team I've hit the unforgiving thing called Time Zones, so my first feature will be covering the ‘Digital Clock’ widget. I'm sorry to report that those of you who love the ‘Fuzzy Clock’ widget are missing this feature. With the Digital Clock widget I can add Time Zones by simply right-clicking the widget, whether it's already in one of your panels or on your desktop. If the widget is in the panel then you can just right-click it like so. (Look above)

But if you have the widget on your desktop you have to press and hold it with a left click, like so. (Look above)

It will turn a light blue and a pop up will well… pop up with some buttons. The bottom one is the one we want. (Look above)

Either from the panel widget or the desktop widget we will get this window. (Look above) From here we can search for Time Zones by city – for example London, the capital of England, or my state, which falls into the New York Time Zone.

 

Mythbuntu: Mythbuntu 16.04 Released

Planet Ubuntu - Wed, 08/10/2016 - 17:26
CORRECTION 2016.04.23 - It was previously stated that 16.04 is a point release to 14.04. This was due to a silly copy&paste issue from our previous release statement for 14.04. The Mythbuntu 16.04 release is a flavor of Ubuntu 16.04. We're sorry for any confusion this has caused.
Mythbuntu 16.04 has been released. This is our third LTS release and will be supported until shortly after the 18.04 release.

The Mythbuntu team would like to thank our ISO testers for helping find critical bugs before release. You guys rock!

With this release, we are providing torrents only. It is very important to note that this release is only compatible with MythTV 0.28 systems. The MythTV component of previous Mythbuntu releases can be upgraded to a compatible MythTV version by using the Mythbuntu Repos. For a more detailed explanation, see here.

You can get the Mythbuntu ISO from our downloads page.

Highlights

Underlying system
  • Underlying Ubuntu updates are found here

MythTV

We appreciate all comments and would love to hear what you think. Please make comments to our mailing list, on the forums (with a tag indicating that this is from 16.04 or xenial), or in #ubuntu-mythtv on Freenode. As previously, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or if that is not possible, directly open a ticket on Launchpad (http://bugs.launchpad.net/mythbuntu/16.04/).

Upgrade Notes

If you have enabled the mysql tweaks in the Mythbuntu Control Center these will need to be disabled prior to upgrading. Once upgraded, these can be re-enabled.

Known issues

Julian Andres Klode: Porting APT to CMake

Planet Ubuntu - Wed, 08/10/2016 - 08:52

Ever since its creation back in the dark ages, APT has shipped with its own build system consisting of autoconf and a bunch of makefiles. In 2009, I felt like replacing that with something more standard, and because nobody really liked autotools, decided to go with CMake. Well, that bazaar branch was never really merged back in 2009.

Fast forward 7 years to 2016. A few months ago, we noticed that our build system had trouble with correct dependencies in parallel building. So, in search for a way out, I picked up my CMake branch from 2009 last Thursday and spent the whole weekend working on it, and today I am happy to announce that I merged it into master:

123 files changed, 1674 insertions(+), 3205 deletions(-)

More than 1,500 fewer lines of build system code. Quite impressive, eh? This also includes about 200 fewer lines of code in debian/, as that switched from prehistoric debhelper stuff to modern dh (compat level 9, almost ready for 10).

The annoying Tale of Targets vs Files

Talking about CMake: I don’t really love it. As you might know, CMake differentiates between targets and files. Targets can in some cases depend on files (generated by a command in the same directory), but overall files are not really targets. You also cannot have a target with the same name as a file you are generating in a custom command; you have to rename your target (make is OK with the generated stuff, but ninja complains about cycles because your custom target and your custom command have the same name).

Byproducts for the (time) win

One interesting thing about CMake and Ninja is byproducts. In our tree, we are building C++ files. We also have .pot templates depending on them, and .mo files depending on the templates (we have multiple domains, and merge the per-domain .pot with the all-domain .po file during the build to get a per-domain .mo). Now, if we just let them depend naively, changing a C++ file causes the .pot file to be regenerated, which in turn causes us to build .mo files for every freaking language in the package. Even if nothing changed.

Byproducts solve this problem. Instead of just building the .pot file, we also create a stamp file (AKA the witness) and write the .pot file (without a header) into a temporary name and only copy it to its final name if the content changed. The .pot file is declared as a byproduct of the command.

The command doing the .pot->.mo step still depends on the .pot file (the byproduct), but as that now only changes if strings change, the .mo files only get rebuilt if I change a translatable string. We still need to ensure that the .pot file is actually built before we try to use it – the solution here is to specify a custom target depending on the witness and then have the target containing the .mo build commands depend on that target.

Now if you use make, you might know this trick already. In make, the byproducts remain undeclared, though, while in CMake we can now actually express them, and they are used by the Ninja generator and the Ninja build tool if you choose that over make (try it out, it’s fast).
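The pattern described above might look roughly like the following in CMake. This is a simplified sketch rather than APT’s actual build code; the file names and the ${APT_SOURCE_FILES} variable are made up for illustration.

# Sketch of the witness/byproduct pattern: the command's real OUTPUT is a
# stamp file, and the .pot template is only declared as a BYPRODUCT.  The
# template is written to a temporary name and copied over the real one only
# if the content changed, so downstream .mo rules see a new .pot only when
# translatable strings actually change.
add_custom_command(
  OUTPUT apt-all.pot.stamp
  BYPRODUCTS apt-all.pot
  COMMAND xgettext --omit-header -o apt-all.pot.new ${APT_SOURCE_FILES}
  COMMAND ${CMAKE_COMMAND} -E copy_if_different apt-all.pot.new apt-all.pot
  COMMAND ${CMAKE_COMMAND} -E touch apt-all.pot.stamp
  DEPENDS ${APT_SOURCE_FILES}
  COMMENT "Regenerating translation template"
)

# Custom target depending on the witness; the targets that run msgfmt to
# build the per-language .mo files then depend on this target and read
# apt-all.pot, which only changes when translatable strings change.
add_custom_target(apt-update-pot DEPENDS apt-all.pot.stamp)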

Further Work

Some command names are hardcoded; I should find_program() them. Also, cross-building the package does not yet work successfully, but it only requires a small number of patches in debhelper and/or cmake.
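A hedged sketch of what that find_program() cleanup could look like (msgfmt is just an illustrative choice, not a statement about which commands APT currently hardcodes):

# Locate the tool at configure time instead of hardcoding the command name.
find_program(MSGFMT_EXECUTABLE msgfmt)
if(NOT MSGFMT_EXECUTABLE)
  message(FATAL_ERROR "msgfmt not found; please install gettext")
endif()

# Custom commands can then use the cached path rather than a bare "msgfmt",
# e.g. COMMAND ${MSGFMT_EXECUTABLE} -o foo.mo foo.po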

I also tried building the package on a Fedora docker image (with dpkg installed; it’s available in the Fedora sources). While I could eventually get the programs built and most of the integration test suite to pass, there are some minor issues to fix, mostly in the documentation building and GTest department: Fedora ships its docbook stylesheets in a different location, and ships GTest as a pre-compiled library, not a source tree.

I have not yet tested building on exotic platforms like macOS, or even a BSD. Please do and report back. In Debian, CMake is not up-to-date enough on the non-Linux platforms to build APT due to test suite failures; I hope those can be fixed/disabled soon (it appears to be a timing issue AFAICT).

I hope that we eventually get some non-Debian backends for APT. I’d love that.


Zygmunt Krynicki: Creating your first snappy interface

Planet Ubuntu - Wed, 08/10/2016 - 03:04

Today is a day I've been waiting for a long time. We now have enough knowledge to create our first real interface from scratch. To really understand this content you need to be familiar with parts [1], [2], [3] and [4].

We will go all the way, from branching snapd to running a program that uses our new interface. We will focus on the ancillary tasks this time; the actual interface will be rather basic. Still, this knowledge will be invaluable next time, when we will try to do something more complicated.

Adding the new "hello" interface
Let's get started. It all begins with snapd. If you didn't already, fork snapd and clone your fork locally. You may find this small guide that I wrote earlier useful. It goes through all those steps in detail. At the end of the exercise you should be able to build your fork of snapd (make sure it is really your fork, not the upstream version!)

Let's look around. Each time a new interface is added, the following files are modified:
  • The file interfaces/builtin/foo{,_test}.go contains the actual interface
  • The file interfaces/builtin/all{,_test}.go contains tiny change that is used to register a new interface
In addition, if the interface slot should implicitly show up on the core snap we need to change the file snap/implicit{,_test}.go. We will get back to this.
So let's create our new interface now. As mentioned in part [3] this entails implementing a new type with a few methods. Let's do that now:
Using your favorite code editor create a new file and save it as interfaces/builtin/hello.go. Paste the following code inside:

// -*- Mode: Go; indent-tabs-mode: t -*-

/*
 * Copyright (C) 2016 Canonical Ltd
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 3 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 */

package builtin

import (
    "fmt"

    "github.com/snapcore/snapd/interfaces"
)

// HelloInterface is the hello interface for a tutorial.
type HelloInterface struct{}

// String returns the same value as Name().
func (iface *HelloInterface) Name() string {
    return "hello"
}

// SanitizeSlot checks and possibly modifies a slot.
func (iface *HelloInterface) SanitizeSlot(slot *interfaces.Slot) error {
    if iface.Name() != slot.Interface {
        panic(fmt.Sprintf("slot is not of interface %q", iface))
    }
    // NOTE: currently we don't check anything on the slot side.
    return nil
}

// SanitizePlug checks and possibly modifies a plug.
func (iface *HelloInterface) SanitizePlug(plug *interfaces.Plug) error {
    if iface.Name() != plug.Interface {
        panic(fmt.Sprintf("plug is not of interface %q", iface))
    }
    // NOTE: currently we don't check anything on the plug side.
    return nil
}

// ConnectedSlotSnippet returns security snippet specific to a given connection between the hello slot and some plug.
func (iface *HelloInterface) ConnectedSlotSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// PermanentSlotSnippet returns security snippet permanently granted to hello slots.
func (iface *HelloInterface) PermanentSlotSnippet(slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// ConnectedPlugSnippet returns security snippet specific to a given connection between the hello plug and some slot.
func (iface *HelloInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// PermanentPlugSnippet returns the configuration snippet required to use a hello interface.
func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// AutoConnect returns true if plugs and slots should be implicitly
// auto-connected when an unambiguous connection candidate is available.
//
// This interface does not auto-connect.
func (iface *HelloInterface) AutoConnect() bool {
    return false
}

TIP: Any time you are making code changes use go fmt to re-format all of the code in the current working directory to the go formatting standards. Static analysis checkers in the snappy tree enforce this so your code won't be able to land without first being formatted correctly.
This code is perfectly fine, if a little verbose (we'll get it shorter eventually, I promise). Now let's make one more change to ensure the interface is known to snapd. All we need to do is add it to the allInterfaces list in the same Go package. I've decided to just use an init() function so that all of the changes are in one file and cause fewer conflicts for other developers creating their interfaces. You can see my patch below.
diff --git a/interfaces/builtin/all_test.go b/interfaces/builtin/all_test.go
index 46ca587..86c8fad 100644
--- a/interfaces/builtin/all_test.go
+++ b/interfaces/builtin/all_test.go
@@ -62,4 +62,5 @@ func (s *AllSuite) TestInterfaces(c *C) {
c.Check(all, DeepContains, builtin.NewCupsControlInterface())
c.Check(all, DeepContains, builtin.NewOpticalDriveInterface())
c.Check(all, DeepContains, builtin.NewCameraInterface())
+ c.Check(all, Contains, &builtin.HelloInterface{})
}
diff --git a/interfaces/builtin/hello.go b/interfaces/builtin/hello.go
index d791fc5..616985e 100644
--- a/interfaces/builtin/hello.go
+++ b/interfaces/builtin/hello.go
@@ -130,3 +130,7 @@ func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securit
func (iface *HelloInterface) AutoConnect() bool {
return false
}
+
+func init() {
+ allInterfaces = append(allInterfaces, &HelloInterface{})
+}


Now switch to the snap/ directory, edit the file implicit.go and add "hello" to implicitClassicSlots. You can see the whole change below. I also updated the test that checks the number of implicitly added slots. This change will make snapd create a hello slot on the core snap automatically. As you can see by looking at the file, there are many interfaces that take advantage of this little trick.
diff --git a/snap/implicit.go b/snap/implicit.go
index 3df6810..098b312 100644
--- a/snap/implicit.go
+++ b/snap/implicit.go
@@ -60,6 +60,7 @@ var implicitClassicSlots = []string{
"pulseaudio",
"unity7",
"x11",
+ "hello",
}

// AddImplicitSlots adds implicitly defined slots to a given snap.
diff --git a/snap/implicit_test.go b/snap/implicit_test.go
index e9c4b07..364a6ef 100644
--- a/snap/implicit_test.go
+++ b/snap/implicit_test.go
@@ -56,7 +56,7 @@ func (s *InfoSnapYamlTestSuite) TestAddImplicitSlotsOnClassic(c *C) {
c.Assert(info.Slots["unity7"].Interface, Equals, "unity7")
c.Assert(info.Slots["unity7"].Name, Equals, "unity7")
c.Assert(info.Slots["unity7"].Snap, Equals, info)
- c.Assert(info.Slots, HasLen, 29)
+ c.Assert(info.Slots, HasLen, 30)
}

func (s *InfoSnapYamlTestSuite) TestImplicitSlotsAreRealInterfaces(c *C) {


Now let's see what we have. You should have three patches:
  1. The first one adding the dummy hello interface
  2. The second one registering it with allInterfaces
  3. The third one adding it to implicit slots on the core snap, on classic

This is how it looked for me. You can also see the particular commit message style I was using. All the commits are prefixed with the directory where the changes were made, followed by a colon and the summary of the change. Normally I would also add longer descriptions, but here this is not required as the changes are trivial.



Seeing our interface for the first time
Let's run our changed code with devtools. Switch to the devtools directory and run:
./refresh-bits snapd setup run-snapd restore
This roughly means:
  1. Build snapd from source (based on correctly set $GOPATH)
  2. Prepare for running locally built snapd
  3. Run locally built snapd
  4. Restore regular version of snapd

The script will prompt you for your password. This is required to perform the system-wide operations. It will also block and wait until you interrupt it by pressing control-c. Please don't do this yet though, we want to check if our changes really made it through.



To check this we can simply list the available interfaces with
sudo snap interfaces | grep hello
Why sudo? Because of the particular peculiarity of how refresh-bits works. Normally it is not required. Did it work? It did for me:


TIP: If it didn't work for you and you didn't get the hello interface then the most likely cause of the issue is that you were editing your own fork but refresh-bits still built the vanilla upstream version that is checked out somewhere else.

Go to $GOPATH/src/github.com/snapcore/snapd and ensure that this is indeed the fork you were expecting. If not just remove this directory and move your fork (that you may have cloned elsewhere) here and try again.

Great. Now we are in business. Let's recap what we did so far:
  • We added a whole new interface by dropping boilerplate code into interfaces/builtin/hello.go
  • We registered the interface in the list of allInterfaces
  • We made snapd inject an implicit (internally defined) slot on the core snap when running on classic
  • We used refresh-bits to run our locally built version and confirmed it really works

Granting permissions through interfaces
With the scaffolding in place we can now work on using our new interface to actually grant additional permissions. When designing real interfaces you have to think about what kind of action should grant extra permissions. Recall from part [3] that you can freely associate extra security snippets that encapsulate permissions – either permanent or connection-based – with both plugs and slots.
A permanent snippet is always there as long as a plug or a slot exists. Typically this is used on snaps other than the core snap (e.g. a service provider that is packaged as a snap) to allow the service to run. Examples of such services could include network-manager, modem-manager, docker, lxd, x11 and anything like that. The granted permissions allow the service to operate as well as to create a communication channel (typically a socket or a DBus bus name) that clients can connect to.
A connection-based snippet is only applied when a connection (interface connection) is made. This permission is given to the client and typically involves basic IPC system calls as well as a permission to open a given socket (e.g. the X11 socket) or use specific DBus methods and objects.
A special class of connection snippets exists for things that don't involve a client application talking to a server application, but rather a program using given kernel services. The prime example of this use case is the network interface, which grants access to several network system calls. This interface grants nothing to the slot, just to connected plugs.
Today we will explore that last case. Our hello interface will give us access to a system call that is usually forbidden: reboot.
The graceful-reboot snap
For the purpose of this exercise I wrote a tiny C program that uses the reboot() function. The whole program is reproduced below, along with appropriate build support for snapcraft.
graceful-reboot.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/reboot.h>
#include <linux/reboot.h>
#include <errno.h>


int main() {
    sync();
    if (reboot(LINUX_REBOOT_CMD_RESTART) != 0) {
        switch (errno) {
        case EPERM:
            printf("Insufficient permissions to reboot the system\n");
            break;
        default:
            perror("reboot()");
            break;
        }
        return EXIT_FAILURE;
    }
    printf("Reboot requested\n");
    return EXIT_SUCCESS;
}

Makefile
TIP: Makefiles rely on differences between tabs and spaces. When copy pasting this sample you need to ensure that tabs are preserved in the clean and install rules
CFLAGS += -Wall
.PHONY: all
all: graceful-reboot
.PHONY: clean
clean:
	rm -f graceful-reboot
graceful-reboot: graceful-reboot.c
.PHONY: install
install: graceful-reboot
	install -d $(DESTDIR)/usr/bin
	install -m 0755 graceful-reboot $(DESTDIR)/usr/bin/graceful-reboot

snapcraft.yaml
name: graceful-reboot
version: 1
summary: Reboots the system gracefully
description: |
  This snap contains a graceful-reboot application that requests the system
  to reboot by talking to the init daemon. The application uses a custom
  "hello" interface that is developed as a part of a tutorial.
confinement: strict
apps:
  graceful-reboot:
    command: graceful-reboot
    plugs: [hello]
parts:
  main:
    plugin: make
    source: .

We now have everything required. Now let's use snapcraft to build this code and get our snap ready. I'd like to point out that we use confinement: strict. This tells snapcraft and snapd that we really really don't want to run this snap in devmode where all sandboxing is off. Doing so would defeat the purpose of adding the new interface.
Let's install the snap and try it out:
$ snapcraft

$ sudo snap install ./graceful-reboot_1_amd64.snap

Ready? Let's run the command now (don't worry, it won't reboot)
$ graceful-reboot
Bad system call

Aha! That's what we are here to fix. As we know from part [4], seccomp, which is a part of the sandbox system, blocks all system calls that are not explicitly allowed. We can check the system log to see what the actual error message was, to figure out which system call needs to be added.
Looking through journalctl -n 100 (last 100 messages) I found this:
sie 10 09:47:02 x200t audit[13864]: SECCOMP auid=1000 uid=1000 gid=1000 ses=2 pid=13864 comm="graceful-reboot" exe="/snap/graceful-reboot/x1/usr/bin/graceful-reboot" sig=31 arch=c000003e syscall=169 compat=0 ip=0x7f7ef30dcfd6 code=0x0
Decoding this rather cryptic message is actually pretty easy. The key fact we are after is the system call number. Here it is 169. Which symbolic system call is that? We can use the scmp_sys_resolver program to find out.
$ scmp_sys_resolver 169
reboot

Bingo. While this case was pretty obvious, library functions don't always map to system call names directly. There are often cases where system calls evolve to gain additional arguments, flags and what not and the name of the library function is not changed.
Adjusting the hello^Hreboot interface
At this stage we can iterate on our interface. We have a test case (the snap we just wrote), we have the means to change the code (editing the hello.go file) and to make it live in the system (the refresh-bits command).
Let's patch our interface to be a little bit more meaningful now. We will rename it from hello to reboot. This will affect the file name, the type name and the string returned from Name(). We will also grant a connected plug permission, through the seccomp security backend, to use the reboot system call. You can try to make the necessary changes yourself, but I provide the essential part of the patch below. The key thing to keep in mind is that the ConnectedPlugSnippet method must return, when asked about SecuritySecComp, the name of the system call to allow, like this: []byte(`reboot`).
diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
- return nil, nil
+ return []byte(`reboot`), nil
case interfaces.SecurityDBus:
We can now return to the terminal that had refresh-bits running, interrupt the running command with control-c and simply re-start the whole command again. This will build the updated copy of snapd.
Surely enough, if we now ask snap about known interfaces we will see the reboot interface.
Let's now update the graceful-reboot snap to talk about the reboot interface and re-install it (note that you don't have to change the version string at all, just re-install the rebuilt snap). If we ask snapd about interfaces now we will see that the core snap exposes the reboot slot and the graceful-reboot snap has a reboot plug but they are disconnected.



Let's connect them now.
sudo snap connect graceful-reboot:reboot ubuntu-core:reboot
Let's ensure that it worked by running the interfaces command again:
$ sudo snap interfaces | grep reboot
:reboot              graceful-reboot

Success!
Now before we give it a try, let's have a look at the generated seccomp profile. The profile is derived from the base seccomp template that is a part of the snapd source code and the set of plugs, slots and connections made to or from this snap. Each application of each snap gets a profile; we can see that for ourselves by looking at
/var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot
$ grep reboot /var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot
reboot

There are a few gotchas that we should point out though:
  • Security profiles are derived from interfaces but are only changed when a new connection is made (and that connection affects a particular snap), when the snap is initially installed or every time it is updated.
  • In practice we will either disconnect / reconnect the hello interface or reinstall the snap (whichever is more convenient)
  • Snapd remembers connections that were made explicitly and will re-establish them across snap updates. If you rename an interface while working on it, snapd may print a message (to system log, not to the console) about being unable to reconnect the "hello" interface because that interface no longer exists in snapd. To make snapd forget all those connections simply remove and reinstall the affected snap.
  • You can experiment by editing seccomp profiles directly. Just edit the file mentioned above and add additional system calls. Once you are happy with the result you can adjust snapd source code to match.
  • You can also do that with apparmor profiles but you have to re-load the profile into the kernel each time using the command apparmor_parser -r /path/to/the/profile

Okay, so did it work? Let's try the graceful-reboot command again.
$ graceful-reboot
Insufficient permissions to reboot the system

The process didn't get killed straight away but the reboot call didn't work yet either. Let's look at the system log and see if there are any hints.
sie 10 10:25:55 x200t kernel: audit: type=1400 audit(1470817555.936:47): apparmor="DENIED" operation="capable" profile="snap.graceful-reboot.graceful-reboot" pid=14867 comm="graceful-reboot" capability=22  capname="sys_boot"
This time we could use the system call but apparmor stepped in and said that we need the sys_boot capability. Let's modify the interface to allow that then. The precise apparmor syntax for this is "capability sys_boot,". You absolutely have to include the trailing comma. It was my most common mistake when hacking on apparmor profiles.
Let's patch the interface and re-run refresh-bits and re-install the snap. You can see the patch I applied below
diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
- return nil, nil
+ return []byte(`capability sys_boot,`), nil
case interfaces.SecuritySecComp:
return []byte(`reboot`), nil
case interfaces.SecurityDBus:

TIP: I'm using sudo with a full path because of a bug in sudo where /snap/bin is not kept on the path.
Warning: the next command will reboot your system straight away!
$ sudo /snap/bin/graceful-reboot

Final touches
Since interfaces like this are very common, snapd has some support for writing less repetitive code. The exact same interface can be written with just this short snippet:
// NewRebootInterface returns a new "reboot" interface.
func NewRebootInterface() interfaces.Interface {
    return &commonInterface{
        name:                  "reboot",
        connectedPlugAppArmor: `capability sys_boot,`,
        connectedPlugSecComp:  `reboot`,
        reservedForOS:         true,
    }
}

Everywhere where we used to say &RebootInterface{} we can now say NewRebootInterface(). The abbreviation helps in code reviews and in just conveying the meaning and intent of the interface.
You can find the source code of this interface in my github repository. The code of the graceful reboot snap is here. Feel free to comment below, or on Google+ or ask me questions directly.

David Tomaschik: HSC Part 3: DEF CON

Planet Ubuntu - Wed, 08/10/2016 - 00:00

This is the 3rd, and final, post in my Hacker Summer Camp 2016 series. Part 1 covered my class at Black Hat, and Part 2 the 2016 BSidesLV Pros versus Joes CTF. Now it’s time to talk about the capstone of the week: DEF CON.

DEF CON is the world’s largest (but not oldest) Hacker conference. This year was the biggest yet, with Dark Tangent stating that they produced 22,000 lanyards – and ran out of lanyards. That’s a lot of attendees. It covered both the Paris and Bally’s conference areas, and that still didn’t feel like enough.

DEF CON is also what I measure my year by. You can have your New Year’s, I measure mine from August to August (though apparently next year it’ll be the end of July…). Probably the single biggest regret in my life is that I didn’t find a way to go to DEF CON before DEF CON 20. The people and experiences there are memorable and well worth it.

Crowds

I don’t talk about it a whole lot, but I actually have pretty bad social anxiety. I do a terrible job of talking with people I don’t know, introducing myself to people, etc. At most events, I’m what you would call a “wallflower”. That doesn’t combine very well with 22,000 people. That especially doesn’t combine well with the chokepoints in a number of places, especially the packet capture village. It was hard to get through the week, but I told myself I wasn’t going to let my social anxiety ruin my con, and I think I did a good job of that.

Capture the Packet

So, since DEF CON 21, I’ve always played in Capture the Packet. At DC22, I even managed a 2nd place overall finish (just one spot away from the coveted black Uber badge). This year, I went back to play and discovered major changes:

  • Rounds are now two hours instead of 1.
  • There is now a qualifying round, semifinals, and finals.
  • They’re really pulling out the obscure protocols.
  • Significantly lower submit attempt limits (like 1-2 in most questions).

On the other hand, they had serious gameplay issues that really makes me regret spending so much of my con this year on Capture the Packet. These include:

  • Every round started late. The finals were supposed to start at 10:00 on Sunday; they started at 12:30. That was two and a half hours of sitting there waiting.
  • There were many answers where the answer in the database contained typos.
  • There were many answers where the question contained typos that made it difficult or impossible to find the traffic (wrong IP, wrong MAC, etc.)
  • There were many questions that were very poorly phrased. It was nearly impossible to parse some of the questions. One question asked for the “last three hex” of a value, but it wasn’t clear: last 3 bytes in hex format, last 3 hex characters, etc.

Combining the problems with questions and the lowered submission limits, it meant that several times we were locked out of questions just because it wasn’t clear what format the answer should be or how much data they wanted. The organizers clearly need to:

  • Increase the limits. (I’m not asking for unlimited tries, but on text answers, give us at least 3-5.)
  • Build some sort of fuzzy matching (case insensitive, automatically strip whitespace leading/trailing, etc.)
  • Write questions more clearly.

I’m actually amazed that Aries Security is able to sell CTP as a commercial offering for training to government and companies. It’s a wonderful concept and they try hard, but I’m so disappointed in the outcome. I spent ~12 hours sitting in the CTP area, but only 6 of those were actually playing. The other 6 were waiting for games to start, and then the games were disheartening when they didn’t work correctly. I’ll probably play again next year, but I really hope they’ve put some polish on the game by then.

Parties

As usual, DEF CON had a variety of parties to choose from. Most importantly, I got my hit of Dual Core in at the Friday night EDM night, and spent a little bit of time at the Queercon pool party. (Though it was too hot and humid to spend much time by the pool unless you were in the pool, and I’m not someone anyone wants in the pool…)

Just keeping track of all of the parties has become a major task, but the DCP guys have you covered there. I’d love to see some more parties that are a little more “chill”: less loud music, more just hanging out and having a drink with friends. (Or maybe I was just at all the wrong ones this year.)

Next Year

I can’t wait for next year – DEF CON 25 promises to be big, and we’re moving over to Caesars (2 years is all we got out of Bally’s/Paris). I’m trying to come up with ideas of how I can make my own personal DEF CON 25 bigger and better, without ripping off ideas like the AND!XOR badge, but I want to do something cool. Suggestions to @Matir or find my email if you know me. :) Hopefully I’ll see all of you, my hacker friends, out in Las Vegas for another fun Hacker Summer Camp.

David Tomaschik: HSC Part 2: Pros versus Joes CTF

Planet Ubuntu - Wed, 08/10/2016 - 00:00

Continuing my Hacker Summer Camp Series, I’m going to talk about one of my Hacker Summer Camp traditions. That’s right, it’s the Pros versus Joes CTF at BSidesLV. I’ve written about my experiences and even a player’s guide before, but this was my first year as a Pro, captaining a blue team (The SYNdicate).

It’s important to me to start by congratulating all of the Joes – this is an intense two days, and your pushing through it is a feat in and of itself. In past years, we had players burn out early, but I’m proud to say that nearly all of the Joes (from every team) worked hard until the final scorched earth. Every one of the players on my team was outstanding and worked their ass off for this CTF, and it paid off, as The SYNdicate was declared the victors of the 2016 BSides LV Pros versus Joes.

What worked well

Our team put in incredible amounts of effort into preparation. We built hardening scripts, discussed strategy, and planned our “first hour”. Keep in mind that PvJ simulates you being brought in to harden a network under active attack, so the first hour is absolutely critical. If you are well and thoroughly pwned in that time, getting the red cell out is going to be hard. There’s a lot of ways to persist, and finding them all is time consuming (especially since neither I nor my lieutenant does much IR).

We really jelled as a team and worked very, very, well together on the 2nd day. We hardened faster than I thought was possible and got our network very locked down. In that day, we only lost 1000 points via beacons (10 minutes on one Windows XP host). Our network was reportedly very secure, but I don’t know how thoroughly the other teams were checking versus the “low hanging fruit” approach.

What didn’t work well

The first day, we did not coordinate well. We had machines that hadn’t been touched for hardening even after 4 hours. I failed when setting up the firewall and blocked ICMP for a while, causing all of our services to score as down. I’ve said it before and I’ll say it again: coordination and organization are the most important aspects of working as a team in this environment.

The Controversy

There was an issue with scoring during the competition where tickets were being counted incorrectly. For example, my team had ticket points deducted even when we had 0 open tickets: the normal behavior being that only when you had a ticket open would you lose points. This resulted in massive ticket deductions showing up on the scoreboard, which Dichotomy was only able to correct after gameplay had ended. This was a very controversial issue because it resulted in the team that was leading on the scoreboard dropping to last place and pushed my team to the top. The final scoring (announced on Twitter) was in accordance with the written rules as opposed to the scoreboard, but it still was confusing for every team involved.

Conclusion

Overall, this was a good game, and I’m very proud of my lieutenant, my joes, and all of the other teams for playing so well. I’m also very appreciative of the hard work from Dichotomy, Gold Cell, and Grey Cell in doing all of the things necessary to make this game possible. This game is the closest thing to a live fire security exercise I’ve ever seen at a conference, and I think we all have something to learn from that environment.

Stephen Kelly: Opt-in header-only libraries with CMake

Planet Ubuntu - Tue, 08/09/2016 - 13:00

Using a C++ library, particularly a 3rd party one, can be a complicated affair. Library binaries compiled on Windows/OSX/Linux cannot simply be copied over to another platform and used there. Linking works differently, compilers bundle different code into binaries on each platform, etc.

This is not an insurmountable problem. Libraries like Qt distribute dynamically compiled binaries for major platforms and other libraries have comparable solutions.

There is a category of libraries which avoids the portable-binary problem entirely. Boost is a widespread source of many ‘header only’ libraries, which don’t require a user to link against a particular platform-compatible library binary. There are many other examples of such ‘header only’ libraries.

Recently there was a blog post describing an example library which can be built as a shared library, or as a static library, or used directly as a ‘header only’ library which doesn’t require the user to link against anything. The claim is that it is useful for libraries to give users the option of consuming them as ‘header only’ libraries, with some preprocessor magic added to make that possible.

However, there is yet a fourth option, and that is for the consumer to compile the source files of the library themselves. This has the advantage that the .cpp file is not #included into every compilation unit, but still avoids the platform-specific library binary.

I decided to write a CMake buildsystem which would achieve all of that for a library. I don’t have an opinion on whether it is a good idea in general for libraries to do things like this, but if people want to do it, it should be as easy as possible.

Additionally, of course, the CMake GenerateExportHeader module should be used, but I didn’t want to change the source from Vittorio so much.

The CMake code below compiles the library in several ways and installs it to a prefix which is suitable for packaging:

cmake_minimum_required(VERSION 3.3)
project(example_lib)

# define the library
set(library_srcs
  example_lib/library/module0/module0.cpp
  example_lib/library/module1/module1.cpp
)

add_library(library_static STATIC ${library_srcs})
add_library(library_shared SHARED ${library_srcs})

add_library(library_iface INTERFACE)
target_compile_definitions(library_iface INTERFACE
  LIBRARY_HEADER_ONLY
)

set(installed_srcs
  include/example_lib/library/module0/module0.cpp
  include/example_lib/library/module1/module1.cpp
)
add_library(library_srcs INTERFACE)
target_sources(library_srcs INTERFACE
  $<INSTALL_INTERFACE:${installed_srcs}>
)

# install and export the library
install(DIRECTORY example_lib/library
  DESTINATION include/example_lib
)
install(FILES
    example_lib/library.hpp
    example_lib/api.hpp
  DESTINATION include/example_lib
)
install(TARGETS
    library_static
    library_shared
    library_iface
    library_srcs
  EXPORT library_targets
  RUNTIME DESTINATION bin
  ARCHIVE DESTINATION lib
  LIBRARY DESTINATION lib
  INCLUDES DESTINATION include
)
install(EXPORT library_targets
  NAMESPACE example_lib::
  DESTINATION lib/cmake/example_lib
)
install(FILES example_lib-config.cmake
  DESTINATION lib/cmake/example_lib
)

This blog post is not a CMake introduction, so to see what all of those commands are about, start with the cmake-buildsystem and cmake-packages documentation.

There are 4 add_library calls. The first two build the library as a static library and as a shared library respectively.

The next two are INTERFACE libraries, a concept I introduced in CMake 3.0 when it looked like Boost might use CMake. An INTERFACE target can be used to specify header-only libraries because it carries usage requirements for consumers, such as include directories and compile definitions.

The library_iface library functions as described in the blog post from Vittorio, in that users of that library will be built with LIBRARY_HEADER_ONLY and will therefore #include the .cpp files.

The library_srcs library causes the consumer to compile the .cpp files separately.

A consumer of a library like this would then look like:

cmake_minimum_required(VERSION 3.3)
project(example_user)

find_package(example_lib REQUIRED)

add_executable(myexe
  src/src0.cpp
  src/src1.cpp
  src/main.cpp
)

## uncomment only one of these!
# target_link_libraries(myexe
#   example_lib::library_static)
# target_link_libraries(myexe
#   example_lib::library_shared)
# target_link_libraries(myexe
#   example_lib::library_iface)
target_link_libraries(myexe
  example_lib::library_srcs)

So, it is up to the consumer how they consume the library, and they determine that by using target_link_libraries to specify which one they depend on.
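For completeness, here is a rough sketch of how this could be driven from a shell. The build directories and the $HOME/stage install prefix are my own assumptions, not part of the original layout:

# Build and install the library to a staging prefix.
mkdir example_lib/build && cd example_lib/build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/stage
cmake --build . --target install

# Build the consumer against the installed package.
cd ../../example_user && mkdir build && cd build
cmake .. -DCMAKE_PREFIX_PATH=$HOME/stage
cmake --build .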


Simos Xenitellis: How to install LXD/LXC containers on Ubuntu on cloudscale.ch

Planet Ubuntu - Tue, 08/09/2016 - 10:26

In previous posts, we saw how to configure LXD/LXC containers on a VPS on DigitalOcean and Scaleway. There are many more VPS companies.

cloudscale.ch is one more company that provides Virtual Private Servers (VPS). They are based in Switzerland.

In this post we are going to see how to create a VPS on cloudscale.ch and configure it to use LXD/LXC containers.

We now use the term LXD/LXC containers (instead of LXC containers, as in previous articles) in order to show that LXD is a management service for LXC containers; LXD works on top of LXC. This is somewhat similar to GNU/Linux, where GNU software runs on top of the Linux kernel.

Set up the VPS

We are creating a VPS called myubuntuserver, using the Flex-2 Compute Flavor. This is the most affordable option, at 2GB RAM with 1 vCPU core. It costs 1 CHF per day, which is about 0.92€ (or US$1).

The default capacity is 10GB, which is included in the 1 CHF per day. If you want more capacity, there is an extra charge.

We select Ubuntu 16.04 and accept the rest of the default settings. Currently, there is only one server location, at Rümlang, near Zurich.

Here is the summary of the freshly launched VPS server. The IP address is shown as well.

Connect and update the VPS

In order to connect, we need to SSH to that IP address using the fixed username ubuntu. There is an option to use either password authentication or public-key authentication. Let’s connect.

myusername@mycomputer:~$ ssh ubuntu@5.102.145.245
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myubuntuserver:~$

Let’s update the package list,

ubuntu@myubuntuserver:~$ sudo apt update
Hit:1 http://ch.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://ch.archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
...
Get:31 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [1176 B]
Fetched 10.5 MB in 2s (4707 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myubuntuserver:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
ubuntu@myubuntuserver:~$

In this case, we upgraded 67 packages, among which was lxd. It is important to perform this package upgrade before continuing.

Configure LXD/LXC

Let’s see how much free disk space is there,

ubuntu@myubuntuserver:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       9.7G  1.2G  8.6G  12% /
ubuntu@myubuntuserver:~$

There is 8.6GB of free space; let’s allocate 5GB of that to the ZFS pool. First, we need to install the package zfsutils-linux. Then, we initialize lxd.

ubuntu@myubuntuserver:~$ sudo apt install zfsutils-linux
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@myubuntuserver:~$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: myzfspool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...accept the network autoconfiguration settings that you will be asked...
LXD has been successfully configured.
ubuntu@myubuntuserver:~$

That’s it! We are good to go and configure our first LXD/LXC container.

Testing a container as a Web server

Let’s test LXD/LXC by creating a container, installing nginx and accessing it remotely.

ubuntu@myubuntuserver:~$ lxc launch ubuntu:x web
Creating web
Retrieving image: 100%
Starting web
ubuntu@myubuntuserver:~$

We launched a container called web, based on the ubuntu:x image (Ubuntu 16.04 LTS, xenial).

Let’s connect to the container, update the package list and upgrade any available packages.

ubuntu@myubuntuserver:~$ lxc exec web -- /bin/bash
root@web:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
9 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web:~#

Still inside the container, we install nginx.

root@web:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web:~#

Let’s make a small change in the default index.html,

root@web:/var/www/html# diff -u /var/www/html/index.nginx-debian.html.ORIGINAL /var/www/html/index.nginx-debian.html
--- /var/www/html/index.nginx-debian.html.ORIGINAL 2016-08-09 17:08:16.450844570 +0000
+++ /var/www/html/index.nginx-debian.html           2016-08-09 17:08:45.543247231 +0000
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html>
 <head>
-<title>Welcome to nginx!</title>
+<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>
 <style>
     body {
         width: 35em;
@@ -11,7 +11,7 @@
 </style>
 </head>
 <body>
-<h1>Welcome to nginx!</h1>
+<h1>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</h1>
 <p>If you see this page, the nginx web server is successfully installed and
 working. Further configuration is required.</p>
root@web:/var/www/html#

Finally, let’s add a quick and dirty iptables rule to make the container accessible from the Internet.

root@web:/var/www/html# exit
ubuntu@myubuntuserver:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.5.242.156 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
ubuntu@myubuntuserver:~$ ifconfig ens3
ens3      Link encap:Ethernet  HWaddr fa:16:3e:ad:dc:2c
          inet addr:5.102.145.245  Bcast:5.102.145.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fead:dc2c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:102934 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35613 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:291995591 (291.9 MB)  TX bytes:3265570 (3.2 MB)
ubuntu@myubuntuserver:~$

Therefore, the iptables command that will allow access to the container is,

ubuntu@myubuntuserver:~$ sudo iptables -t nat -I PREROUTING -i ens3 -p TCP -d 5.102.145.245/32 --dport 80 -j DNAT --to-destination 10.5.242.156:80
ubuntu@myubuntuserver:~$

Here is the result when we visit the new Web server from our own computer.
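If you prefer checking from a terminal instead of a browser, something like the following should print the modified title (assuming curl is installed on your computer):

myusername@mycomputer:~$ curl -s http://5.102.145.245/ | grep '<title>'
<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>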

Benchmarks

We are benchmarking the CPU, the memory and the disk. Note that our VPS has a single vCPU.
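If you are following along, note that sysbench is not preinstalled; assuming the universe repository is enabled, it can be added with a single command:

ubuntu@myubuntuserver:~$ sudo apt install sysbench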

CPU

We are benchmarking the CPU using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=cpu run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 10000

Test execution summary:
    total time:                          10.9448s
    total number of events:              10000
    total time taken by event execution: 10.9429
    per-request statistics:
         min:                                  0.96ms
         avg:                                  1.09ms
         max:                                  2.79ms
         approx.  95 percentile:               1.27ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   10.9429/0.00

ubuntu@myubuntuserver:~$

The total time for the CPU benchmark with one thread was 10.94s. With two threads, it was 10.23s. With four threads, it was 10.07s.
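Presumably only the --num-threads parameter changes between those runs, for example:

ubuntu@myubuntuserver:~$ sysbench --num-threads=2 --test=cpu run
ubuntu@myubuntuserver:~$ sysbench --num-threads=4 --test=cpu run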

Memory

We are benchmarking the memory using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=memory run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
Done.

Operations performed: 104857600 (1768217.45 ops/sec)

102400.00 MB transferred (1726.77 MB/sec)

Test execution summary:
    total time:                          59.3013s
    total number of events:              104857600
    total time taken by event execution: 47.2179
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.80ms
         approx.  95 percentile:               0.00ms

Threads fairness:
    events (avg/stddev):           104857600.0000/0.00
    execution time (avg/stddev):   47.2179/0.00

ubuntu@myubuntuserver:~$

The total time for the memory benchmark with one thread was 59.30s. With two threads, it was 62.17s. With four threads, it was 62.57s.

Disk

We are benchmarking the disk using dd with the following parameters.

ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 21,1995 s, 50,6 MB/s
ubuntu@myubuntuserver:~$

It took about 21 seconds to write a 1GB file in 1024 blocks of 1MB each, with the DSYNC flag. The throughput was 50.6MB/s. Subsequent invocations were around 50MB/s as well.
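As an extra experiment (not part of the original run), one could drop the dsync flag to see how much the synchronous writes cost, and then remove the test file:

ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024
ubuntu@myubuntuserver:~$ rm testfile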

ZFS pool free space

Here is the free space in the ZFS pool after creating one container, the one with nginx installed and its packages upgraded:

ubuntu@myubuntuserver:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
myzfspool  4,97G   811M  4,18G         -    11%    15%  1.00x  ONLINE  -
ubuntu@myubuntuserver:~$

And here it is again, just after a second container (new and empty) was created:

ubuntu@myubuntuserver:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
myzfspool  4,97G   822M  4,17G         -    11%    16%  1.00x  ONLINE  -
ubuntu@myubuntuserver:~$

Thanks to copy-on-write in ZFS, new containers take up very little space. Only the files that are added or modified contribute additional space.
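One way to dig further is to list the datasets in the pool; the command below is standard zfs, though exactly how LXD names its container datasets is an assumption on my part and may differ on your system:

ubuntu@myubuntuserver:~$ sudo zfs list -r myzfspool

Each container should appear as its own dataset, cloned from the image, which is why a freshly created container starts out almost free.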

Conclusion

We saw how to launch an Ubuntu 16.04 VPS on cloudscale.ch and then how to configure LXD.

We created a container with nginx, and configured iptables so that the Web server is accessible from the Internet.

Finally, we ran some benchmarks for the vCPU, the memory and the disk.

Dustin Kirkland: Howdy, Windows! A Six-part Series about Ubuntu-on-Windows for Linux.com

Planet Ubuntu - Tue, 08/09/2016 - 06:26

I hope you'll enjoy a shiny new 6-part blog series I recently published at Linux.com.
  1. The first article is a bit of back story, perhaps a behind-the-scenes look at the motivations, timelines, and some of the work performed between Microsoft and Canonical to bring Ubuntu to Windows.
  2. The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
  3. The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
  4. The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
  5. The fifth article demonstrates how to write, compile, and execute your first program in 7 different compiled programming languages (C, C++, Fortran, Golang).
  6. The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
I really enjoyed writing these. Hopefully you'll try some of the examples, and share your experiences using Ubuntu native utilities on a Windows desktop. You can find the source code of the programming examples on GitHub and Launchpad.
Cheers,
Dustin

The Fridge: Ubuntu Weekly Newsletter Issue 477

Planet Ubuntu - Mon, 08/08/2016 - 19:42

Welcome to the Ubuntu Weekly Newsletter. This is issue #477 for the week August 1 – 7, 2016, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Chris Sirrs
  • Aaron Honeycutt
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
