Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Jo Shields: A quick introduction to Flatpak

Sun, 12/04/2016 - 03:44

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There’s also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.
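For a sense of what such a manifest looks like, here is a minimal sketch (the app id, runtime version, module list and URL are all illustrative placeholders, not the real MonoDevelop manifest):

```shell
# Write a hypothetical minimal Flatpak manifest; flatpak-builder reads this
# JSON to produce the application. Names and versions here are examples only.
cat > com.example.MonoDevelop.json <<'EOF'
{
  "app-id": "com.example.MonoDevelop",
  "runtime": "org.freedesktop.Runtime",
  "runtime-version": "1.4",
  "sdk": "org.freedesktop.Sdk",
  "command": "monodevelop",
  "modules": [
    {
      "name": "gtk2",
      "sources": [ { "type": "archive", "url": "https://example.org/gtk2.tar.xz" } ]
    }
  ]
}
EOF
# With flatpak-builder installed, a build would then look something like:
# flatpak-builder --repo=my-repo build-dir com.example.MonoDevelop.json
```

Anything the runtime doesn't provide (here, Gtk+2) appears as a module and gets bundled into the application.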

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result: as of the upcoming 0.8.0 release of Flatpak, going from a clean install of the flatpak package to a working MonoDevelop is a single command: flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.

There are some lingering experience issues due to the sandbox which are on my radar. “Run on external console” doesn’t work, for example, or “open containing folder”. There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs. I don’t think that will ever quite be there, since there’s just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Ross Gammon: My Open Source Contributions June – November 2016

Sat, 12/03/2016 - 04:52

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.

Debian
  • Uploaded a new version of the debian-multimedia blends metapackages
  • Uploaded the latest abcmidi
  • Uploaded the latest node-process-nextick-args
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared a new node-chroma-js package, but this is unfortunately blocked by several out-of-date & missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.
Ubuntu
  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday. I had had no time to chase for endorsements from previous sponsors, and the application was not really clear about the fact that I was not actually applying for upload permission yet. No matter, I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship if there is anyone reading this that can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area in the future.
  • Took part in the testing and release of the Ubuntu Studio Trusty 14.04.5 point release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety Beta 1 release.
  • Prepared a backport of Ansible, but before I could chase up what to do about the fact that ansible-fireball was no longer part of the Ansible package, someone else did the backport without noticing my bug. So I closed the bug.
  • Prepared an update of the Ubuntu Studio meta-packages. This was sponsored by Jeremy Bicha.
  • Prepared an update to the ubuntustudio-default-settings package. This switched the Ubuntu Studio desktop theme to Numix-Blue, and reverted some commits to drop the ubuntustudio-lightdm-theme package from the archive. This had caused quite a bit of controversy and discussion on IRC due to the transition being a little too close to the release date for Yakkety. This was sponsored by Iain Lane (Laney).
  • Prepared the Numix Blue update for the ubuntustudio-lightdm-theme package. This was also sponsored by Iain Lane (Laney). I should thank Krytarik for the initial Numix Blue theme work (on the lightdm theme & default settings packages).
  • Provided a patch for gfxboot-theme-ubuntu, which had a bug that was regularly reported during ISO testing: the “Try Ubuntu Studio without installing” option was not a translatable string and always appeared in English. Colin Watson merged this, so hopefully it will be translated by the time of the next release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety 16.10 release.
  • After a hint from Jeremy Bicha, I prepared a patch that adds a desktop file for Imagemagick to the ubuntustudio-default-settings package. This will give us a working menu item in Ubuntu Studio whilst we wait for the bug to be fixed upstream in Debian. Next month I plan to finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition, including dropping ubuntustudio-lightdm-theme from the Ubuntu Studio seeds. I will include this fix at the same time.
Other
  • At other times when I have had a spare moment, I have been working on resurrecting my old Family History website. It was originally produced in my Windows XP days, and I was no longer able to edit it in Linux. I decided to convert it to Jekyll. First I had to extract the old HTML from where the website is hosted using the HTTrack Website Copier. Now, I am in the process of switching the structure to the standard Jekyll template approach. I will need to switch to a nice Jekyll-based theme, as the old theming was pretty complex. I pushed the code to my GitHub repository for safe keeping.
Plan for December

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

Ubuntu
  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages.
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages.
  • Reapply to become a Contributing Developer.
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in.
Other
  • Continue working to convert my Family History website to Jekyll.
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.

Timo Aaltonen: Mesa 12.0.4 backport available for testing

Fri, 12/02/2016 - 15:28

Hi!

I’ve uploaded Mesa 12.0.4 for xenial and yakkety to my testing PPA for you to try out. 16.04 shipped with 11.2.0 so it’s a slightly bigger update there, while yakkety is already on 12.0.3 but the new version should give radeon users a 15% performance boost in certain games with complex shaders.

Please give it a spin and report to the (yakkety) SRU bug if it works or not, and mention the GPU you tested with. At least Intel Skylake seems to still work fine here.

UPDATE:

The ppa now has 12.0.5 for both xenial & yakkety. There have been no comments to the SRU bug about success/failure, so feel free to add a comment here instead.


Costales: Fairphone 2 & Ubuntu

Fri, 12/02/2016 - 10:54
At UbuCon Europe I was able to see first-hand the progress of Ubuntu Touch on the Fairphone 2.

Ubuntu Touch & Fairphone 2

The Fairphone 2 is a unique phone. As its name suggests, it is a phone that is ethical towards the world: it uses no child labour, it is built from minerals over which no blood was spilled, and it even cares about the waste it generates.

Front and back

On the software side it runs several operating systems, and at last Ubuntu is one of them.

Your choice

The Ubuntu port is implemented by the UBports project, which is advancing by leaps and bounds every week.

When I tried the phone, I was surprised by the speed of Unity, similar to that of my BQ E4.5.
The camera is good enough, and the battery life is acceptable.
I especially loved the quality of the screen; its sharpness is evident at a glance.
As for applications, I tried several from the Store without any problems.

Case

In short: a great operating system for a great phone :) A win:win

If you are interested in contributing as a developer to this port, I recommend this Telegram group: https://telegram.me/joinchat/AI_ukwlaB6KCsteHcXD0jw

All images are CC BY-SA 2.0.

Ubuntu Insights: Cloud Chatter: November 2016

Fri, 12/02/2016 - 05:01

Welcome to our November edition. We begin with details on our latest partnership with Docker. Next up, we bring you a co-hosted webinar with PLUMgrid exploring how enterprises can build and manage highly scalable OpenStack clouds. We also have a number of exciting announcements with partners including Microsoft, the Cloud Native Computing Foundation and Open Telekom Cloud. Take a look at our top blog posts for interesting tutorials and videos. And finally, don’t miss out on our round up of industry news.

A Commercial Partnership With Docker

Docker and Canonical have announced an integrated Commercially Supported (CS) Docker Engine offering on Ubuntu, providing Canonical customers with a single path for support of the Ubuntu operating system and CS Docker Engine in enterprise Docker operations.

As part of this agreement, Canonical will provide Level 1 and Level 2 technical support for CS Docker Engine, backed by Docker, Inc. providing Level 3 support.
Learn more

Webinar: Secure, scale and simplify your OpenStack deployments

In our latest on-demand webinar, we explore how enterprises and telcos can build and manage highly scalable OpenStack clouds with BootStack, Juju and PLUMgrid. Arturo Suarez, Product Manager for BootStack at Canonical, and Justin Moore, Principal Solutions Architect at PLUMgrid, discuss common issues users run into when running OpenStack at scale, and how to work around them using solutions such as BootStack, Juju and PLUMgrid ONS.

Watch on-demand

In Other News

Microsoft loves Linux: SQL Server Public Preview available on Ubuntu

Canonical announced that the next public release of Microsoft’s SQL Server is now available for Ubuntu. SQL Server on Ubuntu now provides freedom of choice for developers and organisations alike, whether on premises or in the cloud. With SQL Server on Ubuntu, there are significant cost savings, performance improvements, and the ability to scale and deploy additional storage and compute resources more easily without adding more hardware. Learn more

Canonical launches fully managed Kubernetes and joins the CNCF

Canonical recently joined the Cloud Native Computing Foundation (CNCF), expanding the Canonical Distribution of Kubernetes to include consulting, integration and fully-managed on-prem and on-cloud Kubernetes services. Ubuntu’s leadership in the adoption of Linux containers, and Canonical’s definition of a new class of application and a new approach to operations, are only some of the key contributions being made. Learn more

Open Telekom Cloud joins Certified Public Cloud

T-Systems, a subsidiary of Deutsche Telekom, recently launched its own Open Telekom Cloud platform, based on Huawei’s OpenStack and hardware platforms. Canonical and T-Systems have announced their partnership to provide certified Ubuntu images on all LTS versions of Ubuntu to users of their cloud services. Learn more

The newsletter also rounds up top blog posts from Insights, industry news, Ubuntu Cloud in the news, OpenStack & NFV, containers & storage, and big data / machine learning / deep learning.

Raphaël Hertzog: My Free Software Activities in November 2016

Fri, 12/02/2016 - 04:45

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8, fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting new CVEs every month.

Then I spent quite some time reviewing all the entries in dla-needed.txt. I wanted to get rid of some misleading/no-longer-applicable comments and at the same time help Olaf, who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough), such as those affecting dwarfutils, dokuwiki and irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately.

Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1, fixing CVE-2016-9427. The patch was large and had to be manually backported as it was not applying cleanly.

The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp which was going to be removed from testing due to #841581. After looking at that bug, it turns out the bug was fixed in libcrypto++ 5.6.4-3 and I thus closed it.

I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbqsql package, which had been rejected from NEW.

I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2 but I still adopted the package in the pkg-security team as it is a security relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications and the discussion sparked quite some interest both in Debian mailing lists and up to LWN.

On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073 that was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour to craft a relatively clean patch that Guillem could apply. Unfortunately, Guillem did not yet manage to pull out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332 which was a request to remove live-build from Debian. While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit more on it. I reviewed all pending patches in the BTS and pushed many changes to git. I still have some pending changes to finish to prettify the Grub menu but I plan to upload a new version really soon now.

Misc bugs filed. I filed two upstream tickets on uwsgi to help fix currently open RC bugs on the package. I filed #844583 on sbuild to support arbitrary version suffix for binary rebuild (binNMU). And I filed #845741 on xserver-xorg-video-qxl to get it fixed for the xorg 1.19 transition.

Zim. While trying to fix #834405 and update the required dependencies, I discovered that I had to update pygtkspellcheck first. Unfortunately, its package maintainer was MIA (missing in action) so I adopted it first as part of the python-modules team.

Distro Tracker. I fixed a small bug that resulted in an ugly traceback when we got queries with a non-ASCII HTTP_REFERER.

Thanks

See you next month for a new summary of my activities.


Ubuntu Insights: Canonical’s Distribution of Kubernetes Reduces Operational Friction

Thu, 12/01/2016 - 08:00

Linux containers (LXC) are one of the hottest technologies in the market today. Developers are adopting containers, especially Docker, as a way to speed-up development cycles and deliver code into testing or production environments much faster than traditional methods. With the largest base of LXC, LXD, and Docker users, Ubuntu has long been the platform of choice for developers driving innovation with containers and is widely used to run infrastructure like Kubernetes as a result. Due to customer demand, Canonical recently announced a partnership with Google to deliver the Canonical Distribution of Kubernetes.


Marco Ceppi, Engineering Manager at Canonical, tells our container story at KubeCon 2016

Explaining Containers and Canonical’s Distribution of Kubernetes

First, a bit of background: containers offer an alternative to traditional virtualization. Containers allow organizations to virtually run multiple Linux systems on a single kernel without the need for a hypervisor. One of the most promising features of containers is the ability to put more applications onto a physical server than you could with virtual machines. There are two types of containers – machine and process.

Machine containers (sometimes called OS containers) allow developers and organizations to configure, install, and run applications, multiple processes, or libraries within a single container. They create an environment where companies can manage distributed software solutions across various environments, operating systems, and configurations. Machine containers are largely used by organizations to “lift and shift” legacy applications from on-premise infrastructure to the cloud. Process containers (sometimes called application containers), by contrast, share the host kernel and run only a single process or command. Process containers are especially valuable for creating microservices or API function calls that are fast, efficient, and optimized. They also allow developers to deploy services and solutions more efficiently and on time without having to deploy virtual machines.

Ubuntu is the container OS (operating system) used by a majority of Docker developers and deployments worldwide, and Kubernetes is the leader in coordinating process containers across a cluster, enabling high-efficiency DevOps, horizontal scaling, and support for 12-factor apps. Our Distribution of Kubernetes allows organizations to manage and monitor their containers across all major public clouds, and within private infrastructures. Kubernetes is effectively the air traffic controller for managing how containers are deployed.

Even as the cost of software has declined, the cost to operate today’s complex and distributed solutions has increased, as many companies have found themselves managing these systems in a vacuum. Even for experts, deploying and operating containers and Kubernetes at scale can be a daunting task. However, deploying Ubuntu, Juju for software modeling, and Canonical’s Kubernetes distribution helps organizations make deployment simple. Further, we have certified our distribution of Kubernetes to work with most major public clouds as well as on-premise infrastructure like VMware or Metal as a Service (MAAS) solutions, thereby eliminating many of the integration and deployment headaches.

A new approach to IT operations

Containers are only part of the major change in the way we think about software. Organisations are facing fundamental limits in their ability to manage escalating complexity, and Canonical’s focus on operations has proven successful in enabling cost-effective scale-out infrastructure. Canonical’s approach dramatically increases the ability of IT operations teams to run ever more complex and large scale systems.

Leading open source projects like MAAS, LXD, and Juju help enterprises to operate in a hybrid cloud world. Kubernetes extends the diversity of applications which can now be operated efficiently on any infrastructure.

Moving to Ubuntu and to containers enables an organization to reduce overhead and improve operational efficiency. Canonical’s mission is to help companies to operate software on their public and private infrastructure, bringing Netflix-style efficiency to the enterprise market.

Canonical views containers as one of the key technologies IT and DevOps organizations will use as they work to become more cost effective and based in the cloud. Forward-looking enterprises are moving from proof of concepts (POCs) to actual production deployments, and the window for competitive advantage is closing.

For more information on how we can help with education, consulting, and our fully-managed or on cloud Kubernetes services, check out the Canonical Distribution of Kubernetes.

Ubuntu Podcast from the UK LoCo: S09E40 – Dirty Dan’s Dark Delight - Ubuntu Podcast

Thu, 12/01/2016 - 08:00

It’s Season Nine Episode Forty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Dan Kermac are connected and speaking to your brain.

The same line up as last week are here again for another episode.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We review the nexdock and how it works with the Raspberry Pi 3, Meizu Pro 5 Ubuntu Phone, bq M10 FHD Ubuntu Tablet, Android, Dragonboard 410c, Roku, Chromecast, Amazon FireTV and laptops from Dell and Entroware.

  • We share a Command Line Lurve:

sudo apt install netdiscover
sudo netdiscover

The output looks something like this:

 IP             At MAC Address      Count  Len  MAC Vendor / Hostname
 -----------------------------------------------------------------------------
 192.168.2.2    fe:ed:de:ad:be:ef   1      42   Unknown vendor
 192.168.2.1    da:d5:ba:be:fe:ed   1      60   TP-LINK TECHNOLOGIES CO.,LTD.
 192.168.2.11   ba:da:55:c0:ff:ee   1      60   BROTHER INDUSTRIES, LTD.
 192.168.2.30   02:02:de:ad:be:ef   1      60   Elitegroup Computer Systems Co., Ltd.
 192.168.2.31   de:fa:ce:dc:af:e5   1      60   GIGA-BYTE TECHNOLOGY CO.,LTD.
 192.168.2.107  da:be:ef:15:de:af   1      42   16)
 192.168.2.109  b1:gb:00:bd:ba:be   1      60   Denon, Ltd.
 192.168.2.127  da:be:ef:15:de:ad   1      60   ASUSTek COMPUTER INC.
 192.168.2.128  ba:df:ee:d5:4f:cc   1      60   ASUSTek COMPUTER INC.
 192.168.2.101  ba:be:4d:ec:ad:e5   1      42   Roku, Inc
 192.168.2.106  ba:da:55:0f:f1:ce   1      42   LG Electronics
 192.168.2.247  f3:3d:de:ad:be:ef   1      60   Roku, Inc
 192.168.3.2    ba:da:55:c0:ff:33   1      60   Raspberry Pi Foundation
 192.168.3.1    da:d5:ba:be:f3:3d   1      60   TP-LINK TECHNOLOGIES CO.,LTD.
 192.168.2.103  da:be:ef:15:d3:ad   1      60   Unknown vendor
 192.168.2.104  b1:gb:00:bd:ba:b3   1      42   Unknown vendor
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Flickr.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Daniel Pocock: Using a fully free OS for devices in the home

Thu, 12/01/2016 - 06:11

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately includes spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.
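For a sense of how compact that configuration can be, here is an illustrative sketch of a Shorewall zone declaration file (/etc/shorewall/zones) for a typical router; this is an example layout only, not a complete firewall configuration:

```
# /etc/shorewall/zones — zone declarations only; the policies and rules
# referring to these zones live in separate files (policy, rules, interfaces)
#ZONE   TYPE
fw      firewall    # the router itself
net     ipv4        # the upstream/WAN side
loc     ipv4        # the local LAN
```

Policies such as "loc to net: ACCEPT, net to all: DROP" are then expressed against these zone names, which is arguably easier to audit than clicking through a vendor web UI.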

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance and in these cases, the decision about what to buy will be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?
Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.

Ubuntu Insights: UbuCon Europe – a sure sign of community strength

Thu, 12/01/2016 - 04:42

UbuCon Europe, 18-20 Nov 2016, Essen, Germany

Still recovering from a great UbuCon Europe, which took place just a couple of weeks ago! This year's event was attended by 170 people from 20 countries, about half coming from Germany. Almost everyone was (obviously) running Ubuntu and had been around in the community for years.

The event was organised by the Ubuntu community, led by Sujeevan Vijayakumaran – congrats to him! The venue, the schedule, the social events and the sponsorship were all flawlessly executed, and it showed the level of commitment the European Ubuntu community has. So much so that the community is already looking forward to the next big UbuCon in Paris!

The venue was the Unperfekthaus, central to Essen. A beautifully weird and inclusive 4-storey coworking / artist’s / maker-space / café / event location.

Selection of talks
We had a good Canonical attendance at the event that included Jane, Thibaut, Thomas, Alan, Christian, Daniel, Martin and Michael! Many long-standing community members such as Laura Fautley (czajkowski), Elizabeth Krumbach Joseph (pleia2), cm-t arudy, José Antonio Rey, Marcos Costales, Marius Gripsgård (mariogrip), Philip Ballew and Nathan Haines also attended and presented too – all worthy of a mention.

Jane gave a keynote at the event to a packed room. The talks and workshop from Alan, Daniel, Michael and Thibaut were focused on snaps and IoT and were well-received. There seemed to be a lot of interest in learning more about the new way of doing things and getting started immediately – what we like to hear.

Malte Lantin from Microsoft gave an overview of Bash on Ubuntu on Windows. The talk started by covering why Microsoft worked on the project and some history of previous attempts at *nix compatibility and POSIX compliance, along with some technical infrastructure details. A few demos were given and technical questions from the audience were answered.

Elizabeth K Joseph gave a talk reminding the audience that while the Ubuntu project has been branching out to new technology, the traditional desktop is still there, and still needs volunteers to help. She outlined a great selection of ways in which new contributors can get involved with practical advice and links to resources all the way through.

Alan gave a “zero to store” talk about snaps and the store. The examples were well picked and lots of fun – the feedback after the session was mostly amazement at how easy it has become to build and publish software. Michael’s session “digging deep into snapcraft”, in the following time-slot, was very well-attended, probably as a result.

At the snap workshop, everyone worked through the codelabs examples at their own pace, and we had some upstream participation (Limesurvey) as well.

During the final session, UbuCon Feedback and 2017 planning, some attendees new to the Ubuntu community commented on how they didn’t know anyone coming into the event. They felt welcome and made many new friends – so much so that there is some serious interest in creating a UbuCon in Romania as a result… let’s do it!

More info here with the event page and the schedule.

Zygmunt Krynicki: Ubuntu Core Gadget Snaps

Thu, 12/01/2016 - 01:29
Gadget snaps, the somewhat mysterious part of snappy that few people grok. Being a distinct snap type, next to the kernel, os and the most common app types, it gets some special roles. If you are on a classic system like Ubuntu, Debian or Fedora you don't really need or have one yet. Looking at all-snap core devices you will always see one. In fact, each snappy reference platform has one. But where are they?

Up until now the gadget snaps were a bit hard to find. They were out there, but you had to have a good amount of luck and twist your tongue at the right angle to find them. That's all changed now. If you look at https://github.com/snapcore you will see a nice, familiar pattern of devicename-gadget. Each repository is dedicated to one device, so you will see a gadget snap for Raspberry Pi 2 or Pi 3, for example.

But there's more! Each of those github repositories is linked to a launchpad project that automatically mirrors the git repository, builds the snap and uploads it to the store and publishes the snap to the edge channel!

The work isn't over: as you will see, the gadget snaps are mostly in binary form, hand-made to work but still a bit too mysterious. The Canonical Foundations team is working on building them in a way that is friendlier to the community and easier to trace back to their source-code origins.

If you'd like to learn more about this topic then have a look at the snapd wiki page for gadget snaps.

Eric Hammond: Amazon Polly Text To Speech With aws-cli and Twilio

Wed, 11/30/2016 - 11:30

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.
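As a quick illustration of the SSML mentioned above, here is a small sketch of the kind of markup Polly accepts when speech is synthesized with --text-type ssml instead of plain text. The &lt;speak&gt;, &lt;break&gt; and &lt;prosody&gt; tags are standard SSML; the well-formedness check below is just a convenient habit before sending markup to the API:

```python
import xml.etree.ElementTree as ET

# A small SSML document of the kind Polly accepts with --text-type ssml.
# <speak>, <break> and <prosody> are standard SSML tags.
ssml = (
    '<speak>'
    'Hello. <break time="500ms"/> '
    'This sentence is spoken <prosody rate="slow">slowly</prosody>.'
    '</speak>'
)

# Sanity-check that the markup is well-formed XML before handing it to the API.
root = ET.fromstring(ssml)
print(root.tag)  # speak
```

The same string could then be passed to synthesize-speech in place of the plain-text message used in the examples below.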

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME

aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3

aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key
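If the uuid command-line tool isn't installed, the same kind of long, random key can be generated with the Python standard library — a sketch, with the variable name mirroring the shell one above:

```python
import uuid

# Equivalent in spirit to the `uuid -v4` shell call: a random,
# hard-to-guess S3 object key for the temporary audio file.
s3key = "audio-for-twilio/{}.mp3".format(uuid.uuid4())
print(s3key)
```

The resulting key can be used in the aws s3 cp command exactly as above.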

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..."        # Your Twilio allocated phone number
to_phone="+1..."          # Your phone number to call
TWILIO_ACCOUNT_SID="..."  # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."   # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=$to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated.

Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G." \
  "Werner's keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon Polly voice defaults can be stored in a $HOME/.ringspeak file.

Here is the source for the ringspeak command:

https://github.com/alestic/ringspeak

Tip: S3 Setup

Here is a sample command to configure an S3 bucket with automatic deletion of all keys after one day:

aws s3api put-bucket-lifecycle \
  --bucket "$s3bucket" \
  --lifecycle-configuration '{
    "Rules": [{
      "Status": "Enabled",
      "ID": "Delete all objects after 1 day",
      "Prefix": "",
      "Expiration": { "Days": 1 }
    }]
  }'

This is convenient because you don’t have to worry about knowing when Twilio completes the phone call to clean up the temporary speech audio files.

Tip: Twilio Setup

This isn’t the place for an entire Twilio howto, but I will say that it is about this simple to set up:

  1. Create a Twilio account

  2. Reserve a phone number through Twilio.

  3. Find the ACCOUNT SID and AUTH TOKEN for use in Twilio API calls.

When you are using the Twilio free trial, it requires you to verify phone numbers before calling them. To call arbitrary numbers, enter your credit card and fund the minimum of $20.

Twilio will only charge you for what you use (about a dollar a month per phone number, about a penny per minute for phone calls, etc.).

Closing

A lot is possible when you start integrating Twilio with AWS. For example, my daughter developed an Alexa skill that lets her speak a message for a family member and have it delivered by phone. Alexa triggers her AWS Lambda function, which invokes the Twilio API to deliver the message by voice call.

With Amazon Polly, these types of voice applications can sound better than ever.

Original article and comments: https://alestic.com/2016/11/amazon-polly-text-to-speech/

Elizabeth K. Joseph: Ohio LinuxFest 2016

Wed, 11/30/2016 - 11:29

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted I was thrilled to finally be able to attend. My employer at the time also pitched in as a Bronze sponsor of the conference and sent along a banner that showcased my talk, and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling so you can hook in your own, like if you wanted to use Grafana instead.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware and understanding of the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do and knowing that they aren’t just doing it to make your life difficult, or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. Modeled after a startup, she explained that they try not to over-plan the way many government organizations do (which can lead to failure); instead they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open-by-default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of the organizations they work with so that the tools they develop together will actually be used, as well as respecting the domain knowledge of the teams they’re working with. Slides from her talk are here, and include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote


Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach. This is the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based desktops. I spent a couple weeks there with Dave, Beth Lynn and Nancy (alas, unable to come to OLF) so it was great to see them again. I learned more about the work they’re continuing to do, having switched to using mostly Xubuntu on new installs which was written about here. On a personal level it was a lot of fun connecting with them too, we really bonded during our adventures over there.


Tyler Lamb, Dave Sevick, Elizabeth K. Joseph, Beth Lynn Eicher

Saturday morning began with a keynote from Ethan Galstad on Becoming the Next Tech Entrepreneur. Ethan is the founder of Nagios, and in his talk he traced some of the history of his work on getting Nagios off the ground as a proper project and company, and his belief in why technologists make good founders. In his work he drew from his industry and market expertise as a technologist and was able to play to the niche he was focused on. He also suggested that folks look to what other founders have done that has been successful, and recommended some books (notably Founders at Work and Work the System). Finally, he walked through some of what can be done to get started, including the stages of idea development, a basic business plan (don’t go crazy), a rough 1.0 release that you can have some early customers test and get feedback from, and then into marketing, documenting and focused product development. He concluded by stressing that open source project leaders are already entrepreneurs, and that the free users of your software are your initial market.

Next up was Robert Foreman’s Mixed Metaphors: Using Hiera with Foreman where he sketched out the work they’ve done that preserves usage of Hiera’s key-value store system but leverages Foreman for the actual orchestration. The mixing of provisioning and orchestration technologies is becoming more common, but I hadn’t seen this particular mashup.

My talk was A Tour of OpenStack Deployment Scenarios. This is the same talk I gave at FOSSCON back in August, walking the audience through a series of ways that OpenStack could be configured to provide compute instances, metering and two types of storage. For each I gave a live demo using DevStack. I also talked about several other popular components that could be added to a deployment. Slides from my talk are here (PDF), which also link to a text document with instructions for how to run the DevStack demos yourself.


Thanks to Vitaliy Matiyash for taking a picture during my talk! (source)

At lunch I met up with my Ubuntu friends to catch up. We later met at the booth where they had a few Ubuntu phones and tablets that gained a bunch of attention throughout the event. This event was also my first opportunity to meet Unit193 and Svetlana Belkin in person, both of whom I’ve worked with on Ubuntu for years.


Unit193, Svetlana Belkin, José Antonio Rey, Elizabeth K. Joseph and Nathan Handler

After lunch I went over to see David Griggs of Dell give us “A Look Under the Hood of Ohio Supercomputer Center’s Newest Linux Cluster.” Supercomputers are cool and it was interesting to learn about the system it was replacing, the planning that went into the replacement and workload cut-over and see in-progress photos of the installation. From there I saw Ryan Saunders speak on Automating Monitoring with Puppet and Shinken. I wasn’t super familiar with the Shinken monitoring framework, so this talk was an interesting and very applicable demonstration of the benefits.

The last talk I went to before the closing keynotes was from my Computer Reach friends Dave Sevick and Tyler Lamb. They presented their “Island Server” imaging server that’s now being used to image all of the machines that they re-purpose and deploy around the world. With this new imaging server they’re able to image both Mac and Linux PCs from one Macbook Pro rather than having a different imaging server for each. They were also able to do a live demo of a Mac and Linux PC being imaged from the same Island Server at once.


Tyler and Dave with the Island Server in action

The event concluded with a closing keynote by a father and daughter duo, Joe and Lily Born, on The Democratization of Invention. Joe Born first found fame in the 90s when he invented the SkipDoctor CD repair device, and is now the CEO of Aiwa which produces highly rated Bluetooth speakers. Lily Born invented the tip-proof Kangaroo Cup. The pair reflected on their work and how the idea to product in the hands of customers has changed in the past twenty years. While the path to selling SkipDoctor had a very high barrier to entry, globalization, crowd-funding, 3D printers and internet-driven word of mouth and greater access to the press all played a part in the success of Lily’s Kangaroo cup and the new Aiwa Bluetooth speakers. While I have no plans to invent anything any time soon (so much else to do!) it was inspiring to hear how the barriers have been lowered and inventors today have a lot more options. Also, I just bought an Aiwa Exos-9 Bluetooth Speaker, it’s pretty sweet.

My conference adventures concluded with a dinner with my friends José, Nathan and David, all three of whom I also spent time with at FOSSCON in Philadelphia the month before. It was fun getting together again, and we wandered around downtown Columbus until we found a nice little pizzeria. Good times.

More photos from the Ohio LinuxFest here: https://www.flickr.com/photos/pleia2/albums/72157674988712556

Ubuntu Insights: Docker and Canonical Partner on CS Docker Engine for Ubuntu users

Wed, 11/30/2016 - 07:01

  • New commercial agreement provides integrated enterprise support and SLAs for CS Docker Engine

SAN FRANCISCO and LONDON, 30th November 2016 – Docker and Canonical today announced an integrated Commercially Supported (CS) Docker Engine offering on Ubuntu, providing Canonical customers with a single path for support of the Ubuntu operating system and CS Docker Engine in enterprise Docker operations.

This commercial agreement provides for a streamlined operations and support experience for joint customers. Stable, maintained releases of Docker will be published and updated by Docker, Inc as snap packages on Ubuntu, enabling direct access to the Docker Inc build of Docker for all Ubuntu users. Canonical will provide Level 1 and Level 2 technical support for CS Docker Engine, backed by Docker, Inc providing Level 3 support. Canonical will ensure global availability of secure Ubuntu images on Docker Hub.

Ubuntu is widely used as a devops platform in container-centric environments. “The combination of Ubuntu and Docker is popular for scale-out container operations, and this agreement ensures that our joint user base has the fastest and easiest path to production for CS Docker Engine devops,” said John Zannos, Vice President of Cloud Alliances and Business Development, Canonical.

CS Docker Engine is a software subscription to Docker’s flagship product backed by business day and business critical support. CS Docker Engine includes orchestration capabilities that enable an operator to define a declarative state for the distributed applications running across a cluster of nodes, based on a decentralized model that allows each Engine to be a uniform building block in a self-organizing, self-healing distributed system.

“Through our partnership, we provide users with more choice by bringing the agility, portability, and security benefits of the Docker CS engine to the larger Ubuntu community,” said Nick Stinemates, Vice President, Business Development and Technical Alliances at Docker. “Additionally, Ubuntu customers will be able to take advantage of official Docker support – a service that is not available from most Linux distributions. Together, we have aligned to make it even easier for organizations to create new efficiencies across the entire software supply chain.”

For more information please visit www.docker.com/products/docker-engine and www.ubuntu.com/cloud.

Jono Bacon: Luma Giveaway Winner – Garrett Nay

Mon, 11/28/2016 - 17:08

A little while back I kicked off a competition to give away a Luma Wifi Set.

The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video gives a good introduction to what story games are all about, featuring members of the group:

Story Games from Candace Fields on Vimeo.

Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group.

In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set.

What struck me about this example was that it gets to the heart of what community should be and often is – providing a welcoming, supportive environment for people with like-minded ideas and interests.

While much of my work focuses on the complexities of building collaborative communities and the intricacies of how people work together, we should always remember the huge value of what I refer to as “read” communities, where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

The post Luma Giveaway Winner – Garrett Nay appeared first on Jono Bacon.

The Fridge: Ubuntu Weekly Newsletter Issue 489

Mon, 11/28/2016 - 15:37

Welcome to the Ubuntu Weekly Newsletter. This is issue #489 for the week November 21 – 27, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Chris Guiver
  • Elizabeth K. Joseph
  • David Morfin
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Seif Lotfy: Rustifying IronFunctions

Mon, 11/28/2016 - 14:47

As mentioned in my previous blog post, there is a new open-source, Lambda-compatible, on-premise, language-agnostic, serverless compute service called IronFunctions.

While IronFunctions is written in Go, Rust is a much-admired language, so it was decided to add support for it in the fn tool.

So now you can use the fn tool to create and publish functions written in Rust.

Using Rust with functions

The easiest way to create an IronFunctions function in Rust is via cargo and fn.

Prerequisites

First, create an empty Rust project as follows:

$ cargo init --name func --bin

Make sure the project name is func and it is of type bin. Now just edit your code; a good example is the following "Hello" program:

use std::io;
use std::io::Read;

fn main() {
    let mut buffer = String::new();
    let stdin = io::stdin();
    if stdin.lock().read_to_string(&mut buffer).is_ok() {
        println!("Hello {}", buffer.trim());
    }
}

You can find this example code in the repo.

Once done you can create an iron function.

Creating a function

$ fn init --runtime=rust <username>/<funcname>

In my case it's fn init --runtime=rust seiflotfy/rustyfunc, which will create the func.yaml file required by functions.

Building the function

$ fn build

This will create a Docker image <username>/<funcname> (again, in my case seiflotfy/rustyfunc).

Testing

You can run this locally without pushing it to functions yet by running:

$ echo Jon Snow | fn run
Hello Jon Snow

Publishing

In the directory of your rust code do the following:

$ fn publish -v -f -d ./

This will publish your code to your functions service.

Running it

Now to call it on the functions service:

$ echo Jon Snow | fn call seiflotfy rustyfunc

which is the equivalent of:

$ curl -X POST -d 'Jon Snow' http://localhost:8080/r/seiflotfy/rustyfunc

Next

In the next post I will write a more computationally intensive Rust function to test/benchmark IronFunctions, so stay tuned :D

Ubuntu Insights: Canonical and AWS partner to deliver world-class support in the cloud

Mon, 11/28/2016 - 10:38

In today’s software world, support is often an afterthought, or an expensive contract used only to keep up with the latest patches, updates, and versions. Hidden costs of upgrading software, including downtime, scheduling, and planning, are also factors that need to be considered. Canonical does not believe the traditional norms of support apply. Our leading support product, Ubuntu Advantage (UA), is a professional support package that provides Ubuntu users with the backing needed to be successful.

This week at AWS’ Re:invent 2016 conference we are announcing the ability to purchase UA Virtual Guest via the AWS Marketplace. Ubuntu Advantage Virtual Guest is designed for virtualized enterprise workloads on AWS that use official Ubuntu images. The tooling, technology, and expertise of UA are available via the AWS Marketplace with just a few clicks. It includes:

  • Access to Landscape (SaaS version), the systems management tool for using Ubuntu at scale
  • Canonical Livepatch Service, which allows users to apply critical kernel patches without rebooting on Ubuntu 16.04 LTS images using the Linux 4.4 kernel
  • Up to 24×7 telephone and web support and the option of a dedicated Canonical support engineer
  • Access to the Canonical Knowledge Hub and regular security bug fixes

Further, the added benefits of accessing Ubuntu Advantage through the AWS Marketplace SaaS model are hourly pricing rates based on customers’ actual Ubuntu usage on AWS and their SLA requirements, plus centralized billing through the user’s AWS Marketplace account. Customers pay only for what they consume within their account.

Innovation and leadership on display at Re:invent 2016

The ability to buy UA through the AWS Marketplace is just the beginning. At Re:invent we will be showcasing many of our solutions that support Big Software including:

Containers are changing how software is deployed and operated. Canonical is actively innovating around containers with our machine-container solution LXD, which provides the density and efficiency of containers with the manageability and security of virtual machines, and with enhanced partnerships around process-container orchestration with partners like Docker, the CNCF and others. Finally, our Canonical Distribution of Kubernetes provides a ‘pure K8s’ experience across any cloud.

Juju for service modeling and Charms to make software deployments painless. Juju is an open source service modeling platform that makes it easy to deploy and operate complex, interlinked, dynamic software stacks. Juju has hundreds of preconfigured services called Juju Charms available in the Juju store. For example, Juju makes it easy to stand-up and scale up or down Hadoop, Kubernetes, Ceph, MySQL, etc. all without disruption to the cloud environment.

Snaps for product interoperability and enablement. Snaps is a new packaging format used to securely bundle any software as an app, making updates and rollbacks simple. A snap is a fancy zip file containing an application together with its dependencies, and a description of how it should be safely run on your system, especially the different ways it should talk to other software. Most importantly snaps are secure, sandboxed, containerised applications isolated from the underlying system and from other applications. Snaps allow the safe installation of apps from any vendor on mission critical devices and desktops. Canonical’s Ubuntu Core is the leading open source Snap-enabled production operating system which powers anything from robots, drones, industrial IoT gateways, network equipment, digital signage, mobile base stations, refrigerators, and more.

Even as the cost of software has declined, the expense of operating today’s complex and distributed solutions has increased, as many companies have found themselves managing these systems in a vacuum. Even for experts, deploying and operating containers and Kubernetes at scale can be a daunting task. However, deploying Ubuntu, Juju for software modeling, and Canonical’s Kubernetes distribution helps organizations simplify deployment. Further, we have certified our distribution of Kubernetes to work with most major public clouds as well as on-premise infrastructure like VMware or bare-metal Metal as a Service (MaaS) solutions, thereby eliminating many of the integration and deployment headaches.

Most of these solutions can be used and deployed in production with your AWS EC2 credentials today. What’s more, they are supported with a professional SLAs from Canonical. We are also looking for innovative ISVs and forward thinking systems integrators to help us drive value for our customers and bring compelling solutions to market.

At AWS re:Invent 2016, we will be talking about all of this and more at booth 2341 in Hall D.

Ubuntu Insights: Mir is not only about Unity8

Mon, 11/28/2016 - 08:41

This is a guest post by Alan Griffiths, Software engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Mir is a project to support the management of applications on the display(s) of a computer. It can be compared to the more familiar X-Windows used on the current Ubuntu desktop (and many others). I’ll discuss some of the motivation for Mir below, but the point of this post is to clarify the relationship between Mir and Unity8.

Most of the time you hear about Mir it is mentioned alongside Unity8. This is not surprising as Unity8 is Canonical’s new user interface shell and the thing end-users interact with. Mir “only” makes this possible. Unity8 is currently used on phones and tablets and is also available as a “preview” on the Ubuntu 16.10 desktop.

Here I want to explain that Mir is available to use without Unity8, either for an alternative shell or as a simpler interface for embedded environments: information kiosks, electronic signage, etc. The evidence for this is the Mir “Abstraction Layer” (MirAL), which provides three important elements:

1. libmiral.so – a stable interface to Mir providing basic window management;
2. miral-shell – a sample shell offering both “traditional” and “tiling” window management; and,
3. miral-kiosk – a sample “kiosk” offering only basic window management.

The miral-shell and miral-kiosk sample servers are available from the zesty archive and Kevin Gunn has been blogging about providing a miral-kiosk based “kiosk” snap on “Voices”. I’ll give a bit more detail about using these examples below, but there is more (including “how to” develop your own alternative Mir server) on my “voices” blog.

USING MIR

Mir is a set of programming libraries, not an application in its own right. That means it needs applications to use it for anything to happen. There are two ways to use the Mir libraries: as a “client” when writing an application, or as a “server” when implementing a shell. Clients (as with X11) typically use a toolkit rather than using Mir (or X11) directly.
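To give a feel for the server side, here is a minimal Mir server sketch built on libmiral. This is illustrative only: the header and policy-class names are assumptions based on the MirAL examples and may differ between releases, so treat it as a shape rather than a copy-paste recipe.

```cpp
// Minimal Mir server using libmiral (link with -lmiral).
// MirRunner drives the server's main loop; the window management
// policy plugged into it supplies the shell behaviour.
// NOTE: header and policy names below are assumed from the MirAL
// examples and may vary by release.
#include <miral/runner.h>
#include <miral/set_window_management_policy.h>
#include <miral/canonical_window_manager.h>  // header name assumed

int main(int argc, char const* argv[])
{
    miral::MirRunner runner{argc, argv};
    return runner.run_with(
        {miral::set_window_management_policy<miral::CanonicalWindowManagerPolicy>()});
}
```

Everything shell-specific lives in the policy class, which is what lets the same libmiral interface back both a full desktop shell and a single-application kiosk.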

There’s Mir support available in GTK, Qt and SDL2. This means that applications using these toolkits should “just work” on Mir when that support is enabled in the toolkit (which is the default in Ubuntu). In addition there’s Xmir, an X11 server that runs on Mir, which allows X based applications to run on Mir servers.
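In practice a toolkit's backend is usually selected with an environment variable. The variable values below match the GTK, Qt and SDL2 Mir backends as shipped in Ubuntu at the time of writing, but check them against your toolkit versions:

```shell
# Ask each toolkit to use its Mir backend instead of X11
$ GDK_BACKEND=mir gedit                      # GTK
$ QT_QPA_PLATFORM=ubuntumirclient qterminal  # Qt
$ SDL_VIDEODRIVER=mir 7kaa                   # SDL2
```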

But a Mir client needs a corresponding Mir server before anything can happen. Over the last development cycle the Mir team has produced MirAL as the recommended way to write Mir servers and a package “miral-examples” by way of demonstration. For zesty, the development version of Ubuntu, you can install from the archive:

$ sudo apt install miral-examples mir-graphics-drivers-desktop qtubuntu-desktop

For other platforms you would need to build MirAL yourself (see An Example Mir Desktop Environment for details).

With miral-examples installed you can run a Mir server as a window on your Unity7 desktop and start clients (such as gedit) within it as follows:

$ miral-shell&
$ miral-run gedit

This will give you (very basic) “traditional” desktop window management. Alternatively, you can try “tiling” window management:

$ miral-shell --window-manager tiling&
$ miral-run qterminal

Or the (even more basic) kiosk:

$ miral-kiosk&
$ miral-run 7kaa

None of these Mir servers provide a complete “desktop” with support for a “launcher”, notifications, etc. but they demonstrate the potential to use Mir without Unity8.

THE PROBLEM MIR SOLVES

The X-Windows system has been, and remains, immensely successful in providing a way to interact with computers. It provides a consistent abstraction across a wide range of hardware and drivers. This underlies many desktop environments and graphical user interface toolkits and lets them work together on an enormous range of computers.

But it comes from an era when computers were used very differently from now, and there are real concerns today that are hard to meet given the long legacy that X needs to support.

In 1980 most computers were big things managed by specialists, and connecting them to one another was “bleeding edge”. In that era the cost of developing software was such that any benefit to be gained by one application “listening in” on another was negligible: there were few computers, they were isolated, and the work they did was not open to financial exploitation.

X-Windows developed in this environment and, through a series of extensions, has adapted to many changes. But it is inherently insecure: any application can find out what is happening on the display (and affect it). You can write applications like Xeyes (which tracks the cursor with its “eyes”) or “Tickeys” (which listens to the keyboard to generate typewriter noises). The reality is that any and all applications can track and manipulate almost all of what is happening. That is how X based desktops like Unity7, Gnome, KDE and the rest work.

The open nature of window management in X-Windows is poorly adapted to a world with millions of computers connected to the Internet, being used for credit card transactions and online banking, and managed by non-experts who willingly install programs from complete strangers. There has been a growing realization that adapting X-Windows to the new requirements of security and graphics performance isn’t feasible.

There are at least two open source projects aimed at providing a replacement: Mir and Wayland. While some see these as competing, there are a lot of areas where they have common interests: they both need to interact with other software that previously assumed X11, and much of the work needed to introduce support for alternatives benefits both projects.

Canonical’s replacement for X-Windows, Mir, exposes to an application only the information it needs to have (so no snooping on keystrokes, or tracking the cursor). It can meet the needs of the current age and can exploit modern hardware such as graphics processors.
