Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Stéphane Graber: Ubuntu Core in LXD containers

Tue, 01/31/2017 - 13:27

What’s Ubuntu Core?

Ubuntu Core is a version of Ubuntu that’s fully transactional and entirely based on snap packages.

Most of the system is read-only. All installed applications come from snap packages and all updates are done using transactions, meaning that should anything go wrong at any point during a package or system update, the system can revert to the previous state and report the failure.
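
As a quick illustration (not from the original post), the same rollback can also be triggered by hand; a minimal sketch, assuming the “core” snap is the one to roll back:

$ snap revert core   # switch back to the previously installed revision
$ snap list          # confirm which revision is now active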

The current release of Ubuntu Core is called series 16 and was released in November 2016.

Note that on Ubuntu Core systems, only snap packages using strict confinement can be installed (no “classic” snaps), and a good number of snaps will not fully work in this environment or will require some manual intervention (creating users and groups, …). Ubuntu Core improves on a weekly basis as new releases of snapd and the “core” snap are put out.

Requirements

As far as LXD is concerned, Ubuntu Core is just another Linux distribution. That being said, snapd does require unprivileged FUSE mounts and AppArmor namespacing and stacking, so you will need the following (a quick check is sketched after the list):

  • An up to date Ubuntu system using the official Ubuntu kernel
  • An up to date version of LXD
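
A quick, illustrative sanity check of both from a terminal:

$ uname -r       # expect an official Ubuntu kernel
$ lxc --version  # expect a recent LXD release
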
Creating an Ubuntu Core container

The Ubuntu Core images are currently published on the community image server.
You can launch a new container with:

stgraber@dakara:~$ lxc launch images:ubuntu-core/16 ubuntu-core
Creating ubuntu-core
Starting ubuntu-core

The container will take a few seconds to start, first executing a first-stage loader that determines which read-only image to use and sets up the writable layers. You don’t want to interrupt the container at that stage, and “lxc exec” will likely just fail as pretty much nothing is available at that point.

Seconds later, “lxc list” will show the container IP address, indicating that it’s booted into Ubuntu Core:

stgraber@dakara:~$ lxc list
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |         IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+
| ubuntu-core | RUNNING | 10.90.151.104 (eth0) | 2001:470:b368:b2b5:216:3eff:fee1:296f (eth0) | PERSISTENT | 0         |
+-------------+---------+----------------------+----------------------------------------------+------------+-----------+

You can then interact with that container the same way you would any other:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap list
Name       Version     Rev  Developer  Notes
core       16.04.1     394  canonical  -
pc         16.04-0.8   9    canonical  -
pc-kernel  4.4.0-45-4  37   canonical  -
root@ubuntu-core:~#

Updating the container

If you’ve been tracking the development of Ubuntu Core, you’ll know that those versions above are pretty old. That’s because the disk images that are used as the source for the Ubuntu Core LXD images are only refreshed every few months. Ubuntu Core systems will automatically update once a day and then automatically reboot to boot onto the new version (and revert if this fails).

If you want to immediately force an update, you can do it with:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap refresh
pc-kernel (stable) 4.4.0-53-1 from 'canonical' upgraded
core (stable) 16.04.1 from 'canonical' upgraded
root@ubuntu-core:~# snap version
snap    2.17
snapd   2.17
series  16
root@ubuntu-core:~#

And then reboot the system and check the snapd version again:

root@ubuntu-core:~# reboot

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap version
snap    2.21
snapd   2.21
series  16
root@ubuntu-core:~#

You can get a history of all snapd interactions with:

stgraber@dakara:~$ lxc exec ubuntu-core snap changes
ID  Status  Spawn                 Ready                 Summary
1   Done    2017-01-31T05:14:38Z  2017-01-31T05:14:44Z  Initialize system state
2   Done    2017-01-31T05:14:40Z  2017-01-31T05:14:45Z  Initialize device
3   Done    2017-01-31T05:21:30Z  2017-01-31T05:22:45Z  Refresh all snaps in the system

Installing some snaps

Let’s start with the simplest snaps of all, the good old Hello World:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install hello-world
hello-world 6.3 from 'canonical' installed
root@ubuntu-core:~# hello-world
Hello World!

And then move on to something a bit more useful:

stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install nextcloud
nextcloud 11.0.1snap2 from 'nextcloud' installed

Then hit your container over HTTP and you’ll get to your newly deployed Nextcloud instance.
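
For instance (an illustrative check, reusing the container address from the earlier “lxc list” output), you can confirm the web server answers from the host:

stgraber@dakara:~$ curl -sI http://10.90.151.104/ | head -n1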

If you feel like testing the latest LXD straight from git, you can do so with:

stgraber@dakara:~$ lxc config set ubuntu-core security.nesting true
stgraber@dakara:~$ lxc exec ubuntu-core bash
root@ubuntu-core:~# snap install lxd --edge
lxd (edge) git-c6006fb from 'canonical' installed
root@ubuntu-core:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 address should be used (CIDR subnet notation, "auto" or "none") [default=auto]?
What IPv6 address should be used (CIDR subnet notation, "auto" or "none") [default=auto]?
LXD has been successfully configured.

And because container inception never gets old, let’s run Ubuntu Core 16 inside Ubuntu Core 16:

root@ubuntu-core:~# lxc launch images:ubuntu-core/16 nested-core
Creating nested-core
Starting nested-core
root@ubuntu-core:~# lxc list
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
|    NAME     |  STATE  |        IPV4         |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+
| nested-core | RUNNING | 10.71.135.21 (eth0) | fd42:2861:5aad:3842:216:3eff:feaf:e6bd (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+-----------------------------------------------+------------+-----------+

Conclusion

If you ever wanted to try Ubuntu Core, this is a great way to do it. It’s also a great tool for snap authors to make sure their snap is fully self-contained and will work in all environments.

Ubuntu Core is a great fit for environments where you want to ensure that your system is always up to date and is entirely reproducible. This does come with a number of constraints that may or may not work for you.

And lastly, a word of warning. Those images are considered good enough for testing, but aren’t officially supported at this point. We are working towards getting fully supported Ubuntu Core LXD images on the official Ubuntu image server in the near future.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Ubuntu Insights: Industrial IoT Revolution with Raspberry Pi Compute Module 3

Tue, 01/31/2017 - 08:05

The Raspberry Pi Foundation released the long-awaited Raspberry Pi Compute Module 3. The great news is that you get 4GB of storage, 1GB of memory and the same processor as the Raspberry Pi 3 for $30. This means it is now a real option for anybody wanting to build industrial products and run their own app store on top. Ubuntu Core supports the Raspberry Pi 3, so you can create your own app-enabled products and run your own app store in 2017.

To show a simple example:

The Revolution Pi is a set of industrial PLC-type devices that use the compute module on the inside. Now, with a powerful Compute Module 3 and Ubuntu Core, nothing will stop innovators from using apps/snaps to control and manage industrial machinery.

Need some industrial protocol? Just install the app/snap. Need a cloud integration? Just install the app/snap. Want to do advanced machine learning on the edge? There is an app/snap for that. Any integration with any external system can be defined as a snap. Soon you will be able to sell commercial snaps from your own Brand Store.

The mobile app store market is saturated, but now the counter has been reset to zero and we can have app stores for lawnmowers, vacuum cleaners, elevators, pumps, air conditioning, vending machines, heating and any type of industrial machinery.

Soon we should see standardised compute module I/O boards for different markets, e.g. robotics, drones, PLCs/ALCs, vending machines, and many more. 2017 promises to be a very interesting year for industrial innovators!

Original blog post source here.

Harald Sitter: Neon OEM Mod…arghhh

Tue, 01/31/2017 - 05:25

For years and years, Ubuntu’s installer, Ubiquity, has had an OEM mode. And for years and years I have known that it doesn’t really work with the Qt interface.

An understandable consequence of not actually having any real-life use cases, of course, but disappointing all the same. As part of the KDE Slimbook project I took a second and then a third look at its problems, and while it still is not perfect, it is substantially better than before.

The thing to understand about the OEM mode is that technically it splits the installation in two. The OEM does a special installation which leads to a fully functional system that the OEM can modify, then puts the system into “shipping” mode once satisfied with the configuration. After this, the system will boot into a special Ubiquity that only offers the configuration part of the installation process (i.e. user creation, keyboard setup etc.). Once the customer has completed this process the system is ready to go, with any additional software the OEM might have installed during preparation.

Therein lies the problem, in a way. The OEM configuration is design-wise kind of fiddly considering how the Qt interface is set up and how it interacts with other pieces of software (most notably KWin). This is doubly true for KDE neon, where we use a slightly modified Ubiquity with the fullscreen mode removed. However, as you might have guessed, not using fullscreen leads to all sorts of weird behavior in the OEM setup, where practically speaking the user is meant to be locked out of the system, but technically they are in a minimal session with a window manager. So, in theory, one could close the window; when started, the window would be placed as though more windows were meant to appear; it would have a minimize button; etc. etc. All fairly terrible. However, it is also not too tricky to fix once one has identified all the problems. Arguably that is the biggest feat with any installer change: finding all possible scenarios where things can go wrong takes days.

So, to improve this, the KDE Visual Design Group’s Jens Reuterberg and I again descended into the hellish pit that is Qt 4 QWidget CSS theming on a code base that has seen way too many cooks over the years. I like the result much better than what we started out with, even if it isn’t perfect.

The sidebar has had visual complexity removed to bring it closer to a Breeze look and feel. Window decoration elements not wanted during OEM setup are removed by setting up suitable KWin rules when preparing for first boot.
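
As an illustrative sketch of what such a rule could look like (hypothetical key names and values; the exact rules neon ships may differ), one could force the window decoration off for the installer window:

# Hypothetical sketch: a KWin rule forcing decorations off for the installer
kwriteconfig5 --file kwinrulesrc --group General --key count 1
kwriteconfig5 --file kwinrulesrc --group 1 --key Description "OEM setup: no decoration"
kwriteconfig5 --file kwinrulesrc --group 1 --key wmclass "ubiquity"
kwriteconfig5 --file kwinrulesrc --group 1 --key wmclassmatch 1
kwriteconfig5 --file kwinrulesrc --group 1 --key noborder true
kwriteconfig5 --file kwinrulesrc --group 1 --key noborderrule 2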

Additionally, we will hopefully soon have enough translations to push out a new slideshow featuring slightly more varied visuals than the current “Riding the Waves” picture.

For additional information on how to use the current OEM mode check out the documentation on the KDE UserBase.

Ubiquity code
Slideshow code (the translations setup is of most interest)

Raphaël Hertzog: My Free Software Activities in January 2017

Tue, 01/31/2017 - 03:33

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 10 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • I reviewed multiple CVEs affecting ntp and opted to mark them no-dsa (just like what has been done for jessie).
  • I pinged the upstream authors of jbig2dec (here) and XML::Twig (by private email) where the reports had not yet gotten any upstream reply.
  • I asked on oss-security for more details about CVE-2016-9584 because it was not clear whether it had already been reported upstream. Turns out that it was. I then updated the security tracker accordingly.
  • Once I got a reply on jbig2dec, I started to backport the patch pointed out by upstream; it was hard work. When I was done, I had also received by private email the fuzzed file at the origin of the report… unfortunately that file did not trigger the same problem with the old jbig2dec version in wheezy. That said, valgrind still identified reads outside of allocated memory. At this point I had a closer look at the git history, only to discover that the last 3 years of work consisted mainly of security fixes for similar cases that were never assigned CVEs. I thus opened a discussion about how to handle this situation.
  • Matthias Geerdsen reported in #852610 a regression in libtiff4. I confirmed the problem and spent multiple hours coming up with a fix. The patch that introduced the regression was Debian-specific, as upstream has not fixed those issues yet. I released a fixed package in DLA-610-2.
Debian packaging

With the deep freeze approaching, I made some last-minute updates:

  • schroot 1.6.10-3 fixing some long-standing issues with the way bind mounts are shared (#761435) and other important fixes.
  • live-boot 1:20170112 to fix a failure when booting on a FAT filesystem and other small fixes.
  • live-config 5.20170112 merging useful patches from the BTS.
  • I finished the update of hashcat 3.30 with its new private library and fixed RC bug #851497 at the same time. The work was initiated by fellow members of the pkg-security team.
Misc work

Sponsorship. I sponsored a new asciidoc upload demoting a dependency into a recommends (#850301). I sponsored a new upstream version of dolibarr.

Discussions. I seconded quite a few changes prepared by Russ Allbery on debian-policy. I helped Scott Kitterman with #849584 about a misunderstanding of how the postfix service files are supposed to work. In #849913, I discussed a regression in the building of cross-compilers and made a patch to avoid the problem. In the end, Guillem developed a better fix.

Bugs. I investigated #850236 where a django test failed during the first week after each leap year. I filed #853224 on desktop-base about multiple small problems in the maintainer scripts.

Thanks

See you next month for a new summary of my activities.


Benjamin Mako Hill: Supporting children in doing data science

Mon, 01/30/2017 - 21:54

As children use digital media to learn and socialize, others are collecting and analyzing data about these activities. In school and at play, these children find that they are the subjects of data science. As believers in the power of data analysis, we think this falls short of data science’s potential to promote innovation, learning, and power.

Motivated by this fact, we have been working over the last three years as part of a team at the MIT Media Lab and the University of Washington to design and build a system that attempts to support an alternative vision: children as data scientists. The system we have built is described in a new paper—Scratch Community Blocks: Supporting Children as Data Scientists—that will be published in the proceedings of CHI 2017.

Our system is built on top of Scratch, a visual, block-based programming language designed for children and youth. Scratch is also an online community with over 15 million registered members who share their Scratch projects, remix each others’ work, have conversations, provide feedback, bookmark or “love” projects they like, follow other users, and more. Over the last decade, researchers—including us—have used the Scratch online community’s database to study the youth using Scratch. With Scratch Community Blocks, we attempt to put the power to programmatically analyze these data into the hands of the users themselves.

To do so, our new system adds a set of new programming primitives (blocks) to Scratch so that users can access public data from the Scratch website from inside Scratch. The blocks in the new system give users access to project and user metadata, information about social interaction, and data about what types of code are used in projects. The full palette of blocks to access different categories of data is shown below.

[Images: the block palettes for project metadata, user metadata, and site-wide statistics]

The new blocks allow users to programmatically access, filter, and analyze data about their own participation in the community. For example, with the simple script below, we can find whether we have followers in Scratch who report themselves to be from Spain, and what their usernames are.

Simple demonstration of Scratch Community Blocks

In designing the system, we had two primary motivations. First, we wanted to support avenues through which children can engage in curiosity-driven, creative explorations of public Scratch data. Second, we wanted to foster self-reflection with data. As children looked back upon their own participation and coding activity in Scratch through the projects they and their peers made, we wanted them to reflect on their own behavior and learning in ways that shaped their future behavior and promoted exploration.

After designing and building the system over 2014 and 2015, we invited a group of active Scratch users to beta test the system in early 2016. Over four months, 700 users created more than 1,600 projects. The diversity and depth of users’ creativity with the new blocks surprised us. Children created projects that gave the viewer a personalized doughnut-chart visualization of their coding vocabulary on Scratch, rendered the viewer’s number of followers as scoops of ice cream on a cone, attempted to find whether “love-its” for projects are more common on Scratch than “favorites”, and told users how “talkative” they were by counting the cumulative string length of project titles and descriptions.

We found that children, rather than making canonical visualizations such as pie-charts or bar-graphs, frequently made information representations that spoke to their own identities and aesthetic sensibilities. A 13-year-old girl had made a virtual doll dress-up game where the player’s ability to buy virtual clothes and accessories for the doll was determined by the level of their activity in the Scratch community. When we asked about her motivation for making such a project, she said:

I was trying to think of something that somebody hadn’t done yet, and I didn’t see that. And also I really like to do art on Scratch and that was a good opportunity to use that and mix the two [art and data] together.

We also found at least some evidence that the system supported self-reflection with data. For example, after seeing a project that showed its viewers a visualization of their past coding vocabulary, a 15-year-old realized that he does not do much programming with the pen-related primitives in Scratch, and wrote in a comment, “epic! looks like we need to use more pen blocks. :D.”

[Images: doughnut visualization, ice-cream visualization, and data-driven doll dress-up]

Additionally, we noted that as children made and interacted with projects built with Scratch Community Blocks, they started to think critically about the implications of data collection and analysis. These conversations are the subject of another paper (also being published in CHI 2017).

In a 1971 article called “Teaching Children to be Mathematicians vs. Teaching About Mathematics”, Seymour Papert argued for the need for children doing mathematics vs. learning about it. He showed how Logo, the programming language he was developing at that time with his colleagues, could offer children a space to use and engage with mathematical ideas in creative and personally motivated ways. This, he argued, enabled children to go beyond knowing about mathematics to “doing” mathematics, as a mathematician would.

Scratch Community Blocks has not yet been launched for all Scratch users and has several important limitations we discuss in the paper. That said, we feel that the projects created by children in our beta test demonstrate the real potential for children to do data science, and not just know about it, provide data for it, and have their behavior nudged and shaped by it.

This blog post and the paper it describes are collaborative work with Sayamindu Dasgupta. We have also received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Hal Abelson. Financial support came from the US National Science Foundation. The paper itself is open access so anyone can read the entire paper here. This blog post was also posted on Sayamindu Dasgupta’s blog, on the Community Data Science Collective blog, and in several other places.

The Fridge: Ubuntu Weekly Newsletter Issue 496

Mon, 01/30/2017 - 21:40

Welcome to the Ubuntu Weekly Newsletter. This is issue #496 for the week January 23 – 29, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Paul White
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Stuart Langridge: Niobium

Mon, 01/30/2017 - 16:20
[41 is] the smallest integer whose reciprocal has a 5-digit repetend. That is a consequence of the fact that 41 is a factor of 99999. — Wikipedia

I don’t understand a lot of things, these days. I don’t understand what a 5-digit repetend is, or why 41 being a factor of 99999 has to do with anything. I don’t understand how much all this has changed in the last thirteen years of posts. I don’t understand when building web stuff got hard. I don’t understand why I can’t find anyone who sells wall lights that look nice without charging a hundred notes for each one, which is a bit steep when you need six. I don’t understand why I can’t get thinner and still eat as many sandwiches as I want. I don’t understand an awful lot of why the world suddenly became a terrible, frightening, mean-spirited, mocking, vitriolic place. And most of what I do understand about that, I hate.

We all sorta thought that we were moving forward; there was less hatred of the Other, fewer knives out, not as much fear and spite as there used to be. And it turns out that it wasn’t gone; it was just suppressed, building up and up underneath the volcano cap until the bad guys realised that there’s nothing actually stopping them doing terrible things and there’s nothing anyone can do about it. So the Tories moved from daring to talk about shutting down the NHS to actually doing it and nobody said anything. Or, more accurately, a bunch of people said things and it didn’t make any difference. Trump starts restricting immigration and targeting Muslims directly and puts a Nazi adviser on the National Security Council and nobody said anything. Or, more accurately, a bunch of people said things and it didn’t make any difference. I don’t want to give in to hatred — it leads to the Dark Side — and so I don’t want to hate them for doing this. But I do hate that I have to fight to avoid it. I hate that I feel so helpless. I hate that the only way I know to fight back is to actually fight — to become them. I hate that they turn everyone into malign, terrible copies of themselves. I hate that they don’t understand. I hate that I don’t understand. I hate that I just hate all the time now.

I’m forty-one. Apparently, according to Wikipedia, the US Navy Fleet Ballistic Missile nuclear submarines from the George Washington, Ethan Allen, Lafayette, James Madison, and Benjamin Franklin classes were nicknamed “41 for Freedom”. 41 for freedom. Maybe that’s not a bad motto for me, being 41. Do more for freedom. My freedom, my family’s freedom, my friends’ freedom, my city’s freedom, the freedom of people I’ve never met and never will. None of us are free if one of us is chained, and if you don’t say it’s wrong then that says it right.

Two photos from today.

One is of Niamh, and her present to me for my birthday: a light box like the ones you get outside cinemas and churches and fast food places and we can put messages for one another on it. I’m hugely pleased with it. The other is of today’s anti-Trump demo in Victoria Square, at which Reverend David Butterworth, of the Methodist Church, said: “Whatever we can do to make this a more peaceful city and a more inclusive city, and to stand up and be counted, we must and should do it together. The only way that Donald Trump will win is if the good people of Birmingham, and of other cities that we’re twinned with like Chicago, stay silent.” People standing up, and a demonstration of what they’re standing up for. Not a bad way to start making me being 41 for freedom, perhaps.

Happy birthday to me. And for those of you less lucky than me today: I hope we can help.

Ubuntu Insights: 48% of people unaware their IoT devices pose a security threat

Mon, 01/30/2017 - 08:49

This is a guest post by agency Wildfire. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

LONDON, U.K. – 30 January, 2017 – Nearly half (48%) of citizens remain unaware that their connected devices could be infiltrated and used to conduct a cyber attack. That’s according to a new IoT security whitepaper which was published today by Canonical – the makers of Ubuntu.

The report, which includes research from over 2,000 UK citizens, highlights the lack of impact that consumer awareness campaigns are having when it comes to internet security and the internet of things.

Despite the government’s latest online cyber awareness campaign costing over £6 per visit, 37% of consumers still believe that they are not ‘sufficiently aware’ of the risks that connected devices pose. What’s more, consumers seem largely ignorant of the escalating threat demonstrated by the high spike in IoT attacks in 2016. 79% say they have not read or seen any recent news stories regarding IoT security or privacy, and 78% claim that their distrust of IoT security has not increased in the last year.

The research also highlights the limited benefits of better education: Consumers are simply not that motivated to actively apply security updates, with the majority applying them only ‘occasionally’ or in some cases – not at all.

Commenting on these findings, Thibaut Rouffineau, Head of Devices Marketing at Canonical said: “These figures are troubling, and should be a wake-up call for the industry. Despite good intentions, government campaigns for cyber awareness and IoT security still have a long way to go. But then that’s the point: Ultimately the IoT industry needs to step up and take on responsibility. Government education of consumers and legislation will have a part to play, but overall the industry needs to take charge of keeping devices up to date and find a way to eliminate any potential vulnerabilities from devices before they can cause issues, rather than placing the burden on consumers.”

To download Ubuntu’s full ‘Taking charge of the IoT’s security vulnerabilities’ report, visit: http://ubunt.eu/ey14Y2

ENDS

Notes to Editor
‘Connected devices’ were defined to respondents as including e.g. Wi-Fi routers, webcams, smart thermostats or boilers, smart hoovers and other such smart home devices, but excluding computers and phones.

Survey methodology
The survey was conducted on Canonical’s behalf by research company Opinium in December 2016 using a panel of 2,000 UK adults.

About Canonical
Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu.

Established in 2004, Canonical is a privately held company.

For further information please visit https://www.ubuntu.com

Image source here.

Harald Sitter: KDE Applications in Ubuntu Snap Store

Mon, 01/30/2017 - 07:43

Following the recent addition of easy DBus service snapping in the snap binary bundle format, I am happy to say that we now have some of our KDE Applications in the Ubuntu 16.04 Snap Store.

To use them you need to first manually install the kde-frameworks-5 snap. Once you have it installed you can install the applications. Currently we have available:

  • ktuberling – The most awesome game ever!
  • kbruch – Learn how to do fractions (I almost failed the first exercise :O)
  • katomic – Fun and education in one
  • kblocks – Tetris-like game
  • kmplot – Plotting mathematical functions
  • kgeography – An education application for learning states/countries/capitals
  • kollision – Casual ball game
  • kruler – A screen ruler to measure pixel distance on your screen

The Ubuntu 16.04 software center comes with Snap store support built in, so you can simply search for an application and should find a snap version to install. As we are still working on stabilizing Snap support in Plasma’s Discover, for now you have to resort to a terminal to test the snaps on KDE neon.

To get started using the command line interface of snap you can do the following:

sudo snap install kde-frameworks-5
sudo snap install kblocks

All currently available snaps are auto-generated. For some technical background, check out my earlier blog post on snapping KDE applications. In the near future I hope to get manually maintained snaps built automatically as well. From-git delivery to the edge channel is also still very much a desired feature. Stay tuned.

Ubuntu Insights: Installing a DIY Bare Metal GPU cluster for Kubernetes

Mon, 01/30/2017 - 05:14

I don’t know if you have ever seen one of the Orange Boxes from Canonical.

These are really sleek machines. They contain 10 Intel NUCs, plus an 11th one for the management. They are used as a demonstration tool for big software stacks such as OpenStack, Hadoop, and, of course, Kubernetes.

They are freely available from TranquilPC, so if you are an R&D team, or just interested in having a neat little cluster at home, I encourage you to have a look.

However, despite their immense qualities they lack a critical piece of kit that Deep Learning geeks cherish: GPUs!!

In this blog/tutorial we will learn how to build, install and configure a DIY GPU cluster that uses a similar architecture. We start with hardware selection and experimentation, then dive into MAAS (Metal as a Service), a bare metal management system. Finally we look at Juju to deploy Kubernetes on Ubuntu, add CUDA support, and enable it in the cluster.

Hardware: Adding fully fledged GPUs to Intel NUCs?

When you look at them, it is hard to tell how to insert normal GPU cards into the tiny form factor of Intel NUCs. However, they have an M.2 NGFF port. This is essentially a PCI-e 4x port, just in a different form factor.

And, there is this which converts M.2 into PCI-e 4x and that which converts PCI-e 4x into 16x.

Sooo… theoretically, we can connect GPUs to Intel NUCs. Let’s try it out!!

POC: First node

Let us start simple with a single Intel NUC node and see if we can make a GPU work with it.

Already owning a NUC from the previous generation (NUC5i7SYH) and an old Radeon 7870, I just had to buy:

  • a PSU to power the GPU: for this, I found that the Corsair AX1500i was the best deal on the market, capable of powering up to 10 GPUs!! Perfect if I wanted to scale this with more nodes.
  • Adapters:
    M.2 to PCI-e 4x
    Riser 4x -> 16x
  • A hack to activate power on a PSU without having it connected to a “real” motherboard. Thanks to Michael Iatrou (@iatrou) for pointing me there.
  • Obviously a screen, keyboard, cables…

It’s aliiiiiiive! Ubuntu boot screen from the additional GPU running at 4x

At this point, we have proof it is possible; it’s time to start building a real cluster.

Adding little brothers

Bill of materials

For each of the workers, you will need:

Then, for the management nodes, 2x of the same NUCs as above, but without the GPU and with a smaller SSD.

And now overall components:

  • PSU: Corsair AX1500i
  • Switch: Netgear GS108PE. You can use a lower-end switch; I just had this one available. I didn’t do anything funky on the network side.
  • Raspberry Pi: Anything 2 or 3 version with 32GB micro SD
  • Spacers
  • ATX PSU Switch
Execution

If it does not fit in the box, remove the box. So first the motherboard of the NUC has to be extracted. Using a 3mm PVC sheet and the spacers, we can have a nice layout.

GPU View

On one side of the PVC, we attach the GPU so the power connector is visible at the bottom and the PCI-e port rises just slightly over the edge. The holes are 2.8mm, so the M3 spacers go through, but you need to screw them in a little bit so they don’t move.

Intel NUC View

On the other side, we drill the fixation holes for the SSD and Intel NUC so that the PCI-e riser cable is aligned in front of the PCI-e port of the GPU. You’ll also have to drill the SSD metal support a little bit.

As you can see in the picture, we place the riser between the PVC and the NUC so it looks nicer.

We repeat the operation 4 times, once for each node. Then, using our 50mm M3 hex spacers, we attach them with 3 screws between each “blade”, hook up everything to the network and… Tadaaaaa!!

Close up view from the NUC side

From the GPU side

Software: Installation of the cluster

Giving life to the cluster will require quite a bit of work on the software side.

Instead of a manual process, we will leverage powerful management tooling. This will give us the ability to re-purpose our cluster in the future.

The tool to manage the metal itself is MAAS (Metal As A Service). It is developed by Canonical to manage bare metal server fleets, and already powers the Ubuntu Orange Box.

Then to deploy, we will be using Juju, Canonical’s modelling tool, which has bundles to deploy the Canonical Distribution of Kubernetes.

Bare Metal Provisioning: Installing MAAS

First of all we need to install MAAS on the Raspberry Pi 2.

For the rest of this, we will assume that you have the following ready:

  • A Raspberry Pi 2 or 3 installed with Ubuntu Server 16.04
  • The board’s Ethernet port connected to a network with internet access, and configured
  • An additional USB to ethernet adapter, connected to our cluster switch
Network setup

The default Ubuntu image does not auto-configure the USB adapter. First we query ifconfig, expecting to see an eth1 (or similarly named) interface in addition to our eth0:

$ /sbin/ifconfig -a
eth0      Link encap:Ethernet  HWaddr b8:27:eb:4e:48:c6
          inet addr:192.168.1.138  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::ba27:ebff:fe4e:48c6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:150573 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39702 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:217430968 (217.4 MB)  TX bytes:3450423 (3.4 MB)

eth1      Link encap:Ethernet  HWaddr 00:0e:c6:c2:e6:82
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

We can now edit /etc/network/interfaces.d/eth1.cfg with

# hwaddress 00:0e:c6:c2:e6:82
auto eth1
iface eth1 inet static
    address 192.168.23.1
    netmask 255.255.255.0

then start eth1 with


$ sudo ifup eth1

and now we have a secondary interface set up

Base installation

Let’s first install the requirements:

$ sudo apt update && sudo apt upgrade -yqq
$ sudo apt install -yqq --no-install-recommends \
    maas \
    bzr \
    isc-dhcp-server \
    wakeonlan \
    amtterm \
    wsmancli \
    juju \
    zram-config

Let’s also use the occasion to fix the very annoying Perl locales bug that affects pretty much every Raspberry Pi around:

$ sudo locale-gen en_US.UTF-8

Now let’s activate zram to virtually increase our RAM by 1GB, by adding the lines below to /etc/rc.local:

modprobe zram && \
  echo $((1024*1024*1024)) | tee /sys/block/zram0/disksize && \
  mkswap /dev/zram0 && \
  swapon -p 10 /dev/zram0 && \
  exit 0

and do an immediate activation via

$ sudo modprobe zram && \
  echo $((1024*1024*1024)) | sudo tee /sys/block/zram0/disksize && \
  sudo mkswap /dev/zram0 && \
  sudo swapon -p 10 /dev/zram0

DHCP Configuration

DHCP will be handled by MAAS directly, so we don’t have to manage it ourselves. However, the way MAAS generates the default settings is pretty brutal, so you might want to tune them a little. Below is an /etc/dhcp/dhcpd.conf file that works and is a little fancier:

authoritative;
ddns-update-style none;
log-facility local7;

option subnet-mask 255.255.255.0;
option broadcast-address 192.168.23.255;
option routers 192.168.23.1;
option domain-name-servers 192.168.23.1;
option domain-name "maas";

default-lease-time 600;
max-lease-time 7200;

subnet 192.168.23.0 netmask 255.255.255.0 {
    range 192.168.23.10 192.168.23.49;
    host node00 {
        hardware ethernet B8:AE:ED:7A:B6:92;
        fixed-address 192.168.23.10;
    }
    ...
}

We also need to tell dhcpd to serve requests only on eth1, to prevent flooding our other networks. We do that by editing the INTERFACES option in /etc/default/isc-dhcp-server so it looks like:

# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
# Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACES="eth1"

and finally we restart DHCP with

$ sudo systemctl restart isc-dhcp-server.service

Simple Router Configuration

In our setup, the Raspberry Pi is the central point of the network. While MAAS provides DNS and DHCP by default, it does not operate as a gateway. Hence our nodes may very well end up cut off from the Internet, which we obviously do not want.

So first we activate IP forwarding in sysctl:

sudo touch /etc/sysctl.d/99-maas.conf
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-maas.conf
sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"

Then we need to link our eth0 and eth1 interfaces to allow traffic between them

$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

OK, so now we have traffic passing, which we can test by plugging anything into the LAN switch and trying to ping some internet website.
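
For instance, from any machine plugged into the LAN side (illustrative):

$ ping -c 3 ubuntu.com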

And we save that in order to make it survive a reboot

sudo sh -c "iptables-save > /etc/iptables.ipv4.nat"

and add this line to /etc/network/interfaces.d/eth1.cfg:

up iptables-restore < /etc/iptables.ipv4.nat Configuring MAAS Pre requisites $ sudo maas createadmin — username=admin — email=it@madeden.com

Then let’s get our API key and log in from the CLI:

$ sudo maas-region apikey --username=admin

Armed with the result of this command, just do:

$ # maas login
$ maas login admin http://localhost/MAAS/api/2.0

Or you can do it all in one command:

$ sudo maas-region apikey --username=admin | \
    maas login admin http://localhost/MAAS/api/2.0 -

Now via the GUI, in the network tab, we rename our fabrics to match LAN, WAN and WLAN.

Then we hit the LAN network, and via the Take Action button, we enable DHCP on it.

The only thing we have to do is to start the nodes once. They will be handled by MAAS directly, and will appear in the GUI after a few minutes.

They will have a random name, and nothing configured.

First of all, we will rename them. To ease things up in our experiment, we will use node00 and node01 for the two non-GPU nodes, and node02 to node05 for the GPU nodes.

After we name them, we will also (a CLI sketch of these steps follows the list):

  • tag the 2 management nodes cpu-only
  • tag the 4 workers gpu
  • set the power method to Manual
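
Roughly, the same can be done from the MAAS 2.x CLI; a sketch, using a placeholder system ID like those shown in the GUI (exact sub-commands may vary between MAAS releases):

$ maas admin machine update 4y3h8w hostname=node00 power_type=manual
$ maas admin tags create name=cpu-only
$ maas admin tags create name=gpu
$ maas admin tag update-nodes cpu-only add=4y3h8w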

We then have something like

Commissioning nodes

This is where the fun begins. We need to “commission” the nodes, in other words record information about them (HDD, CPU count…).

There is a bug in MAAS that blocks the deployment of systems. Look at comment #14 and apply the workaround by editing /etc/maas/preseeds/curtin_userdata, adding a delay in the reboot section so it looks like:

power_state:
  mode: reboot
  delay: 30

Then we commission via the Take Action button, selecting Commission and leaving the 3 other options unticked. Right after that we manually power on each of the nodes, and MAAS will do the rest, including powering them down at the end of the process. The UI will then look like:

MAAS commissioning nodes

MAAS from New To Commissioned

When commissioning is successful, we see all the values for HDD size, number of cores and memory filled in, and the node becomes Ready.
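
If you prefer to check from the CLI, something along these lines lists each machine's state (illustrative; assumes jq is installed and that the machines endpoint exposes these fields):

$ maas admin machines read | jq -r '.[] | "\(.hostname): \(.status_name)"'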

MAAS Logs at commissioning

Deploying with Juju

Bootstrapping the environment

The first thing we need to do is connect Juju to MAAS. We create a configuration file for the MAAS provider, maas-juju.yaml, with the following contents:

maas:
  type: maas
  auth-types: [oauth1]
  endpoint: http://<MAAS_IP>/MAAS

Understand the MAAS_IP address as the one from which Juju will interact with MAAS, including from the deployed nodes. In our setup, you can use the address of eth1 (192.168.23.1).

You can find more details on this page

Then we need to tell Juju to use MAAS:

$ juju add-cloud maas maas-juju.yaml
$ juju add-credential maas

Juju needs to bootstrap, which brings up a first control node hosting the Juju controller, the initial database and various other requirements. This node is the reason we have 2 management nodes: the second one will be our k8s master.

In our setup, our nodes only have manual power, since WoL was removed from MAAS with v2.0. This means we’ll need to trigger the bootstrap, wait for the node to be allocated, then start it manually.

$ juju bootstrap maas-controller maas
Creating Juju controller "maas-controller" on maas
Bootstrapping model "controller"
Starting new instance for initial controller
Launching instance
# This is where we start the node manually
WARNING no architecture was specified, acquiring an arbitrary node - 4y3h8w
Installing Juju agent on bootstrap instance
Preparing for Juju GUI 2.2.2 release installation
Waiting for address
Attempting to connect to 192.168.23.2:22
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Running apt-get upgrade
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: cloud-utils
Installing package: tmux
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz <https://streams.canonical.com/juju/tools/agent/2.0-beta15/juju-2.0-beta15-xenial-amd64.tgz>
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap agent installed
Bootstrap complete, maas-controller now available.

And the MAAS GUI

Initial bundle deployment

We deploy the bundle file k8s.yaml below:

series: xenial
services:
  "kubernetes-master":
    charm: "cs:~containers/kubernetes-master-6"
    num_units: 1
    to:
      - "0"
    expose: true
    annotations:
      "gui-x": "800"
      "gui-y": "850"
    constraints: tags=cpu-only
  flannel:
    charm: "cs:~containers/flannel-5"
    annotations:
      "gui-x": "450"
      "gui-y": "750"
  easyrsa:
    charm: "cs:~containers/easyrsa-3"
    num_units: 1
    to:
      - "0"
    annotations:
      "gui-x": "450"
      "gui-y": "550"
  "kubernetes-worker":
    charm: "cs:~containers/kubernetes-worker-8"
    num_units: 1
    to:
      - "1"
    expose: true
    annotations:
      "gui-x": "100"
      "gui-y": "850"
    constraints: tags=gpu
  etcd:
    charm: "cs:~containers/etcd-14"
    num_units: 1
    to:
      - "0"
    annotations:
      "gui-x": "800"
      "gui-y": "550"
relations:
  - - "kubernetes-master:kube-api-endpoint"
    - "kubernetes-worker:kube-api-endpoint"
  - - "kubernetes-master:cluster-dns"
    - "kubernetes-worker:kube-dns"
  - - "kubernetes-master:certificates"
    - "easyrsa:client"
  - - "kubernetes-master:etcd"
    - "etcd:db"
  - - "kubernetes-master:sdn-plugin"
    - "flannel:host"
  - - "kubernetes-worker:certificates"
    - "easyrsa:client"
  - - "kubernetes-worker:sdn-plugin"
    - "flannel:host"
  - - "flannel:etcd"
    - "etcd:db"
machines:
  "0":
    series: xenial
  "1":
    series: xenial

We can see that there are constraints on the nodes to force MAAS to pick GPU nodes for the workers and a CPU node for the master. We then pass the command:

$ juju deploy k8s.yaml

That is it. This is the only command we will need to get a functional k8s running!

added charm cs:~containers/easyrsa-3
application easyrsa deployed (charm cs:~containers/easyrsa-3 with the series "xenial" defined by the bundle)
added resource easyrsa
annotations set for application easyrsa
added charm cs:~containers/etcd-14
application etcd deployed (charm cs:~containers/etcd-14 with the series "xenial" defined by the bundle)
annotations set for application etcd
added charm cs:~containers/flannel-5
application flannel deployed (charm cs:~containers/flannel-5 with the series "xenial" defined by the bundle)
added resource flannel
annotations set for application flannel
added charm cs:~containers/kubernetes-master-6
application kubernetes-master deployed (charm cs:~containers/kubernetes-master-6 with the series "xenial" defined by the bundle)
added resource kubernetes
application kubernetes-master exposed
annotations set for application kubernetes-master
added charm cs:~containers/kubernetes-worker-8
application kubernetes-worker deployed (charm cs:~containers/kubernetes-worker-8 with the series "xenial" defined by the bundle)
added resource kubernetes
application kubernetes-worker exposed
annotations set for application kubernetes-worker
created new machine 0 for holding easyrsa, etcd and kubernetes-master units
created new machine 1 for holding kubernetes-worker unit
related kubernetes-master:kube-api-endpoint and kubernetes-worker:kube-api-endpoint
related kubernetes-master:cluster-dns and kubernetes-worker:kube-dns
related kubernetes-master:certificates and easyrsa:client
related kubernetes-master:etcd and etcd:db
related kubernetes-master:sdn-plugin and flannel:host
related kubernetes-worker:certificates and easyrsa:client
related kubernetes-worker:sdn-plugin and flannel:host
related flannel:etcd and etcd:db
added easyrsa/0 unit to machine 0
added etcd/0 unit to machine 0
added kubernetes-master/0 unit to machine 0
added kubernetes-worker/0 unit to machine 1
deployment of bundle "k8s.yaml" completed

Which translates in the GUI as:

$ juju status
MODEL    CONTROLLER       CLOUD/REGION  VERSION
default  maas-controller  maas          2.0-beta15

APP                VERSION  STATUS  EXPOSED  ORIGIN      CHARM              REV  OS
easyrsa            3.0.1    active  false    jujucharms  easyrsa            3    ubuntu
etcd               2.2.5    active  false    jujucharms  etcd               14   ubuntu
flannel            0.6.1            false    jujucharms  flannel            5    ubuntu
kubernetes-master  1.4.5    active  true     jujucharms  kubernetes-master  6    ubuntu
kubernetes-worker           active  true     jujucharms  kubernetes-worker  8    ubuntu

RELATION      PROVIDES           CONSUMES           TYPE
certificates  easyrsa            kubernetes-master  regular
certificates  easyrsa            kubernetes-worker  regular
cluster       etcd               etcd               peer
etcd          etcd               flannel            regular
etcd          etcd               kubernetes-master  regular
sdn-plugin    flannel            kubernetes-master  regular
sdn-plugin    flannel            kubernetes-worker  regular
host          kubernetes-master  flannel            subordinate
kube-dns      kubernetes-master  kubernetes-worker  regular
host          kubernetes-worker  flannel            subordinate

UNIT                 WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS           MESSAGE
easyrsa/0            active    idle        0        192.168.23.3                    Certificate Authority connected.
etcd/0               active    idle        0        192.168.23.3    2379/tcp        Healthy with 1 known peers. (leader)
kubernetes-master/0  active    idle        0        192.168.23.3    6443/tcp        Kubernetes master running.
  flannel/0          active    idle                 192.168.23.3                    Flannel subnet 10.1.57.1/24
kubernetes-worker/0  active    idle        1        192.168.23.4    80/tcp,443/tcp  Kubernetes worker running.
  flannel/1          active    idle                 192.168.23.4                    Flannel subnet 10.1.67.1/24
kubernetes-worker/1  active    executing   2        192.168.23.5                    (install) Container runtime available.
kubernetes-worker/2  unknown   allocating  3        192.168.23.7                    Waiting for agent initialization to finish
kubernetes-worker/3  unknown   allocating  4        192.168.23.6                    Waiting for agent initialization to finish

MACHINE  STATE    DNS           INS-ID  SERIES  AZ
0        started  192.168.23.3  4y3h8x  xenial  default
1        started  192.168.23.4  4y3h8y  xenial  default
2        started  192.168.23.5  4y3ha3  xenial  default
3        pending  192.168.23.7  4y3ha6  xenial  default
4        pending  192.168.23.6  4y3ha4  xenial  default

or

$ juju status
MODEL    CONTROLLER       CLOUD/REGION  VERSION
default  maas-controller  maas          2.0-beta15

APP                VERSION  STATUS  EXPOSED  ORIGIN      CHARM              REV  OS
cuda                                false    local       cuda               0    ubuntu
easyrsa            3.0.1    active  false    jujucharms  easyrsa            3    ubuntu
etcd               2.2.5    active  false    jujucharms  etcd               14   ubuntu
flannel            0.6.1            false    jujucharms  flannel            5    ubuntu
kubernetes-master  1.4.5    active  true     jujucharms  kubernetes-master  6    ubuntu
kubernetes-worker  1.4.5    active  true     jujucharms  kubernetes-worker  8    ubuntu

RELATION      PROVIDES           CONSUMES           TYPE
certificates  easyrsa            kubernetes-master  regular
certificates  easyrsa            kubernetes-worker  regular
cluster       etcd               etcd               peer
etcd          etcd               flannel            regular
etcd          etcd               kubernetes-master  regular
sdn-plugin    flannel            kubernetes-master  regular
sdn-plugin    flannel            kubernetes-worker  regular
host          kubernetes-master  flannel            subordinate
kube-dns      kubernetes-master  kubernetes-worker  regular
host          kubernetes-worker  flannel            subordinate

UNIT                 WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS  PORTS           MESSAGE
easyrsa/0            active    idle   0        192.168.23.3                    Certificate Authority connected.
etcd/0               active    idle   0        192.168.23.3    2379/tcp        Healthy with 1 known peers. (leader)
kubernetes-master/0  active    idle   0        192.168.23.3    6443/tcp        Kubernetes master running.
  flannel/0          active    idle            192.168.23.3                    Flannel subnet 10.1.57.1/24
kubernetes-worker/0  active    idle   1        192.168.23.4    80/tcp,443/tcp  Kubernetes worker running.
  flannel/1          active    idle            192.168.23.4                    Flannel subnet 10.1.67.1/24
kubernetes-worker/1  active    idle   2        192.168.23.5    80/tcp,443/tcp  Kubernetes worker running.
  flannel/2          active    idle            192.168.23.5                    Flannel subnet 10.1.100.1/24
kubernetes-worker/2  active    idle   3        192.168.23.7    80/tcp,443/tcp  Kubernetes worker running.
  flannel/3          active    idle            192.168.23.7                    Flannel subnet 10.1.14.1/24
kubernetes-worker/3  active    idle   4        192.168.23.6    80/tcp,443/tcp  Kubernetes worker running.
  flannel/4          active    idle            192.168.23.6                    Flannel subnet 10.1.83.1/24

MACHINE  STATE    DNS           INS-ID  SERIES  AZ
0        started  192.168.23.3  4y3h8x  xenial  default
1        started  192.168.23.4  4y3h8y  xenial  default
2        started  192.168.23.5  4y3ha3  xenial  default
3        started  192.168.23.7  4y3ha6  xenial  default
4        started  192.168.23.6  4y3ha4  xenial  default

We now need kubectl to query the cluster. To install it on the Raspberry Pi, we refer to this k8s issue and the method used for Hypriot OS:

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
$ apt-get install -y kubectl
$ kubectl get nodes --show-labels
NAME      STATUS    AGE       LABELS
node02    Ready     1h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02
node03    Ready     1h        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node03
node04    Ready     57m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node04
node05    Ready     58m       beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node05

Adding CUDA

CUDA does not have an official charm yet, so I wrote a hacky bash script to make it work, which you can find on GitHub

To build the charm you’ll want an x86 computer rather than the RPi. You will need juju, charm and charm-tools installed there, then run:

$ export JUJU_REPOSITORY=${HOME}/charms
$ export LAYER_PATH=${JUJU_REPOSITORY}/layers
$ export INTERFACE_PATH=${JUJU_REPOSITORY}/interfaces
$ cd ${LAYER_PATH}
$ git clone https://github.com/SaMnCo/layer-nvidia-cuda cuda
$ cd cuda
$ charm build

This will create a new folder called builds in JUJU_REPOSITORY, containing another folder called cuda. Just scp that to the Raspberry Pi, into a charms subfolder of your home:

$ scp -r ${JUJU_REPOSITORY}/builds/cuda ${USER}@raspberrypi:/home/${USER}/charms/cuda

To deploy the charm we just created:

$ juju deploy --series xenial $HOME/charms/cuda
$ juju add-relation cuda kubernetes-worker

This will take some time (CUDA downloads gigabytes of code and binaries…), but ultimately we get to

$ juju status
MODEL    CONTROLLER       CLOUD/REGION  VERSION
default  maas-controller  maas          2.0-beta15

APP                VERSION  STATUS  EXPOSED  ORIGIN      CHARM              REV  OS
cuda                                false    local       cuda               0    ubuntu
easyrsa            3.0.1    active  false    jujucharms  easyrsa            3    ubuntu
etcd               2.2.5    active  false    jujucharms  etcd               14   ubuntu
flannel            0.6.1            false    jujucharms  flannel            5    ubuntu
kubernetes-master  1.4.5    active  true     jujucharms  kubernetes-master  6    ubuntu
kubernetes-worker  1.4.5    active  true     jujucharms  kubernetes-worker  8    ubuntu

RELATION      PROVIDES           CONSUMES           TYPE
juju-info     cuda               kubernetes-worker  regular
certificates  easyrsa            kubernetes-master  regular
certificates  easyrsa            kubernetes-worker  regular
cluster       etcd               etcd               peer
etcd          etcd               flannel            regular
etcd          etcd               kubernetes-master  regular
sdn-plugin    flannel            kubernetes-master  regular
sdn-plugin    flannel            kubernetes-worker  regular
host          kubernetes-master  flannel            subordinate
kube-dns      kubernetes-master  kubernetes-worker  regular
juju-info     kubernetes-worker  cuda               subordinate
host          kubernetes-worker  flannel            subordinate

UNIT                 WORKLOAD  AGENT  MACHINE  PUBLIC-ADDRESS  PORTS           MESSAGE
easyrsa/0            active    idle   0        192.168.23.3                    Certificate Authority connected.
etcd/0               active    idle   0        192.168.23.3    2379/tcp        Healthy with 1 known peers. (leader)
kubernetes-master/0  active    idle   0        192.168.23.3    6443/tcp        Kubernetes master running.
  flannel/0          active    idle            192.168.23.3                    Flannel subnet 10.1.57.1/24
kubernetes-worker/0  active    idle   1        192.168.23.4    80/tcp,443/tcp  Kubernetes worker running.
  cuda/2             active    idle            192.168.23.4                    CUDA installed and available
  flannel/1          active    idle            192.168.23.4                    Flannel subnet 10.1.67.1/24
kubernetes-worker/1  active    idle   2        192.168.23.5    80/tcp,443/tcp  Kubernetes worker running.
  cuda/0             active    idle            192.168.23.5                    CUDA installed and available
  flannel/2          active    idle            192.168.23.5                    Flannel subnet 10.1.100.1/24
kubernetes-worker/2  active    idle   3        192.168.23.7    80/tcp,443/tcp  Kubernetes worker running.
  cuda/3             active    idle            192.168.23.7                    CUDA installed and available
  flannel/3          active    idle            192.168.23.7                    Flannel subnet 10.1.14.1/24
kubernetes-worker/3  active    idle   4        192.168.23.6    80/tcp,443/tcp  Kubernetes worker running.
  cuda/1             active    idle            192.168.23.6                    CUDA installed and available
  flannel/4          active    idle            192.168.23.6                    Flannel subnet 10.1.83.1/24

MACHINE  STATE    DNS           INS-ID  SERIES  AZ
0        started  192.168.23.3  4y3h8x  xenial  default
1        started  192.168.23.4  4y3h8y  xenial  default
2        started  192.168.23.5  4y3ha3  xenial  default
3        started  192.168.23.7  4y3ha6  xenial  default
4        started  192.168.23.6  4y3ha4  xenial  default

Pretty awesome, we now have CUDERNETES!

We can individually connect to every GPU node and run:

$ sudo nvidia-smi
Wed Nov  9 06:06:44 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 0000:02:00.0     Off |                  N/A |
| 28%   31C    P0    27W / 120W |      0MiB /  6072MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Good job!!

Enabling CUDA in Kubernetes

By default, CDK will not activate GPUs when starting the API server and the Kubelet on workers. We need to do that manually (though automating this is on the roadmap).

Master Update

On the master node, update /etc/default/kube-apiserver to add:

# Security Context
KUBE_ALLOW_PRIV="--allow-privileged=true"

Then restart the API service via

$ sudo systemctl restart kube-apiserver

So now the Kube API will accept requests to run privileged containers, which are required for GPU workloads.
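Before moving on to the workers, you can sanity-check that the API server picked up the flag. A minimal sketch (the grep pattern is only illustrative; adapt it to your setup):

$ ps aux | grep '[k]ube-apiserver' | grep -o 'allow-privileged=true'
allow-privileged=true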

Worker nodes

On every worker, update /etc/default/kubelet to add the GPU flag, so it looks like:

# Security Context
KUBE_ALLOW_PRIV="--allow-privileged=true"

# Add your own!
KUBELET_ARGS="--experimental-nvidia-gpus=1 --require-kubeconfig --kubeconfig=/srv/kubernetes/config --cluster-dns=10.1.0.10 --cluster-domain=cluster.local"

Then restart the service via

$ sudo systemctl restart kubelet

Testing the setup

Now that we have CUDA GPUs enabled in k8s, let us test that everything works. We take a very simple job that will just run nvidia-smi from a pod and exit on success.

The job definition is

apiVersion: batch/v1
kind: Job
metadata:
  name: nvidia-smi
  labels:
    name: nvidia-smi
spec:
  template:
    metadata:
      labels:
        name: nvidia-smi
    spec:
      containers:
      - name: nvidia-smi
        image: nvidia/cuda
        command: [ "nvidia-smi" ]
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        resources:
          requests:
            alpha.kubernetes.io/nvidia-gpu: 1
          limits:
            alpha.kubernetes.io/nvidia-gpu: 1
        volumeMounts:
        - mountPath: /dev/nvidia0
          name: nvidia0
        - mountPath: /dev/nvidiactl
          name: nvidiactl
        - mountPath: /dev/nvidia-uvm
          name: nvidia-uvm
        - mountPath: /usr/local/nvidia/bin
          name: bin
        - mountPath: /usr/lib/nvidia
          name: lib
      volumes:
      - name: nvidia0
        hostPath:
          path: /dev/nvidia0
      - name: nvidiactl
        hostPath:
          path: /dev/nvidiactl
      - name: nvidia-uvm
        hostPath:
          path: /dev/nvidia-uvm
      - name: bin
        hostPath:
          path: /usr/lib/nvidia-367/bin
      - name: lib
        hostPath:
          path: /usr/lib/nvidia-367
      restartPolicy: Never

What is interesting here is

  • We do not have the abstraction provided by nvidia-docker, so we have to specify the mount points for the character devices manually
  • We also need to share the driver and library folders from the host
  • In the resources section, we have to both request and limit 1 GPU
  • The container has to run privileged
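Before creating the job, it is worth confirming that the kubelets now advertise GPU capacity to the scheduler. A quick hedged check (node02 is one of the worker hostnames from the listings below; substitute your own):

$ kubectl describe node node02 | grep nvidia-gpu

The alpha.kubernetes.io/nvidia-gpu resource should appear under both Capacity and Allocatable if the experimental flag took effect.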

Now if we run this:

$ kubectl create -f nvidia-smi-job.yaml
$ # Wait for a few seconds so the cluster can download and run the container
$ kubectl get pods -a -o wide
NAME                             READY  STATUS     RESTARTS  AGE  IP         NODE
default-http-backend-8lyre       1/1    Running    0         11h  10.1.67.2  node02
nginx-ingress-controller-bjplg   1/1    Running    1         10h  10.1.83.2  node04
nginx-ingress-controller-etalt   0/1    Pending    0         6m
nginx-ingress-controller-q2eiz   1/1    Running    0         10h  10.1.14.2  node05
nginx-ingress-controller-ulsbp   1/1    Running    0         11h  10.1.67.3  node02
nvidia-smi-xjl6y                 0/1    Completed  0         5m   10.1.14.3  node05

We see the last container has run and completed. Let us see the output of the run

$ kubectl logs nvidia-smi-xjl6y
Wed Nov  9 07:52:42 2016
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106…    Off  | 0000:02:00.0     Off |                  N/A |
| 28%   33C    P0    29W / 120W |      0MiB /  6072MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Perfect, we have the same result as if we had run nvidia-smi from the host, which means we are all good to operate GPUs!
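As a side note, completed Jobs are not cleaned up automatically, so once you are done inspecting the logs you can remove it:

$ kubectl delete job nvidia-smi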

Conclusion

So what did we achieve here? Kubernetes is the most versatile container management system around. It also shares some genes with TensorFlow, which is itself often demoed in containers, in a scale-out fashion.

It is only natural to speed up deep learning workloads by adding GPUs at scale. This poor man's GPU cluster is an example of what a small R&D team can do if they want to experiment with multi-node scalability.

There is a secondary benefit. You will have noticed that the deployment of k8s here is completely automated (aside from the GPU enablement), thanks to Juju and the team behind CDK. The community behind Juju maintains many charms, so there is a large collection of scale-out applications that can be deployed the same way, like Hadoop, Kafka, Spark, Elasticsearch (…).

In the end, the investment is only MAAS and a few commands. Juju’s ROI in R&D is a matter of days.

Thanks

Huge thanks to Marco Ceppi, Chuck Butler and Matt Bruzek at Canonical for the fantastic work on CDK, and responsiveness to my (numerous) questions.

Dimitri John Ledkov: 2017 is the new 1984

Sun, 01/29/2017 - 15:23
1984: Library Edition. Novel by George Orwell; cover picture: Google Search result.

I am scared.
I am petrified.
I am confused.
I am sad.
I am furious.
I am angry.

28 days later I shall return from NYC.

I hope.

Julian Fernandes: Hello world!

Sat, 01/28/2017 - 19:01

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Kubuntu General News: Kubuntu 17.04 Alpha 2 released for testers

Sat, 01/28/2017 - 13:58

Today the Kubuntu team is happy to announce the release of Kubuntu Zesty Zapus (17.04) Alpha 2. With this pre-release, you can see what we are trying out in preparation for 17.04, which we will be releasing in April.

NOTE: This is an Alpha 2 release. Kubuntu Alpha releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Alpha 2
* To upgrade from 16.10, run do-release-upgrade from a command line (a sketch follows this list).
* Download a bootable image (ISO) and put it onto a DVD or USB Drive
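A minimal sketch of the upgrade path; pre-release upgrades normally require the development-release flag:

$ sudo do-release-upgrade -d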

Lubuntu Blog: Zesty Zapus Alpha 2 released

Sat, 01/28/2017 - 12:15
The second alpha of the Zesty Zapus (to become 17.04) has now been released! This milestone features images for Lubuntu, Kubuntu, Ubuntu MATE, Ubuntu Kylin, Ubuntu GNOME, and Ubuntu Budgie. Pre-releases of the Zesty Zapus are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent […]

Ubuntu Insights: Ubuntu Core – how to enable aliases for your snaps commands

Sat, 01/28/2017 - 04:00

We are happy to announce that a new version of Ubuntu Core, based on snapd 2.21, has been released to the stable snaps channel yesterday.

As with any stable release, your Ubuntu Core devices will update and reboot automatically. If you are using snaps on the desktop, the release will reach you through a snapd package update on Ubuntu 16.04, 16.10 and 17.04.

This release comes with several improvements you can read about in the changelog, but let’s focus on a big feature that will help people who snap very large software, especially software that comes with many commands (such as OpenStack, ImageMagick, most databases…) and their users.

Introducing snap aliases

When you launch a snap from the command line, you need to use the name of the snap, then the name of a command it contains. In most cases, you don’t notice it, because snapd simplifies the process by collapsing <snap-name>.<command-name> into <command-name>, when both are the same. This way, you don’t need to type inkscape.inkscape, but simply inkscape and get a familiar software experience.

But when a snap contains multiple commands, with various names, things can become less familiar. If we take the PostgreSQL snap as an example, we can see it comes with many commands: initdb, createdb, etc. In this case, you have to run postgresql96.initdb, postgresql96.createdb, etc.

The alias feature of snapd lets snaps declare their own aliases, for users to manually enable after install or for snap stores to declare as “auto-aliases” that will be enabled upon install.

How to enable aliases

To have an overview of all available aliases on a system, you can use the snap aliases command.

$ snap aliases
App                    Alias    Notes
firefox-devel.firefox  firefox  -

You can see I have a snap with the name firefox-devel, containing a firefox command and a firefox alias.

I can either use firefox-devel.firefox as my command to launch Firefox, or use snap alias <snap-name> <alias> to enable the alias.

$ snap alias firefox-devel firefox
$ snap aliases
App                    Alias    Notes
firefox-devel.firefox  firefox  enabled

I can now launch my firefox-devel snap with the firefox command.

You can also use snap unalias to disable aliases for a specific snap.
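For example, assuming unalias mirrors the alias syntax shown above:

$ snap unalias firefox-devel firefox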

How to declare an alias

Declaring a new alias in your snap is as easy as adding one more entry to your snapcraft.yaml apps keys.

$ cat firefox-devel/snapcraft.yaml
[...]
apps:
  firefox-devel:
    command: bin/firefox
    aliases: [firefox]
[...]

That’s it, head on over to tutorials.ubuntu.com to make your own snap from scratch and give aliases a try!

Nathan Haines: We're looking for Ubuntu 17.04 wallpapers right now!

Sat, 01/28/2017 - 01:08
We're looking for Ubuntu 17.04 wallpapers right now!

Ubuntu is a testament to the power of sharing, and we use the default selection of desktop wallpapers in each release as a way to celebrate the larger Free Culture movement. Talented artists across the globe create media and release it under licenses that don't simply allow, but cheerfully encourage sharing and adaptation. This cycle's Free Culture Showcase for Ubuntu 17.04 is now underway!

We're halfway to the next LTS, and we're looking for beautiful wallpaper images that will literally set the backdrop for new users as they use Ubuntu 17.04 every day. Whether on the desktop, phone, or tablet, your photo or illustration can be the first thing Ubuntu users see whenever they are greeted by the ubiquitous Ubuntu welcome screen or access their desktop.

Submissions will be handled via Flickr at the Ubuntu 17.04 Free Culture Showcase - Wallpapers group, and the submission window begins now and ends on March 5th.

More information about the Free Culture Showcase is available on the Ubuntu wiki at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase.

I'm looking forward to seeing the 10 photos and 2 illustrations that will ship on all graphical Ubuntu 17.04-based systems and devices on April 13th!

The Fridge: Zesty Zapus Alpha 2 Released

Fri, 01/27/2017 - 20:23

“Without deviation from the norm, progress is not possible.”

― Frank Zapus

The second alpha of the Zesty Zapus (to become 17.04) has now been released!

This milestone features images for Lubuntu, Kubuntu, Ubuntu MATE, Ubuntu Kylin, Ubuntu GNOME, and Ubuntu Budgie.

Pre-releases of the Zesty Zapus are not encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Alpha 2 includes a number of software updates that are ready for wider testing. This is still an early set of images, so you should expect some bugs.

While these Alpha 2 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Zesty Zapus. In particular, once newer daily images are available, system installation bugs identified in the Alpha 2 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 17.04 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Lubuntu

Lubuntu is a flavor of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 17.04 Alpha 2 images can be downloaded from:

More information about Lubuntu 17.04 Alpha 2 can be found here:

Ubuntu MATE

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu MATE 17.04 Alpha 2 can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu Kylin 17.04 Alpha 2 can be found here:

Kubuntu

Kubuntu is the KDE based flavor of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Kubuntu 17.04 Alpha 2 images can be downloaded from:

More information about Kubuntu 17.04 Alpha 2 can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Ubuntu GNOME 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu GNOME 17.04 Alpha 2 can be found here:

Ubuntu Budgie

Ubuntu Budgie is a flavor of Ubuntu featuring the Budgie desktop environment.

The Ubuntu Budgie 17.04 Alpha 2 images can be downloaded from:

More information about Ubuntu Budgie 17.04 Alpha 2 can be found here:

If you’re interested in following the changes as we further develop the Zesty Zapus, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a month or less) carrying announcements of approved specifications, policy changes, alpha releases, and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Alpha release, and welcome Ubuntu Budgie!

Originally posted to the ubuntu-devel-announce mailing list on Fri Jan 27 21:16:28 UTC 2017 by Simon Quigley on behalf of the Ubuntu Release Team

Ubuntu Insights: Award-winning drone technology with Ubuntu

Fri, 01/27/2017 - 10:28

The market for drones is exploding as businesses and individuals embrace them. The global market for commercial applications of drone technology will balloon to as much as $127 billion by 2020, up from £2 billion today (PwC). Aerotenna is one of the innovators making this vision a reality.

Aerotenna’s award-winning technology seeks to solve the UAV autonomous-flight challenge: preventing UAVs from colliding with non-cooperative objects or other UAVs. Check out this video that shows it in action:

Autonomous Collision Avoidance- Mission from Aerotenna on Vimeo.

To learn more about Aerotenna’s award-winning technology download the case-study below. Highlights include:

  • Partnering with Intel® and Xilinx®, Aerotenna developed and released OcPoC with Altera Cyclone and Xilinx Zynq, with an industry-leading 100+ I/Os for sensor integration, and FPGA for sensor fusion, real-time data processing and deep learning
  • One such sensor is Aerotenna’s microwave radar that allows the drone to detect surrounding objects in all light conditions and environments, important for safe flying of UAVs
  • Ubuntu powers the OcPoC giving developers a familiar, extensible platform to build drone solutions based on the powerful combination of multiple sensors and complex robotics algorithms

Download the case study

Ubuntu Insights: ROS on arm64 with Ubuntu Core

Fri, 01/27/2017 - 10:03

This is a guest post by Kyle Fazzari, Engineer from Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Previous Robot Operating System (ROS) releases only supported i386, amd64, and armhf. I even tried building ROS Indigo from source for arm64 about a year ago, but ran into dependency issues with a missing sbcl. Well, with surprisingly little fanfare, ROS Kinetic was released with arm64 support in its prebuilt archive! I thought it might be time to take it for a spin with Ubuntu Core and its arm64 reference board, the DragonBoard 410c, so I'll take you along for the ride.

Step 1: Install Ubuntu Core

The first order of business is to install Ubuntu Core on the DragonBoard. Just follow the documentation. I want to mention two things for this step, though:

  1. I’ve never had good luck with wifi on the DragonBoard; it seems incredibly unstable. I recommend not even bothering with it and using a USB-to-ethernet adapter, since with Ubuntu Core your first login must be over SSH anyway.
  2. There’s a bug that causes the first boot wizard to take quite some time between entering your SSO password and finishing. Don’t worry, just leave it alone; it’ll finish (mine took about 7 minutes).
Step 2: Make sure it’s up-to-date

SSH into the machine (or, if you set a password, log in locally; it doesn’t matter) and run the following command to ensure everything is completely up-to-date:

$ sudo snap refresh

If it updated, you may need to reboot. Go ahead and do that now, we’ll wait.

Step 3: Install Snapcraft

You may or may not be familiar with Ubuntu Core, but the first thing people typically notice is that it doesn’t use Debian packages (.debs); it uses snaps (read up on them, they’re pretty awesome). However, in this case, they don’t give us what we need, which is a development environment. We need to build a ROS workspace into a snap, which means we’ll need to utilize ROS’s archive as well as the snap packaging tool, snapcraft, all of which are available as .debs. Fortunately, the environment we want is available as a snap:

$ snap install classic --edge --devmode
$ sudo classic
<unpacking stuff... snip>
(classic)kyrofa@localhost:~$

The (classic) prompt modifier tells you that you’re now in a classic shell. Now you can do familiar things such as update the package index and install Snapcraft, both of which you should do now:

(classic)kyrofa@localhost:~$ sudo apt update
(classic)kyrofa@localhost:~$ sudo apt install snapcraft

Step 4: Workaround bug #1650207

Take a look at your Linux workstation (not the DragonBoard). For example, I’m running Ubuntu Xenial. The contents of my /etc/lsb-release file look like this:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"

Multiple utilities, including Snapcraft and Catkin (the build system for ROS), use this file to determine the OS upon which they’re running. This file looks a bit different on Ubuntu Core:

DISTRIB_ID="Ubuntu Core" DISTRIB_RELEASE=16 DISTRIB_DESCRIPTION="Ubuntu Core 16"

As of this writing, due to a bug, the classic shell doesn’t replace this file with one that looks like Xenial’s, which means that neither Snapcraft nor Catkin (nor various other tools) will work correctly. Fortunately that file is writable, so we can work around the bug just by making Ubuntu Core’s /etc/lsb-release look like Xenial’s:

(classic)kyrofa@localhost:~$ cat << EOF | sudo tee /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"
EOF

Step 5: Build your snap

Probably the simplest ROS workspace you can imagine is a single talker and a single listener. Snapcraft has exactly that as one of its demos, so for this example we’ll just use that (you might consider reading that demo’s walkthrough). First though, that means fetching the Snapcraft source code:

(classic)kyrofa@localhost:~$ sudo apt install git
(classic)kyrofa@localhost:~$ git clone https://github.com/snapcore/snapcraft
(classic)kyrofa@localhost:~$ cd snapcraft/demos/ros

The ROS demo in question will by default actually build for Indigo. But you’re interested in arm64! You can’t use Indigo. You need something newer… something shinier, and perhaps less purple. You need to use Kinetic. You tell Snapcraft this by utilizing the rosdistro property. Open up the snapcraft.yaml and edit the appropriate section to look like this (the added line is the rosdistro entry):

[...]
parts:
  ros-project:
    plugin: catkin
    source: .
    rosdistro: kinetic
    catkin-packages:
      - talker
      - listener
    include-roscore: true

Save that file and exit. Finally, it’s time to run snapcraft and watch that workspace get built into a snap (this will take a bit, the DragonBoard is not a blazing workhorse; if you want something faster use the snap builders):

(classic)kyrofa@localhost:~/snapcraft/demos/ros$ snapcraft
<snip pulling, building, staging>
Priming ros-project
Snapping 'ros-example'
Snapped ros-example_1.0_arm64.snap

In the end you have a snap. You should exit out of the classic shell at this point (e.g. ctrl+d).

Step 6: Install and run it!

Now it’s time to install the snap you just built. Even though you’ve exited the classic shell, your $HOME remained the same, so you can install the snap like so:

$ cd snapcraft/demos/ros
$ sudo snap install --dangerous ros-example_1.0_arm64.snap

(Remember, the --dangerous flag is needed because you just built the snap locally, so snapd can’t verify its publisher. You’re telling it that’s okay.)
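You can double-check that the sideloaded snap is installed before launching it; snap list should show ros-example with a local revision (e.g. x1):

$ snap list | grep ros-example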

Finally, run the application contained within the snap (it’ll launch the talker/listener system):

$ ros-example.launch-project
<snip>
SUMMARY
========

PARAMETERS
 * /rosdistro: kinetic
 * /rosversion: 1.12.6

NODES
  /
    listener (listener/listener_node)
    talker (talker/talker_node)

auto-starting new master
<snip>
process[talker-2]: started with pid [25827]
process[listener-3]: started with pid [25828]
[ INFO] [1485394763.340461416]: Hello world 0
[ INFO] [1485394763.440354547]: Hello world 1
[ INFO] [1485394763.540334917]: Hello world 2
[ INFO] [1485394763.640330599]: Hello world 3
[ INFO] [1485394763.740335917]: Hello world 4
[ INFO] [1485394763.840366912]: Hello world 5
[ INFO] [1485394763.940342594]: Hello world 6
[ INFO] [1485394764.040321141]: Hello world 7
[ INFO] [1485394764.140323334]: Hello world 8
[ INFO] [1485394764.240328548]: Hello world 9
[ INFO] [1485394764.340319074]: Hello world 10
[INFO] [1485394764.341486]: I heard Hello world 10
[ INFO] [1485394764.440333194]: Hello world 11
[INFO] [1485394764.441476]: I heard Hello world 11
[ INFO] [1485394764.540333772]: Hello world 12
[INFO] [1485394764.541450]: I heard Hello world 12

Conclusion

I haven’t yet pushed ROS’s arm64 support very hard, but I’m thoroughly pleased that support is present. Particularly paired with snaps and Ubuntu Core, I think this opens the door to a lot of amazing robotic possibilities.

Original post can be found here.

Rhonda D&#39;Vine: Icona Pop

Fri, 01/27/2017 - 06:22

Last fall I went to a Silent Disco event. You get wireless headphones; a DJane and a DJ play music on different channels, and you enjoy the time with the people around you, who can't hear what you hear. It's a pretty fun experience, and it was one of the last warm sunny days. There I heard a song that fit the mood of the moment, and it made me look up the band to listen to them more closely.

The band was Icona Pop; they have a mood-enlightening pop sound that cheers you up. Here are the songs I want to present to you today:

  • I Love It: The first song I heard from them, and I Love It!
  • Girlfriend: Sweet song, and probably part of the reason they are well received in the LGBTIQ community.
  • All Night: A song/video with a message.

Like always, enjoy!

