Feed aggregator

Daniel Holbach: Writing snaps together

Planet Ubuntu - Tue, 09/27/2016 - 08:10

Working with a new technology often brings you to see things in a new light and re-think previous habits, especially when it challenges the status quo and the expectations of years of traditional use. Snaps are no exception in this regard: twenty years ago, as one example, we simply didn’t have today’s confinement technologies.

Luckily, using snapcraft is a real joy: you write one declarative file, define your snap’s parts, make use of snapcraft‘s many plugins and, if really necessary, write a quick and simple plugin in Python to run your custom build.
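
As a sketch of that last point, a custom plugin could look roughly like this. The plugin name and build script are invented, and it assumes snapcraft’s then-current Python plugin interface (a snapcraft.BasePlugin subclass with a build() hook), so check the snapcraft documentation for the exact API:

import snapcraft

class CustomBuildPlugin(snapcraft.BasePlugin):
    """Hypothetical plugin that runs a project-specific build script."""

    def build(self):
        super().build()
        # Run the (made-up) custom build step for this part.
        self.run(['./custom-build.sh'])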

Many of the first issues new snaps ran into were solved by improvements and new features in snapd and snapcraft. If you are still seeing a problem with your snap, we want you to get in touch. We are all interested in seeing more software as snaps, so let’s work together on them!

Enter the Sandpit

I already mentioned this in the announcement of the last Snappy Playpen event, but as we saw many new snaps being added there in the last few days, I wanted to mention it again: we started a new initiative called the Sandpit.

It’s a place where you can easily

  • list a snap you are working on and are looking for some help
  • find out at a glance if your favourite piece of software is already being snapped

It’s a very light-weight process: simply edit a wiki and get in touch with whoever’s working on the snap. The list grew quite quickly, so there are loads of opportunities to find like-minded snap authors and get snaps online together.

You can find many of the people listed on the Sandpit wiki either in #snappy on Freenode or on Gitter. Just ask around and somebody will help.

Happy snapping everyone!

Ubuntu App Developer Blog: Learning to snap with codelabs

Planet Ubuntu - Tue, 09/27/2016 - 06:58

The background

I always felt that learning something new, especially new concepts and workflows, usually works best if you see it first-hand and get to do things yourself. If you experience directly how your actions influence the system you're working with, the new connections in your brain form much more quickly. Didier and I talked for a while about how to introduce the processes and ideas behind snapd and snapcraft to a new audience, particularly at a workshop or a meet-up, and we found we were of the same opinion.

Didier put quite a bit of work into solving the infrastructure question. We re-used the work which had already been put into Codelabs, so adding a new codelab merely became a question of creating a Google Doc and adding it using a management command. It works nicely; the UI is simple and easy to understand and lets you focus on the content at hand. It was a lot of fun to work on the content and refine the individual steps in a self-teaching workshop style. Thanks a lot everyone for the reviews!

It's now available for everyone

After some discussion it became clear that a very fitting way for the codelabs to go out would be to ship them as a snap themselves. It's beautifully simple to get started:

$ sudo snap install snap-codelabs

All you need to do afterwards is point your browser to http://localhost:8123/ - that's all. You will be greeted with something like this:

From there you can quickly start your snap adventure and get up and running in no time. It's a step-by-step workshop and you always know how much more time you need to complete it.

Expect more codelabs to be added soon. If you have feedback, please let us know here.

Have fun and when you're done with your first codelab, let us know in the comments!

(Re)Welcome New Membership Board Members!

The Fridge - Mon, 09/26/2016 - 16:56

The Community Council apologizes for the long wait in deciding which nominees will be included in this two (2) year round, but here they are:

Please help us to (re)welcome our Members for the Membership Board!

Originally posted to the ubuntu-news-team mailing list on Mon Sep 26 15:53:02 UTC 2016 by Svetlana Belkin

Ubuntu Weekly Newsletter Issue 482

The Fridge - Mon, 09/26/2016 - 16:19

Welcome to the Ubuntu Weekly Newsletter. This is issue #482 for the weeks of September 12 – 25, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Sirrs
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Dustin Kirkland: Container Camp London: Streamlining HPC Workloads with Containers

Planet Ubuntu - Mon, 09/26/2016 - 13:13

A couple of weeks ago, I delivered a talk at Container Camp UK 2016.  It was a brilliant event, on a beautiful stage at Picturehouse Central in Piccadilly Circus in London.

You're welcome to view the slides or download them as a PDF, or watch my talk below.

And for the techies who want to skip the slide fluff and get their hands dirty, set up your OpenStack and LXD and start streamlining your HPC workloads using this guide.


Rhonda D'Vine: LP

Planet Ubuntu - Mon, 09/26/2016 - 03:00

I guess you know by now that I simply love music. It is powerful, it can move you, change your mood in a lot of directions, make you wanna move your body to it, even unknowingly have this happen, and remind you of situations you want to keep in mind. The singer I present to you was introduced to me by a dear friend with the following words: So this hasn't happened to me in a looooong time: I hear a voice and can't stop crying. I can't decide which song I should send to you thus I send three of which the last one let me think of you.

And I have to agree, that voice is really great. Thanks a lot for sharing LP with me, dear! And given that I got sent three songs and I am not good at holding excitement back, I want to share them with you, so here are the songs:

  • Lost On You: Her voice is really great in this one.
  • Halo: Have to agree that this is really a great cover.
  • Someday: When I hear that song and think about that it reminds my friend of myself I'm close to tears, too ...

Like always, enjoy!


Eric Hammond: Deleting a Route 53 Hosted Zone And All DNS Records Using aws-cli

Planet Ubuntu - Mon, 09/26/2016 - 02:30

fast, easy, and slightly dangerous recursive deletion of a domain’s DNS

Amazon Route 53 currently charges $0.50/month per hosted zone for your first 25 domains, and $0.10/month for additional hosted zones, even if they are not getting any DNS requests. I recently stopped using Route 53 to serve DNS for 25 domains and wanted to save on the $150/year these were costing.

Amazon’s instructions for using the Route 53 Console to delete Record Sets and a Hosted Zone make it look simple. I started in the Route 53 Console clicking into a hosted zone, selecting each DNS record set (but not the NS or SOA ones), clicking delete, clicking confirm, going back a level, selecting the next domain, and so on. This got old quickly.

Being lazy, I decided to spend a lot more effort figuring out how to automate this process with the aws-cli, and pass the savings on to you.

Steps with aws-cli

Let’s start by putting the hosted zone domain name into an environment variable (substitute your own domain). Do not skip this step! Do make sure you have the right name! If this is not correct, you may end up wiping out DNS for a domain that you wanted to keep.

domain_to_delete=example.com

Install the jq json parsing command line tool. I couldn’t quite get the normal aws-cli --query option to get me the output format I wanted.

sudo apt-get install jq

Look up the hosted zone id for the domain. This assumes that you only have one hosted zone for the domain. (It is possible to have multiple, in which case I recommend using the Route 53 console to make sure you delete the right one.)

hosted_zone_id=$(
  aws route53 list-hosted-zones \
    --output text \
    --query 'HostedZones[?Name==`'$domain_to_delete'.`].Id'
)
echo hosted_zone_id=$hosted_zone_id

Use list-resource-record-sets to find all of the current DNS entries in the hosted zone, then delete each one with change-resource-record-sets.

aws route53 list-resource-record-sets \
  --hosted-zone-id $hosted_zone_id |
jq -c '.ResourceRecordSets[]' |
while read -r resourcerecordset; do
  read -r name type <<<$(jq -r '.Name,.Type' <<<"$resourcerecordset")
  if [ $type != "NS" -a $type != "SOA" ]; then
    aws route53 change-resource-record-sets \
      --hosted-zone-id $hosted_zone_id \
      --change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet": '"$resourcerecordset"' }]}' \
      --output text --query 'ChangeInfo.Id'
  fi
done

Finally, delete the hosted zone itself:

aws route53 delete-hosted-zone \
  --id $hosted_zone_id \
  --output text --query 'ChangeInfo.Id'

As written, the above commands output the change ids. You can monitor the background progress using a command like:

change_id=...

aws route53 wait resource-record-sets-changed \
  --id "$change_id"

GitHub repo

To make it easy to automate the destruction of your critical DNS resources, I’ve wrapped the above commands into a command line tool and tossed it into a GitHub repo here:


You are welcome to use it as is, fork it, add protections, rewrite it with Boto3, and generally knock yourself out.
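
If you do go the Boto3 route, a minimal sketch of the same recursive deletion might look like this (same caveats as the shell version: it assumes exactly one hosted zone for the domain and offers no protection against deleting the wrong one):

import boto3

route53 = boto3.client('route53')
domain = 'example.com.'  # trailing dot, as Route 53 stores zone names

# Find the hosted zone id for the domain.
zones = route53.list_hosted_zones()['HostedZones']
zone_id = next(z['Id'] for z in zones if z['Name'] == domain)

# Delete every record set except the NS and SOA entries.
paginator = route53.get_paginator('list_resource_record_sets')
for page in paginator.paginate(HostedZoneId=zone_id):
    for rrset in page['ResourceRecordSets']:
        if rrset['Type'] not in ('NS', 'SOA'):
            route53.change_resource_record_sets(
                HostedZoneId=zone_id,
                ChangeBatch={'Changes': [
                    {'Action': 'DELETE', 'ResourceRecordSet': rrset}
                ]})

# Finally, delete the now-empty hosted zone itself.
route53.delete_hosted_zone(Id=zone_id)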

Alternative: CloudFormation

A colleague pointed out that a better way to manage all of this (in many situations) would be to simply toss my DNS records into a CloudFormation template for each domain. Benefits include:

  • Easy to store whole DNS definition in revision control with history tracking.

  • Single command creation of the hosted zone and all record sets.

  • Single command updating of all changed record sets, no matter what has changed since the last update.

  • Single command deletion of the hosted zone and all record sets (my current challenge).

This doesn’t work as well for hosted zones where different records are added, updated, and deleted by automated processes (e.g., instance startup), but for simple, static domain DNS, it sounds ideal.
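
As a rough sketch of the idea (the stack name, zone, and record below are invented for illustration), the whole zone becomes one template that can be created and deleted as a unit:

import json

import boto3

template = {
    'Resources': {
        'Zone': {
            'Type': 'AWS::Route53::HostedZone',
            'Properties': {'Name': 'example.com.'},
        },
        'WwwRecord': {
            'Type': 'AWS::Route53::RecordSet',
            'Properties': {
                'HostedZoneId': {'Ref': 'Zone'},
                'Name': 'www.example.com.',
                'Type': 'A',
                'TTL': '300',
                'ResourceRecords': ['192.0.2.1'],
            },
        },
    },
}

cloudformation = boto3.client('cloudformation')

# One command creates the hosted zone and all record sets together...
cloudformation.create_stack(StackName='example-com-dns',
                            TemplateBody=json.dumps(template))

# ...and one command would delete the whole lot again:
# cloudformation.delete_stack(StackName='example-com-dns')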

How do you create, update, and delete DNS in Route 53 for your domains?

Original article and comments: https://alestic.com/2016/09/aws-route53-wipe-hosted-zone/

Jono Bacon: Looking for a data.world Director of Community

Planet Ubuntu - Sun, 09/25/2016 - 21:16

Some time ago I signed an Austin-based data company called data.world as a client. The team are building an incredible platform where the community can store data, collaborate around the shape/content of that data, and build an extensive open data commons.

As I wrote about previously I believe data.world is going to play an important role in opening up the potential for finding discoveries in disparate data sets and helping people innovate faster.

I have been working with the team to help shape their community strategy and they are now ready to hire a capable Director of Community to start executing these different pieces. The role description is presented below. The data.world team are an incredible bunch with some strong heritage in the leadership of Brett Hurt, Matt Laessig, Jon Loyens, Bryon Jacob, and others.

As such, I am looking to find the team some strong candidates. If I know you, I would invite you to confidentially share your interest in this role by filling in my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you for whom this seems a good potential fit and play a supporting role in brokering the conversation.

This role will require candidates to either be based in Austin or be willing to relocate to Austin. This is a great opportunity, and feel free to get in touch with me if you have any questions.

Director of Community Role Description

data.world is building a world-class data commons, management, and collaboration platform. We believe that data.world is the very best place to build great data communities that can make data science fun, enjoyable, and impactful. We want to ensure we can provide the very best support, guidance, and engagement to help these communities be successful. This will involve engagement in workflow, product, outreach, events, and more.

As Director of Community, you will lead, coordinate, and manage our global community development initiatives. You will use your community leadership experience to shape our community experience and infrastructure, feed into the product roadmap with community needs and requirements, build growth and engagement, and more. You will help connect, celebrate, and amplify the existing communities on data.world and assist new ones as they form. You will help our users to think bigger, be the best they can be, and succeed more. You’ll work across teams within data.world to promote the community’s voice within our different internal teams. You should be a content expert, superb communicator, and humble facilitator.

Typical activities for this role include:

  • Building and executing programs that grow communities on data.world and empower them to do great work.
  • Taking a structured approach to community roles, on-boarding, and working with our teams to ensure community members have a simple and powerful experience.
  • Developing content that promotes the longevity and sustainability of fast growing, organically built data communities with high impact outcomes.
  • Building relationships within the industry and community to be their representative for data.world in helping to engage, be successful, and deliver great work and collaboration.
  • Working with product, user operations, and marketing teams on product roadmap for community features and needs.
  • Being a data.world representative and spokesperson at conferences, events, and within the media and external data communities.
  • Always challenging our assumptions, our culture, and being singularly focused on delivering the very best data community platform in the world.

Experience with the following is required:

  • 5-7 years of experience participating in and building communities, preferably data based, or technical in nature.
  • Experience with working in open source, open data, and other online communities.
  • Public speaking, blogging, and content development.
  • Facilitating complex and sensitive community management situations with humility, judgment, tact, and humor.
  • Integrating company brand, voice, and messaging into developed content.
  • Working independently and autonomously, managing multiple competing priorities.

Experience with any of the following preferred:

  • Data science experience and expertise.
  • 3-5 years of experience leading community management programs within a software or Internet-based company.
  • Media training and experience in communicating with journalists, bloggers, and other media on a range of technical topics.
  • Existing network from a diverse set of communities and social media platforms.
  • Software development capabilities and experience.

The post Looking for a data.world Director of Community appeared first on Jono Bacon.

Julian Andres Klode: Introducing TrieHash, an order-preserving minimal perfect hash function generator for C(++)

Planet Ubuntu - Sun, 09/25/2016 - 11:44

I introduce TrieHash, an algorithm for constructing perfect hash functions from tries. The generated hash functions are pure C code, minimal, order-preserving and outperform existing alternatives. Together with the generated header files, they can also be used as a generic string-to-enumeration mapper (enums are created by the tool).


APT (and dpkg) spend a lot of time parsing various files, especially Packages files. APT currently uses a function called AlphaHash, which hashes the last 8 bytes of a word in a case-insensitive manner, to hash fields in those files (dpkg just compares strings in an array of structs).

There is one obvious drawback to using a normal hash function: When we want to access the data in the hash table, we have to hash the key again, causing us to hash every accessed key at least twice. It turned out that this affects something like 5 to 10% of the cache generation performance.

Enter perfect hash functions: A perfect hash function matches a set of words to constant values without collisions. You can thus just use the index to index into your hash table directly, and do not have to hash again (if you generate the function at compile time and store key constants) or handle collision resolution.

As #debian-apt people know, I happened to play around a bit with tries this week before guillem suggested perfect hashing. Let me tell you one thing: my trie implementation was very naive, and it did not really improve things a lot…

Enter TrieHash

Now, how is this related to hashing? The answer is simple: I wrote a perfect hash function generator that is based on tries. You give it a list of words, it puts them in a trie, and generates C code out of it, using recursive switch statements (see code generation below). The function achieves competitive performance with other hash functions, it even usually outperforms them.

Given a dictionary, it generates an enumeration (a C enum or C++ enum class) of all words in the dictionary, with the values corresponding to the order in the dictionary (the order-preserving property), and a function mapping strings to members of that enumeration.
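
To make the idea concrete, here is a tiny Python sketch of the core trick (an illustration, not the actual Perl implementation): build a trie from the word list, then emit nested switch statements that walk it. It skips TrieHash’s refinements such as case folding and switching on word length first:

def build_trie(words):
    """Build a trie; the empty key marks a word's end and stores its index."""
    root = {}
    for index, word in enumerate(words):
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node[''] = index
    return root

def emit(node, depth=0):
    """Emit nested C switch statements that walk the trie, one char per level."""
    pad = '    ' * depth
    print(pad + 'switch(string[%d]) {' % depth)
    for ch in sorted(k for k in node if k):
        print(pad + "case '%s':" % ch)
        emit(node[ch], depth + 1)
    if '' in node:
        print(pad + "case '\\0':")
        print(pad + '    return %d;' % node[''])
    print(pad + '}')
    print(pad + 'return Unknown;')

emit(build_trie(['Tag', 'Task']))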

By default, the first word is considered to be 0 and each word increases a counter by one (that is, it generates a minimal hash function). You can tweak that however:

= 0
WordLabel ~ Word
OtherWord = 9

will return 0 for an unknown value, map “Word” to the enum member WordLabel and map OtherWord to 9. That is, the input list functions like the body of a C enumeration. If no label is specified for a word, it will be generated from the word. For more details see the documentation.

C code generation

switch(string[0] | 32) {
case 't':
    switch(string[1] | 32) {
    case 'a':
        switch(string[2] | 32) {
        case 'g':
            return Tag;
        }
    }
}
return Unknown;

Yes, really recursive switches – they directly represent the trie. Now, we did not do a completely straightforward translation; there are some optimisations to make the whole thing faster and easier to look at:

First of all, the 32 you see is used to make the check case-insensitive in case all cases of the switch body are alphabetical characters. If there are non-alphabetical characters, it will generate two cases per character, one upper case and one lowercase (with one break in it). I did not know before that lowercase and uppercase characters differ by only one bit; thanks to the clang compiler for pointing that out in its generated assembler code!

Secondly, we insert breaks only between cases. Initially, each case ended with a return Unknown, but guillem (the dpkg developer) suggested it might be faster to let them fall through where possible. It turned out not to be faster with a good compiler, but it’s still more readable anyway.

Finally, we build one trie per word length, and switch by the word length first. Like the 32 trick, this gives a huge improvement in performance.

Digging into the assembler code

The whole code translates to roughly 4 instructions per byte:

  1. A memory load,
  2. an or with 32
  3. a comparison, and
  4. a conditional jump.

(On x86, the case sensitive version actually only has a cmp-with-memory and a conditional jump).

Due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77729 this may be one instruction more: On some architectures an unneeded zero-extend-byte instruction is inserted – this causes a 20% performance loss.

Performance evaluation

I ran the hash against all 82 words understood by APT in Packages and Sources files, 1,000,000 times for each word, and summed up the average run-time:

host      arch     Trie  TrieCase  GPerfCase  GPerf   DJB
plummer   ppc64el   540       601       1914   2000  1345
eller     mipsel   4728      5255      12018   7837  4087
asachi    arm64    1000      1603       4333   2401  1625
asachi    armhf    1230      1350       5593   5002  1784
barriere  amd64     689       950       3218   1982  1776
x230      amd64     465       504       1200    837   693

Suffice it to say, GPerf does not really come close.

All hosts except the x230 are Debian porterboxes. The x230 is my laptop with a Core i5-3320M; barriere has an Opteron 23xx. I included the DJB hash function as another reference.

Source code

The generator is written in Perl, licensed under the MIT license and available from https://github.com/julian-klode/triehash – I initially prototyped it in Python, but guillem complained that this would add new build dependencies to dpkg, so I rewrote it in Perl.

Benchmark is available from https://github.com/julian-klode/hashbench


See the script for POD documentation.


Ubuntu Podcast from the UK LoCo: S09E30 – Pie Till You Die - Ubuntu Podcast

Planet Ubuntu - Thu, 09/22/2016 - 07:00

It’s Episode Thirty of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress are here again.

Most of us are here, but one of us is busy!

In this week’s show:

  • We discuss the Raspberry Pi hitting 10 million sales and the impact it has had.

  • We share a Command Line Lurve:

    • set -o vi – Which makes bash use vi keybindings.
  • We also discuss solving an “Internet Mystery” #blamewindows

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Salih Emin: Vivaldi browser: Interview with Jon Stephenson von Tetzchner

Planet Ubuntu - Wed, 09/21/2016 - 07:29
Vivaldi browser has taken the world of internet browsing by storm, and only months after its initial release it has found its way onto the computers of millions of power users. In this interview, Mr. Jon Stephenson von Tetzchner talks about how he got the idea to create this project and what to expect in the future.

Eric Hammond: Developing CloudStatus, an Alexa Skill to Query AWS Service Status -- an interview with Kira Hammond by Eric Hammond

Planet Ubuntu - Mon, 09/19/2016 - 21:15

Interview conducted in writing July-August 2016.

[Eric] Good morning, Kira. It is a pleasure to interview you today and to help you introduce your recently launched Alexa skill, “CloudStatus”. Can you provide a brief overview about what the skill does?

[Kira] Good morning, Papa! Thank you for inviting me.

CloudStatus allows users to check the service availability of any AWS region. On opening the skill, Alexa says which (if any) regions are experiencing service issues or were recently having problems. Then the user can inquire about the services in specific regions.

This skill was made at my dad’s request. He wanted to quickly see how AWS services were operating, without needing to open his laptop. As well as summarizing service issues for him, my dad thought CloudStatus would be a good opportunity for me to learn about retrieving and parsing web pages in Python.

All the data can be found in more detail at status.aws.amazon.com. But with CloudStatus, developers can hear AWS statuses with their Amazon Echo. Instead of scrolling through dozens of green checkmarks to find errors, users of CloudStatus listen to which services are having problems, as well as how many services are operating satisfactorily.

CloudStatus is intended for anyone who uses Amazon Web Services and wants to know about current (and recent) AWS problems. Eventually it might be expanded to talk about other clouds as well.

[Eric] Assuming I have an Amazon Echo, how do I install and use the CloudStatus Alexa skill?

[Kira] Just say “Alexa, enable CloudStatus skill”! Ask Alexa to “open CloudStatus” and she will give you a summary of regions with problems. An example of what she might say on the worst of days is:

“3 out of 11 AWS regions are experiencing service issues: Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved: Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Or on most days:

“All 62 regional services in the 12 AWS regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Request any AWS region you are interested in, and Alexa will present you with current and recent service issues in that region.

Here’s the full recording of an example session: http://pub.alestic.com/alexa/cloudstatus/CloudStatus-Alexa-Skill-sample-20160908.mp3

[Eric] What technologies did you use to create the CloudStatus Alexa skill?

[Kira] I wrote CloudStatus using AWS Lambda, a service that manages servers and scaling for you. Developers need only pay for their servers when the code is called. AWS Lambda also displays metrics from Amazon CloudWatch.

Amazon CloudWatch gives statistics from the last couple weeks, such as the number of invocations, how long they took, and whether there were any errors. CloudWatch Logs is also a very useful service. It allows me to see all the errors and print() output from my code. Without it, I wouldn’t be able to debug my skill!

I used Amazon EC2 to build the Python modules necessary for my program. The modules (Requests and LXML) download and parse the AWS status page, so I can get the data I need. The Python packages and my code files are zipped and uploaded to AWS Lambda.
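
For illustration, fetching and parsing the status page with those two modules could look roughly like this; the XPath is a stand-in, since the real page’s markup is more involved, and this is not Kira’s actual code:

import requests
from lxml import html

# Download the AWS status page and parse it into an element tree.
page = requests.get('http://status.aws.amazon.com/')
tree = html.fromstring(page.content)

# Hypothetical extraction: walk the status tables row by row and
# collect the text of each cell for further processing.
for row in tree.xpath('//table//tr'):
    cells = [cell.text_content().strip() for cell in row.xpath('.//td')]
    if cells:
        print(cells)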

Fun fact: My Lambda function is based in us-east-1. If AWS Lambda stops working in that region, you can’t use CloudStatus to check if Northern Virginia AWS Lambda is working! For that matter, CloudStatus will be completely dysfunctional.

[Eric] Why do you enjoy programming?

[Kira] Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy.

Let’s rephrase that: Sometimes I’m repeatedly doing a non-programming activity—say, making a long list of equations for math practice. I think of two “random” numbers between one and a hundred (a human can’t actually come up with a random set of numbers) and pick an operation: addition, subtraction, multiplication, or division. After doing this several times, the activity begins to tire me. My brain starts to shut off and wants to do something more interesting. Then I realize that I’m doing the same thing over and over again. Hey! Why not make a program?

Computers can do so much in so little time. Unlike humans, they are capable of picking completely random items from a list. And they aren’t going to make mistakes. You can tell a computer to do the same thing hundreds of times, and it won’t be bored.

Finish the program, type in a command, and voila! Look at that page full of math problems. Plus, I can get a new one whenever I want, in just a couple seconds. Laziness in this case drives a person to put time and effort into ever-changing problem-solving, all so they don’t have to put time and effort into a dull, repetitive task. See http://threevirtues.com/.
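
A sketch of that math-practice generator really is only a few lines of Python:

import random

# Print a page of practice equations instead of inventing them by hand.
operations = ['+', '-', '*', '/']

for _ in range(20):
    a = random.randint(1, 100)   # genuinely random, unlike a human
    b = random.randint(1, 100)
    op = random.choice(operations)
    print('%d %s %d =' % (a, op, b))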

But programming isn’t just for tools! I also enjoy making simple games and am learning about websites.

One downside to having computers do things for you: You can’t blame a computer for not doing what you told it to. It did do what you told it to; you just didn’t tell it to do what you thought you did.

Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!

The problem-solving can be exciting even when a program is nowhere near finished. My second Alexa program wasn’t coming along that well when—finally!—I got her to say “One plus one is eleven.” and later “Three plus four is twelve.” Though it doesn’t seem that impressive, it showed me that I was getting somewhere and the next problem seemed reasonable.

[Eric] How did you get started programming with the Alexa Skills Kit (ASK)?

[Kira] My very first Alexa skill was based on an AWS Lambda blueprint called Color Expert (alexa-skills-kit-color-expert-python). A blueprint is a sample program that AWS programmers can copy and modify. In the sample skill, the user tells Alexa their favorite color and Alexa stores the color name. Then the user can ask Alexa what their favorite color is. I didn’t make many changes: maybe Alexa’s responses here and there, and I added the color “rainbow sparkles.”

I also made a skill called Calculator in which the user gets answers to simple equations.

Last year, I took a music history class. To help me study for the test, I created a trivia game from Reindeer Games, an Alexa Skills Kit template (see https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour). That was a lot of fun and helped me to grow in my knowledge of how Alexa works behind the scenes.

[Eric] How does Alexa development differ from other programming you have done?

[Kira] At first Alexa was pretty overwhelming. It was so different from anything I’d ever done before, and there were lines and lines of unfamiliar code written by professional Amazon people.

I found the ASK blueprints and templates extremely helpful. Instead of just being a functional program, the code is commented so developers know why it’s there and are encouraged to play around with it.

Still, the pages of code can be scary. One thing new Alexa developers can try: Before modifying your blueprint, set up the skill and ask Alexa to run it. Everything she says from that point on is somewhere in your program! Find her response in the program and tweak it. The variable name is something like “speech_output” or “speechOutput.”

It’s a really cool experience making voice apps. You can make Alexa say ridiculous things in a serious voice! Because CloudStatus started with the Color Expert blueprint, my first successful edit ended with our Echo saying, “I now know your favorite color is Northern Virginia. You can ask me your favorite color by saying, ‘What’s my favorite color?’.”

Voice applications involve factors you never need to deal with in a text app. When the user is interacting through text, they can take as long as they want to read and respond. Speech must be concise so the listener understands the first time. Another challenge is that Alexa doesn’t necessarily know how to pronounce technical terms and foreign names, but the software is always improving.

One plus side to voice apps is not having to build your own language model. With text-based programs, I spend a considerable amount of time listing all the ways a person can answer “yes,” or request help. Luckily, with Alexa I don’t have to worry too much about how the user will phrase their sentences. Amazon already has an algorithm, and it’s constantly getting smarter! Hint: If you’re making your own skill, use some built-in Amazon intents, like AMAZON.YesIntent or AMAZON.HelpIntent.

[Eric] What challenges did you encounter as you built the CloudStatus Alexa skill?

[Kira] At first, I edited the code directly in the Lambda console. Pretty soon though, I needed to import modules that weren’t built in to Python. Now I keep my code and modules in the same directory on a personal computer. That directory gets zipped and uploaded to Lambda, so the modules are right there sitting next to the code.

One challenge of mine has been wanting to fix and improve everything at once. Naturally, there is an error practically every time I upload my code for testing. Isn’t that what testing is for? But when I modify everything instead of improving bit by bit, the bugs are more difficult to sort out. I’m slowly learning from my dad to make small changes and update often. “Ship it!” he cries regularly.

During development, I grew tired of constantly opening my code, modifying it, zipping it and the modules, uploading it to Lambda, and waiting for the Lambda function to save. Eventually I wrote a separate Bash program that lets me type “edit-cloudstatus” into my shell. The program runs unit tests and opens my code files in the Atom editor. After that, it calls the command “fileschanged” to automatically test and zip all the code every time I edit something or add a Python module. That was exciting!

I’ve found that the Alexa speech-to-text conversions aren’t always what I think they will be. For example, if I tell CloudStatus I want to know about “Northern Virginia,” it sends my code “northern Virginia” (lowercase then capitalized), whereas saying “Northern California” turns into “northern california” (all lowercase). To at least fix the capitalization inconsistencies, my dad suggested lowercasing the input and mapping it to the standardized AWS region code as soon as possible.
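
A minimal sketch of that normalization (with an abbreviated mapping; the real skill covers every region):

# Lowercase the spoken region name, then map it to the AWS region code.
REGION_CODES = {
    'northern virginia': 'us-east-1',
    'northern california': 'us-west-1',
    'ireland': 'eu-west-1',
    'tokyo': 'ap-northeast-1',
}

def region_code(spoken_name):
    return REGION_CODES.get(spoken_name.lower())

assert region_code('Northern Virginia') == 'us-east-1'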

[Eric] What Alexa skills do you plan on creating in the future?

[Kira] I will probably continue to work on CloudStatus for a while. There’s always something to improve, a feature to add, or something to learn about—right now it’s Speech Synthesis Markup Language (SSML). I don’t think it’s possible to finish a program for good!

My brother and I also want to learn about controlling our lights and thermostat with Alexa. Every time my family leaves the house, we say basically the same thing: “Alexa, turn off all the lights. Alexa, turn the kitchen light to twenty percent. Alexa, tell the thermostat we’re leaving.” I know it’s only three sentences, but wouldn’t it be easier to just say: “Alexa, start Leaving Home” or something like that? If I learned to control the lights, I could also make them flash and turn different colors, which would be super fun. :)

In August a new ASK template was released for decision tree skills. I want to make some sort of dichotomous key with that. https://developer.amazon.com/public/community/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill

[Eric] Do you have any advice for others who want to publish an Alexa skill?

[Kira]

  • Before submitting your skill for certification, make sure you read through the submission checklist. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-submission-checklist#submission-checklist

  • Remember to check your skill’s home cards often. They are displayed in the Alexa App. Sometimes the text that Alexa pronounces should be different from the reader-friendly card content. For example, in CloudStatus, “N. Virginia (us-east-1)” might be easy to read, but Alexa is likely to pronounce it “En Virginia, Us [as in ‘we’] East 1.” I have to tell Alexa to say “northern virginia, u.s. east 1,” while leaving the card readable for humans.

  • Since readers can process text at their own pace, the home card may display more details than Alexa speaks, if necessary.

  • If you don’t want a card to accompany a specific response, remove the ‘card’ item from your response dict. Look for the function build_speechlet_response() or buildSpeechletResponse(); a sketch of what it returns appears after this list.

  • Never point your live/public skill at the $LATEST version of your code. The $LATEST version is for you to edit and test your code, and it’s where you catch errors.

  • If the skill raises errors frequently, don’t be intimidated! It’s part of the process of coding. To find out exactly what the problem is, read the “log streams” for your Lambda function. To print debug information to the logs, print() the information you want (Python) or use a console.log() statement (JavaScript/Node.js).

  • It helps me to keep a list of phrases to try, including words that the skill won’t understand. Make sure Alexa doesn’t raise an error and exit the skill, no matter what nonsense the user says.

  • Many great tips for designing voice interactions are on the ASK blog. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-voice-design-best-practices

  • Have fun!
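
For reference, here is roughly the shape of the dict those blueprint helpers return; this is a sketch of the ASK response format rather than CloudStatus’s exact code. Dropping the 'card' key is what suppresses the home card:

def build_speechlet_response(title, speech_output, should_end_session):
    """Sketch of the blueprint helper; omit 'card' for card-less replies."""
    return {
        'outputSpeech': {
            'type': 'PlainText',
            'text': speech_output,
        },
        'card': {
            'type': 'Simple',
            'title': title,
            'content': speech_output,  # card text may differ from the speech
        },
        'shouldEndSession': should_end_session,
    }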

In The News

Amazon had early access to this interview and to Kira and wrote an article about her in the Alexa Blog:

14-Year-Old Girl Creates CloudStatus Alexa Skill That Benefits AWS Developers

which was then picked up by VentureBeat:

A 14-year-old built an Alexa skill for checking the status of AWS

which was then copied, referenced, tweeted, and retweeted.

Original article and comments: https://alestic.com/2016/09/alexa-skill-aws-cloudstatus/

Launchpad News: Beta test: new package picker

Planet Ubuntu - Mon, 09/19/2016 - 17:37

If you are a member of Launchpad’s beta testers team, you’ll now have a slightly different interface for selecting source packages in the Launchpad web interface, and we’d like to know if it goes wrong for you.

One of our longer-standing bugs has been #42298 (“package picker lists unpublished (invalid) packages”).  When selecting a package – for example, when filing a bug against Ubuntu, or if you select “Also affects distribution/package” on a bug – and using the “Choose…” link to pop up a picker widget, the resulting package picker has historically offered all possible source package names (or sometimes all possible source and binary package names) that Launchpad knows about, without much regard for whether they make sense in context.  For example, packages that were removed in Ubuntu 5.10, or packages that only exist in Debian, would be offered in search results, and to make matters worse, search results were often ordered alphabetically by name rather than by relevance.  There was some work on this problem back in 2011 or so, but it suffered from performance problems and was never widely enabled.

We’ve now resurrected that work from 2011, fixed the performance problems, and converted all relevant views to use it.  You should now see something like this:

Exact matches on either source or binary package names always come first, and we try to order other matches in a reasonable way as well.  The disclosure triangles alongside each package allow you to check for more details before you make a selection.

Please report any bugs you find with this new feature.  If all goes well, we’ll enable this for all users soon.

Update: as of 2016-09-22, this feature is enabled for all Launchpad users.

Valorie Zimmerman: Kubuntu needs some K/Ubuntu Developer help this week

Planet Ubuntu - Mon, 09/19/2016 - 15:53
Our packaging team has been working very hard; however, we have a lack of active Kubuntu Developers involved right now. So we're asking for developers with a bit of extra time and some experience with KDE packages to look at our Frameworks, Plasma and Applications packaging in our staging PPAs and to sign off and upload them to the Ubuntu Archive.

If you have the time and permissions, please stop by #kubuntu-devel in IRC or Telegram and give us a shove across the beta timeline!

Daniel Holbach: Get your software snapped tomorrow!

Planet Ubuntu - Mon, 09/19/2016 - 06:38

For a few weeks now, we have been running the Snappy Playpen as a pet/research project. Many great things have happened since then:

  • With the Playpen we now have a repository of great best-practice examples.
  • We brought together a lot of people who are excited about snaps, who worked together, collaborated, wrote plugins together and improved snapcraft and friends.
  • A number of cloud parts were put together by the team as well.
  • We landed quite a few high-quality snaps in the store.
  • We had lots of fun.

Opening the Sandpit

With our next Snappy Playpen event tomorrow, 20th September 2016, we want to extend the scheme. We are opening the Sandpit part of the Playpen!

One thing we realised in the last weeks is that we treated the Playpen more and more like a place where well-working, tested and well-understood snaps go to inspire people who are new to snapping software. What we saw as well was that lots of fellow snappers kept their half-done snaps on their hard disks instead of sharing them and giving others the chance to finish them or get involved in fixing them. Time to change that; time for the Sandpit!

In the Sandpit things can get messy, but you get to explore and play around. It’s fun. Naturally things need to be light-weight, which is why we organise the Sandpit on just a simple wiki page. The way it works is that if you have a half-finished snap, you simply push it to a repo, add your name and the link to the wiki, so others get a chance to take a look and work together with you on it.

Tomorrow, 20th September 2016, we are going to get together again and help each other snapping, clean up old bits, fix things, explain, hang out and have a good time. If you want to join, you’re welcome. We’re on Gitter and on IRC.

  • WHEN: 2016-09-20
  • WHAT: Snappy Playpen event – opening the Sandpit
  • WHERE: Gitter and on IRC

Added bonus

As an added bonus, we are going to invite Michael Vogt, one of the core developers of snapd, to the Ubuntu Community Q&A tomorrow. Join us at 15:00 UTC tomorrow on http://ubuntuonair.com and ask all the questions you always had!

See you tomorrow!

Paul Tagliamonte: DNSync

Planet Ubuntu - Sun, 09/18/2016 - 14:00

While setting up my new network at my house, I figured I’d do things right and set up an IPSec VPN (and a few other fancy bits). One thing that became annoying when I wasn’t on my LAN was I’d have to fiddle with the DNS Resolver to resolve names of machines on the LAN.

Since I hate fiddling with options when I need things to just work, the easiest way out was to make the DNS names actually resolve on the public internet.

A day or two later, with some Golang glue and AWS Route 53, I had code that would sit on my dnsmasq.leases file, watch inotify for IN_MODIFY signals, and sync the records to Route 53.
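
DNSync itself is Go, but a rough Python sketch of the same idea looks like the following. The zone id and domain are invented placeholders, and the watchdog library (which uses inotify on Linux) stands in for the raw IN_MODIFY handling:

import time

import boto3
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

LEASES = '/var/lib/misc/dnsmasq.leases'
ZONE_ID = 'Z123EXAMPLE'       # hypothetical Route 53 hosted zone id
DOMAIN = 'lan.example.com.'   # hypothetical zone for LAN hostnames

route53 = boto3.client('route53')

def sync_leases():
    # dnsmasq.leases fields: expiry, MAC, IP address, hostname, client id.
    with open(LEASES) as leases:
        for line in leases:
            fields = line.split()
            ip, hostname = fields[2], fields[3]
            if hostname == '*':
                continue  # lease without a hostname
            route53.change_resource_record_sets(
                HostedZoneId=ZONE_ID,
                ChangeBatch={'Changes': [{
                    'Action': 'UPSERT',
                    'ResourceRecordSet': {
                        'Name': hostname + '.' + DOMAIN,
                        'Type': 'A',
                        'TTL': 60,
                        'ResourceRecords': [{'Value': ip}],
                    },
                }]})

class LeasesChanged(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path == LEASES:
            sync_leases()

observer = Observer()
observer.schedule(LeasesChanged(), path='/var/lib/misc')
observer.start()
try:
    while True:
        time.sleep(60)
finally:
    observer.stop()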

I pushed it up to my GitHub as DNSync.

PRs welcome!

Aurélien Gâteau: Starting an interactive rebase from Tig

Planet Ubuntu - Fri, 09/16/2016 - 15:17

One of the tools I use a lot to work with git repositories is Tig. This handy ncurses tool lets you browse your history, cherry-pick commits, do partial commits and a few other things. But one thing I wanted to do was to be able to start an interactive rebase from within Tig. This week I decided to dig into the documentation a bit to see if it was possible.

Reading the manual, I found out Tig is extensible: one can bind shortcut keys to trigger commands. The bound commands can make use of several state variables, such as the current commit or the current branch. This makes it possible to use Tig as a commit selector for custom commands. Armed with this knowledge, I added these lines to $HOME/.tigrc:

bind main R !git rebase -i %(commit)^
bind diff R !git rebase -i %(commit)^

That worked! If you add these two lines to your .tigrc file, you can start Tig, scroll to the commit you want and press Shift+R to start the rebase from it. No more copying the commit id and going back to the command line!

Note: Shift+R is already bound to the refresh action in Tig, but this action can also be triggered with F5, so it's not really a problem.

José Antonio Rey: Juju Client Now Works Properly On All Linuxes!

Planet Ubuntu - Fri, 09/16/2016 - 09:57

Hello everyone! This is a guest post by Menno Smits, who works on the Juju team. He originally announced this great news in the Juju mailing list (which you should all subscribe to!), and I thought it was definitely worth to announce in the Planet. Stay tuned to the mailing list for more great announcements, which I feel are going to come now that we are moving to RCs of Juju 2.0.

Juju 2.0 is just around the corner and there’s so much great stuff in the release. It really is streets ahead of the 1.x series.

One improvement that recently landed is that the Juju client now works on any Linux distribution. Up until now, the client wasn’t usable on variants of Linux for which Juju lacked explicit support (only Ubuntu and CentOS were supported). This has now been fixed: the client will work on any Linux distribution. Testing has been done with Fedora, Debian and Arch, but any Linux distribution should now be OK to use.

It’s worth noting that when using local Juju binaries (i.e. a `juju` and `jujud` which you’ve built yourself or that someone has sent you), checks in the `juju bootstrap` command have been relaxed. This way, a Juju client running on any Linux flavour can bootstrap a controller running any of the supported Ubuntu series. Previously, it wasn’t possible for a client running a non-Ubuntu distribution to bootstrap a Juju controller using local Juju binaries.

All this is great news for people who are wanting to get started with Juju but are not running Ubuntu on their workstations. These changes will be available in the Juju 2.0 rc1 release.

Ubuntu Podcast from the UK LoCo: S09E29 – Edward Snowden presents Disneyland - Ubuntu Podcast

Planet Ubuntu - Thu, 09/15/2016 - 13:15

It’s Episode Twenty-Nine of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress (just about) are here again.

Most of us are here, but one of us is busy and another was cut off part way through!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Costales: Learn to create an application for Ubuntu Phone

Planet Ubuntu - Thu, 09/15/2016 - 12:27
I'm linking to an excellent tutorial (licensed CC-BY-NC-SA 4) by Miguel Menéndez, which explains in Spanish, step by step, how to create an application for Ubuntu Phone in a pleasant and entertaining way, including examples, exercises, support on IRC and a Telegram group...

It's important to note that the course is still under development, with one installment per week, so it will be completed over the coming weeks.

Curso UT: to access it, simply download the application on your Ubuntu Phone, or visit this URL:

