Feed aggregator

Nicholas Skaggs: Google Code In 2015

Planet Ubuntu - Wed, 11/18/2015 - 15:07
As you may have heard, Ubuntu has been selected as a mentoring organization for Google Code In (GCI). GCI is an opportunity for high school students to learn about and participate in open source communities. As a mentoring organization, we'll create tasks and review the students' work. Google recruits the students and provides rewards for those who do the best work. The 2015 contest runs from December 7, 2015 to January 25, 2016.

Are you excited?
On December 7th, we'll be gaining a whole slew of potential contributors. Interested students will select from the tasks we as a community have put forth and start working on them. That means we need your help to both create those tasks and mentor incoming students.

I know, I know, it sounds like work. And it is a bit of work, but not as much as you think. Mentors need to provide a task description and be available for questions if needed. Once the task is complete, check the work and mark the task done. You can be a mentor for as little as a single task. The full details and FAQ can be found on the wiki. Volunteering to be a mentor means you get to create tasks to be worked, and you agree to review them as well. You aren't expected to teach someone how to code, write documentation, translate, do QA, etc., in a few weeks. Breathe easy.

You can help!
I know there are plenty of potential tasks lying in wait for someone to come along and help out. This is a great opportunity for us as a community to both gain potential contributors and get work done. I trust you will consider being a part of the process.

I'm still not sure
Please, do have a look at the FAQ, as well as the mentor guide. If that's not enough to convince you of the merits of the program, I'd invite you to read one student's feedback about his experience participating last year. Being a mentor is a great way to give back to Ubuntu, get involved and potentially gain new members.

I'm in, what should I do?
Contact me, popey, or José, and we can add you as a mentor for the organization. This will allow you to add tasks and participate in the process. Here's to a great GCI!

Marcin Juszkiewicz: Unbricking APM Mustang

Planet Ubuntu - Wed, 11/18/2015 - 11:03

Firmware updates usually end well. The previous (1.15.19) firmware failed to boot on some of the Mustangs at Red Hat, but worked fine on the one under my desk. Yesterday I got 1.15.22 plus a slimpro update and managed to get my machine into a non-bootable state (the same firmware works fine on other machines).

So how to get APM Mustang back into working state?

  • Get an SD card and connect it to a Linux PC with a card reader.
  • Download the Mustang software from MyAPM (1.15.19 was the latest available there).
  • Unpack “mustang_sq_1.15.19.tar.xz” and then “mustang_binaries_1.15.19.tar.xz” tarballs.
  • Write the boot loader firmware to the SD card: "dd if=tianocore_media.img of=/dev/SDCARD".
  • Take a FAT-formatted USB drive and copy these files from the "mustang_binaries_1.15.19.tar.xz" archive onto it (all into the root directory):
    • potenza/apm_upgrade_tianocore.cmd
    • potenza/tianocore_media.img
    • potenza/UpgradeFirmware.efi
  • Power off your Mustang
  • Configure the Mustang to boot from the SD card by changing these jumpers:
    • Find HDR9 (close to HDR8, which is next to the PCIe port)
    • Locate pins 11-12 and 17-18.
    • Connect 11-12 and 17-18 with jumpers
  • Insert the SD card into the Mustang's SD port
  • Connect a serial cable to the Mustang and your PC.
  • Run minicom/picocom/screen/your-preferred-serial-terminal and connect to the Mustang serial port
  • Power up the Mustang and it should boot the UEFI firmware from the SD card:
X-Gene Mustang Board Boot firmware (version 1.1.0 built at 12:25:21 on Jun 22 2015)

PROGRESS CODE: V3020003 I0
PROGRESS CODE: V3020002 I0
PROGRESS CODE: V3020003 I0
PROGRESS CODE: V3020002 I0
PROGRESS CODE: V3020003 I0
PROGRESS CODE: V3020002 I0
PROGRESS CODE: V3020003 I0
PROGRESS CODE: V3021001 I0

TianoCore 1.1.0 UEFI 2.4.0 Jun 22 2015 12:24:25
CPU: APM ARM 64-bit Potenza Rev A3 1400MHz PCP 1400MHz
     32 KB ICACHE, 32 KB DCACHE
SOC 2000MHz IOBAXI 400MHz AXI 250MHz AHB 200MHz GFC 125MHz
Board: X-Gene Mustang Board
The default boot selection will start in 2 second
  • Press any key to get into UEFI menu.
  • Select “Shell” option and you will be greeted with a list of recognized block devices and filesystems. Check which is USB (“FS6” in my case).
Shell> fs6:
FS6:\> ls
Directory of: FS6:\
08/04/2015  00:28              39,328  UpgradeFirmware.efi
08/27/2015  19:20                  56  apm_upgrade_tianocore.cmd
08/27/2015  19:20           2,098,176  tianocore_media.img
  • Flash the firmware using the "UpgradeFirmware.efi apm_upgrade_tianocore.cmd" command.
  • Power off
  • Change jumpers back to normal (11-12 and 17-18 to be open).
  • Eject SD card from Mustang
  • Power on

And your Mustang should be working again. Of course, you can also try writing other firmware versions, or grab the files from the internal hard drive.
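For reference, the media preparation above could be scripted roughly as follows. This is only a sketch: it assumes the two tarballs were already downloaded from MyAPM into the current directory, that the FAT USB drive is mounted at /mnt/usb, and that the SD card device name is a placeholder you must change before running:

#!/bin/bash -e
SDCARD=/dev/sdX        # placeholder - set to your SD card device
USB=/mnt/usb           # placeholder - mount point of the FAT USB drive

# Unpack the software and binaries tarballs
tar -xJf mustang_sq_1.15.19.tar.xz
tar -xJf mustang_binaries_1.15.19.tar.xz

# Boot loader firmware goes onto the SD card
sudo dd if=tianocore_media.img of=${SDCARD}

# Upgrade files go into the root directory of the USB drive
cp potenza/apm_upgrade_tianocore.cmd \
   potenza/tianocore_media.img \
   potenza/UpgradeFirmware.efi \
   ${USB}/
sync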

Related posts:

  1. And now something more enterprise — CentOS on AArch64
  2. Installation of Fedora 21 on clean APM Mustang
  3. Flashing U-Boot on Efika MX Smartbook

Jonathan Riddell: SQL CRUD: what’s good and what’s crud?

Planet Ubuntu - Wed, 11/18/2015 - 10:59

I maintain a membership database for my canoe club and I implemented the database years ago using a PHP library called Phormation which let me make an index page with simple code like:

query = "SELECT * FROM member WHERE year=2015"
show_table(column1, column2)

and an entry editing page with something like this:

query = "SELECT * FROM member WHERE id=$id"
widgets.append([column1, "name", Textfield])
widgets.append([column2, "joined", Date])

and voila I had a basic UI to edit the database.

Now I want to move to a new server, but it seems PHP made a backwards-incompatible change between 5.0 and 5.5: Phormation no longer runs, and it's no longer maintained.

So lazyweb, what's the best way to make a basic web database editor, where you can add some basic widgets for different field types, and where there are two tables with a 1:many relationship which both need to be edited?



Daniel Pocock: Improving DruCall and JSCommunicator user interface

Planet Ubuntu - Wed, 11/18/2015 - 10:45

DruCall is one of the easiest ways to get up and running with WebRTC voice and video calling on your own web site or blog. It is based on 100% open source and 100% open standards - no binary browser plugins and no lock-in to a specific service provider or vendor.

On Debian or Ubuntu, just running a command such as

# apt-get install -t jessie-backports drupal7-mod-drucall

will install Drupal, Apache, MySQL, JSCommunicator, JsSIP and all the other JavaScript library packages and module dependencies for DruCall itself.

The user interface

Most of my experience is in server-side development, including things like the powerful SIP over WebSocket implementation in the reSIProcate SIP proxy repro.

In creating DruCall, I have simply concentrated on those areas related to configuring and bringing up the WebSocket connection and creating the authentication tokens for the call.

Those things provide a firm foundation for the module, but it would be nice to improve the way it is presented and optimize the integration with other Drupal features. This is where the projects (both DruCall and JSCommunicator) would really benefit from feedback and contributions from people who know Drupal and web design in much more detail.

Benefits for collaboration

If anybody wants to collaborate on either or both of these projects, I'd be happy to offer access to a pre-configured SIP WebSocket server in my lab for more convenient testing. The DruCall source code is a Drupal.org hosted project and the JSCommunicator source code is on GitHub.

When you get to the stage where you want to run your own SIP WebSocket server as well then free community support can also be provided through the repro-user mailing list. The free, online RTC Quick Start Guide gives a very comprehensive overview of everything you need to do to run your own WebRTC SIP infrastructure.

Charles Profitt: Reminder: Vote in the Ubuntu Community Council Election

Planet Ubuntu - Wed, 11/18/2015 - 08:43

Elections are an important opportunity for people to select who will represent them. That is the case in national elections as well as those for the Ubuntu Community. Currently there is an ongoing election for the Ubuntu Community Council, and if you are an Ubuntu Member you have an opportunity to select the people who will serve on the Community Council. In the last election, 299 votes were cast out of 732 eligible voters, a turnout of 41%. I am posting this as a reminder to all Ubuntu Members to cast their vote. It would be great to have a better turnout this election.

The current candidates are:

Our current turnout is 32% and there are eight days left to vote. Remember to vote! Let's ROCK the election!

Ronnie Tucker: HP Linux Imaging and Printing Driver Updated with Support for Ubuntu 15.10

Planet Ubuntu - Wed, 11/18/2015 - 02:55

HP has had the great pleasure of announcing a new release of its open source and freely distributed HPLIP (HP Linux Imaging and Printing) driver for GNU/Linux operating systems.

According to the release notes, HPLIP 3.5.11 adds support for the newly released Ubuntu 15.10 (Wily Werewolf), Fedora 23, and openSUSE Leap 42.1 GNU/Linux operating systems. It also includes support for custom AppArmor profiles and SELinux policy, along with support for automatic discovery of network scanners. A new knowledge base article has been added as well, and it includes information about unblocking ports and enabling mDNS and SLP services through the openSUSE Firewall tool.

Source: http://news.softpedia.com/news/hp-linux-imaging-and-printing-driver-update-with-support-for-ubuntu-15-10-496260.shtml
Submitted by: Arnfried Walbrecht

Ronnie Tucker: Red Hat Enterprise Linux 5 and CentOS 5 Receive an Important Kernel Update

Planet Ubuntu - Wed, 11/18/2015 - 02:52

According to the kernel bug fix advisory, two security flaws have been discovered and patched in the Linux kernel 2.6.18 packages. The first one is related to the incorrect setting of a utrace flag, which caused the kernel to no longer handle the NULL pointer reference in the utrace_unsafe_exec() function, leading to a system crash. The second vulnerability is about a delay in the reset execution of newly changed firmware.

“Updated kernel packages that fix two bugs are now available for Red Hat Enterprise Linux 5. The kernel packages contain the Linux kernel, the core of any Linux operating system. […] Users of kernel are advised to upgrade to these updated packages, which fix these bugs. The system must be rebooted for this update to take effect,” reads the announcement.

Source: http://news.softpedia.com/news/red-hat-enterprise-linux-5-and-centos-5-receive-an-important-kernel-update-496257.shtml
Submitted by: Arnfried Walbrecht

Carla Sella: Ubuntu Phone update: OTA 8

Planet Ubuntu - Tue, 11/17/2015 - 13:08

The next OTA, OTA 8, is due to land in the next day or two:

This is what we will find in it:

  • New weather application
  • Improved Contacts sync (implements a new synchronisation engine)
  • The sound indicator now provides audio playback controls - currently play and pause only, with skip forward/skip backward to follow
  • New Twitter scope includes the ability to tweet, comment, follow and unfollow
  • New Book aggregator scope, with lots of regional content
  • The OTA version is now reported in Settings > About this phone
  • Location service now additionally provides location and heading information
  • Web browser now includes:
    • Media access permissions for sites
    • Top level bookmarks view
    • Thumbnails and grid view for Top Sites page
  • Ubuntu store: QtPurchasing-based in-app purchases (currently in pilot mode)
  • Various bug fix details can be found here.

Svetlana Belkin: Where’s Me Support?!

Planet Ubuntu - Tue, 11/17/2015 - 07:13

Over the past two-plus years, I have started many projects within the Open * communities that I'm a part of. Most of the projects I started were meant to be worked on by two or more people (including me, of course), but I never had luck getting anyone to work together with me. Okay, once it succeeded, and two or three times it came close but still failed. The one time it succeeded was because I was on the Membership Board, where the members had to be committed.

Because many projects meant for collaboration failed, it means either that the communities don't have enough people willing to work with me (or on anything!), or enough time to commit, or that I have networking issues. The latter is within my control; the former is one of the problems that most of the Open * communities face.

Lacking support, and the feeling of not getting things done over these two-plus years, is making me lose motivation to volunteer within these communities. In fact, some of this has already affected four teams within the Ubuntu Community: Ubuntu Women, Ubuntu Ohio, the Ubuntu Leadership Team, and Ubuntu Scientists, where no news or activity is shown. As for the others, I'm close to removing myself from those communities, something that I don't want to do, and that is why I wrote this post: to answer my question of "Where's my support?!" (it's "me" in the title, but that's for the lightheartedness this post needs). I know of a few others who may be feeling this way too.

A thought I had as I wrote this post: what if I worked on a site that could serve as a volunteer board for projects within the Open * communities? Something like "Find a Task", started by Mozilla (I think) and brought over to the Ubuntu Community by Ian W, but maybe as a Discourse forum or a Stack Exchange site. The only problem I would face is, again, support: people who want to post and to read. I had issues getting Open Science groups/bloggers/people to add their blogs' feeds to Planet Open Science, hosted by OKFN's Open Science group. But that might be different if almost all types of Open * movements were represented. Who knows.

Readers, please don't worry: although this post was written during the CC election in the Ubuntu Community, it will not affect my will to run for a seat. In fact, I think being on the CC could help me learn to deal with this issue, if others are facing it but are afraid to talk about it in public.

I really, really don't want to leave any of the Open * communities because of a lack of support, and I hope some of you can understand and help me. I would like your feedback/comments/advice on this one.

Thank you.

P.S. If this sounded like a rant, sorry, I had to get it out.

Ronnie Tucker: Gorgeous Chapeau 23 Linux Distro Is Now in Beta, Based on Fedora 23 and GNOME 3.18

Planet Ubuntu - Tue, 11/17/2015 - 03:50

Vince Pooley of Chapeau Linux has had the great pleasure of announcing the release and immediate availability for download and testing of the Beta build of the future Chapeau 23 Linux operating system.

Based on the recently released Fedora 23 Linux, as well as on packages from the RPM Fusion pre-release testing software repositories, Chapeau 23 Beta is derived from the Workstation edition, which means that it not only inherits all of its awesome features and functionality but also includes packages from its own repo.

“Chapeau 23 is on its way, in the meantime enjoy Chapeau 23 Beta,” says Vince Pooley, lead developer of Chapeau Linux. “If you love having the latest software and don’t mind the odd issue that may crop up go ahead and check it out. If you find an undocumented issue it would be appreciated if you report it either in the support forum.”

Submitted by: Arnfried Walbrecht

Stephen Michael Kellat: Boy Is It Ever Monday...

Planet Ubuntu - Mon, 11/16/2015 - 20:57
Backport Work

Currently there is a new version of Dianara that Mònica Ramírez Arceda got uploaded to Debian. It made the transition from Unstable to Testing on November 12, 2015. In keeping with normal practice in working with the Debian Maintainer as well as the upstream author I've done the usual testing on my end and filed the backporting bug LP #1516123 for the backports team to poke and prod my testing to see if we can update things. This proposed backporting the version currently in the release pocket for Xenial Xerus to 15.10, 15.04, and 14.04. No, there is no retconning of Dianara into 12.04. This was uploaded by Micah Gersten of the backports team while I was writing this blog post and should eventually show up once the archive copies it to the necessary places.

Why Bother?

In working with the upstream author, the goal is to keep the new features available in supported versions of the flavours of Ubuntu. Dianara serves as one of many clients to the pump.io social federation. Identica never went away but was part of a fundamental architectural shift that brings us the pump.io federation. Many servers make up this social federation. If you are so inclined, there is even a client implemented in emacs.

With all the new features that have been developed in the new architecture provided by pump.io, it makes sense to backport the new versions of Dianara to supported versions of Ubuntu. Backports are enabled by default normally, so the goal is to not have people fooling around with PPAs unless truly necessary. Granted, a PPA was used to build the testing versions of Dianara for this backport request, but there are many other things in that specific PPA that may potentially make your computer go crunch, bang, kaboom unless you pin packages. I hope I put up enough caution tape around it.
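For anyone unfamiliar with pinning, here is a minimal sketch of an apt preferences entry (placed in a file under /etc/apt/preferences.d/) that demotes everything from a PPA; the owner and archive names are hypothetical placeholders:

# /etc/apt/preferences.d/example-ppa-pin (hypothetical PPA name)
Package: *
Pin: release o=LP-PPA-someowner-someppa
Pin-Priority: 100

With a priority of 100, apt prefers the versions in the official Ubuntu archives whenever they exist, so the PPA's other packages won't silently replace archive packages.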

It should be noted that this is a case of Mint not exactly duplicating Ubuntu as they do not pick up any of the backports that have been done of Dianara here.

Isn't This Just Identica?

The developer of Dianara has an entire post publicly visible on part of the pump.io federation explaining what has changed in making the world so much bigger. Just remember that though Identica itself is closed to new registrations and is home base to old-timers who made an architectural jump, there are more servers out there in the federation to use. Hosting your own is possible, too.

Why Should I Care?

There actually is a getting started guide. Unlike the radical changes being introduced by Twitter lately, this is an attempt to see what can be done to build an architecture for all. There is some standards work in the background which may result in interoperation with MediaGoblin too.

If you haven't checked out the pump.io federation, check out the guide above and give it a try. You'll find plenty of discussion and the guide gives recommendations on people to follow. It certainly isn't Twitter. For many it is a different path to walk...and that's okay.

When Do I Get Back On Cadence?

When I find a job that is not as madcap as the current one. I really don't work for nice people. You could send a dollar via PayPal but unless I get some fundamentals shifted around I'll still be working for the not so nice people for a while longer.

Boy Is It Ever Monday... by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Adam Stokes: Extending Juju, Plugin basics in Go

Planet Ubuntu - Mon, 11/16/2015 - 20:51

This is a quick introduction on extending Juju with plugins written in Go. What we'll cover:

  • Setting up your Go environment
  • Getting the Juju source code
  • Writing a basic plugin named juju-lyaplugin (short for juju-learnyouaplugin)
  • End result: a plugin that closely resembles what juju run does

Requirements:

  • Ubuntu 14.04 or above
  • Go 1.2.1 or above (article written using Go 1.2.1)
  • A basic understanding of the Go language, package imports, etc.
Setting up your Go environment

This is all a matter of preference but for the sake of this article we'll do it
my way :)

Install Go

On Trusty and above:

sudo apt-get install golang

Go dependency management

Two projects I use are gvp and gpm:

Install:

$ cd /tmp && git clone https://github.com/pote/gvp.git && cd gvp && sudo ./configure && sudo make install
$ cd /tmp && git clone https://github.com/pote/gpm.git && cd gpm && sudo ./configure && sudo make install

Feel free to check out their project pages for additional uses.

Create your project directory:

$ mkdir ~/Projects/juju-learnyouaplugin
$ cd ~/Projects/juju-learnyouaplugin

Setup the project-specific Go paths:

$ source gvp in

This will setup your $GOPATH and $GOBIN variables for use when resolving imports, compiling, etc.

$ echo $GOPATH
/home/adam/Projects/juju-learnyouaplugin/.godeps
$ echo $GOBIN
/home/adam/Projects/juju-learnyouaplugin/.godeps/bin

From this point on all package dependencies will be stored in the project's .godeps directory.

Get the Juju code

From your project's directory run:

$ go get -d -v github.com/juju/juju/...

Writing the plugin

Now that all the preparatory tasks are complete we can begin the fun
stuff. Using your favorite editor open up a new file main.go. Within this file we need to define a few
package imports that are necessary for the plugin.

import (
    "fmt"
    "os"
    "time"

    "github.com/juju/cmd"
    "github.com/juju/juju/apiserver/params"
    "github.com/juju/juju/cmd/envcmd"
    "github.com/juju/juju/juju"
    "github.com/juju/loggo"
    "github.com/juju/names"
    "launchpad.net/gnuflag"

    _ "github.com/juju/juju/provider/all"
)

Let's go through the imports and list why they are required.

  • github.com/juju/cmd - This import gives us access to the run context of a command via DefaultContext.
  • github.com/juju/juju/cmd/envcmd - Provides EnvCommandBase for creating new commands and giving us access to the API client for making queries against the Juju state server.
  • github.com/juju/juju/apiserver/params - Provides access to two types, RunParams and RunResult, for executing the API call to Run and returning the executed results.
  • github.com/juju/juju/juju - Provides access to InitJujuHome for initializing the necessary bits like the charm cache and environment. Required before running any Juju CLI command.
  • github.com/juju/loggo - Provides access to Juju's logging API.
  • github.com/juju/names - This package provides some convenience functions; in particular we'll use IsValidMachine.
  • launchpad.net/gnuflag - Provides the interface for our command definition, like setting arguments, usage information, and execution.
  • github.com/juju/juju/provider/all - Registers all known providers (amazon, maas, local, etc.).

With that said let's spec out the plugin type. This will hold our embedded command base and cli arguments.

type LYAPluginCommand struct {
    envcmd.EnvCommandBase
    out         cmd.Output
    timeout     time.Duration
    machines    []string
    services    []string
    units       []string
    commands    string
    envName     string
    description bool
}

Once defined we can spec out our cli command and its functions.

The info function

The first part of the command is the Info() function, which returns information about the particular subcommand; in our case that is lyaplugin.

var doc = `Run a command on target machine(s)

This example plugin mimics what "juju run" does.

eg.
juju lyaplugin -m 1 -e local "touch /tmp/testfile"
`

// Package-level logger used by Run() below; the original post uses a
// logger without showing its declaration, so the name here is an
// assumption - any loggo logger name will work.
var logger = loggo.GetLogger("lyaplugin")

func (c *LYAPluginCommand) Info() *cmd.Info {
    return &cmd.Info{
        Name:    "lyaplugin",
        Args:    "<commands>",
        Purpose: "Run a command on remote target",
        Doc:     doc,
    }
}

SetFlags function

Next we'll define what arguments are available to this new subcommand (lyaplugin).

func (c *LYAPluginCommand) SetFlags(f *gnuflag.FlagSet) {
    f.BoolVar(&c.description, "description", false, "Plugin Description")
    f.Var(cmd.NewStringsValue(nil, &c.machines), "machine", "one or more machine ids")
    f.Var(cmd.NewStringsValue(nil, &c.machines), "m", "")
    f.StringVar(&c.envName, "e", "local", "Juju environment")
    f.StringVar(&c.envName, "environment", "local", "")
}

Here we provide a --description argument to satisfy a Juju plugin requirement, a target argument (-m/--machine MACHINEID), and the ability to define which Juju environment to execute in (-e/--environment, defaulting to the local environment).

Init function

Here we'll parse the cli arguments, do some basic sanity checking to make sure the passed arguments validate to our liking.

func (c *LYAPluginCommand) Init(args []string) error {
    if c.description {
        fmt.Println(doc)
        os.Exit(0)
    }
    if len(args) == 0 {
        return fmt.Errorf("no commands specified")
    }
    if c.envName == "" {
        return fmt.Errorf("Juju environment must be specified.")
    }
    c.commands, args = args[0], args[1:]
    if len(c.machines) == 0 {
        return fmt.Errorf("You must specify a target with --machine, -m")
    }
    for _, machineId := range c.machines {
        if !names.IsValidMachine(machineId) {
            return fmt.Errorf("(%s) not a valid machine id.", machineId)
        }
    }
    return cmd.CheckEmpty(args)
}

Notice the names.IsValidMachine(machineId) call, using the names package imported above; this is the only place where we make use of that particular package.

Run function

Now to the heart of the command, where execution based on the CLI arguments takes place. I'll describe inline what is happening:

func (c *LYAPluginCommand) Run(ctx *cmd.Context) error {
    c.SetEnvName(c.envName)

Set the environment name pulled from our arguments list so we know which environment to run our command against.

    client, err := c.NewAPIClient()
    if err != nil {
        return fmt.Errorf("Failed to load api client: %s", err)
    }
    defer client.Close()

Grab the api client for the current environment.

    var runResults []params.RunResult
    logger.Infof("Running cmd: %s on machine: %s", c.commands, c.machines[0])
    params := params.RunParams{
        Commands: c.commands,
        Timeout:  c.timeout,
        Machines: c.machines,
        Services: c.services,
        Units:    c.units,
    }

Prepare the RunParams for passing to the api's Run function.

    runResults, err = client.Run(params)
    if err != nil {
        // The original dropped this error value; return it so failures surface.
        return fmt.Errorf("An error occurred: %s", err)
    }
    if len(runResults) == 1 {
        result := runResults[0]
        logger.Infof("Result: out(%s), err(%s), code(%d)",
            result.Stdout, result.Stderr, result.Code)
    }
    return nil
}

Execute the api Run function and return the results from the executed command on the machine.


The last bit of code is our main function which ties everything together.

func main() {
    loggo.ConfigureLoggers("<root>=INFO")
    err := juju.InitJujuHome()

Initialize the Juju environment based on the default paths or if $JUJU_HOME is defined.

    if err != nil {
        panic(err)
    }
    ctx, err := cmd.DefaultContext()
    if err != nil {
        panic(err)
    }

Set the proper command context

    c := &LYAPluginCommand{}
    cmd.Main(c, ctx, os.Args[1:])
}

Pass our plugin type/command into the supplied command Context and off you go.


With the code written, build and run the command.

$ go build -o juju-lyaplugin -v main.go

Place the executable somewhere in your $PATH

$ mv juju-lyaplugin ~/bin

See if Juju picks it up

$ juju help lyaplugin
usage: lyaplugin [options] <commands>
purpose: Run a command on remote target

options:
--description (= false)
    Plugin Description
-e, --environment (= "local")
    Juju environment
-m, --machine (= )

Run a command on target machine(s)

This example plugin mimics what "juju run" does.

eg.
juju lyaplugin -m 1 -e local "touch /tmp/testfile"

See it in your list of plugins, requires juju-plugins to be installed:

$ juju help plugins
Juju Plugins

Plugins are implemented as stand-alone executable files somewhere in the user's PATH.
The executable command must be of the format juju-<plugin name>.

...
git-charm   Clone and keep up-to-date a git repository containing a Juju charm for easy source managing.
kill        Destroy a juju object and reap the environment.
lyaplugin   Run a command on target machine(s)
...

This should hopefully give you a better idea where to start when you decide to dive into writing a juju plugin :)

Full source code for juju-learnyouaplugin

The Fridge: Ubuntu Weekly Newsletter Issue 442

Planet Ubuntu - Mon, 11/16/2015 - 12:29

Welcome to the Ubuntu Weekly Newsletter. This is issue #442 for the week November 9 – 15, 2015, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley (tsimonq2)
  • Daniel Beck
  • Paul White
  • Aaron Honeycutt
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.


Daniel Pocock: Quick start using Blender for video editing

Planet Ubuntu - Mon, 11/16/2015 - 11:53

Updated 2015-11-16 for WebM

Although it is mostly known for animation, Blender includes a non-linear video editing system that is available in all the current stable versions of Debian, Ubuntu and Fedora.

Here are some screenshots showing how to start editing a video of a talk from a conference.

In this case, there are two input files:

  • A video file from a DSLR camera, including an audio stream from a microphone on the camera
  • A separate audio file with sound captured by a lapel microphone attached to the speaker's smartphone. This is a much better quality sound and we would like this to replace the sound included in the video file.
Open Blender and choose the video editing mode

Launch Blender and choose the video sequence editor from the pull down menu at the top of the window:

Now you should see all the video sequence editor controls:

Setup the properties for your project

Click the context menu under the strip editor panel and change the panel to a Properties panel:

The video file we are playing with is 720p, so it seems reasonable to use 720p for the output too. Change that here:

The input file is 25fps so we need to use exactly the same frame rate for the output, otherwise you will either observe the video going at the wrong speed or there will be a conversion that is CPU intensive and degrades the quality. Also check that the resolution_percentage setting under the picture dimensions is 100%:

Now specify output to PNG files. Later we will combine them into a WebM file with a script. Specify the directory where the files will be placed and use the # placeholder to specify the number of digits to use to embed the frame number in the filename:
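As an illustration (the path is a hypothetical example, not taken from the original screenshots): an output path such as //frames/########.png tells Blender to write zero-padded frames like 00000001.png, which matches the %08d.png pattern expected by the conversion script later in this article.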

Now your basic rendering properties are set. When you want to generate the output file, come back to this panel and use the Animation button at the top.

Editing the video

Use the context menu to change the properties panel back to the strip view panel:

Add the video file:

and then right click the video strip (the lower strip) to highlight it and then add a transform strip:

Audio waveform

Right click the audio strip to highlight it and then go to the properties on the right hand side and click to show the waveform:

Rendering length

By default, Blender assumes you want to render 250 frames of output. Looking in the properties to the right of the audio or video strip you can see the actual number of frames. Put that value in the box at the bottom of the window where it says 250:

Enable AV-sync

Also at the bottom of the window is a control to enable AV-sync. If your audio and video are not in sync when you preview, you need to set this AV-sync option and also make sure you set the frame rate correctly in the properties:

Add the other sound strip

Now add the other sound file that was recorded using the lapel microphone:

Enable the waveform display for that sound strip too, this will allow you to align the sound strips precisely:

You will need to listen to the strips to make an estimate of the time difference. Use this estimate to set the "start frame" in the properties for your audio strip, it will be a negative value if the audio strip starts before the video. You can then zoom the strip panel to show about 3 to 5 seconds of sound and try to align the peaks. An easy way to do this is to look for applause at the end of the audio strips, the applause generates a large peak that is easily visible.

Once you have synced the audio, you can play the track and you should not be able to hear any echo. You can then silence the audio track from the camera by right clicking it, look in the properties to the right and change volume to 0.

Make any transforms you require

For example, to zoom in on the speaker, right click the transform strip (3rd from the bottom) and then in the panel on the right, click to enable "Uniform Scale" and then set the scale factor as required:

Render the video output to PNG

Click the context menu under the Curves panel and choose Properties again.

Click the Animation button to generate a sequence of PNG files for each frame.

Render the audio output

On the Properties panel, click the Audio button near the top. Choose a filename for the generated audio file.

Look on the bottom left-hand side of the window for the audio file settings, change it to the ogg container and Vorbis codec:

Ensure the filename has a .ogg extension

Now look at the top right-hand corner of the window for the Mixdown button. Click it and wait for Blender to generate the audio file.

Combine the PNG files and audio file into a WebM video file

You will need to have a few command line tools installed for manipulating the files from scripts. Install them using the package manager, for example, on a Debian or Ubuntu system:

# apt-get install mjpegtools vpx-tools mkvtoolnix

Now create a script like the following:

#!/bin/bash -e

# Set this to match the project properties
FRAME_RATE=25

# Set this to the rate you desire:
TARGET_BITRATE=1000

WORK_DIR=${HOME}/video1
PNG_DIR=${WORK_DIR}/frames
YUV_FILE=${WORK_DIR}/video.yuv
WEBM_FILE=${WORK_DIR}/video.webm
AUDIO_FILE=${WORK_DIR}/audio-mixed.ogg

NUM_FRAMES=`find ${PNG_DIR} -type f | wc -l`

png2yuv -I p -f $FRAME_RATE -b 1 -n $NUM_FRAMES \
  -j ${PNG_DIR}/%08d.png > ${YUV_FILE}

vpxenc --good --cpu-used=0 --auto-alt-ref=1 \
  --lag-in-frames=16 --end-usage=vbr --passes=2 \
  --threads=2 --target-bitrate=${TARGET_BITRATE} \
  -o ${WEBM_FILE}-noaudio ${YUV_FILE}

rm ${YUV_FILE}

mkvmerge -o ${WEBM_FILE} -w ${WEBM_FILE}-noaudio ${AUDIO_FILE}

rm ${WEBM_FILE}-noaudio

Next steps

There are plenty of more comprehensive tutorials, including some videos on YouTube, explaining how to do more advanced things like fading in and out or zooming and panning dynamically at different points in the video.

If the lighting is not good (faces too dark, for example), you can right click the video strip, go to the properties panel on the right hand side and click Modifiers, Add Strip Modifier and then select "Color Balance". Use the Lift, Gamma and Gain sliders to adjust the shadows, midtones and highlights respectively.

Eric Hammond: Using AWS CodeCommit With Git Repositories In Multiple AWS Accounts

Planet Ubuntu - Mon, 11/16/2015 - 11:06

set up each local CodeCommit repository clone to use a specific cross-account IAM role with git clone --config and aws codecommit credential-helper

When I started testing AWS CodeCommit, I used the Git ssh protocol with uploaded ssh keys to provide access, because this is the Git access mode I’m most familiar with. However, using ssh keys requires each person to have an IAM user in the same AWS account as the CodeCommit Git repository.

In my personal and work AWS usage, each individual has a single IAM user in a master AWS account, and those users are granted permission to assume cross-account IAM roles to perform operations in other AWS accounts. We cannot use the ssh method to access Git repositories in other AWS accounts, as there are no IAM users in those accounts.

AWS CodeCommit comes to our rescue with an alternative https access method that supports Git Smart HTTP, and the aws-cli offers a credential-helper feature that integrates with the git client to authenticate Git requests to the CodeCommit service.

In my tests, this works perfectly with cross-account IAM roles. After the initial git clone command, there is no difference in how git is used compared to the ssh access method.

Most of the aws codecommit credential-helper examples I’ve seen suggest you set up a git config --global setting before cloning a CodeCommit repository. A couple even show how to restrict the config to AWS CodeCommit repositories only, so as to not interfere with GitHub and other repositories. (See “Resources” below.)

I prefer to have the configuration associated with the specific Git repositories that need it, not in the global setting file. This is possible by passing in a couple --config parameters to the git clone command.

Create/Get CodeCommit Repository

The first step in this demo is to create a CodeCommit repository, or to query the https endpoint of an existing CodeCommit repo you might already have.

Set up parameters:

repository_name=...    # Your repository name
repository_description=$repository_name    # Or more descriptive
region=us-east-1

If you don’t already have a CodeCommit repository, you can create one using a command like:

repository_endpoint=$(aws codecommit create-repository \
  --region "$region" \
  --repository-name "$repository_name" \
  --repository-description "$repository_description" \
  --output text \
  --query 'repositoryMetadata.cloneUrlHttp')
echo repository_endpoint=$repository_endpoint

If you already have a CodeCommit repository set up, you can query the https endpoint using a command like:

repository_endpoint=$(aws codecommit get-repository \
  --region "$region" \
  --repository-name "$repository_name" \
  --output text \
  --query 'repositoryMetadata.cloneUrlHttp')
echo repository_endpoint=$repository_endpoint

Now, let’s clone the repository locally, using our IAM credentials. With this method, there’s no need to upload ssh keys or modify the local ssh config file.

git clone

The git command line client allows us to specify specific config options to use for a clone operation and will add those config settings to the repository for future git commands to use.

Each repository can have a specific aws-cli profile that you want to use when interacting with the remote CodeCommit repository through the local Git clone. The profile can specify a cross-account IAM role to assume, as I mentioned at the beginning of this article. Or, it could be a profile that specifies AWS credentials for an IAM user in a different account. Or, it could simply be "default" for the main profile in your aws-cli configuration file.
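As an illustration, a profile that assumes a cross-account IAM role might look like this in ~/.aws/config (the profile name, account id, and role name are hypothetical placeholders):

# ~/.aws/config (hypothetical profile and role names)
[profile codecommit-demo]
role_arn = arn:aws:iam::123456789012:role/demo-codecommit-access
source_profile = default
region = us-east-1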

Here’s the command to clone a Git repository from CodeCommit, and for authorized access, associate it with a specific aws-cli profile:

profile=$AWS_DEFAULT_PROFILE    # Or your aws-cli profile name

git clone \
  --config 'credential.helper=!aws codecommit --profile '$profile' --region '$region' credential-helper $@' \
  --config 'credential.UseHttpPath=true' \
  $repository_endpoint
cd $repository_name

At this point, you can interact with the local repository, pull, push, and do all the normal Git operations. When git talks to CodeCommit, it will use aws-cli to authenticate each request transparently, using the profile you specified in the clone command above.
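If you are curious where those settings ended up, look at the repository's local .git/config; with the hypothetical profile above, the credential section would read roughly:

# excerpt of .git/config after the clone (profile name is hypothetical)
[credential]
    helper = !aws codecommit --profile codecommit-demo --region us-east-1 credential-helper $@
    UseHttpPath = true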

Clean up

If you created a CodeCommit repository to follow the example in this article, and you no longer need it, you can wipe it out of existence with this command:

# WARNING! DESTRUCTIVE! CAUSES DATA LOSS!
aws codecommit delete-repository \
  --region "$region" \
  --repository-name $repository_name

You might also want to delete the local Git repository.

With the https access method in CodeCommit, there is no need to upload ssh keys to IAM or to delete them afterwards, as all access control is performed seamlessly through standard AWS authentication and authorization controls.


Resources

Here are some other articles that talk about CodeCommit and the aws-cli credential-helper.

In Setup Steps for HTTPS Connections to AWS CodeCommit Repositories on Linux, AWS explains how to set up the aws-cli credential-helper globally so that it applies to all repositories you clone locally. This is the simplistic setting that I started with before learning how to apply the config rules on a per-repository basis.

In Using CodeCommit and GitHub Credential Helpers, James Wing shows how Amazon’s instructions cause problems if you have some CodeCommit repos and some GitHub repos locally and how to fix them (globally). He also solves problems with Git credential caches for Windows and Mac users.

In CodeCommit with EC2 Role Credentials, James Wing shows how to set up the credential-helper system wide in cloud-init, and uses CodeCommit with an IAM EC2 instance role.

Original article and comments: https://alestic.com/2015/11/aws-codecommit-iam-role/

Canonical Design Team: Jujucharms.com homepage redesign

Planet Ubuntu - Mon, 11/16/2015 - 05:27

After many hours of research, testing and never-ending questions about structure, design, aesthetics and function, we’re very happy to announce that Jujucharms has a new homepage!

All through this site redesign, our main aim has been to make complex content easy to digest and more fun to read. We’ve strived to create a website that not only looks beautiful, but also succeeds in thoroughly explaining what Juju is and the way it can improve your workflow.

Key content is now featured more prominently. We wanted the homepage to be illustrative and functional, hence the positioning of a bold headline and clear CTA which users immediately see as they access the site.

After scrolling, visitors encounter a section which allows direct access into the store, encouraging them to explore the wide range of services it offers. This allows for a more hands-on discovery and understanding of what Juju is – users can either start designing straight away, test it, or explore the site if they wish to find more information before trying it out.

Another key change between the old homepage and the new is the addition of two visual diagrams, which we have made sure to optimise for whichever device users may be accessing the site with. The first diagram explains the most relevant aspects of the service and how users can incorporate it into their workflow (Fig. 1). The second explains the different elements that compose Juju and the way the service works at a technical level (Fig. 2).

Figure 1.

Figure 2.

Overall, we’ve made sure to re-design our homepage in a way that truly benefits our audience. In order to do so we conducted numerous user testing sessions throughout the development of the site and iterated on the designs based on our users’ feedback. This phase was crucial: it’s easy to focus on the design alone and not the functionality, but this step enabled us to understand which content and elements should be prioritised and to define a list of actions to undertake.

Our ideas were constantly reviewed with different members of the team to get different opinions and figure out what steps to take next. After quite a few iterations we hope to have designed a homepage which reflects the core concept and benefits of Juju, and that it becomes something that users will want to come back to.

We hope you like it and look forward to hearing your thoughts about it!

Ronnie Tucker: Netrunner Rolling 2015.11 Linux distro is here

Planet Ubuntu - Mon, 11/16/2015 - 02:46

Like many of you, dear BetaNews readers, I use various operating systems throughout the day, such as iOS, Windows and Ubuntu. On the desktop, Linux is my true love. While Ubuntu is the reliable friend that is always there for me, I love other distros too, such as Fedora.

One of my favorite distros, however, is not particularly popular, but it should be. Netrunner is a brilliant KDE-focused operating system that works well for beginners and experts alike. Despite KDE’s arguably confusing settings, I really like it as an operating system for someone transitioning from Windows. It feels familiar, is very polished, and comes loaded with great software. The latest version of its Manjaro/Arch-based rolling variant is now available and it looks great. Beginners should sit this out, however, and stick with the more-stable Kubuntu-based variant.

Submitted by: Arnfried Walbrecht

Ronnie Tucker: Kodi 16.0 Beta Is a Massive Update

Planet Ubuntu - Mon, 11/16/2015 - 02:43

The Kodi developers are making good progress with their application, and it looks like they are approaching the final version. More important features have been added to the media hub, and it appears that it’s going to be one of the most interesting releases to date.

We already know that the developers have been working to implement multi-touch support for the Linux platform, but they are making changes to the Windows builds, like the support for Direct X 11, which was sorely lacking.

“After four months of alpha versions we have changed to the beta stage and working towards a final Kodi 16 release. The past four months the developers worked hard behind the scenes on further improving what is already a great piece of software. Lot’s of code clean-up and improving stability, with a dash of features added here and there,” write the developers in the official announcement.

Submitted by: Arnfried Walbrecht

