Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Jonathan Riddell: Plasma 5 is Here! All Ready to Eat Your Babies

Tue, 07/15/2014 - 13:32
KDE Project:

A year and a half ago Qt 5 was released, giving KDE the opportunity and excuse to do the sort of tidying up that software always needs every few years. We decided that, like Qt, we weren't going for major rewrites of the world as we did for KDE 4. Rather, we'd modularise, update and simplify. Last week I clicked the publish button on the story for KDE Frameworks 5, the refresh of kdelibs. Interesting for developers. Today I clicked the publish button on the story of the first major piece of software to use KDE Frameworks, Plasma 5.

Plasma is KDE's desktop. It's also the tablet interface and media centre, but those ports are still a work in progress. The basic layout of the desktop hasn't changed; we know you don't want to switch to a new workflow for no good reason. But it is cleaner and more slick. It also still has plenty of bugs in it, so this release won't be the default in Kubuntu, but we will make a separate image for you to try it out. We're not putting it in the Ubuntu archive yet for the same reason, but you can try it out if you are brave.

Three options to try it out:

1) On Kubuntu, Project Neon is available as PPAs offering frequently updated development snapshots of KDE Frameworks. Packages install to /opt/project-neon5, co-install alongside your normal environment, and target 14.04.

sudo apt-add-repository ppa:neon/kf5
sudo apt update
sudo apt install project-neon5-session project-neon5-utils project-neon5-konsole

Log out and in again
2) Releases of KDE Frameworks 5 and Plasma 5 are being packaged in the Kubuntu "next" PPA. These will replace your Plasma 4 install and target the Utopic development version.

sudo apt-add-repository ppa:kubuntu-ppa/next
sudo apt-add-repository ppa:ci-train-ppa-service/landing-005
sudo apt update
sudo apt install kubuntu-plasma5-desktop
sudo apt full-upgrade

Log out and in again
3) Finally, the Neon 5 Live image, updated every Friday with the latest source from Git, lets you run a full system from a USB disk.

Good luck! Let us know how you get on using #PlasmaByKDE on Twitter or by posting to Kubuntu's G+ or Facebook pages.

Sebastian Kügler: Plasma 5 Ingredients

Tue, 07/15/2014 - 05:54

Plasma 5.0 is out. I’ve compiled a (non-exhaustive) list of ingredients that have been put into this release, to give the reader an estimate of the dimensions of the project and the achievement of this milestone:

  • 46 kilos of espresso (pure arabica)
  • The milk of 3 cows
  • a Swiss mountain of chocolate
  • 140 sleepless nights mulling over code
  • 354 liters of pressurized air breathed during scuba dives
  • One encounter with a Mantis shrimp
  • The total length of 43 bathtubs full of tiger tails fixed in pixel-alignment problems
  • 817 hours spent in front of webcams
  • 189MB of irc lines written (compressed)
  • 80,000 automated builds to keep us in check
  • 2403 bugs in the code that had to die
  • A swimming-pool full of tears cried over graphics driver problems and crashers buried deep down in scripting engines and scenegraphs (the pool allegedly was previously used for skateboarding by Greg KH)
  • 5 magic wands
  • 800 million pixels
  • 37843200000 frames rendered
  • Too many puppies
  • 7 virtual goats sacrificed during a total of 28 full moon ceremonies
  • 450 ml of holy water
  • 76 rock bands
  • 119 beats per minute
  • 8-bit alpha channels
  • 52 WTFs
  • The equivalent of 3 dead trees in recycled paper
  • 2 small branches of cedarwood for pencils
  • 1 box of crayons

Nothing like entirely made-up statistics.

tl;dr:

Plasma == ♥

… but also some really hard work, made possible by the sacrifices (see above) of many great people.

Lubuntu Blog: Box support for MATE

Mon, 07/14/2014 - 16:47
The Box theme support continues growing, covering more and more environments. Now we're celebrating that the MATE desktop environment, a fork of the traditional GNOME 2, will have its own Ubuntu flavour, named Ubuntu MATE Remix. Once I tested it, I noticed something familiar was missing: our beloved Lubuntu spirit. So here begins the (experimental) theme support. It'll be available to download…

Nicholas Skaggs: Utopic Test Writing Hackfest

Mon, 07/14/2014 - 11:09
We're having our first hackfest of the utopic cycle this week on Tuesday, July 15th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. Everything you need to know can be found on the wiki page for the event.

During the hangout, we'll be demonstrating writing a new manual testcase, as well as reviewing the writing of automated testcases. We'll also be answering any questions you have about contributing a testcase.

We need your help to write some new testcases! We're targeting both manual and automated testcases, so everyone is welcome to pitch in.

We are looking at writing and finishing some testcases for Ubuntu Studio and some other flavors. All you need is some basic tester knowledge and the ability to write in English.

If you know Python, we are also going to be hacking on the toolkit helper for Autopilot for the Ubuntu SDK. That's a mouthful! Specifically, it's the helpers that we use for writing Autopilot tests against Ubuntu SDK applications. All app developers make use of these helpers, and we need more of them to ensure we have good coverage for all the components developers use.

Don't worry about getting stuck; we'll be around to help, and there are guides to, well, guide you!

Hope to see everyone there!

Ubuntu App Developer Blog: Content Hub to replace Friends API

Mon, 07/14/2014 - 09:52

As part of the continued development of the Ubuntu platform, the Content Hub has gained the ability to share links (and soon text) as a content type, just as it has been able to share images and other file-based content in the past. This allows applications to more easily, and more consistently, share things to a user’s social media accounts.

Consolidating APIs


Thanks to the collaborative work going on between the Content Hub and the Ubuntu Webapps developers, it is now possible for remote websites to be packaged with local user scripts that provide deep integration with our platform services. One of the first to take advantage of this is the Facebook webapp, which, while displaying remote content via a web browser wrapper, is also a Content Hub importer. This means that when you go to share an image from the Gallery app, the Facebook webapp is displayed as an optional sharing target for that image. If you select it, it will use the Facebook web interface to upload that image to your timeline, without having to go through the separate Friends API.

This work not only brings the social sharing user experience in line with the rest of the system's content sharing experience, it also provides a much simpler API for application developers to accomplish the same thing. As a result, the Friends API is being deprecated in favor of the new Content Hub functionality.

What it means for App Devs

Because this is an API change, there are things that you as an app developer need to be aware of. First, though the API is being deprecated immediately, it is not being removed from the device images until after the release of 14.10, which will continue to support the ubuntu-sdk-14.04 framework that included the Friends API. The API will not be included in the final ubuntu-sdk-14.10 framework, or in any new 14.10-dev frameworks after -dev2.

After the 14.10 release in October, when device images start to build for utopic+1, the ubuntu-sdk-14.04 framework will no longer be on the images. So if you haven't updated your Click package by then to use the ubuntu-sdk-14.10 framework, it won't be available to install on devices with the new image. If you are not using the Friends API, this is simply a matter of changing your package metadata to the new framework version. New apps will default to the newer version to begin with, so you shouldn't have to do anything.
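For illustration, a minimal sketch of that metadata change, assuming a standard Click manifest.json at your project root (the framework field is standard Click packaging; the sed one-liner is just one convenient way to make the edit):

sed -i 's/"framework": "ubuntu-sdk-14.04"/"framework": "ubuntu-sdk-14.10"/' manifest.json

Rebuild and re-upload your Click package afterwards so the store picks up the new framework target.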

David Tomaschik: Passing Android Traffic through Burp

Sun, 07/13/2014 - 13:57

I wanted to take a look at all HTTP(S) traffic coming from an Android device, even if applications made direct connections without a proxy, so I set up a transparent Burp proxy. I decided to put the proxy on the Kali VM on my laptop, but didn't want to run an AP there, so I needed to get the traffic to it.

Network Setup

The diagram shows that my wireless lab is on a separate subnet from the rest of my network, including my laptop. The lab network is a NAT run by IPTables on the Virtual Router. While I certainly could've ARP poisoned the connection between the Internet Router and the Virtual Router, or even added a static route, I wanted a cleaner solution that would be easier to enable/disable.

Setting up the Redirect

I decided to use IPTables on the virtual router to redirect the traffic to my Kali laptop. Furthermore, I decided to enable/disable the redirect based on logging in/out via SSH, but I needed to make sure the redirect would get torn down even without a clean logout (e.g., the VM crashes or the SSH connection gets interrupted). Enter pam_exec. With the pam_exec module, we can have an arbitrary command run on login/logout, which can set up and reset the IPTables REDIRECT via an SSH tunnel to my Burp proxy.

In order to get the command executed on any login/logout, I added the following line to /etc/pam.d/common-session:

session optional pam_exec.so log=/var/log/burp.log /opt/burp.sh

This launches the following script, which checks that it's being invoked for the right user and for SSH sessions, and then inserts or deletes the relevant IPTables rules.

#!/bin/bash
BURP_PORT=8080
BURP_USER=tap
LAN_IF=eth1

set -o nounset

# Emit the iptables commands to insert (-I) or delete (-D) the redirect rules.
function ipt_command {
    ACTION=$1
    echo iptables -t nat $ACTION PREROUTING -i $LAN_IF -p tcp -m multiport --dports 80,443 -j REDIRECT --to-ports $BURP_PORT\;
    echo iptables $ACTION INPUT -i $LAN_IF -p tcp --dport $BURP_PORT -j ACCEPT\;
}

# Only act for the tunnel user, and only for SSH sessions.
if [ $PAM_USER != $BURP_USER ] ; then
    exit 0
fi
if [ $PAM_TTY != "ssh" ] ; then
    exit 0
fi

if [ $PAM_TYPE == "open_session" ] ; then
    CMD=`ipt_command -I`
elif [ $PAM_TYPE == "close_session" ] ; then
    CMD=`ipt_command -D`
fi

date
echo $CMD
eval $CMD

This redirects all traffic incoming from $LAN_IF destined for ports 80 and 443 to local port 8080. This does have the downside of missing traffic on other ports, but it will get nearly all HTTP(S) traffic.

Of course, since the IPTables REDIRECT target still maintains the same interface as the original incoming connection, we need to allow our SSH Port Forward to bind to all interfaces. Add this line to /etc/ssh/sshd_config and restart SSH:

GatewayPorts clientspecified

Setting up Burp and SSH

Burp's setup is pretty straightforward, but since we're not configuring a proxy in our client application, we'll need to use invisible proxying mode. I actually put invisible proxying on a separate port (8081), so I have 8080 set up as a regular proxy. I also use the per-host certificate setting to get the "best" SSL experience.

It turns out that there's an issue with OpenJDK 6 and SSL certificates. Apparently it will advertise algorithms not actually available, and then libnss will throw an exception, causing the connection to fail, and the client will retry with SSLv3 without SNI, preventing Burp from creating proper certificates. It can be worked around by disabling NSS in Java. In /etc/java-6-openjdk/security/java.security, comment out the line with security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg.
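A hedged one-liner for that edit (same path as above; back the file up first, and note the pattern simply prepends a # to the matching line):

sudo sed -i 's|^security.provider.9=sun.security.pkcs11.SunPKCS11|#&|' /etc/java-6-openjdk/security/java.security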

Forwarding the port over to the wifilab server is pretty straightforward. You can either use the -R command-line option, or better, set things up in ~/.ssh/config.

Host wifitap
    User tap
    Hostname wifilab
    RemoteForward *:8080 localhost:8081

This logs in as user tap on host wifilab, forwarding local port 8081 to port 8080 on the wifilab machine. The * for a hostname is to ensure it binds to all interfaces (0.0.0.0), not just localhost.
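A quick sanity check after connecting (ss comes with iproute2; netstat -tln works too): on the wifilab host, the forwarded port should show as bound to 0.0.0.0 rather than 127.0.0.1.

ss -tln | grep :8080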

Setting up Android

At this point, you should have a good setup for intercepting traffic from any client of the WiFi lab, but since I started off wanting to intercept Android traffic, let's optimize for that by installing our certificate. You can install it as a user certificate, but I'd rather do it as a system cert, and my testing tablet is already rooted, so it's easy enough.

You'll want to start by exporting the certificate from Burp and saving it to a file, say burp.der.

Android's system certificate store is in /system/etc/security/cacerts, and expects OpenSSL-hashed naming, like a0b1c2d3.0 for the certificate names. Another complication is that it's looking for PEM-formatted certificates, and the export from Burp is DER-formatted. We'll fix all that up in one chain of OpenSSL commands:

(openssl x509 -inform DER -outform PEM -in burp.der; openssl x509 -inform DER -in burp.der -text -fingerprint -noout ) > /tmp/`openssl x509 -inform DER -in burp.der -subject_hash -noout`.0

Android before ICS (4.0) uses OpenSSL versions below 1.0.0, so you'll need -subject_hash_old in place of -subject_hash if you're targeting an older version of Android.
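For those older devices, the hash for the filename comes from (same burp.der input as above):

openssl x509 -inform DER -in burp.der -subject_hash_old -noout

Installing is a pretty simple task (replace HASH.0 with the filename produced by the command above):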

$ adb push HASH.0 /tmp/HASH.0
$ adb shell
android$ su
android# mount -o remount,rw /system
android# cp /tmp/HASH.0 /system/etc/security/cacerts/
android# chmod 644 /system/etc/security/cacerts/HASH.0
android# reboot

Connect your Android device to your WiFi lab, run ssh wifitap from your Kali install running Burp, and you should see your HTTP(S) traffic in Burp (excepting apps that use pinned certificates; that's another matter entirely). You can check your installed certificate from the Android Security Settings.

Good luck with your Android auditing!

Colin King: a final few more features in stress-ng

Sun, 07/13/2014 - 09:47
While hoping to get a feature-complete stress-ng sooner rather than later, I found a few more ways to fiendishly stress a system.

Stress-ng 0.01.22 will be landing soon in Ubuntu 14.10 with three more stress mechanisms:
  • CPU affinity stressing; this rapidly changes the CPU affinity of the stress processes just to keep the scheduler busy with wasted effort.
  • Timer stressing using the real-time clock; this allows one to generate a large number of timer interrupts, so it is a useful interrupt saturation test.
  • Directory entry thrashing; this creates and deletes a selectable number of zero length files and hence populates and destroys directory entries.
I have also removed the need to use rand() for random number generation in some of the stress tests, and re-used the faster MWC "random" number generator to add some well-known and very simple math operations for CPU stressing.

Stress-ng now has 15 different simple stress mechanisms that exercise CPU, cache, memory, file system, I/O and CPU schedulers.  I could add more tests, but I think this is a large enough set to allow one to thrash a machine and see how well it performs under pressure.
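As a quick illustration, a hedged invocation of the new stressors might look like this (option names assumed from stress-ng's usual one-flag-per-stressor convention; check stress-ng --help on your version):

stress-ng --affinity 4 --timer 2 --dentry 2 --timeout 60s

That would run four CPU-affinity stressors, two timer stressors and two directory-entry stressors for one minute.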

Lubuntu Blog: PCManFM 1.2.1

Sat, 07/12/2014 - 08:34
Another update of our file manager PCManFM, with tons of bug fixes and new implementations:
  • fixed dragging and dropping icons behavior
  • fixed icons positioning
  • fixed resetting the cursor in the location bar
  • corrected folder popup update on loading
  • reordered ‘View’ menu item
  • implemented drawing icons of dragged items
  • etc.
There is also a huge update and lots of bug fixes in the libfm libraries (1.2.1), too. You can use…

Darcy Casselman: New Motherboard: ASUS Z97-A (and Ubuntu)

Fri, 07/11/2014 - 22:31

My old desktop was seeing random drive errors on multiple drives, including a drive I only got a few months ago. And since my motherboard was about 5 years old, I decided it was time to replace it.

I asked the KWLUG mailing list if they had any advice on picking motherboards. The consensus seems to be pretty much “it’s still a crapshoot.” But I bit the bullet and reported back:

I bought a motherboard! An ASUS Z97-A

Mostly because I wanted Intel integrated graphics and I’ve got 3 monitors it needs to drive. And I was hoping the mSATA SSD card I got to replace the one in my Dell Mini 9 (that didn’t work) would fit in the m.2 slot. It doesn’t. Oh well.

I wanted to get it all set up while I was off for Canada Day. Except Canada Computers didn’t have any of my preferred CPU options. So I’ll be waiting for that to come in via NewEgg.

I gave myself a budget of about $500 for mobo, CPU and RAM and I’ll end up going over a little bit (mostly tax and shipping), and tried to build the best machine I could for that.

One of the things I did this time that I hadn’t done before was spec out a desktop machine at System76 and used that as a starting point. System76 is more explicit about things like chipsets for desktops than Zareason is. Which would be great, except they’re using the older H87 chipsets.

…Like the latest Ars System Guide Hot Rod. But that's over 6 months old now. And they're balancing their budget against having to buy a graphics card, which I don't want to do.

I still have some unanswered questions about the Z97 chipset. It’s only been out for about a month. So who knows?

My laptop has mostly been my desktop for the last few years. But I want to knock that off because I’ve been developing back and neck problems. My desktop layout is okay ergonomically, at least better than anything I have for the laptop (including and especially my easy chair with a lapdesk, which is comfy, but kind of horrible on the neck). One of the things that’s holding me back is my desktop is 5 years old and was built cheap because I was mostly using it as a server by that point. I really want to make it something I want to use over the laptop (which is a very nice laptop). Which is why I ended up going somewhat upper-mid range.

That’s one of the nice things about building from parts, despite the lack of useful information: This is the 3rd motherboard I’ve put in this case. I replaced the PSU once a couple years ago so it’s quite sufficient to handle the new stuff. I’m keeping my old harddrives. I could keep the graphics card. I’ll need to buy an adapter for the DVD burner (and I’ve yet to decide if I’m going to do that, or buy a new SATA one or just go without). And I can keep my (frankly pretty awesome) monitors. So $500 gets me a kick-ass whole new machine.

Anyway, long story short, I still have a lot of questions about whether this was the best purchase, but I’m hopeful it’s a good one.

Aside: is Canada Computers really the only store in town that keeps desktop CPUs in stock anymore? I couldn’t get into the UW Tech Shop, but since they’re mostly iPads and crap now, I’m not optimistic. Computer XS doesn’t (at least the Waterloo one). Future Shop and Best Buy don’t. I even went into Neutron for the first time in over 15 years. Nope. Nobody.

It… didn’t go as well as I’d hoped:

So, anyway, I got the motherboard, CPU and put it all in my old case.

I booted up and all three monitors came up without any fuss, which has never happened for me. Awesome! This is great!

Then I tried to play a game.

Apparently the current snd_hda_intel ALSA drivers don't like H97 and Z97 chipsets. The sound was staticky, crackly and distorted.

I’ve spent more than a few hours over the last week hunting around for a fix. I installed Windows on a spare harddrive to make sure it wasn’t a hardware problem (for which I needed to spend the $20 to get a new SATA DVD drive so I could run the Windows driver disk and get actual video, networking and sound support :P). And I found this thing on the Arch Wiki which, while not fixing the problem, actually made it worse, leading me to conclude there was some sort of sound driver/pulseaudio problem.

Top tip: when trying to sort out sound driver problems for specific hardware, the best thing to do is search for the hardware product ID (in my case "8ca0"). That's how I found this:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1321421

Hurray! The workaround works great and now I’m back in business!

So I got burned by going with the bleeding edge, and I should know better. But even though the information isn't widely disseminated yet, there is a fix. And a workaround. I'm sure Ubuntu 14.10 will have no problem with it. It's not as bad as the bleeding edge was years ago. And while the fix could be easier to find (I'm going to work on that), getting going with Ubuntu was still easier than it was with Windows.

Paul Tagliamonte: Saturday's the new Sunday

Fri, 07/11/2014 - 17:41

Hello, World!

For those of you who enforce my Sundays on me (keep doing that, thank you!), I’ll be swapping my Saturdays and my Sundays.

That’s right! In this new brave world, I’ll be taking Saturdays off, not Sundays. Feel free to pester me all day on Sunday, now!

This means, as a logical result, I will not be around tomorrow, Saturday.

Much love.

Dan Chapman: One week in and Dekko has 41 users

Fri, 07/11/2014 - 05:47

For those of you who haven’t seen Dekko in the software store, it’s a native IMAP email client for Ubuntu Touch. Dekko is essentially my development/ideas branch of my work on Trojita, which in the end is intended to replace Dekko in the store.

There were a few reasons for publishing Dekko. Trojita prides itself on being standards compliant; it already has a desktop client that uses QtWidgets, supports both Qt4 and Qt5, and has a technical-preview Harmattan QML front-end. That was great, as most of the initial work for the IMAP parts was in place, so we didn’t need to “re-invent the wheel” (for the most part, anyway). But we soon hit a point where we had surpassed what had previously been done, and the job became unwinding the intertwined style that QtWidgets UIs naturally encourage, so that we can share the same business logic between all front-ends without losing standards compliance, keep supporting both Qt4 and Qt5, and maintain Trojita’s robust quality standards.

I am still relatively new to C++, so this is one of those “in at the deep end” scenarios, and the IETF RFC specifications and Qt’s documentation have become the majority of my daily reading. Dekko was born out of the need to understand that separation (call it a learning project) and to devise a way to create common components that can be shared between all front-ends. This “learning project” resulted in a functional but limited-capability email client, so I decided to publish it with the hope of getting as much feedback, as many bug reports and as many design ideas as possible, and to use these to ensure Trojita becomes a rock-solid native email client for Ubuntu.

A quick list of current features in Dekko:

  • Support for viewing plain text messages. We cannot show HTML messages, due to not being able to block network requests with QtWebKit's custom URL scheme functionality (if you are an Oxide dev who happens to be reading this, "wink wink"). But it is great for viewing all your Launchpad mail.
  • Navigating the mailbox hierarchy. It's not entirely obvious at first (open to new ideas here): if you see a progression arrow on a mailbox, tapping the arrow displays the nested mailboxes; otherwise, tapping elsewhere shows the messages within that mailbox.
  • Composing and replying to messages. This utilizes the bottom edge, so pulling up on an opened message will set up a reply to it. One thing to note: at the moment replying basically does a "reply all" action, so you need to delete or add recipients manually until support for mailing lists and other reply modes is implemented.
  • Supports defining a single sender identity for mail submission.
  • Mark message as deleted, expunge mailbox and auto-expunge on marked for deletion options.
  • Mark all messages as read.
  • Offline, Online and Bandwidth saving mode, perfect for mobile data connections

There is a known bug where the message list view sometimes does not update properly, but it can usually be resolved by closing and reopening that mailbox.

So if you haven’t already, please give it a try, and if you have any design/implementation ideas, issues, bugs or anything else you wish to say, please report them to the Dekko project on Launchpad: https://launchpad.net/dekko.

Note: Please don’t file bugs against upstream Trojita unless you are using a build of Trojita and not a Dekko build.

And finally, a few snaps to whet the appetite.


Ronnie Tucker: PHP Fixes OpenSSL Flaws in New Releases

Fri, 07/11/2014 - 00:00

The PHP Group has released new versions of the popular scripting language that fix a number of bugs, including two in OpenSSL. The flaws fixed in OpenSSL don’t rise to the level of the major bugs such as Heartbleed that have popped up in the last few months. But PHP 5.5.14 and 5.4.30 both contain fixes for the two vulnerabilities, one of which is related to the way that OpenSSL handles timestamps on some certificates, and the other of which also involves timestamps, but in a different way.

Source:

http://threatpost.com/php-fixes-openssl-flaws-in-new-releases/106908

Submitted by: Dennis Fisher


Ubuntu Podcast from the UK LoCo: S07E15 – The One with the Thumb

Thu, 07/10/2014 - 13:02

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Fifteen of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll be interviewing David Hermann about his MiracleCast project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Ronnie Tucker: Why did Microsoft join the Linux Foundation’s AllSeen Alliance?

Wed, 07/09/2014 - 23:59

When people think of open source they don’t usually associate Microsoft with it. But the company recently surprised many when it joined the Linux Foundation’s open source AllSeen Alliance. The AllSeen Alliance’s mission is to create a standard for device communications.

Has Microsoft changed its attitude toward open source in general or is there another reason for its uncharacteristic behavior? Computerworld speculates on what might have motivated Microsoft to join the AllSeen Alliance.

Source:

http://www.itworld.com/open-source/425651/why-did-microsoft-join-linux-foundations-allseen-alliance

Submitted by: Jim Lynch

Dustin Kirkland: Scalable, Parallel Video Transcoding on Linux

Wed, 07/09/2014 - 23:02
Transcoding video is a very resource-intensive process.  It can take many minutes to process a small clip, or even hours to process a full movie.
And that's on the home video scale.  When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand and tour the render farm that processed The Lord of the Rings trilogy, Avatar, The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud-savvy digital film processing firm in Hollywood, called Digital Film Tree.
While Windows and Mac OS may be the first platforms that come to mind when you think about front-end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.
There are numerous excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools (ffmpeg), mencoder, and handbrake.
Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.
I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution.  I'm delighted to share the result with you today!
While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.
I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.
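As a rough illustration of steps 2-4, here's a minimal single-machine sketch of the same split/transcode/concatenate idea, using background jobs in place of worker nodes (a hypothetical script; the real logic lives in the charm's config-changed hook, and the avconv flags are the ones quoted below):

#!/bin/bash
# Sketch: split a video into N time-based chunks, transcode them in
# parallel, then stitch the MPEG-TS parts back together.
set -e
INPUT=$1
WORKERS=${2:-4}

# Total duration in seconds, via avprobe (ships with libav-tools).
DURATION=$(avprobe -show_format "$INPUT" 2>/dev/null | awk -F= '/^duration/ {print int($2)}')
LENGTH=$(( DURATION / WORKERS + 1 ))

# Steps 2 and 3: each "worker" transcodes its own segment and exits when done.
for i in $(seq 0 $(( WORKERS - 1 ))); do
    avconv -ss $(( i * LENGTH )) -i "$INPUT" -t $LENGTH \
        -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb \
        -f mpegts -strict experimental -y "${INPUT}.part${i}.ts" &
done
wait  # stands in for the all-done flags

# Step 4: concatenate the parts and remux without re-encoding.
CONCAT=$(ls "${INPUT}".part*.ts | paste -sd'|' -)
avconv -i concat:"$CONCAT" -c copy -bsf:a aac_adtstoasc -y "${INPUT}_x264_aac.mp4"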


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):
avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this: 8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc.; that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If only it were that simple...  As it turns out, different browsers support, and lack support for, different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.
http://www.w3schools.com/html/html5_video.asp
The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.
Among the various brands of phones, webcams, digital cameras, etc., the output formats and codecs are seriously all over the map.  If you've ever wondered what's happening when you upload a video to YouTube or Facebook and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background.
In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the meantime, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

Joe Liau: Utopic Unicron[sic]

Wed, 07/09/2014 - 21:14

I was playing around with possible designs for a Utopic Unicorn t-shirt when an inevitable (and awesome) typo led me to…

SVG version here.


Paul Tagliamonte: Dell XPS 13

Wed, 07/09/2014 - 19:38

More hardware adventures.

I got my Dell XPS13. Amazing.

The good news: this MacBook Air competitor is easily slightly better in nearly every regard except for the battery.


The bad news is that the Intel wireless card needs non-free firmware (I’ll be replacing the card shortly), and the touchpad’s driver isn’t fully implemented until kernel 3.16. I’m currently building a 3.14 kernel with the patch to send to the kind Debian kernel people. We’ll see if that works. Ubuntu Trusty already has the patch, but it didn’t get upstreamed. That kinda sucks.

It also shipped with UEFI disabled, defaulting to boot in ‘legacy’ mode. It shipped with Ubuntu; I was a bit disappointed not to see Ubuntu keys on the machine.

The touchscreen works; in short: stunning. I think I found my new travel buddy. Debian unstable runs great; stable had some issues.

Ronnie Tucker: XFCE App Launcher `WHISKER MENU` sees new release

Tue, 07/08/2014 - 23:58

Whisker Menu is an application menu/launcher for Xfce that features a search function, so you can easily find the application you want to launch. The menu supports browsing apps by category, you can add applications to favorites, and more. The tool is used as the default Xubuntu application menu starting with the latest 14.04 release, and in Linux Mint Xfce starting with version 15 (Olivia).

The Whisker Menu PPA was updated to the latest 1.4.0 version recently, and you can use it both to upgrade to the latest version and to install the tool on (X)Ubuntu versions for which Whisker Menu isn’t available in the official repositories (supported versions: Ubuntu 14.04, 13.10 and 12.04, and the corresponding Linux Mint versions). To see what has changed since the previous release, see the changelog on its main website.
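For reference, the usual PPA dance looks something like this (ppa:gottcode/gcppa and the xfce4-whiskermenu-plugin package name are my assumptions based on the upstream author's PPA; double-check against the source article):

sudo add-apt-repository ppa:gottcode/gcppa
sudo apt-get update
sudo apt-get install xfce4-whiskermenu-plugin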

Source:

http://www.webupd8.org/2014/06/xfce-app-launcher-whisker-menu-sees-new.html

Submitted by: Andrew

Nicholas Skaggs: Utopic Bug Hug and Testing Day

Tue, 07/08/2014 - 12:38
The first testing day of the utopic cycle is coming this week, on Thursday, July 10th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. We'll be demonstrating running and testing the development release of ubuntu, reporting test results, reporting bugs, and doing triage work. We'll also be available to answer your questions and help you get started with testing.

Please join us in testing utopic and helping the next release of ubuntu become the best it can be. Hope to see everyone there!

P.S. We have a team calendar that can help you keep track of the release schedule along with this and other events. Check it out!

The Fridge: Ubuntu Online Summit dates: 4-6 Nov 2014

Tue, 07/08/2014 - 11:54

In discussions at the last Online Summit and afterwards, it became clear that we need to bring the summit dates closer to our release dates again. With the Unicorn being released on Oct 23, we decided to pick the following dates for the next Online Summit:

4th – 6th November 2014

This unfortunately won’t leave too much room for a mid-cycle UOS, as it’d get too close to Feature Freeze and other release/freeze dates. Michael Hall will start a discussion on ubuntu-devel-discuss@ about the subject of Ubuntu Online Summit soon, so we can discuss changes and start general planning. Your feedback and help are much appreciated.

If you want to have any ad-hoc, public planning sessions before the next UOS, we’d like to remind you of Ubuntu On Air, which is a good way to get your discussion recorded and where you can very easily get people involved in the subject. Find out more at https://wiki.ubuntu.com/OnAir

Originally posted to the community-announce mailing list on Tue Jul 8 10:42:20 UTC 2014 by Daniel Holbach
