Feed aggregator

Ubuntu App Developer Blog: 100,000 App Downloads

Planet Ubuntu - Thu, 07/03/2014 - 07:43

It was less than a month ago that we announced crossing the 10,000 users milestone for Ubuntu phones and tablets, and we’ve already reached another: 100,000 app downloads!

Downloads

The new Ubuntu store, used by phones, tablets, and soon the desktop as well, provides app developers with some useful statistics: how many times their app was downloaded, which version was downloaded, and what country each download originated from. This is very useful as it lets developers gauge how many users they currently have for their app, and how quickly those users are updating to new versions. One side-effect of these statistics is that we can see how many total downloads there have been across all of the apps in the store, and this week we reached (and quickly passed) the 100,000th download.

Users

We’re getting close to having Ubuntu phones go on sale from our partners at Bq and Meizu, but there are still no devices on the market that shipped with Ubuntu. This means that we’ve reached this milestone solely through developers and enthusiasts who have installed Ubuntu on one of their own devices (probably a Nexus device) or on the device emulator.

The continued growth in the download number validates the earlier milestone of 10,000 users: a large number of them are clearly still using Ubuntu on their device (or emulator) and keeping their apps up to date (the number counts both new app installs and updates). This means that not only are people trying Ubuntu already, many of them are sticking with it too. Yet another datapoint in support of this is the 600 new unique users who have used the store since the last milestone announcement.

Pioneers

To supply all of these users with the apps they want, we’re continuing to build our community of app developers around Ubuntu. The first of these have already received their limited edition t-shirts, and are listed on the Ubuntu Pioneers page of the developer portal.

There is still time to get your app published, and claim your place on that page and your t-shirt, but they’re filling up fast so don’t delay. Go to our Developer Portal and get started today, you could be only a few hours away from publishing your first app in the store!

Mohamad Faizul Zulkifli: Sharing Wireless Internet To Ethernet

Planet Ubuntu - Thu, 07/03/2014 - 00:04
Sometimes we need to share our wireless internet connection with ethernet-connected clients, or with other devices such as wireless access points or network switches.

In my situation, my smartphone cannot connect directly to my preferred primary wireless access point, so I need to set up a simple wireless repeater.
First things first: I need to connect to the primary wireless access point using my PC.

After that, I configure my PC as a client to relay the internet to my secondary access point, a TP-Link wireless access point which already has its own DHCP server, so I don't need to configure a DHCP server myself.
The wireless adapter on my PC connected to the primary wireless access point is wlan1. This is how I do it:
internet --> wlan1 (on my pc) --> ethernet (on my pc too) --> network cable --> tp-link wireless access point --> smartphone
For your info, I have already configured my TP-Link wireless access point to match my ethernet configuration (IPs, subnets, gateways).
All I need to do after connecting to the primary wireless access point is run these commands:
ip link set up dev eth0
ip addr add 192.168.137.1/24 dev eth0
sysctl net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o wlan1 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan1 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Take note that my ethernet interface is eth0.
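These rules don't survive a reboot. A minimal sketch of persisting them, assuming Ubuntu's iptables-persistent package (file locations may vary by release):

# make IP forwarding permanent
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
# install the rule-restore hook and save the current firewall rules
sudo apt-get install iptables-persistent
sudo sh -c 'iptables-save > /etc/iptables/rules.v4'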

Svetlana Belkin: Doc Team Wiki Page Clean Up

Planet Ubuntu - Wed, 07/02/2014 - 15:36

Today the Wiki Sub-Team of the Ubuntu Doc Team had a mini-sprint on the team’s homepage (https://wiki.ubuntu.com/DocumentationTeam).  We cleaned it up by removing some of the unwanted sub-pages, rewriting some of the pages to make it easier for new people to get involved with us, and much more.

Hopefully this is a good step toward helping people understand our team’s workflow.


The Fridge: Community Donations Funding Report, Q1 2014

Planet Ubuntu - Wed, 07/02/2014 - 14:03

When users visit the Ubuntu download page from our main website, ubuntu.com, they are given the option to make a donation to show their appreciation and help further the work that goes into making and distributing Ubuntu. Donations ear-marked for “Community projects” are made available to members of our community, and to the events that our members attend.

In keeping with our core principles of openness and transparency, the way these community funds were spent is detailed in a regular report and made available for everybody to see. These requests for funding are an opportunity for us to invest in our community, and every dollar spent has benefited the Ubuntu project through improved contributions, organization and outreach.

Once again everybody on this list deserves a big thanks for their continued contributions to Ubuntu. The funds they were given may cover immediate expenses, but they in no way cover all of the time, energy, and passion that these contributors have put into our community.

The latest funding report, using the same format as the previous one, can be viewed here.

Published on behalf of Michael Hall of the Community Team


Benjamin Kerensa: Release Management Work Week

Planet Ubuntu - Wed, 07/02/2014 - 12:46

Team discussing goals

Last week in Portland, Oregon, we had our second release management team work week of the year, focusing on our goals and the work ahead in Q3 of 2014. I was really excited to meet the new manager of the team, our new intern, and two other team members I had not yet met.

It was quite awesome to have the face-to-face time with the team to knock out discussions and work that required the kind of collaboration a work week offers. The thing I most enjoyed was discussing the current success of the Early Feedback Community Release Manager role I have held on the team (I’m currently the only non-employee on it), along with ideas for improving the pathways for future contributors to the team while also creating new opportunities and a new pathway for me to continue to grow.

One thing unique about this work week is that we also took some time to participate in Open Source Bridge, a local conference that Mozilla happened to be sponsoring at The Eliot Center, and where Lukas Blakk from our team was speaking. Lukas used her keynote talk to introduce the Ascend Project, an awesome project she is working on and will soon be piloting in Portland.

Lukas Blakk Ascend Project Keynote at Open Source Bridge 2014

While this was a great work week and I think we accomplished a lot, I hope future work weeks are either held out of town or that I can block off other life obligations to spend more time on-site, as I did have to drop off a few times for things that came up, or run off to the occasional meeting or Vidyo call.

Thanks to Lawrence Mandel for being such an awesome leader of our team and seeing the value in operating open by default. Thanks to Lukas for being a great mentor and an awesome person to contribute alongside. Thanks to Sylvestre for bringing us French biscuits and fresh ideas. Thanks to Bhavana for being so friendly and always offering new ideas, and thanks to Pranav for working so hard on picking up where Willie left off and giving us a new tool that will help our releases continue to be even more awesome.

 

José Antonio Rey: Juju’ing with t1.micro instances on AWS

Planet Ubuntu - Tue, 07/01/2014 - 14:13

In February, after deciding I didn’t want to use the local provider with Juju (my internet connection has a download speed of 400KBps), I opened an AWS account. This gave me 750 hours per month to use on t1.micro instances, which are awesome for Juju testing… until you hit some problems.

The main problem with t1.micro instances is that they only have 613MB of RAM. This is good for testing charms which do not require a lot of memory, but there are some (such as nova-cloud-controller) which do require more memory to run properly. Even worse, they require that memory just to finish installing.

I should note that, in general, my experience with t1.micro instances and the AWS free tier has been awesome, but in these cases there is no solution other than getting a bigger instance. If you are testing in the cloud and you see a weird error you don’t understand, it may be that the machine has run out of memory to allocate, so try a bigger instance. If that doesn’t solve it, go ahead and report a bug. If it’s something in a charm’s code, we’ll look into it.
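For reference, here's a minimal sketch of requesting a bigger instance, assuming the juju-core CLI of the time (check juju help constraints for the exact syntax in your version):

# deploy a memory-hungry charm onto an instance with at least 2GB of RAM
juju deploy nova-cloud-controller --constraints "mem=2G"
# or set a default for every new machine in the environment
juju set-constraints mem=2G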

Happy deploying!


Chris J Arges: using ktest.pl with ubuntu

Planet Ubuntu - Tue, 07/01/2014 - 13:10
Bisecting the kernel is one of those tasks that's time-consuming and error-prone. ktest.pl is a script that lives in the Linux kernel source tree [1] and helps to automate this process. The script is extremely extensible, and as such it takes time to understand which variables need to be set and where. In this post, I'll go over how to perform a kernel bisection using a VM as the target machine. In this example I'm using 'ubuntu' as the VM name.

First ensure you have all dependencies correctly setup:
sudo apt-get install libvirt-bin qemu-kvm cpu-checker virtinst uvtool git
sudo apt-get build-dep linux-image-`uname -r`
Ensure kvm works:
kvm-ok
In this example we are using uvtool to create VMs using cloud images, but you could just as easily use a preseed install or a manual install via an ISO. First, sync the cloud image:
uvt-simplestreams-libvirt sync release=trusty arch=amd64
Clone the necessary git repository:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux.git
Copy ktest.pl outside of the linux kernel tree (since bisecting also changes the script, keeping a copy outside the tree means it remains constant):
cd linux.git
cp tools/testing/ktest/ktest.pl ..
cp -r tools/testing/ktest/examples/include ..
cd ..
Create directories for the script:
mkdir configs build
mkdir configs/ubuntu build/ubuntu
Get an appropriate config for the bisect you are doing, and ensure it can reasonably 'make oldconfig' with the kernel version you are using. For example, if we are bisecting v3.4 kernels, we can use an Ubuntu v3.5 series kernel config and run yes '' | make oldconfig to ensure it is very close. Put this config into configs/ubuntu/config-min. For convenience, I have put a config that works for this example here:
http://people.canonical.com/~arges/amd64-config.flavour.generic

Create the VM, ensuring you have ssh keys set up on your local machine first:
uvt-kvm create ubuntu release=trusty arch=amd64 --password ubuntu --unsafe-caching
Ensure the VM can be ssh'ed to via 'ssh ubuntu@ubuntu':
echo "$(uvt-kvm ip ubuntu) ubuntu" | sudo tee -a /etc/hosts
SSH into the VM with ssh ubuntu@ubuntu.

Set up the initial target kernel to boot on the VM:
sudo cp /boot/vmlinuz-`uname -r` /boot/vmlinuz-test
sudo cp /boot/initrd.img-`uname -r` /boot/initrd.img-test
Ensure SUBMENUs are disabled on the VM, as the grub2 detection script in ktest.pl fails with submenus, and update grub:
echo "GRUB_DISABLE_SUBMENU=y" | sudo tee -a /etc/default/grub
sudo update-grub
Ensure we have a serial console on the VM with /etc/init/ttyS0.conf, and ensure that agetty automatically logs in as root. If you ran with the above script you can do the following:
sudo sed -i 's/exec \/sbin\/getty/exec \/sbin\/getty -a root/' /etc/init/ttyS0.conf
Ensure that /root/.ssh/authorized_keys on the VM contains the host keys so that ssh root@ubuntu works automatically. If you are using the above commands you can do:
sudo sed -i 's/^.*ssh-rsa/ssh-rsa/g' /root/.ssh/authorized_keys
Finally, add a test case to /home/ubuntu/test.sh inside of the ubuntu VM, and ensure it is executable:
#!/bin/bash
# Make a unique string
STRING=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 32)
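# Truncate syslog so the grep below only sees fresh messages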
> /var/log/syslog
echo $STRING > /dev/kmsg
# Wait for things to settle down...
sleep 5
grep $STRING /var/log/syslog
# This should return 0.

Now exit out of the machine and create the following configuration file for ktest.pl, called ubuntu.conf. This will bisect from v3.4 (good) to v3.5-rc1 (bad), and run the test case that we put into the VM.

# Setup default machine
MACHINE = ubuntu

# Use virsh to read the serial console of the guest
CONSOLE = virsh console ${MACHINE}
CLOSE_CONSOLE_SIGNAL = KILL

# Include defaults from upstream
INCLUDE include/defaults.conf
DEFAULTS OVERRIDE

# Make sure we load up our machine to speed up builds
BUILD_OPTIONS = -j8

# This is required for restarting VMs
POWER_CYCLE = virsh destroy ${MACHINE}; sleep 5; virsh start ${MACHINE}

# Use the defaults that update-grub spits out
GRUB_FILE = /boot/grub/grub.cfg
GRUB_MENU = Ubuntu, with Linux test
GRUB_REBOOT = grub-reboot
REBOOT_TYPE = grub2

DEFAULTS

# Do a simple bisect
TEST_START
RUN_TEST = ${SSH} /home/ubuntu/test.sh
TEST_TYPE = bisect
BISECT_GOOD = v3.4
BISECT_BAD = v3.5-rc1
CHECKOUT = origin/master
BISECT_TYPE = test
TEST = ${RUN_TEST}
BISECT_CHECK = 1

Now we should be ready to run the bisection (this will take many, many hours depending on the speed of your machine):
./ktest.pl ubuntu.conf
  1. http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/ktest?id=HEAD

Ubuntu Kernel Team: Kernel Team Meeting Minutes – July 01, 2014

Planet Ubuntu - Tue, 07/01/2014 - 10:17
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140701 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

We have rebased our Utopic kernel “unstable” branch to v3.16-rc3 and
uploaded to our ckt ppa (ppa:canonical-kernel-team/ppa). Please test.
I don’t anticipate an official v3.16 based upload to the archive until
further testing and baking has taken place.
—–
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~3 weeks away)
Thurs Jul 31 – 14.10 Alpha 2 (~4 weeks away)
Thurs Aug 07 – 12.04.5 (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 1):

  • Lucid – Kernel Prep
  • Precise – Kernel Prep
  • Saucy – Kernel Prep
  • Trusty – Kernel Prep

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Jono Bacon: Getting Started in Community Management

Planet Ubuntu - Tue, 07/01/2014 - 09:31

If there is one question I get more than most, it is the proverbial:

How do I get started in community management?

While there are many tactical things to learn about building strong communities (which I cover in depth in The Art of Community), the main guidance I am keen to share is the importance of leadership.

Last night, while working in my hotel room, I bored myself writing up my thoughts in a blog post, so I just fired up my webcam instead:

Can’t see it? See it here

If you want to get involved in community management, be sure to join the awesome community that is forming on the Community Leadership Forum and if possible, join us on the 18th and 19th July 2014 in Portland for the Community Leadership Summit.

Martin Pitt: deb, click, schroot, LXC, QEMU, phone, cloud: One autopkgtest to Rule Them All!

Planet Ubuntu - Tue, 07/01/2014 - 08:15

We currently use completely different methods and tools for building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge proposal testing, and we keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

  • Keep all test metadata, such as test dependencies, commands to run the test etc., in the project/package source itself instead of externally. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
  • Use the same tools for Debian and Click packages to simplify what developers have to know about and to reduce the amount of test infrastructure code to maintain.
  • Use the exact same testbeds and test runners in production CI as developers use locally, so that you can reproduce and investigate failures.
  • Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely, making all new testbed types immediately available to all package formats.
  • Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive space overhead, and also makes updating tests a lot harder and slower than it should be.

So, let’s dive into the new features!

New runner: adt-virt-ssh

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same, you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target-specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb, which would then get called with “open” at the appropriate time by adt-run; it calls the nova commands to create a VM, or runs a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with “cleanup” again. The actual protocol is a bit more involved (see the manpage), but that’s the general idea.
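To make the shape of such a setup script concrete, here is a minimal, hypothetical skeleton (heavily abbreviated; the real calling protocol and expected output are documented in the manpage and the SKELETON template mentioned below):

#!/bin/sh
# hypothetical skeleton of an adt-virt-ssh setup script
case "$1" in
    open)
        # create the testbed here, e. g. with "nova boot" or adb commands,
        # then print the ssh connection details for adt-run to use
        ;;
    cleanup)
        # tear the testbed down again, e. g. with "nova delete"
        ;;
esac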

autopkgtest now ships readymade scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |
# find a suitable image
$ nova image-list
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image | ACTIVE |
$ nova flavor-list
[...]
| 100 | standard.xsmall | 1024 | 10 | 10 | | 1 | 1.0 | N/A |
# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
      -f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build: - - - - - - - - - - results - - - - - - - - - -
build                PASS
adt-run: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ tests done.

Please see man adt-virt-ssh for details how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also not use any setup script and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Test dependency installation without apt/root

Ubuntu phones with system images have a read-only file system where you can’t install test dependencies with apt. A similar case is using the “null” runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it’s good enough for the purposes of Click package testing such as installing autopilot, libautopilot-qt etc.
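As an illustration (a sketch reusing the libpng example from above), running as an unprivileged user with the null runner exercises exactly this fallback:

# without root, apt-get install is unavailable, so the test dependencies
# get downloaded and unpacked into a temporary directory instead
adt-run libpng --- null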

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it’s using JSON:

"x-test": { "unit": "tests/unittests", "smoke": { "path": "tests/smoketest", "depends": ["shunit2", "moreutils"], "restrictions": ["allow-stderr"] }, "another": { "command": "echo hello > /tmp/world.txt" } }

For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like

"x-test": { "autopilot": "ubuntu_calculator_app" }

which is enough to “do what I mean”, i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course.

So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:

$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the official download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
      ssh -s /usr/share/autopkgtest/ssh-setup/adb
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s
FAILED (failures=1)

Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But, that’s not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \ ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic

This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
# run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
# then "sudo poweroff"
# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config

Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \ --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \ ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \ --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don’t have an actual phone at hand, or don’t want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can’t wait to try it out, download it from my people.c.c page ☺.

Feedback appreciated!

José Antonio Rey: FOSSETCON in Florida – Coming Next September!

Planet Ubuntu - Mon, 06/30/2014 - 21:49

Next September, from the 11th to the 13th, FOSSETCON will be held in the Rosen Plaza, in Orlando, Florida.

FOSSETCON stands for Free and Open Source Software Expo and Technology Conference. Organized by the awesome Bryan Smith, it will have a variety of workshops, talks, certifications, and a HUGE Expo Hall, where even the Ubuntu community will be featured! If you want your company to be featured during the conference, this is the perfect place to apply to be a sponsor. On the other hand, if you want to see many awesome things from the Free and Open Source world and it’s close to you, then it’s the right place for you! Plus, attendees get a special discount on their tickets for Universal Studios parks, with more deals coming ;)

The best thing about it is that on Day 0 (September 11th) there will be an Ubucon! That’s right: a whole day dedicated to Ubuntu and the state of the art! What else could one ask for in a conference? Beer? I’m sure they will have some.

If you are planning to be around, make sure to visit the Ubuntu booth. There’s still plenty of time to plan your trip!


Lubuntu Blog: LXQt now has “full” Qt5 support

Planet Ubuntu - Mon, 06/30/2014 - 04:17
Repost from the LXDE Blog: After the first official public release, 0.7, the LXQt team is working on making it better. Our recent focus is fixing existing bugs and migrating from Qt4 to Qt5, which is required if we want to support Wayland. Now we have something to show: the latest source code in our git repository can be compiled with Qt5 (by just passing the -DUSE_QT5=ON flag to cmake). Building with…
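A minimal sketch of such a Qt5 build, assuming an out-of-tree build directory, a checkout at /path/to/lxqt (a placeholder), and the usual LXQt build dependencies installed:

mkdir build && cd build
cmake -DUSE_QT5=ON /path/to/lxqt   # the flag switches the build from Qt4 to Qt5
make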

Riccardo Padovani: How to know the properties and values of an object in QML

Planet Ubuntu - Mon, 06/30/2014 - 00:00

Hi all,
a few days ago, while working on an app for Ubuntu for Phones, I needed to know all the properties and values of an object during the execution of the app.
It’s a very easy thing to do, but a bit boring to write the debugging code every time, so I wrote a little function to do it, with formatting of the output.

Hope it will be useful to someone :-)

So, to call the function we only need to write debug(objectId, 0) whenever we need to debug an object.
The 0 is there because it’s a recursive function, and it indicates the level of formatting of the output.

function debug(id, level) {
    var level_string = '';

    // If it isn't a first-level call, add some formatting
    for (var i = 0; i < level; i++) {
        if (i+1 === level) {
            level_string += '|--------';
        } else {
            level_string += '         ';
        }
    }

    if (level === 0) {
        level_string = 'property ';
    } else {
        level_string += '> ';
    }

    // For every value in the object
    for (var value in id) {
        // We skip these elements because the output is too long. I mean, do you
        // want to print all children of the parent? :-)
        // If you are interested in the output of anchors, set a maximum level of recursion
        if (value != 'parent' && value != 'anchors' && value != 'data' && value != 'resources' && value != 'children') {
            // Functions have no children and aren't properties
            if (typeof(id[value]) === 'function') {
                if (level === 0) {
                    console.log('function ' + value + '()');
                } else {
                    console.log(level_string + 'function ' + value + '()');
                }
            }
            // Objects have children
            else if (typeof(id[value]) === 'object') {
                console.log(level_string + value + ' [object]');
                debug(id[value], level+1);
            }
            // For everything else we print value and type :-)
            else {
                console.log(level_string + value + ': ' + id[value] + ' [' + typeof(id[value]) + ']');
            }
        }
    }
}
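For instance, in a hypothetical snippet (the component and ids here are made up), you could dump all of a button's properties to the console as soon as it is created:

import QtQuick 2.0
import Ubuntu.Components 0.1

Button {
    id: myButton
    text: "Hello"
    // debug() is the function defined above
    Component.onCompleted: debug(myButton, 0)
}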

Ciao,
R.

This work is licensed under Creative Commons Attribution 3.0 Unported

Stuart Langridge: Facebook and the button of happiness

Planet Ubuntu - Sun, 06/29/2014 - 16:14

Recently, Facebook published a paper in the Proceedings of the National Academy of Sciences (warning: PDF) about how they intentionally manipulated over half a million FB users’ news feeds to exclude either “happy” or “sad” posts and then (using the same algorithm which detected happiness or sadness) see whether the users’ own posts became happier or sadder as a result. Turns out: they did. Slate then wrote an article excoriating this as unethical, and today it seems to have blown up on Twitter a bit.

First, let us address and dismiss the issue of ethics. Was it unethical for Facebook to publish a paper on this? Yes, yes it was. The key issue here is informed consent: basically, you’re not allowed to do experiments on people without them knowing. It’s wrong of Facebook in my opinion to have, in liedra’s memorable wording, “gussied it up as ‘science’”, and also in my opinion PNAS ought to be asking a lot more questions about what they publish, surely? This study fails the most homeopathically weak example imaginable of “informed consent”, unless we’re counting the Facebook EULA here as giving informed consent to this sort of thing. @liedra did her PhD specifically on this subject, so I trust her views. But most of the upset I’ve seen about this has not been about academic standards and rules for research, or that a paper was published. It’s because Facebook did this thing at all.

I shall now pause to tell a little story. Supermarket loyalty cards are not there just to give you money off for being a regular customer. They’re there to let the shops build up a truly terrifying data warehouse and then mine it in extremely advanced ways to determine both what the average person wants to buy and what you specifically want to buy. Store design is a deeply complex, well-understood science, and that it goes on is almost unknown to the public. A store planner can tell you what every square foot of your store is for and how to maximise the amount of time customers spend in the shop, where to put the highest profit goods to improve their sales over others, and at bottom how to make more money by how you lay everything out. Loyalty schemes do the same thing with your purchases. At a very obvious level, sending you vouchers for stuff you buy a lot can work, but the data mining drills way, way deeper than that. In one memorable event, the US store Target identified that a woman was pregnant from seemingly innocent purchases such as unscented lotion, and then sent her coupons for baby products… before she’d told her father about the pregnancy. These people are mathematically well-equipped, they’re able to deduce a startling amount of things about you that you might wish they didn’t know, they’re doing it with data you’ve given them voluntarily, and they’re doing it to make their own service more compelling and so make more money at your expense. Is this any different from Facebook?

There is an undercurrent of fatalism in some of the responses to publication of this study. “Man, if you expect Facebook to do anything other than shove a live rattlesnake up your arse in pursuit of profit, you’re a naive child.” I don’t agree with that. We should expect more, demand more, hope for more from those who act as custodians of our data. Whether the law requires it or not. (European data protection laws are considerably more constraining than those in the US, in my opinion correctly, but acting only just as decently as the law requires is the minimum requirement, and we should ask for better.) But I honestly don’t see the difference between what Facebook did and what Target did. Yes, someone with depression could be adversely affected (perhaps very severely) by Facebook making their main channel of friendly communication be markedly less friendly. But consider if the pregnant woman who hadn’t told her father had had a miscarriage, and then received a book of baby milk vouchers in the mail.

This is not to minimise the impact of what Facebook did. What concerns me is that Facebook are not the only culprit here. They may not even be the most egregious culprit. The world of modern targeted advertising is considerably more sophisticated than most people suspect, and excoriating one firm for doing something that basically everybody else is doing too won’t stop it happening. It’ll just drive it further underground. Firms are going to mine my data. Indeed, I largely want them to; we’ve decided collectively that we want to fund things through advertising, so I might as well get adverts for things I actually want to buy. Facebook ran a study to discover whether they have the power to make people happier or sadder, and it turns out that they do. But they already had that power. In order for them to use it responsibly they should study it scientifically and learn about it. Then they can use it for good things.

Imagine if Facebook could have a button which says “make the billion people who use Facebook each a little bit happier”. It’s quite hard to imagine a more effective, more powerful, cheaper way to make the world a little bit better than for that button to exist. I want them to be able to build the button of happiness. And then I want them to press it.

Matt Zimmerman: Scaling Human Systems: Management

Planet Ubuntu - Sun, 06/29/2014 - 13:05

This is part 6 in a series on organizational design and growth.

“The change from a business that the owner-entrepreneur can run with “helpers” to a business that requires management is a sweeping change. [...] One can compare the two kinds of business to two different kinds of organism: the insect, which is held together by a tough, hard skin, and the vertebrate animal, which has a skeleton. Land animals that are supported by a hard skin cannot grow beyond a few inches in size. To be larger, animals must have a skeleton. Yet the skeleton has not evolved out of the hard skin of the insect; for it is a different organ with different antecedents. Similarly, management becomes necessary when an organization reaches a certain size and complexity. But management, while it replaces the “hard-skin” structure of the owner-entrepreneur, is not its successor. It is, rather, its replacement.”

Peter Drucker

What it means

Management is the art of enabling people to cooperate in achieving shared goals. I’ve written elsewhere about what management is not. Management is a multifaceted discipline which is centered on people and the environment in which they work.

Why it’s important

In very small organizations, management can be comparatively easy, and happen somewhat automatically, especially between people who have worked together before. But as organizations grow, management becomes a first-class concern, requiring dedicated practice and a higher degree of skill. Without due attention to management, coordination becomes excessively difficult, working systems are outgrown and become strained, and much of the important work described in this series just won’t happen. Management is part of the infrastructure of the organization, and specifically the part which enables it to adapt and change as it grows.

Old Status Quo

People generally “just do stuff”, meaning there is little conscious understanding of the system in which people are working. If explicit managers exist, their jobs are poorly understood. Managers themselves may be confused or uncertain about what their purpose is, particularly if they are in such a role for the first time. The organization itself has probably developed more through accretion than deliberate design.

New Status Quo

People work within systems which help coordinate their work. These systems are consciously designed, explicitly communicated, and changed as often as necessary. Managers guide and coordinate the development and continuous improvement of these systems. The role of managers in the organization is broadly understood, and managers receive the training, support and coaching they need to be successful.

Behaviors that help
  • It can be helpful to bring more experienced managers into the organization at this stage, especially if there isn’t much management experience in house.
  • Show everyone in the organization (including managers themselves) what managers do and why it matters.
  • Consider very carefully whether someone should become a manager.
  • If someone does take on a management role, treat this as a completely new job, which requires handing off their existing responsibilities and learning a new discipline. Don’t treat it as just an extension of their work. Write a new job description and discuss it up front.
Obstacles that stand in our way
  • Management misbeliefs
  • Granting “promotions” to management roles as rewards for performance
  • Many people, when they experience what management work is like, don’t enjoy it and aren’t motivated by it. It can be hard to predict when this will be the case, and people can feel “trapped” in a management role that they don’t want. Make sure there are mechanisms to gracefully transition out of roles that don’t fit for the people holding them.

Matt Zimmerman: Scaling Human Systems: Roles and Responsibilities

Planet Ubuntu - Sun, 06/29/2014 - 13:05

This is part 5 in a series on organizational design and growth.

What it means

Each of us has a job to do, probably more than one, and our teammates should know what they are.

Why it’s important

Roles are a kind of standing commitment we make to each other. They’re a way of dividing up work which is easy to understand and simple to apply. Utilizing this tool will make it easier for us to coordinate our day to day work, and manage the changes and growth we’re going through.

Old Status Quo

Roles are vague or nonexistent. Management roles in particular are probably not well understood. Many people juggle multiple roles, all of which are implicit. Moving to a new team means learning from scratch what other people do. People take responsibility for tasks and decisions largely on a case-by-case basis, or based on implicit knowledge of what someone does (or doesn’t do). In many cases, there is only one person in the company who knows how to perform a certain function. When someone leaves or goes on vacation, gaps are left behind.

New Status Quo

Each individual has a clear understanding of the scope of their job. We have a handful of well defined roles, which are used by multiple teams and have written definitions. People who are new or transfer between teams have a relatively easy time understanding what the people around them are doing. Many day to day responsibilities are defined by roles, and more than one person can fill that role. When someone leaves a team, another person can cover their critical roles.

Behaviors that help
  • Define project roles: when starting something new, make it explicit who will be working on it. Often this will be more than one person, often from different teams. For example, customer-facing product changes should have at least a product owner and an engineering owner. This makes it easy to tell if too many concurrent projects are dependent on a single person, which is a recipe for blockage.

  • Define team roles: Most recurring tasks should fall within a defined role. An owner of a technical service is an example of a role. An on-call engineer is an example of a time-limited role. There are many others which will depend on the team and its scope.

  • Define job roles: Have a conversation with your teammates and manager about what the scope of your job is, which responsibilities are shared with other members of the team and which are yours alone.

Obstacles that stand in our way
  • Getting hung up on titles as ego gratification. Roles are tools, not masters.

  • Fear that a role limits your options, locks you into doing one thing forever. Roles can be as flexible as we want them to be.


Colin King: idlestat: a tool to measure times in idle and operational states

Planet Ubuntu - Sun, 06/29/2014 - 10:05
Linaro's idlestat is another useful tool in the arsenal of CPU monitoring utilities.  Idlestat monitors and captures CPU C-state and P-state transitions using the kernel Ftrace tracer, and outputs statistics based on entering/exiting each state for each CPU.  Idlestat also captures IRQ activity, showing which interrupts caused a CPU to exit an idle state - knowing why a processor came out of a deep C-state is always a very useful way to help diagnose power consumption issues.

Using idlestat is easy. To capture 20 seconds of activity into a log file called example.log, run:
sudo idlestat --trace -f example.log -t 20
..and this will display the per-CPU C-state, P-state, and IRQ statistics for that run.

One can also take the saved log file and parse it again to calculate the statistics again using:
idlestat --import -f example.log

One can get the source from here and I've packaged version 0.3 (plus a bunch of minor fixes that will land in 0.4) for Ubuntu 14.10 Utopic Unicorn.

Valorie Zimmerman: Final 10 days of the Randa fundraiser - Please help!

Planet Ubuntu - Sun, 06/29/2014 - 01:03
Hi folks, we're heading to the deadline. Please help put us over the top!

If you've already given money, please help by spreading the word. Small contributions not only add up quickly, they demonstrate that the free software community stands behind our work. Mario gives a nice wrap-up here: blogs.fsfe.org/mario/?p=234. Show us you care.

Personally, I'm scared and excited by the prospect of writing another book, again with people who are over-the-top smart, creative, and knowledgeable. I will personally appreciate widespread support of the work we'll be doing.

If you already know about the Randa Meetings, and what our confederated sprints can produce, just proceed directly to http://www.kde.org/fundraisers/randameetings2014/index.php and kick in some shekels!

And then please pass along the word.

Thanks!

Paul Tagliamonte: Apple and Debian: A tragic love story

Planet Ubuntu - Sat, 06/28/2014 - 18:22

One which ends in tears, I’m afraid.

A week or so ago, I got a MacBook Air 13” (MacBook Air 6,2) to take with me when I head out to local hack sessions, and when I travel out of state for short lengths of time. My current Thinkpad T520i is an amazing machine, and will remain my daily driver for a while to come.

After getting it, I unboxed the machine, and put in a Debian install USB key. I wiped OSX (without even booting into it) and put Debian on the drive.

To my dismay, it didn’t boot up after the install. Using the recovery mode of the install disk, I chrooted in and attempted an upgrade (to ensure I had an up-to-date GRUB). I couldn’t dist-upgrade: the terminal went white. After C-c’ing the terminal, I saw something about systemd’s sysvinit shim, so I manually did a purge of sysvinit and an install of systemd.

I hear this has been resolved now. Thanks :)

My machine still wasn’t booting, so I checked around. Turns out I needed to copy over the GRUB EFI blob to a different location to allow it to boot. After doing that, I could boot Debian whilst holding Alt.

After booting, things went great! Until I shut my lid. After that point, my screen was either 0% bright (absolutely dark) or 100% (facemeltingly bright).

I saw mba6x_bl, which claims to fix it, and has many happy users. If you’re in a similar situation, I highly suggest looking at it.

Unfortunately, this caused my machine to black out on boot and I couldn’t see the screen at all anymore. A rescue disk later, and I was back.

Annoyingly, the bootup noise’s volume is stored in NVRAM, which gets written out by OSX when it shuts down. After consulting the inimitable Matthew Garrett, I was told the NVRAM can be hacked about with by mounting efivarfs (type efivarfs) on /sys/firmware/efi/efivars. Win!
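For reference, that incantation (run as root; it's the standard efivarfs mount, nothing Mac-specific) looks like:

mount -t efivarfs efivarfs /sys/firmware/efi/efivars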

After hitting enter, I got a kernel panic and some notes about hardware assertions failing.

That’s when I returned my MacBook and got a Dell XPS 13.


This is a problem, folks.

If I can’t figure out how to make this work, how can we expect our users to?

Such a shame that some of the most popular hardware on the planet gets no support, locking its users into even less freedom.
