Planet Ubuntu

Planet Ubuntu - http://planet.ubuntu.com/

Serge Hallyn: Ambient capabilities

Fri, 07/24/2015 - 20:16

There are several problems with posix capabilities. The first is the name: capabilities are something entirely different, so now we have to distinguish between “classical” and “posix” capabilities. Next, capabilities come from a defunct posix draft. That’s a serious downside for some people.

But another complaint has come up several times since file capabilities were implemented in Linux: people wanted an easy way for a program, once it has capabilities, to keep them. Capabilities are re-calculated every time the task executes a new file, taking the executable file’s capabilities into account. If a file has no capabilities, then (outside of the special exception for root when SECBIT_NOROOT is off) the resulting privilege set will be empty. And for shellscripts, file capabilities are always empty.

Fundamental to posix capabilities is the concept that part of your authority stems from who you are, and part stems from the programs you run. In a world of trojan horses and signed binaries this may seem sensible, but in the real world it is not always desirable. In particular, consider a case where a program wants to run as a non-root user, but with a few capabilities – perhaps only cap_net_admin. If there is a very small set of files which the program may want to execute with privilege, and none are scripts, then cap_net_admin could be added to the inheritable file capabilities for each of those programs. Then only processes with cap_net_admin in their inheritable process capabilities will be able to run those programs with privilege. But what if the program wants to run *anything*, including scripts, and without having to predict what will be executed? This currently is not possible.

Christoph Lameter has been facing this problem for some time, and requested an enhancement of posix capabilities to allow him to solve it. Not only did he raise the problem and provide a good, real use case, he also sent several patches for discussion. In the end, a concept of “ambient capabilities” was agreed to and implemented (final patch by Andy Lutomirski). It’s currently available in -mm.

Here is how it works:

(Note – for more background on posix capabilities as implemented in linux, please see this Linux Symposium paper. For an example of how to use file capabilities to run as non-root before ambient capabilities, see this Linux Journal article. The ambient capability set has gotten several LWN mentions as well.)

Tasks have a new capability set, pA, the ambient set. As Andy Lutomirski put it, “pA does what most people expect pI to do.” Bits can only be set in pA if they are in pP or pI, and they are dropped from pA if they are dropped from pP or pI. When a new file is executed, all bits in pA are enabled in pP. Note though that executing any file which has file capabilities, or using the SECBIT_KEEPCAPS prctl option (followed by setresuid), will clear pA after the next exec.

So once a program moves CAP_NET_ADMIN into its pA, it can proceed to fork+exec a shellscript doing some /sbin/ip processing without losing CAP_NET_ADMIN.

How to use it (example):

Below is a test program, originally by Christoph, which I slightly modified. Write it to a file ‘ambient.c’. Build it using

$ gcc -o ambient ambient.c -lcap-ng

Then assign it a set of file capabilities, for instance:

$ sudo setcap cap_net_raw,cap_net_admin,cap_sys_nice,cap_setpcap+p ambient

I was lazy and didn’t add interpretation of capabilities to ambient.c, so you’ll need to check /usr/include/linux/capability.h for the integers representing each capability. Run a shell with ambient capabilities by running, for instance:

$ ./ambient -c 13,12,23,8 /bin/bash

In this shell, check your capabilities:

$ grep Cap /proc/self/status
CapInh: 0000000000803100
CapPrm: 0000000000803100
CapEff: 0000000000803100
CapBnd: 0000003fffffffff
CapAmb: 0000000000803100

You can see that you have the requested ambient capabilities. If you run a new shell there, it retains those capabilities:

$ bash -c "grep Cap /proc/self/status"
CapInh: 0000000000803100
CapPrm: 0000000000803100
CapEff: 0000000000803100
CapBnd: 0000003fffffffff
CapAmb: 0000000000803100

What if we drop all but cap_net_admin from our inheritable set? We can test that using the ‘capsh’ program shipped with libcap:

$ capsh --caps=cap_net_admin=pi -- -c "grep Cap /proc/self/status"
CapInh: 0000000000001000
CapPrm: 0000000000001000
CapEff: 0000000000001000
CapBnd: 0000003fffffffff
CapAmb: 0000000000001000

As you can see, the other capabilities were dropped from our ambient, and hence from our effective set.

================================================================================
ambient.c source
================================================================================
/*
* Test program for the ambient capabilities. This program spawns a shell
* that allows running processes with a defined set of capabilities.
*
* (C) 2015 Christoph Lameter
* (C) 2015 Serge Hallyn
* Released under: GPL v3 or later.
*
*
* Compile using:
*
* gcc -o ambient_test ambient_test.c -lcap-ng
*
* This program must have the following capabilities to run properly:
* Permissions for CAP_NET_RAW, CAP_NET_ADMIN, CAP_SYS_NICE
*
* A command to equip the binary with the right caps is:
*
* setcap cap_net_raw,cap_net_admin,cap_sys_nice+p ambient_test
*
*
* To get a shell with additional caps that can be inherited by other processes:
*
* ./ambient_test /bin/bash
*
*
* Verifying that it works:
*
* From the bash spawned by ambient_test run
*
* cat /proc/$$/status
*
* and have a look at the capabilities.
*/

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/capability.h>
#include <cap-ng.h>

/*
* Definitions from the kernel header files. These are going to be removed
* when the /usr/include files have these defined.
*/
#define PR_CAP_AMBIENT 47
#define PR_CAP_AMBIENT_IS_SET 1
#define PR_CAP_AMBIENT_RAISE 2
#define PR_CAP_AMBIENT_LOWER 3
#define PR_CAP_AMBIENT_CLEAR_ALL 4

static void set_ambient_cap(int cap)
{
	int rc;

	capng_get_caps_process();
	rc = capng_update(CAPNG_ADD, CAPNG_INHERITABLE, cap);
	if (rc) {
		printf("Cannot add inheritable cap\n");
		exit(2);
	}
	capng_apply(CAPNG_SELECT_CAPS);

	/* Note the two 0s at the end. Kernel checks for these */
	if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, cap, 0, 0)) {
		perror("Cannot set cap");
		exit(1);
	}
}

void usage(const char *me) {
	printf("Usage: %s [-c caps] new-program new-args\n", me);
	exit(1);
}

int default_caplist[] = {CAP_NET_RAW, CAP_NET_ADMIN, CAP_SYS_NICE, -1};

int *get_caplist(const char *arg) {
	int i = 1;
	int *list = NULL;
	char *dup = strdup(arg), *tok;

	for (tok = strtok(dup, ","); tok; tok = strtok(NULL, ",")) {
		list = realloc(list, (i + 1) * sizeof(int));
		if (!list) {
			perror("out of memory");
			exit(1);
		}
		list[i - 1] = atoi(tok);
		list[i] = -1;
		i++;
	}
	return list;
}

int main(int argc, char **argv)
{
	int rc, i, gotcaps = 0;
	int *caplist = NULL;
	int index = 1; // argv index for cmd to start

	if (argc < 2)
		usage(argv[0]);

	if (strcmp(argv[1], "-c") == 0) {
		if (argc <= 3) {
			usage(argv[0]);
		}
		caplist = get_caplist(argv[2]);
		index = 3;
	}

	if (!caplist) {
		caplist = (int *)default_caplist;
	}

	for (i = 0; caplist[i] != -1; i++) {
		printf("adding %d to ambient list\n", caplist[i]);
		set_ambient_cap(caplist[i]);
	}

	printf("Ambient_test forking shell\n");
	if (execv(argv[index], argv + index))
		perror("Cannot exec");

	return 0;
}
================================================================================


José Antonio Rey: How UbuConLA 2015 evolved in the past months

Fri, 07/24/2015 - 20:09

Wow, what can I say. To be honest, I am really impressed on how UbuConLA is shaping. So far there’s been a lot of stuff going through my head since I just finished finals, but now that I have a clear view of the entire panorama I really like how this conference is turning out to be.

In the past I’ve staffed booths and given talks at conferences, as well as organized small-ish events. However, this is my first big conference, and you can’t imagine the excited smile I have on my face when I see things running as expected. It’s been a rough few months trying to balance conference planning with booth planning at TXLF, as well as studies and some other communities I contribute to. However, it’s been such an amazing experience.

Several months ago I was in the middle of a dilemma, since the venue we were thinking of changed some requirements. Fortunately, my university, the University of Lima, has been extremely helpful. I cannot thank enough all the people I have met on the way who have given me such a big help whenever things were starting to turn in the wrong direction, and all we have accomplished so far.

In the past couple of days I have received the name badges and banners we will be using for the conference, and even though there’s still some stuff in the way, I can’t be more excited about how things are starting to look. Last week everything was just ideas, and now we’re starting to see the digital turn into physical objects. That, for me, is one of the most exciting parts.

Luckily (and at the same time unfortunately) we had to close the registration form yesterday. The auditorium that was given to us has a capacity of 233 people, and counting volunteers, speakers, staff and more reduces that a bit. How many people have registered so far, you ask? Three. Hundred. That means we’re gonna fill that auditorium! A full room, what more can we ask for. If you haven’t registered for the conference, do not fear. The registration is just a fast track and it’s first-come, first-served. So make sure to keep an eye on all the social media pages for information on how to attend.

The next couple weeks are going to be the most difficult ones. We have a public holiday coming up until Thursday here in Peru, and from then on we need to start the final preparations. This is looking so good, I hope you all are surprised when you come to the conference.

After going through some of the process of organizing a medium-sized conference I can now really appreciate all the effort it takes to organize a big, and even a medium-sized conference. If you go to a conference and see the chairs or organizers, make sure to give them a pat on the back and thank them for all the efforts. They’re the people we need to thank for keeping the human touch and interaction alive!

And I think that’s all. Hope to see you in August, can’t wait for UbuConLA to happen!


Ronnie Tucker: Linux Mint 17.2 offers desktop familiarity and responds to user wants

Fri, 07/24/2015 - 19:12

These days, the desktop OSes grabbing headlines have, for the most part, left the traditional desktop behind in favor of what’s often referred to as a “shell.” Typically, such an arrangement offers a search-based interface. In the Linux world, the GNOME project and Ubuntu’s Unity desktop interfaces both take this approach.

This is not a sea change that’s limited to Linux, however. For example, the upheaval of the desktop is also happening in Windows land. Windows 8 departed from the traditional desktop UI, and Windows 10 looks like it will continue that rethinking of the desktop, albeit with a few familiar elements retained. Whether it’s driven by, in Ubuntu’s case, a vision of “convergence” between desktop and mobile or perhaps just the need for something new (which seems to be the case for GNOME 3.x), developers would have you believe that these mobile-friendly, search-based desktops are the future of, well, everything.

 

Source: http://arstechnica.com/gadgets/2015/07/rare-breed-linux-mint-17-2-offers-desktop-familiarity-and-responds-to-user-wants/

Submitted by: Scott Gilbertson

Elizabeth K. Joseph: OSCON 2015

Fri, 07/24/2015 - 17:27

Following the Community Leadership Summit (CLS), which I wrote about here, I spent a couple of days at OSCON.

Monday kicked off with Jono Bacon’s community leadership workshop. I attended one of these a couple of years ago, so it was really interesting to see how his advice has evolved with the changes in tooling and the progress that communities in tech and beyond have made. I took a lot of notes, but everything I wanted to say here has been summarized by others in a series of great posts on opensource.com:

…hopefully no one else went to Powell’s to pick up the recommended books, I cleared them out of a couple of them.

That afternoon Jono joined David Planella of the Community Team at Canonical and Michael Hall, Laura Czajkowski and I of the Ubuntu Community Council to look through our CLS notes and come up with some talking points to discuss with the rest of the Ubuntu community regarding everything from in person events (stronger centralized support of regional Ubucons needed?) to learning what inspires people about the active Ubuntu phone community and how we can make them feel more included in the broader community (and helping them become leaders!). There was also some interesting discussion around the Open Source projects managed by Canonical and expectations for community members with regard to where they can get involved. There are some projects where part time, community contributors are wanted and welcome, and others where it’s simply not realistic due to a variety of factors, from the desire for in-person collaboration (a lot of design and UI stuff) to the new projects with an exceptionally fast pace of development that makes it harder for part time contributors (right now I’m thinking anything related to Snappy). There are improvements that Canonical can make so that even these projects are more welcoming, but adjusting expectations about where contributions are most needed and wanted would be valuable to me. I’m looking forward to discussing these topics and more with the broader Ubuntu community.


Laura, David, Michael, Lyz

Monday night we invited members of the Oregon LoCo out and had an Out of Towners Dinner at Altabira City Tavern, the restaurant on top of the Hotel Eastlund where several of us were staying. Unfortunately the local Kubuntu folks had already cleared out of town for Akademy in Spain, but we were able to meet up with long-time Ubuntu member Dan Trevino, who used to be part of the Florida LoCo with Michael, and who I last saw at Google I/O last year. I enjoyed great food and company.

I wasn’t speaking at OSCON this year, so I attended with an Expo pass and after an amazing breakfast at Mother’s Bistro in downtown Portland with Laura, David and Michael (…and another quick stop at Powell’s), I spent Tuesday afternoon hanging out with various friends who were also attending OSCON. When 5PM rolled around the actual expo hall itself opened, and surprised me with how massive and expensive some of the company booths had become. My last OSCON was in 2013 and I don’t remember the expo hall being quite so extravagant. We’ve sure come a long way.

Still, my favorite part of the expo hall is always the non-profit/open source project/organization area where the more grass-roots tables are. I was able to chat with several people who are really passionate about what they do. As a former Linux Users Group organizer and someone who still does a lot of open source work for free as a hobby, these are my people.

Wednesday was my last morning at OSCON. I did another walk around the expo hall and chatted with several people. I also went by the HP booth and got a picture of myself… with myself. I remain very happy that HP continues to support my career in a way that allows me to work on really interesting open source infrastructure stuff and to travel the world to tell people about it.

My flight took me home Wednesday afternoon and with that my OSCON adventure for 2015 came to a close!

More OSCON and general Portland photos here: https://www.flickr.com/photos/pleia2/sets/72157656192137302

Canonical Design Team: Converting old guidelines to vanilla

Fri, 07/24/2015 - 04:02
How the previous guidelines worked

Guidelines is essentially a framework built by the Canonical web design team. The framework has an array of tools to make it easy to create Ubuntu-themed sites. The guidelines were a collaboration between developers and designers and followed a consistent look, which meant in-house teams and community websites could have a consistent brand feel.

It worked in one way: a large framework of modules, helpers and components which built the Ubuntu style for all our sites. This structure required a lot of overrides and workarounds for different projects, which added to the bloat the guidelines had accumulated. The Canonical and Cloud sites required a large set of overrides to imprint their own visual requirements, creating a lot of duplication and overhead for each site.

There was no build system, nor a way to update to the latest version unless you used the hosted pre-compiled guidelines or pulled from our Bazaar repository. Not having any form of build step meant relying on a local Sass compiler or setting up a watcher for each project. We also had no viable way to check for linting errors or to enforce a concrete coding standard.

The framework itself was a CSS framework ported into Sass, not utilising placeholders or mixins correctly, and with a bloated number of variables. Changing one colour, for example, or changing the size of an element wasn’t as easy as passing set values to a mixin or changing one variable.

Unlike how we have now built Vanilla, where all preprocessor styles are created via mixins, responsive changes were made in a large media query at the end of each document, and this again was repeated for our Canonical and Cloud styles.

Removing Ubuntu and Canonical from theme

Our first task in building Vanilla was referencing all elements which were ‘Ubuntu’ centric – anything which had a unique class, colour or style. Once these were identified, the team systematically took each section of the guidelines, removed the classes or variables, and created new versions. Once this stage was complete, the team was able to look at refactoring and updating the code.

Clean-up and making it generic

When starting this project, we decided to update how we write any new module or element. Linting was a big factor, and using a build system like gulp finally gave us the ability to adhere to a coding standard. This meant a lot of modules and elements had to be rewritten and improved: trimming down the Sass nesting, applying new techniques such as flexbox, and cleaning up duplicated styles.

But the main goal was to make it generic, extendable and easy – not the simplest of tasks. This meant removing any custom modules or specific styles and classes, but also building the framework so it could be changed via a variable update or a value change within a mixin. We wanted a Vanilla theme to inherit another developer’s styles and have them cascade throughout the whole framework with ease. Setting the brand colour, for example, affects the whole framework and changes multiple modules and elements. But you are not restricted, which was a bottleneck we had with the old guidelines.

Using Sass mixins

Mixins are a powerful part of Sass which we weren’t utilising. In the guidelines they were used to create preprocessor polyfills, which was annoying; gulp now replaces that need. We used mixins to modularise the entire framework, thus giving flexibility over which parts of the framework a project requires.

The ability to easily turn a section of Vanilla on or off felt very powerful, but it was required: we wanted developers to choose what was needed for their project. This is the opposite of the guidelines, where you would receive the entire framework. In Vanilla, each of our elements or modules is encapsulated within a mixin, and some take values which affect them. For example, the buttons mixin:

@mixin vf-button($button-color, $button-bg, $border-color) {
  @extend %button-pattern;
  color: $button-color;
  background: $button-bg;

  @if $border-color != null {
    border: 1px solid $border-color;
  }

  &:hover {
    background: darken($button-bg, 6.2%);

    @if $button-bg == $transparent {
      text-decoration: underline;
    }
  }
}



The above code shows how this mixin isn’t attached to fixed styles or colours. When building a new Vanilla theme, a few variable changes will style any button to the project’s requirements. This is something we have replicated throughout the project, and it creates a far better modular framework.
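As a sketch of how such a mixin is consumed, a project stylesheet could pass its own values when including it. The variable and class names below are invented for this example, and %button-pattern and $transparent are assumed to be provided by the framework:

```scss
// Hypothetical project stylesheet consuming the vf-button mixin above.
// $project-accent and .button--accent are invented for this sketch.
$project-accent: #990000;

.button--accent {
  // null border-color means the mixin emits no border rule
  @include vf-button($button-color: #fff, $button-bg: $project-accent, $border-color: null);
}
```

Because the colours arrive as arguments rather than being hard-coded, the same mixin serves every theme without overrides.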

Creating new themes

As I mentioned earlier, a few changes can set up a whole new theme in Vanilla, using it as a base and then adding or extending new styles. Changing the branding or a font family just requires overriding the default value: for example, $brand-colour: $orange !default; is set in the global variables document. Amending this in another document and setting it to $brand-colour: #990000; will change any element affected by the brand colour, thus creating the beginning of a new theme.
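A minimal sketch of that override (the import path is hypothetical, not Vanilla’s real file layout):

```scss
// Hypothetical theme entry point. Vanilla declares
// $brand-colour: $orange !default;, so a value assigned before that
// declaration is imported takes precedence.
$brand-colour: #990000;     // our brand red replaces the default orange

@import 'global-variables'; // hypothetical path to Vanilla's variables
```

The !default flag is what makes this work: Sass only applies the framework’s value if the variable has not already been set.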

We can also take this per-module mixin: include the module into a new class or element, then extend or add to it. This means themes are not constricted to just using what is there, which gives more freedom. This method is particularly useful for the web team as we build themes for Ubuntu, Canonical and Cloud products.

An example of a live theme we have created is the Ubuntu Vanilla theme. This is an extension of the Vanilla framework and is set up to override any required variables to give it the Ubuntu brand. Diving into theme.scss shows all the elements used from Vanilla, but also Ubuntu-specific modules. These are used exclusively for the Ubuntu brand but are structured in the same manner as the Vanilla framework. This reduces the complexity of maintaining these themes, and developers can easily pick up what has been built or use it as a reference for building their own theme versions.

Jonathan Riddell: Mi Charla

Fri, 07/24/2015 - 03:49

The slides from my presentation yesterday at Akademy-ES are now on my talks web page.


The Fridge: Ubuntu 14.10 (Utopic Unicorn) End of Life reached on July 23, 2015

Thu, 07/23/2015 - 16:11

This is a follow-up to the End of Life warning sent earlier this month to confirm that as of today (July 23, 2015), Ubuntu 14.10 is no longer supported. No more package updates will be accepted to 14.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 14.10 (Utopic Unicorn) release almost 9 months ago, on October 23, 2014. As a non-LTS release, 14.10 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 14.10 will reach end of life on Thursday, July 23rd. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 14.10.

The supported upgrade path from Ubuntu 14.10 is via Ubuntu 15.04. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/VividUpgrades

Ubuntu 15.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jul 23 21:49:45 UTC 2015 by Adam Conrad

Ubuntu Podcast from the UK LoCo: S08E20 – Who’s Your Caddy? - Ubuntu Podcast

Thu, 07/23/2015 - 15:22

It’s Episode Twenty of Season Eight of the Ubuntu Podcast! Mark Johnson, Laura Cowen, and Martin Wimpress are together with guest presenter Joe Ressington and speaking to your brain.

In this week’s show:

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Carla Sella: Let's test Ubuntu Phone's Wi-Fi Hotspots (internet tethering) feature

Thu, 07/23/2015 - 12:48
In system settings, under “Cellular”: the new Wi-Fi hotspot feature

Enabling the hotspot feature

Hotspot feature settings

We have a brand new Wi-Fi hotspot (internet tethering) feature that's about to land on Ubuntu Phone with OTA-6. I know a lot of people who have been waiting eagerly for this feature, so let's see how easy it is to help out testing it :-). You can test this feature on both Ubuntu 15.04 (Vivid Vervet) and Ubuntu 15.10 (Wily Werewolf) based phone images.
First you need to enable "Developer mode" on your Ubuntu Phone, to do this you go to system settings, "About this phone", swipe down right to the bottom and tap on "Developer mode", on the Developer mode page turn on "Developer mode" switch:


Now let's connect the phone to your Ubuntu desktop PC with a USB cable and in terminal write:
citrain device-upgrade <silo #> <pin/password on device>
so for testing this feature the command will be:
$ citrain device-upgrade  46 0000
where 0000 is your device's pin or password and 46 is the silo number.
If you don't have the phablet-tools-citrain package installed you need to:
$ sudo apt install phablet-tools-citrain 

Now to start the hotspot: 
  1. Ensure Wi-Fi is enabled.
  2. Go to System Settings -> Mobile/Cellular
  3. Tap “Wi-Fi hotspot”
  4. Set up your hotspot
  5. Enable it.
If you hit an issue, here's how to report it:

- A client can't see the hotspot or the hotspot does not work:
  * File against: https://bugs.launchpad.net/ubuntu/+source/indicator-network/+filebug
  * Please attach /var/log/syslog as well as ~/.cache/upstart/indicator-network.log

- There's a problem with the System Settings UI:
  * File against: https://bugs.launchpad.net/ubuntu/+source/ubuntu-system-settings/+filebug
  * Please attach the log file, which you'll find here: ~/.cache/upstart/application-legacy-ubuntu-system-settings-.log


Enjoy testing :-D.

Valorie Zimmerman: I'm loving Akademy!

Thu, 07/23/2015 - 10:20
And it hasn't even started. Scarlett and I flew to A Coruña arriving Tuesday, and spent yesterday seeing the town. Today is all about preparing for the e.V. AGM and the Akademy talks and BoFs following. 


Wish you were here!

Riccardo Padovani: Meizu MX4 is awesome after OTA-5

Thu, 07/23/2015 - 08:00

One month ago I wrote my first review of the Meizu MX4, and I was disappointed by the phone’s lack of optimization and by some other problems as well.

Now with the OTA-5 the phone is working as it should have from the very beginning. It’s a real shame that Meizu started selling them before this update, because a lot of reviews currently online are bad due to problems fixed in this update. It’s like a whole new phone!

Meizu MX4-specific improvements

Battery

One of the things that I most appreciated in the BQ Aquaris E4.5 was the battery life, which the MX4 was not on par with – until two days ago, that is.

After the update I did a full recharge and I didn’t have to charge again for 36 hours. Maybe for some of you that is not a lot, but for me it’s much longer than I was used to from Android. I always had 4G turned on, watched at least 30 minutes of videos on YouTube, received and replied to more than 500 messages, received and replied to emails, surfed the web, made a couple of calls and more.

Battery life could be a killer feature of Ubuntu, and there is still a lot to improve.

Optimization

With this update the phone doesn’t lag and doesn’t get too hot. There’s also an increase of icons per row (following a change with the grid units) which is much better!

Oh yeah, and the LED for notifications works! Yes!

General improvements

The speed improvement is terrific. With every update a lot of things change and you can spend hours finding all of them. If you’re passionate about technology you definitely have to buy an Ubuntu Phone (the MX4 or the Aquaris depending on your price range).

Some of the most interesting changes I found are:

- Unity rotation: finally it isn’t weird to use the phone in landscape mode. There are still some bugs, though, and the dash doesn’t rotate, which I hope they fix soon!
- New icons: they look great; awesome job, Design Team! They also look much clearer and better at the MX4’s resolution. Kudos!
- Change reviews: time to update some old feedback I left when the apps were still in development.
- New Tab in the Browser: it has been improved with some of the contributions I worked on in the last few months. I love the Browser and I love all the updates it is getting, on the desktop as well.

Now I’m very happy with the phone and I still miss nothing from Android. Yes, it isn’t for everyone (yet), but the number of improvements it gets every month is astonishing and I think it will become available to the masses very soon.

It is still missing some apps, but that gap can be filled when some big companies come and join this trip!

Thanks to Aaron Honeycutt for helping me writing this article.

If you like my work and want to support me, just send me a thank you by email or offer me a beer :-)

Ciao,
R.

Ubuntu App Developer Blog: Announcing UbuContest 2015

Thu, 07/23/2015 - 06:04

Have you read the news already? Canonical, the Ubucon Germany 2015 team, and the UbuContest 2015 team are happy to announce the first UbuContest! Contestants from all over the world have until September 18, 2015 to build and publish their apps and scopes using the Ubuntu SDK and Ubuntu platform. The competition has already started, so register your competition entry today! You don’t have to create a new project; submit what you have and improve it over the next two months.

But we know it's not all about shiny new apps and scopes! A great platform also needs content, great design, testing, documentation, bug management, developer support, interesting blog posts, technology demonstrations and all of the other incredible things our community does every day. So we give you, our community members, the opportunity to nominate other community members for prizes!

We are proud to present five dedicated categories:

  1. Best Team Entry: A team of up to three developers may register up to two apps/scopes they are developing. The jury will assign points in categories including "Creativity", "Functionality", "Design", "Technical Level" and "Convergence". The top three entries with the most points win.

  2. Best Individual Entry: A lone developer may register up to two apps/scopes he or she is developing. The rest of the rules are identical to the "Best Team Entry" category.

  3. Outstanding Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something "exceptional" on a technical level. The nominated candidate with the most jury votes wins.

  4. Outstanding Non-Technical Contribution: Members of the general public may nominate candidates who, in their opinion, have done something exceptional, but non-technical, to bring the Ubuntu platform forward. So, for example, you can nominate a friend who has reported and commented on all those phone-related bugs on Launchpad. Or nominate a member of your local community who did translations for Core Apps. Or nominate someone who has contributed documentation, written awesome blog articles, etc. The nominated candidate with the most jury votes wins.

  5. Convergence Hero: The "Best Team Entry" or "Best Individual Entry" contribution with the highest number of "Convergence" points wins. The winner in this category will probably surprise us in ways we have yet to imagine.

Our community judging panel members Laura Cowen, Carla Sella, Simos Xenitellis, Sujeevan Vijayakumaran and Michael Zanetti will select the winners in each category. Winners will be awarded items from a huge pile of prizes, including travel subsidies for the first-placed winners to attend Ubucon Germany 2015 in Berlin, four Ubuntu Phones sponsored by bq and Meizu, t-shirts, and bundles of items from the official Ubuntu Shop.

We wish all the contestants good luck!

Go to ubucontest.eu or ubucon.de/2015/contest for more information, including how to register and nominate folks. You can also follow us on Twitter @ubucontest, or contact us via e-mail at contest@ubucon.de.


Miley: Really getting there

Thu, 07/23/2015 - 05:36
Hi there everyone,
Success is on the cards for Africa. 15 of the 18 countries have joined the group, and it looks like we will soon be helping one country form a new LoCo there.
As you can see from https://launchpad.net/~ubuntu-africa/+members, this group is growing in leaps and bounds. Our first brainstorming meeting will be on the 29th of this month at 20:30 Africa time (UTC+2). I am hoping that a Council member or two can attend, and we have 3 membership board members, so all is looking good. Everyone is welcome to join us at our first meeting. One of the Tunisia guys has been improving our wiki page as well:
https://wiki.ubuntu.com/AfricanTeams
Keep well everyone.

Daniel Pocock: Unpaid work training Google's spam filters

Thu, 07/23/2015 - 01:49

This week, there has been increased discussion about the pain of spam filtering by large companies, especially Google.

It started with Google's announcement that they are offering a service for email senders to know if their messages are wrongly classified as spam. Two particular things caught my attention: the statement that less than 0.05% of genuine email goes to the spam folder by mistake and the statement that this new tool to understand misclassification is only available to "help qualified high-volume senders".

From there, discussion has proceeded with Linus Torvalds blogging about his own experience of Google misclassifying patches from Linux contributors as spam and that has been widely reported in places like Slashdot and The Register.

Personally, I've observed much the same thing from the other perspective. While Torvalds complains that he isn't receiving email, I've observed that my own emails are not always received when the recipient is a Gmail address.

It seems that Google expects their users to work a little bit every day, going through every message in the spam folder and explicitly clicking the "Not Spam" button:

so that Google can improve their proprietary algorithms for classifying mail. If you just read or reply to a message in the folder without clicking the button, or if you don't do this for every message, including mailing list posts and other trivial notifications that are not actually spam, more important messages from the same senders will also continue to be misclassified.

If you are not willing to volunteer your time to do this, or if you are simply one of those people who has better things to do, Google's Gmail service is going to have a corrosive effect on your relationships.

A few months ago, we visited Australia and I sent emails to many people who I wanted to catch up with, including invitations to a family event. Some people received the emails in their inboxes, yet other people didn't see them because the systems at Google (and other companies, notably Hotmail) put them in a spam folder. The rate at which this appeared to happen was definitely higher than the 0.05% quoted in the Google article above. Maybe the Google spam filters noticed that I hadn't sent email to some members of the extended family for a long time and this triggered the spam algorithm? Yet it was precisely while we were visiting Australia that email needed to work reliably with those contacts, as we don't fly out there every year.

A little bit earlier in the year, I was corresponding with a few students who were applying for Google Summer of Code. Some of them observed the same thing: they sent me an email and didn't receive my response until they looked in their spam folder a few days later. Last year, a GSoC mentor I know lost track of a student for over a week because Google silently discarded chat messages, so it appears Google has not just shot themselves in the foot, they managed to shoot their foot twice.

What is remarkable is that in both cases, the email problems and the XMPP problems, Google doesn't send any error back to the sender so that they know their message didn't get through. Instead, it is silently discarded or left in a spam folder. This is the most corrosive form of communication problem as more time can pass before anybody realizes that something went wrong. After it happens a few times, people lose a lot of confidence in the technology itself and try other means of communication which may be more expensive, more synchronous and time intensive or less private.

When I discussed these issues with friends, some people replied by telling me I should send them things through Facebook or WhatsApp, but each of those services has a higher privacy cost and there are also many other people who don't use either of those services. This tends to fragment communications even more as people who use Facebook end up communicating with other people who use Facebook and excluding all the people who don't have time for Facebook. On top of that, it creates more tedious effort going to three or four different places to check for messages.

Despite all of this, the suggestion that Google's only response is to build a service to "help qualified high-volume senders" get their messages through leaves me feeling that things will get worse before they start to get better. There is no mention in the Google announcement about what they will offer to help the average person eliminate these problems, other than to stop using Gmail or spend unpaid time meticulously training the Google spam filter and hoping everybody else does the same thing.

Some more observations on the issue

Many spam filtering programs used in corporate networks, such as SpamAssassin, add headers to each email to suggest why it was classified as spam. Google's systems don't appear to give any such feedback to their users or message senders though, just a very basic set of recommendations for running a mail server.
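
Those headers are machine-readable, which is exactly the kind of feedback Gmail withholds. As an illustration, a minimal Python sketch that extracts the verdict from an X-Spam-Status header; the header names follow SpamAssassin's conventions, but the sample message, score and test names below are invented:

```python
import email
from email.message import Message

# A sample message carrying SpamAssassin-style verdict headers.
# The addresses, score and test names are invented for illustration.
raw = """\
From: sender@example.org
To: recipient@example.org
Subject: Meeting next week
X-Spam-Status: Yes, score=7.2 required=5.0 tests=BAYES_99,HTML_ONLY
X-Spam-Level: *******

Hello...
"""

def spam_verdict(msg: Message):
    """Return (is_spam, score, tests) from an X-Spam-Status header, or None."""
    status = msg.get("X-Spam-Status")
    if status is None:
        return None
    # Header format: "Yes, score=7.2 required=5.0 tests=A,B"
    verdict, _, details = status.partition(",")
    fields = dict(item.split("=", 1) for item in details.split() if "=" in item)
    tests = fields["tests"].split(",") if fields.get("tests") else []
    return verdict.strip() == "Yes", float(fields.get("score", "0")), tests

msg = email.message_from_string(raw)
print(spam_verdict(msg))  # → (True, 7.2, ['BAYES_99', 'HTML_ONLY'])
```

With such headers, both the recipient and an intermediate mail server can see why a message was held back; a sender receiving a bounce with these details could act on them.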

Many chat protocols work with an explicit opt-in. Before you can exchange messages with somebody, you must add each other to your buddy lists. Once you do this, virtually all messages get through without filtering. Could this concept be adapted to email, maybe giving users a summary of messages from people they don't have in their contact list and asking them to explicitly accept or reject each contact?
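
As a thought experiment, that opt-in model could be sketched as a simple routing rule: mail from known contacts is delivered, everything else is queued for an explicit accept or reject decision instead of being silently discarded. A minimal Python sketch; the addresses and function names are invented for illustration:

```python
def route_message(sender, contacts, pending, subject):
    """Deliver mail from known contacts; hold everything else for review.

    Returns 'inbox' or 'pending' to show where the message went.
    """
    if sender in contacts:
        return "inbox"
    # Unknown sender: queue the message so the user can explicitly accept
    # or reject the contact, rather than burying it in a spam folder.
    pending.setdefault(sender, []).append(subject)
    return "pending"

def accept_contact(sender, contacts, pending):
    """The user accepted the contact: remember the sender, release queued mail."""
    contacts.add(sender)
    return pending.pop(sender, [])

contacts = {"alice@example.org"}
pending = {}
print(route_message("alice@example.org", contacts, pending, "Catching up"))  # inbox
print(route_message("bob@example.org", contacts, pending, "Hello"))          # pending
print(accept_contact("bob@example.org", contacts, pending))                  # queued mail
```

The point of the sketch is that nothing is lost: every message from an unknown sender remains visible until a human decides.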

If a message spends more than a week in the spam folder and Google detects that the user isn't ever looking in the spam folder, should Google send a bounce message back to the sender to indicate that Google refused to deliver it to the inbox?

I've personally heard that misclassification occurs with mailing list posts as well as private messages.

Daniel Pocock: Recording live events like a pro (part 1: audio)

Thu, 07/23/2015 - 00:14

Whether it is a technical talk at a conference, a political rally or a budget-conscious wedding, many people now have most of the technology they need to record it and post-process the recording themselves.

For most events, audio is an essential part of the recording. There are exceptions: if you take many short clips from a wedding and mix them together, you could leave out the audio and just dub the couple's favourite song over it all. For a video of a conference presentation, though, the speaker's voice is essential.

These days, it is relatively easy to get extremely high quality audio using a lapel microphone attached to a smartphone. Let's have a closer look at the details.

Using a lavalier / lapel microphone

Full wireless microphone kits with microphone, transmitter and receiver are usually $US500 or more.

The lavalier / lapel microphone by itself, however, is relatively cheap, under $US100.

The lapel microphone is usually an omnidirectional microphone that will pick up the voices of everybody within a couple of meters of the person wearing it. It is useful for a speaker at an event, some types of interviews where the participants are at a table together and it may be suitable for a wedding, although you may want to remember to remove it from clothing during the photos.

There are two key features you need when using such a microphone with a smartphone:

  • TRRS connector (this is the type of socket most phones and many laptops have today)
  • Microphone impedance should be at least 1kΩ (one kilohm), or the phone may not recognize the microphone when it is connected
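
One plausible way to picture the impedance requirement: the phone detects a headset microphone by measuring the voltage its mic bias supply develops across the attached load, so a very low impedance leaves almost no voltage to sense. A rough back-of-the-envelope sketch in Python; the bias voltage and bias resistance below are assumed illustrative values, not taken from any phone's datasheet:

```python
def mic_sense_voltage(v_bias, r_bias, r_mic):
    """Voltage the phone sees across the microphone (a simple voltage divider)."""
    return v_bias * r_mic / (r_bias + r_mic)

# Assumed, illustrative values: 2.7 V bias fed through a 2.2 kOhm bias resistor.
V_BIAS, R_BIAS = 2.7, 2200.0

# A low-impedance mic pulls the line close to ground; a 1 kOhm or higher
# load keeps the sensed voltage well clear of zero, which is (roughly) why
# such mics are reliably detected.
for r_mic in (300.0, 1000.0, 2200.0):
    print("%6.0f Ohm -> %.2f V" % (r_mic, mic_sense_voltage(V_BIAS, R_BIAS, r_mic)))
```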

Many leading microphone vendors have released lapel mics with these two features aimed specifically at smartphone users. I have personally been testing the Rode smartLav+.

Choice of phone

There are almost 10,000 varieties of smartphone just running Android, as well as iPhones, Blackberries and others. It is not practical for most people to test them all and compare audio recording quality.

It is probably best to test the phone you have and ask some friends if you can make test recordings with their phones too for comparison. You may not hear any difference but if one of the phones has a poor recording quality you will hopefully notice that and exclude it from further consideration.

A particularly important issue is being able to disable AGC (automatic gain control) in the phone. Android has a standard API for disabling AGC, but not all phones or Android variants respect this instruction.

I have personally had positive experiences recording audio with a Samsung Galaxy Note III.

Choice of recording app

Most Android distributions have at least one pre-installed sound recording app. Look more closely and you will find that not all apps are the same. For example, some of the apps have aggressive compression settings that compromise recording quality. Others don't work when you turn off the screen of your phone and put it in your pocket. I've even tried a few that crashed intermittently.

The app I found most successful so far has been Diktofon, which is available on both F-Droid and Google Play. Diktofon has been designed not just for recording, but it also has some specific features for transcribing audio (currently only supporting Estonian) and organizing and indexing the text. I haven't used those features myself but they don't appear to cause any inconvenience for people who simply want to use it as a stable recording app.

As the app is completely free software, you can modify the source code if necessary. I recently contributed patches enabling 48kHz recording and disabling AGC. The version with these fixes, 0.9.83, has just been released and appears on F-Droid but has not yet been uploaded to Google Play. You need to go into the settings to make sure AGC is disabled and to set the 48kHz sample rate.

Whatever app you choose, the following settings are recommended:

  • 16 bit or greater sample size
  • 48kHz sample rate
  • Disable AGC
  • WAV file format
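
Since not every app honours its own settings screen, it is worth verifying the output file afterwards. A minimal sketch using Python's standard wave module checks a recording against the settings above; the in-memory buffer here just stands in for a real recording:

```python
import io
import wave

def check_recording(path_or_file):
    """Return a list of problems; empty if the file meets the recommendations."""
    problems = []
    with wave.open(path_or_file, "rb") as w:
        if w.getsampwidth() < 2:          # 16 bit == 2 bytes per sample
            problems.append("sample size below 16 bit")
        if w.getframerate() != 48000:
            problems.append("sample rate is %d Hz, not 48 kHz" % w.getframerate())
    return problems

# Build a tiny in-memory WAV with the recommended parameters to demonstrate.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                 # 16 bit
    w.setframerate(48000)             # 48 kHz
    w.writeframes(b"\x00\x00" * 48)   # 1 ms of silence
buf.seek(0)
print(check_recording(buf))  # → []
```

Running the same check on a file recorded at 44.1kHz or 8 bit would list the offending settings.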

Whatever app you choose, test it thoroughly with your phone and microphone. Make sure it works even when you turn off the screen and put it in your pocket while wearing the lapel mic for an hour. Observe the battery usage.

Gotchas

Now let's say you are recording a wedding and the groom has that smartphone in his pocket and the mic on his collar somewhere. What is the probability that some telemarketer calls just as the couple are exchanging vows? What is the impact on the recording?

Maybe some apps will automatically put the phone in silent mode when recording. More likely, you need to remember this yourself. These are things that are well worth testing though.

Also keep in mind the need to have sufficient storage space and to check whether the app you use is writing to your SD card or internal memory. The battery is another consideration.

In a large event where smartphones are being used instead of wireless microphones, possibly for many talks in parallel, install a monitoring app like Ganglia on the phones to detect and alert if any phone has weak wifi signal, low battery or a lack of memory.

Live broadcasts and streaming

Some time ago I tested RTP multicasting from Lumicall on Android. This type of app would enable a complete wireless microphone setup with live streaming to the internet at a fraction of the cost of a traditional wireless microphone kit. This type of live broadcast could also be done with WebRTC on the Firefox app.

Conclusion

If you research the topic thoroughly and spend some time practicing and testing your equipment, you can make great audio recordings with a smartphone and an inexpensive lapel microphone.

In subsequent blogs, I'll look at tips for recording video and doing post-production with free software.

Ubuntu GNOME: Feedback Time – Results

Wed, 07/22/2015 - 21:14

Hi,

I wish I could be as fast as time flies these days!

Last month, we asked for your help by sharing your opinion about Ubuntu GNOME. We were amazed that so many responded and helped within 10 days – thank you!

Now, this is the 3rd phase and it is time to publish the results:

Obviously, the next step will be the planning phase. The coming days should be very busy for Ubuntu GNOME and we, as always, shall keep you updated.

Thank you!

Ubuntu LoCo Council: LoCo Council restaffed

Wed, 07/22/2015 - 12:29

Old news, I know, but as announced on the Fridge, we have a freshly restaffed LoCo Council.

Following the resignation of Stephen Kellat and the expiration of Marcos Alvarez Costales's term, we now have two new members: Walter Lapchynski and Mariam Hammouda.

Additionally, Pablo Rubianes is joining us for another term.

Please bear with us as we get wiki pages updated and whatnot.

Thomas Ward: Nginx 1.9.3 in PPAs, and retiring of Utopic Uploads for both PPAs

Wed, 07/22/2015 - 12:15

The latest Nginx Mainline version, 1.9.3, is now available in the Mainline PPA (link).

With this 1.9.3 upload to the PPAs, we are retiring the Utopic release from both the NGINX Stable and NGINX Mainline PPAs. The Ubuntu Utopic 14.10 release reaches end of life tomorrow, July 23rd, 2015. We are not planning any additional uploads for Utopic, and we now consider it "disabled" for uploads and building. Packages as they exist in the PPAs will continue to exist, but will not receive updates for Utopic.

Ubuntu Server blog: Server team meeting minutes: 2015-07-21

Wed, 07/22/2015 - 01:56


  • smoser to check with Odd_Bloke on status of high priority bug 1461242
  • Think about numad integration
  • Next meeting will be on Jul 29th at 16:00 UTC in #ubuntu-meeting

Full agenda and log


Didier Roche: Arduino support and various fixes in Ubuntu Make 0.9

Tue, 07/21/2015 - 22:13

A warm summer has started in some parts of the world, and with it the holidays: beaches and various refreshments!

However, the unstoppable Ubuntu Make team wasn't taking a break, and we continued making improvements thanks to the vibrant community around it!

What's new in this release? First, Arduino support has been added, with the vast majority of the work done by Tin Tvrtković. Thanks to him for this excellent work! To install the Arduino IDE, just run:

$ umake ide arduino

Note that your user will be added to the right unix group if it was not already a member. In that case, you will be asked to log in again to be able to communicate with your Arduino device. Then, you can enjoy the Arduino IDE:

Another highlight of this release is the deprecation of the Dart Editor framework and its replacement by a Dart SDK one. As of Dart 1.11, the Dart Editor isn't supported and bundled anymore (it still exists as an independent Eclipse plugin, though). We have thus marked the Dart Editor framework for removal only and added a Dart SDK framework (which adds the SDK to the user's PATH) instead. This is the new default for the Dart category.

Thanks to our extensive tests, we saw that the download page for the 32-bit version of Visual Studio Code changed, and it thus wasn't installable anymore. This is fixed as of this release.

A lot of other improvements (in particular to the test-writing infrastructure, plus other minor fixes) are also part of 0.9. A more detailed changelog is available here.

0.9 is already available in Wily, and through its PPA for the 14.04 LTS and 15.04 Ubuntu releases! Get it while it's hot!
