Feed aggregator

Diego Turcios: VII Central America Free Software Summit

Planet Ubuntu - Tue, 12/29/2015 - 15:25

I just want to clarify that this event happened in August. We were a small group organizing it, and it was an entirely volunteer-run event.

What is the VII Central America Summit?
The Central American Free Software Summit is a space for articulation, coordination and exchange of ideas between the Free/Libre Open Source Software (FLOSS) communities that are part of the SLCA agreements, strengthening ways of working together to facilitate and promote the use and development of Free Software in the region (Central America).
Objectives of the Event
  • Strengthening the processes of social awareness, philosophy, and policy of Free software in Honduras and Central America.
  • Provide a multidisciplinary space that allows managers of social projects and regional politicians to present their initiatives and to build networks of contacts for collaboration and/or support.
  • Create an educational application during the hackathon that will take place during the event; this app will benefit the 7 countries of Central America.
  • Give companies, organizations and sponsors of free software projects thematic areas, both at the meeting and at side events, to promote their products or recruit partners, supporters and/or collaborators.
Special Thanks To:
This event was a great success thanks to the following open source companies, universities, foundations and local companies that believed in us.
Google was the first company to say yes! The Open Source Program was the department at Google that helped us with the sponsorship. A special thanks to Cat Allman, the person who believed in this event.

A special thanks to Canonical for believing in this region; its communities have done a terrific job promoting Ubuntu in the region. Thanks, David Planella, for all your help!


The BeagleBoard.org foundation donated 10 BeagleBone Blacks, which were distributed to 6 universities across Central America: 4 boards in Honduras, 2 in Costa Rica, 2 in El Salvador, 1 in Guatemala and 1 in Nicaragua. All of this was done with the help of Jason Kridner!

The Mozilla Foundation helped by giving 10 scholarships so Mozillian developers could come to Honduras and show us the virtues of this magnificent browser. Thanks, Guillermo Movia, for your help!


A local Honduran university provided the venue where the event took place.
And many other people believed in us; with their help, the event really rocked!
Some images of the event
Google Photos Album: https://goo.gl/photos/vNhPNeaQDDoDUsfm7
Facebook: https://www.facebook.com/VIIECSL/
Some articles in local newspapers (In Spanish)
Sorry for the delay in reporting on this event.

Colin King: pagemon: an ncurses based tool to monitor process memory

Planet Ubuntu - Tue, 12/29/2015 - 13:56
While developing stress-ng I wanted to be able to see if the various memory stressors were touching memory in the way I had anticipated.  While digging around in the Linux documentation I discovered the very useful soft/dirty bit on Page Table Entries (PTEs), which gets set when a page is written to.  The mechanism to check for the soft/dirty bit is described in Documentation/vm/soft-dirty.txt; one needs to:
  1. Clear the soft-dirty bits on the PTEs on a chosen process by writing "4" to /proc/$PID/clear_refs
  2. Wait a while for some page activity to occur
  3. Read the soft-dirty bits on the PTEs to see which pages got written to (a rough sketch of this procedure follows below).
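For illustration, here is a minimal sketch in C of that three-step procedure. This is not pagemon's code; it simply follows the documented /proc interfaces, where each 64-bit entry in /proc/$PID/pagemap carries the soft-dirty flag in bit 55:

/* softdirty.c: rough sketch of the soft-dirty check described above.
 * Usage: sudo ./softdirty <pid> <start-address> <npages>
 * Paths and bit layout follow Documentation/vm/soft-dirty.txt and
 * Documentation/vm/pagemap.txt; illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

#define SOFT_DIRTY_BIT (1ULL << 55)

int main(int argc, char **argv)
{
    char path[64];
    uint64_t entry;
    unsigned long addr, npages, i;
    long pagesize = sysconf(_SC_PAGESIZE);
    FILE *f;

    if (argc != 4) {
        fprintf(stderr, "usage: %s pid start-address npages\n", argv[0]);
        return 1;
    }
    addr = strtoul(argv[2], NULL, 0);
    npages = strtoul(argv[3], NULL, 0);

    /* 1. clear the soft-dirty bits of the chosen process */
    snprintf(path, sizeof(path), "/proc/%s/clear_refs", argv[1]);
    if (!(f = fopen(path, "w")))
        return 1;
    fputs("4\n", f);
    fclose(f);

    /* 2. wait a while for some page activity to occur */
    sleep(1);

    /* 3. read the per-page soft-dirty bits back from pagemap */
    snprintf(path, sizeof(path), "/proc/%s/pagemap", argv[1]);
    if (!(f = fopen(path, "r")))
        return 1;
    fseek(f, (long)((addr / pagesize) * sizeof(entry)), SEEK_SET);
    for (i = 0; i < npages; i++) {
        if (fread(&entry, sizeof(entry), 1, f) != 1)
            break;
        if (entry & SOFT_DIRTY_BIT)
            printf("page at 0x%lx was written to\n", addr + i * pagesize);
    }
    fclose(f);
    return 0;
}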
Not too tricky, so how about using this neat feature? While on a rather long and dull flight over the Atlantic back in August I hacked up a very crude ncurses based tool to continually check the PTEs of a given process and display the soft/dirty activity in real time.  During this Christmas break I picked this code up and re-worked it into a more polished tool.  One can scroll up/down the memory maps and also select a page and view the contents changing in real time.  The tool identifies the type of memory mapping a page belongs to, so one can easily scan through memory looking at pages of memory belonging to data, code, heap, stack, anonymous mappings or even swapped out pages.

Running it on X, compiz, firefox or thunderbird is quite instructive as one can see a lot of page activity on the large heap allocations.  The ability to see pages getting swapped out when memory pressure is high is also rather useful.

Page view of Xorg
Memory view of stack
The code is still early development quality (so expect some buglets!) and I need to work on optimising it in a lot of places, but for now, it works well enough to be a fairly interesting tool. I've currently got a package built for Ubuntu Xenial in ppa:colin-king/pagemon and the source can be cloned from http://kernel.ubuntu.com/git/cking/pagemon.git/

So, to install on Xenial, currently one needs to do:

sudo add-apt-repository ppa:colin-king/pagemon
sudo apt-get update
sudo apt-get install pagemon

I may be adding a few more features in the next few weeks, and then getting the tool into Ubuntu and Debian.

As an example, to run it on Xorg, invoke it as:

sudo pagemon -p $(pidof Xorg)

Unfortunately sudo is required to allow one to dig so intrusively into a running process. For more details on how to use pagemon consult the pagemon man page, or press "h" or "?" while running pagemon.

Linux Padawan: An excellent resource

Planet Ubuntu - Tue, 12/29/2015 - 06:58
We are always looking for good quality free teaching books. It seems that we are not the only ones. I have come across an amazing list and am delighted to share it with everyone! https://github.com/vhf/free-programming-books/blob/master/free-programming-books.md

David Tomaschik: Offensive Security Certified Professional

Planet Ubuntu - Mon, 12/28/2015 - 22:32

It's been a little bit since I last updated, and it's been a busy time. I did want to take a quick moment to update and note that I accomplished something I'm pretty proud of. As of Christmas Eve, I'm now an Offensive Security Certified Professional.

Even though I've been working in security for more than two years, the lab and exam were still a challenge. Given that I mostly deal with web security at work, it was a great change to have a lab environment of more than 50 machines to attack. Perhaps most significantly, it gave me an opportunity to fight back a little bit of the impostor syndrome I'm perpetually afflicted with.

Up next: Offensive Security Certified Expert and Cracking the Perimeter.

Dustin Kirkland: More people use Ubuntu than anyone actually knows

Planet Ubuntu - Sun, 12/27/2015 - 09:44
People of earth, waving at Saturn, courtesy of NASA.
“It Doesn't Look Like Ubuntu Reached Its Goal Of 200 Million Users This Year”, says Michael Larabel of Phoronix, in a post he seems to have been itching to publish for months.
Why the negativity?!? Are you sure? Did you count all of them?

No one has.  And no one can count all of the Ubuntu users in the world!
Canonical, unlike Apple, Microsoft, Red Hat, or Google, does not require each user to register their installation of Ubuntu.
Of course, you can buy laptops preloaded with Ubuntu from Dell, HP, Lenovo, and Asus.  And there are millions of them out there.  And you can buy servers powered by Ubuntu from IBM, Dell, HP, Cisco, Lenovo, and Quanta, as well as servers compatible with the OpenCompute Project.
In 2011, hardware sales might have been how Mark Shuttleworth hoped to reach 200M Ubuntu users by 2015.
But in reality, hundreds of millions of PCs, servers, devices, virtual machines, and containers have booted Ubuntu to date!
Let's look at some facts...
How many "users" of Ubuntu are there ultimately?  I bet there are over a billion people today using Ubuntu -- both directly and indirectly.  Without a doubt, there are over a billion people on the planet benefiting from the services, security, and availability of Ubuntu today.
  • More people use Ubuntu than we know.
  • More people use Ubuntu than you know.
  • More people use Ubuntu than they know.
More people use Ubuntu than anyone actually knows.
Because of who we all are.
:-Dustin

Julian Andres Klode: Much faster incremental apt updates

Planet Ubuntu - Sat, 12/26/2015 - 12:15

APT’s performance in applying the Pdiffs files, which are the diff format used for Packages, Sources, and other files in the archive, has been slow.

Improving performance for uncompressed files

The reason for this is that our I/O is unbuffered, and we were reading one byte at a time in order to read lines. This changed on December 24 with the addition of read buffering for reading lines, vastly improving the performance of rred.

But it was still slow, so today I profiled – using gperftools – the rred method running on a 430MB uncompressed Contents file with a 75 KB patch. I noticed that our ReadLine() method was calling some method which took a long time (google-pprof told me it was some _nss method, but that was wrong [thank you, addr2line]).

After some further look into the code, I noticed that we set the length of the buffer using the length of the line. And whenever we moved some data out of the buffer, we called memmove() to move the remaining data to the front of the buffer.

So, I tried to use a fixed buffer size of 4096 (commit). Now memmove() would spend less time moving memory around inside the buffer. This helped a lot, bringing the run time on my example file down from 46 seconds to about 2 seconds.

Later on, I rewrote the code to not use memmove() at all – opting for start and end variables instead, and increasing the start variable when reading from the buffer (commit).

This in turn further improved things, bringing it down to about 1.6 seconds. We could now increase the buffer size again, without any negative effect.
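To illustrate the technique, here is a simplified sketch of such a line reader (not APT's actual rred/FileFd code; the names are made up): a fixed-size buffer with start and end offsets is refilled only when empty, and consuming bytes just advances start, so no memmove() is needed.

/* Sketch of a buffered line reader without memmove(): instead of
 * compacting the buffer after each line, track start/end offsets and
 * only refill from the fd once the unread region is exhausted.
 * Illustrative only - not APT's implementation. */
#include <string.h>
#include <unistd.h>

#define BUF_SIZE 4096

struct line_reader {
    int fd;
    char buf[BUF_SIZE];
    size_t start;   /* offset of the first unread byte */
    size_t end;     /* offset one past the last valid byte */
};

/* Copy the next line (without the trailing '\n') into out, returning its
 * length, or -1 at end of file. Overlong lines are silently truncated. */
static ssize_t read_line(struct line_reader *r, char *out, size_t outlen)
{
    size_t used = 0;

    for (;;) {
        if (r->start == r->end) {           /* buffer empty: refill it */
            ssize_t n = read(r->fd, r->buf, BUF_SIZE);
            if (n <= 0) {
                out[used] = '\0';
                return used ? (ssize_t)used : -1;
            }
            r->start = 0;
            r->end = (size_t)n;
        }
        char *nl = memchr(r->buf + r->start, '\n', r->end - r->start);
        size_t chunk = nl ? (size_t)(nl - (r->buf + r->start))
                          : r->end - r->start;
        size_t copy = chunk < outlen - 1 - used ? chunk : outlen - 1 - used;
        memcpy(out + used, r->buf + r->start, copy);
        used += copy;
        r->start += chunk + (nl ? 1 : 0);   /* consume; no memmove() needed */
        if (nl) {
            out[used] = '\0';
            return (ssize_t)used;
        }
    }
}

Increasing BUF_SIZE then only changes how often read() is called, which matches the observation above that the buffer size could be raised again without any negative effect.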

Effects on apt-get update

I measured the run-time of apt-get update, excluding appstream and apt-file files, for the update from today's 07:52 to the 13:52 dinstall run. Configured sources are unstable and experimental with amd64 and i386 architectures. appstream and apt-file indexes are disabled for testing, so only Packages and Sources indexes are fetched.

The results are impressive:

  • For APT 1.1.6, updating with PDiffs enabled took 41 seconds.
  • For APT 1.1.7, updating with PDiffs enabled took 4 seconds.

That’s a tenfold increase in performance. By the way, running without PDiffs took 20 seconds, so there’s no reason not to use them.

Future work

Writes are still unbuffered, and account for about 75% to 80% of our runtime. That’s an obvious area for improvements.

Performance for patching compressed files

Contents files are usually compressed with gzip, and kept compressed locally because they are about 500 MB uncompressed and only 30MB compressed. I profiled this, and it turns out there is not much we can do about it: the majority of the time is spent inside zlib, mostly combining CRC checksums.

Going forward, I think a reasonable option might be to recompress Contents files using lzo – they will be a bit bigger (50MB instead of 30MB), but lzo is about 6 times as fast (compressing a 430MB Contents file took 1 second instead of 6).


Filed under: Debian, Uncategorized

Seif Lotfy: Skizze - A probabilistic data-structures service and storage (Alpha)

Planet Ubuntu - Fri, 12/25/2015 - 02:53

At my day job we deal with a lot of incoming data for our product, which requires us to be able to calculate histograms and other statistics on the data-stream as fast as possible.

One of the best tools for this is Redis, which will give you 100% accuracy in O(1) (except for its HyperLogLog implementation which is a probabilistic data-structure). All in all Redis does a great job.
The problem with Redis for me personally is that, when using it for hundreds of millions of counters, I could end up with gigabytes of memory.

I also tend to use Top-K, which is not implemented in Redis but can be built on top of the ZSet data-structure via Lua scripting. The Top-K data-structure is used to keep track of the top "k" heavy hitters in a stream, without having to keep track of all "n" flows (k < n), with O(1) complexity.

Anyhow, when dealing with a massive amount of data, the interest is most of the time in the heavy hitters, which can be estimated using less memory, with O(1) complexity for reading and writing (that is, if you don't care whether a count is 124352435 or 124352011, because in the UI of an app you will be showing "over 124 Million").

There are a lot of algorithms floating around and used to solve counting, frequency, membership and top-k problems, which in practice are implemented and used as part of a data-stream pipeline where stuff is counted, merged then stored.

I couldn't find a one-stop-shop service to fire & forget my data at.

Basically, the need for a solution where I can set up sketches to answer cardinality, frequency, membership and ranking queries about my data-stream (without having to reimplement the algorithms in a pipeline embedded in Storm, Spark, etc.) led to the development of Skizze (which is in alpha state).

What is Skizze?

Skizze ([ˈskɪt͡sə]: German for sketch) is a probabilistic data-structures (sketch) service & store for dealing with all problems around counting and sketching using probabilistic data-structures. (https://github.com/seiflotfy/skizze)

Unlike a Key-Value store, Skizze does not store values, but rather appends values to sketches, to solve frequency and cardinality queries in near O(1) time, with minimal memory footprint.

Which data structures are supported?

Currently the following data structures are supported:

  • HyperLogLog++ to query cardinality of values in the sketch.
  • Count-Min-Log Sketch to query frequency of values in the sketch.
  • Top-K to list the top k values in the sketch.
  • Bloom Filter to query membership of a value in the sketch (a generic illustration follows this list).
  • Dictionary to 100% accurately query membership and frequency of values in the sketch.
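As a rough illustration of the trade-off these structures make, here is a generic, textbook-style Bloom filter (not Skizze's implementation): membership is answered in O(1) with a small, fixed amount of memory, at the cost of a tunable false-positive rate.

/* Minimal Bloom filter: a generic example of probabilistic membership
 * testing, not Skizze's code. The k hash functions are simulated by
 * seeding a single FNV-1a hash with different values. */
#include <stdint.h>
#include <stdio.h>

#define FILTER_BITS (1u << 20)          /* 128 KB of state, however many items */
#define NUM_HASHES  4

static uint8_t filter[FILTER_BITS / 8];

static uint64_t hash(const char *s, uint64_t seed)
{
    uint64_t h = 1469598103934665603ULL ^ seed;     /* FNV-1a */
    while (*s) {
        h ^= (uint8_t)*s++;
        h *= 1099511628211ULL;
    }
    return h % FILTER_BITS;
}

static void bloom_add(const char *s)
{
    for (uint64_t i = 0; i < NUM_HASHES; i++) {
        uint64_t bit = hash(s, i);
        filter[bit / 8] |= 1u << (bit % 8);
    }
}

/* 0 means definitely absent; 1 means *probably* present. */
static int bloom_query(const char *s)
{
    for (uint64_t i = 0; i < NUM_HASHES; i++) {
        uint64_t bit = hash(s, i);
        if (!(filter[bit / 8] & (1u << (bit % 8))))
            return 0;
    }
    return 1;
}

int main(void)
{
    bloom_add("alice");
    bloom_add("bob");
    printf("alice=%d bob=%d carol=%d\n",
           bloom_query("alice"), bloom_query("bob"), bloom_query("carol"));
    return 0;
}

Skizze's HyperLogLog++, Count-Min-Log and Top-K sketches make the same kind of bargain for cardinality, frequency and ranking queries.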
What are the upcoming data structures?

Soon we intend to implement/integrate the following sketches:

How to use?

Skizze runs as a single service for now, and exposes a Restful API.

Who helped out?

I'd like to thank the following contributors who helped develop this project:

What else?

The project is in alpha state, and we intend to improve it in every possible way, e.g:

  • Benchmarks to test data structures against each other.
  • Referencing algorithms, instead of copying into the local source (once Go 1.6's vendoring lands)
  • Storage: currently Skizze writes to disk after n seconds or m operations. Soon I'd like to be able to write only dirty segments of the sketch to disk (in case a sketch is large, e.g. 1 GB).

Feel free to open issues or help out with further spec development of the project on GitHub. All input is appreciated.

Ubuntu Podcast from the UK LoCo: S08E42 – Santa Claus Conquers the Martians - Ubuntu Podcast

Planet Ubuntu - Thu, 12/24/2015 - 02:00

It’s Episode Forty-two of Season Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

In this week’s show…

  • We installed a whole bunch of different Linux distros, went Go-Karting for Christmas, spoke at some conferences, did the San Francisco parkrun, and worked on the Ubuntu Pi Flavour Maker.
  • We look back at our 2015 predictions and make some new ones for 2016.
  • We have a command line love for removing all metadata tag information from an image: exiftool -all= -overwrite_original foo.png
  • We go over your feedback.

Our 2016 Predictions
Laura
  • 3D printing will become more accessible and mainstream.
  • IoT – infrastructure, scaling, security – back to 70s computing.
  • IoT – privacy issues will become more prominent in the development of IoT products.
Alan
  • There will be 10 commercial Ubuntu Touch devices by the end of 2016.
  • A large motor vehicle manufacturer will switch to using Ubuntu Snappy for their in-car system.
  • Edward Snowden will leave Russia.
  • Julian Assange will leave the Ecuadorian Embassy.
Mark
  • 1 of the top 5 Linux distributions, according to Distrowatch.com, will have a desktop release with Wayland or Mir as the default display server. It won’t be well received due to missing features and/or poor driver support. Mint, Debian, Ubuntu, OpenSUSE, Fedora were the top 5 distros at the end of 2015.
  • The release of Ubuntu’s first converged device will be accompanied by significant mainstream marketing and media coverage, i.e. not just “tech news sites”.
  • A big game publisher with its own digital distribution platform (i.e. not Steam; someone like Blizzard with Battle.Net or EA with Origin) will release a Linux version of its client software.
Martin
  • Vulkan will be finalised and released. Linux drivers will be released for Intel IGPs (open source), NVIDIA (via proprietary drivers) and AMD (via proprietary drivers). SteamOS will include Vulkan support and, on equivalent hardware, will outperform Windows 10. Android will announce support for Vulkan but iOS will not.
  • Virtual Reality will continue to lack adoption. There will be no official VR headset products released for PlayStation 4, XBox One or Steam.
  • There will be at least 10 consumer products, not intended for makers and not manufactured by the Raspberry Pi Foundation, launched for sale in 2016 that use a Raspberry Pi (any model) at its heart.

That’s all for this week. And that’s it for Season Eight. We’ll be going for curry in 2016 to decide whether we’ll be back for a new season. Please send your comments and suggestions to:

Kevin DuBois: New Mir Release (0.18)

Planet Ubuntu - Tue, 12/22/2015 - 12:18

If a new Mir release was on your Christmas wishlist (like it was on mine), Mir 0.18 has been released! I’ve been working on this the last few days, and it’s out the door now.  Full text of changelog. Special thanks to mir team members who helped with testing, and the devs in #ubuntu-ci-eng for helping move the release along.

Graphics
  • Internal preparation work needed for Vulkan, hardware decoded multimedia optimizations, and latency improvements for nested servers.
  • Started work on plugin renderers. This will better prepare mir for IoT, where we might not have a Vulkan/GLES stack on the device, and might have to use the CPU.
  • Fixes for graphics corruption affecting Xmir (blocky black bars)
  • Various fixes for multimonitor scenarios, as well as better support for scaling buffers to suit the monitor it’s on.
Input
  • Use libinput by default. We had been leaning on an old version of the Android input stack; this has been completely removed in favor of libinput.
Bugs
  • Quite a long list of bug corrections. Some of these were never ‘in the wild’ but only existed during the course of 0.18 development.
What’s next?

It’s always tricky to pin down what exactly will make it into the next release, but I can at least comment on the stuff we’re working on, in addition to the normal rounds of bugfixing and test improvements:

  • various Internet-of-Things and convergence topics (e.g., snappy, figuring out different rendering options on smaller devices).
  • buffer swapping rework to accommodate different render technologies (Vulkan!), accommodations for multimedia, and improved latency for nested servers.
  • more flexible screenshotting support
  • further refinements to our window management API
  • refinements to our platform autodetection
How can I help? Writing new Shells

A fun way to help would be to write new shells! Part of mir’s goals is to make this as easy to do as possible, so writing a new shell always helps us make sure we’re hitting those goals.

If you’re interested in the mir C++ shell API, then you can look at some of our demos, available in the ‘mir-demos’ package. (source here, documentation here)

Even easier than that might be writing a shell using QML like unity8 is doing via the qtmir plugin. An example of how to do that is here (instructions on running here).

Tinkering with technology

If you’re more of the nuts and bolts type, you can try porting a device, adding a new rendering platform to mir (OpenVG or pixman might be an interesting, beneficial challenge), or figuring out other features to take advantage of.

Standard stuff

Pretty much all open source projects recommend bug fixing or triaging, helping on irc (#ubuntu-mir on freenode) or documentation auditing as other good ways to start helping.

Dmitry Shachnev: ReText 5.3 released

Planet Ubuntu - Tue, 12/22/2015 - 04:00

On Sunday I released ReText 5.3, and here, finally, is the official announcement.

Highlights in this release are:

  • A code refactoring has been performed — a new “Tab” class has been added, and all methods that affect only one tab (not the whole window) have been moved there.

    From the user’s point of view this means two things:

    • The tabs are now draggable and reorderable (this feature was requested a long time ago).

    • Some operations are now faster and more efficient. For example, in the previous release turning the WebKit renderer on/off required removing all the tabs and then re-adding them; this giant hack has now been dropped.

  • A new previewer feature was contributed by Jan Korte: now, if the document contains a local link like

    [click me](foo.mkd)

    and a file named foo.mkd exists, it is opened in a new ReText tab.

    It is also possible to specify names without the extension (just foo) or relative paths (../foo/bar.mkd).

  • The colors used in the editor are now fully configurable via the standard configuration mechanism. This is most useful for users of dark themes.

    For example, you can change the color of the line numbers area, the cursor position box, and all colors used by the highlighter.

    The possible colors and the procedure to change them are described in the “Color scheme setting” section in the documentation.

  • The “Display right margin at column” feature now displays the line more precisely: in the previous version it was a few pixels to the left of the cursor; now it is at exactly the same horizontal position as the cursor.

  • Some bug fixes have been made for users that install ReText using pip or setup.py install:

    • The desktop file no longer hardcodes the path to the executable in the Exec field; it now uses just retext. This fix has been contributed by Buo-Ren Lin.

    • The setup.py script now installs the application logo into a location where ReText can find it. Note: this works only for installs into the user’s home directory (with --user passed to pip or setup.py install); installing software globally this way is not recommended anyway.

  • The AppStream metadata included in the previous version was updated to fix some warnings from the appstream.debian.org metadata validator.

Also, a week before ReText 5.3 a new version of PyMarkups was released, bringing enhanced support for the Textile markup. You can now edit Textile files in ReText too, provided that python-textile module for Python 3 is installed.

As usual, you can get the latest release from PyPI or from the Debian/Ubuntu repositories.

Please report any bugs you find to our issue tracker.

The Fridge: Ubuntu Weekly Newsletter Issue 447

Planet Ubuntu - Mon, 12/21/2015 - 19:07

Welcome to the Ubuntu Weekly Newsletter. This is issue #447 for the week December 14 – 20, 2015, and the full version is available here.

Our next issue will be a two week issue covering December 21 – January 3rd. Happy Holidays!

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Paul White
  • Walter Lapchynski
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.


Alessio Treglia: Creativity, Innovation and the “Included Middle” logic

Planet Ubuntu - Mon, 12/21/2015 - 02:26

The pressure of post-modernism builds on our general inability to overcome a number of dualisms that have become ingrained in the modern way of thinking[1]. This is mainly due to the strong influence of past centuries’ scientific “Reductionism”, which postulated that any system – to be understood – had to be reduced to its minimum component elements.

However, a system defined this way is a “closed” system, which does not interact with the surrounding environment and can (and not always even then) exist only in a laboratory isolated from reality. The logic of “Complexity”, instead, takes into account “open” systems and all the interconnections and influences between the system itself and the world around it, in every physical, social, psychological and symbolic aspect…

<Read more…>

Aurélien Gâteau: QDir::Separator considered harmful

Planet Ubuntu - Sun, 12/20/2015 - 13:31

Suppose you are building a Qt application which must run on Linux, Mac OS and Windows. At some point, your application is likely to have to deal with file paths. Working on your Linux machine, but caring about your Windows users, you might be tempted to construct a file path like this:

QString filePath = someDir + QDir::separator() + "foo.ext";

Don't do this! Make your life simpler and just use:

QString filePath = someDir + "/foo.ext";

As QDir::separator() documentation says:

You do not need to use this function to build file paths. If you always use "/", Qt will translate your paths to conform to the underlying operating system. If you want to display paths to the user using their operating system's separator use toNativeSeparators().

Using QDir::separator() can actually cause subtle platform-dependent bugs. Let's have a look at this code snippet:

QString findBiggestFile(const QString &dirname)
{
    QDir dir(dirname);
    int size = 0;
    QString path;
    Q_FOREACH(const QFileInfo &info, dir.entryInfoList(QDir::Files)) {
        if (info.size() > size) {
            path = info.absoluteFilePath();
            size = info.size();
        }
    }
    return path;
}

So far so good. Now imagine you want to unit-test your code. You setup a set of files and expect the file named "file.big" to be the biggest, so you write something like this:

void testFindBiggestFile()
{
    QString result = findBiggestFile(mTestDir);
    QString expected = mTestDir + QDir::separator() + "file.big";
    QCOMPARE(result, expected);
}

This test passes on a Linux system, but fails on a Windows system: findBiggestFile() returns a path created by QFileInfo, so assuming mTestDir is C:/build/tests, result will be C:/build/tests/file.big, but expected will be C:/build/tests\file.big.

This simpler test, on the other hand, works as expected, on all platforms:

void testFindBiggestFile()
{
    QString result = findBiggestFile(mTestDir);
    QString expected = mTestDir + "/file.big";
    QCOMPARE(result, expected);
}

Though you might want to pass expected through QDir::cleanPath() so that if mTestDir ends with a slash, the test does not fail:

void testFindBiggestFile()
{
    QString result = findBiggestFile(mTestDir);
    QString expected = QDir::cleanPath(mTestDir + "/file.big");
    QCOMPARE(result, expected);
}

What about paths displayed in the user interface?

There are situations where you need to use native separators, for example when you are preparing paths which will be shown in your user interface or when you need to fork a process which expects native separators as command-line arguments.

In such situations, QDir::separator() is not a good idea either. It's simpler and more reliable to create the path with forward slashes, then pass it through QDir::toNativeSeparators(). This way you can be sure no stray forward slash slips through.

Ian Weisser: Are you ready for new members?

Planet Ubuntu - Sun, 12/20/2015 - 07:40
In a few days, many Ubuntu users will unwrap new hardware, plug it in, and have a fantastic experience.

Some users will get inspired to join the community to solve bugs, add features, contribute code, and much more.


Support Gurus: use Find-a-Task
New, enthusiastic users often show up in the many Ubuntu help forums.

Encourage them to try Find-a-Task to see the variety of ways they can help.
Just send them over, and we'll do the rest.


Team Leaders: Is your team ready?
Is your team ready to welcome, train, and integrate these new volunteers?

Has your team looked at its Find-a-Task roles for volunteers? It's easy to add or change your team's listings.

Is your team approachable? Can you be contacted easily by a new volunteer? Is your web page for new volunteers accurate?


Improving Find-a-Task
Find-a-Task is the Ubuntu community's job board for volunteers. Introduced in January 2015, Find-a-Task shows fellow volunteers the variety of tasks and roles available, and links those roles to the team web pages.

Please share your suggestions to improve Find-a-Task to the Ubuntu Community Team mailing list.

Sujeevan Vijayakumaran: Attending UbuCon Summit US in 01/2016

Planet Ubuntu - Sat, 12/19/2015 - 10:50

2016 will be my favourite "UbuCon year". The first UbuCon Summit will take place in Pasadena, and at the end of the year the first UbuCon Europe will take place in Essen, Germany, from 18th to 20th November. For the latter, I'm the head of the organisation team.

The UbuCon Summit is just around the corner and I'm really looking forward to attending the event. It's the first time I have requested money from the Ubuntu Community Donations Fund, and the request was thankfully accepted. The schedule has been complete for a few days now and there are many interesting talks, including the opening keynote by Mark Shuttleworth. I'm also going to give a talk about the Labdoo Project, a humanitarian social network that brings education around the globe. This will also be my first conference talk in English ;-).

If you live in Southern California and haven't heard about the UbuCon Summit yet, you should definitely consider visiting this event. It'll be co-hosted with the Southern California Linux Expo, which also has many interesting talks.

I'm looking forward to meeting all my old and new friends, especially those whom I haven't met yet, like Richard Gaskin and Nathan Haines, who are organising the UbuCon Summit. See you there!

Jonathan Riddell: Happy Christmas with Plasma Wayland Live Image

Planet Ubuntu - Fri, 12/18/2015 - 10:30

I just published a live Plasma image with Wayland. A great milestone in a multi-year project of the Plasma team led by the awesome Martin G.  Nowhere near end-user ready yet, but the road forward is now visible to humble mortals who don’t know how to write their own Wayland protocol.  It’ll give a smoother and more secure graphics system when it’s done and ensure that KDE’s software and Linux on the desktop stay relevant for another 30 years.


Martin Pitt: What’s new in autopkgtest: LXD, MaaS, apt pinning, and more

Planet Ubuntu - Thu, 12/17/2015 - 23:27

The last two major autopkgtest releases (3.18 from November, and 3.19 fresh from yesterday) bring some new features that are worth spreading.

New LXD virtualization backend

3.19 debuts the new adt-virt-lxd virtualization backend. In case you missed it, LXD is an API/CLI layer on top of LXC which introduces proper image management, lets you seamlessly use images and containers on remote locations while intelligently caching them locally, automatically configures performant storage backends like zfs or btrfs, and just generally feels really clean and much simpler to use than the “classic” LXC.

Setting it up is not complicated at all. Install the lxd package (possibly from the backports PPA if you are on 14.04 LTS), and add your user to the lxd group. Then you can add the standard LXD image server with

lxc remote add lco https://images.linuxcontainers.org:8443

and use the image to run e. g. the libpng test from the archive:

adt-run libpng --- lxd lco:ubuntu/trusty/i386
adt-run libpng --- lxd lco:debian/sid/amd64

The adt-virt-lxd.1 manpage explains this in more detail, including how to use this to run tests in a container on a remote host (how cool is that!), and how to build local images with the usual autopkgtest customizations/optimizations using adt-build-lxd.

I have btrfs running on my laptop, and LXD/autopkgtest automatically use that, so the performance really rocks. Kudos to Stéphane, Serge, Tycho, and the other LXD authors!

The motivation for writing this was to make it possible to move our armhf testing into the cloud (which for $REASONS requires remote containers), but I now have a feeling that soon this will completely replace the existing adt-virt-lxc virt backend, as it’s much nicer to use.

It is covered by the same regression tests as the LXC runner, and from the perspective of package tests that you run in it, it should behave very similarly to LXC. The one problem I’m aware of is that autopkgtest-reboot-prepare is broken, but hardly anything is using that yet. This is a bit complicated to fix, but I expect it will be in the next few weeks.

MaaS setup script

While most tests are not particularly sensitive about which kind of hardware/platform they run on, low-level software like the Linux kernel, GL libraries, X.org drivers, or Mir very much are. There is a plan for extending our automatic tests to real hardware for these packages, and being able to run autopkgtests on real iron is one important piece of that puzzle.

MaaS (Metal as a Service) provides just that — it manages a set of machines and provides an API for installing, talking to, and releasing them. The new maas autopkgtest ssh setup script (for the adt-virt-ssh backend) brings together autopkgtest and real hardware. Once you have a MaaS setup, get your API key from the web UI, then you can run a test like this:

adt-run libpng --- ssh -s maas -- \
    --acquire "arch=amd64 tags=touchscreen" -r wily \
    http://my.maas.server/MAAS 123DEADBEEF:APIkey

The required arguments are the MaaS URL and the API key. Without any further options you will get any available machine installed with the default release. But usually you want to select a particular one by architecture and/or tags, and install a particular distro release, which you can do with the -r/--release and --acquire options.

Note that this is not wired into Ubuntu’s production CI environment, but it will be.

Selectively using packages from -proposed

Up until a few weeks ago, autopkgtest runs in the CI environment were always seeing/using the entirety of -proposed. This often led to lockups where an application foo and one of its dependencies libbar got a new version in -proposed at the same time, and on test regressions it was not clear at all whose fault it was. This often led to perfectly good packages being stuck in -proposed for a long time, and a lot of manual investigation about root causes.


These days we are using a more fine-grained approach: A test run is now specific for a “trigger”, that is, the new package in -proposed (e. g. a new version of libbar) that caused the test (e. g. for “foo”) to run. autopkgtest sets up apt pinning so that only the binary packages for the trigger come from -proposed, the rest from -release. This provides much better isolation between the mush of often hundreds of packages that get synced or uploaded every day.

This new behaviour is controlled by an extension of the --apt-pocket option. So you can say

adt-run --apt-pocket=proposed=src:foo,libbar1,libbar-data ...

and then only the binaries from the foo source, libbar1, and libbar-data will come from -proposed, everything else from -release.
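Conceptually, the pinning this sets up behaves roughly like the following apt preferences snippet (illustrative only, reusing the package names from the example above; it is not the exact file adt-run writes):

# Hold back everything in -proposed by default...
Package: *
Pin: release a=xenial-proposed
Pin-Priority: 400

# ...but prefer the trigger's binaries from -proposed.
Package: libbar1 libbar-data
Pin: release a=xenial-proposed
Pin-Priority: 990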

Caveat: Unfortunately, apt’s pinning is rather limited. As soon as any of the explicitly listed packages depends on a package or version that is only available in -proposed, apt falls over and refuses the installation instead of taking the required dependencies from -proposed as well. In that case, adt-run falls back to the previous behaviour of using no pinning at all. (This unfortunately got worse with apt 1.1, bug report to be done). But it’s still helpful in many cases that don’t involve library transitions or other package sets that need to land in lockstep.

Unified testbed setup script

There are a number of changes that need to be made to testbeds so that tests can run with maximum performance (like running dpkg through eatmydata, disabling apt translations, or automatically using the host’s apt-cacher-ng), with reliable apt sources, and in a minimal environment (to detect missing dependencies and avoid interference from unrelated services — these days the standard cloud images have a lot of unnecessary fat). There is also a choice whether to apply these only once (every day) to an autopkgtest specific base image, or on the fly to the current ephemeral testbed for every test run (via --setup-commands). Over time this led to quite a lot of code duplication between adt-setup-vm, adt-build-lxc, the new adt-build-lxd, cloud-vm-setup, and create-nova-image-new-release.

I now cleaned this up, and there is now just a single setup-commands/setup-testbed script which works for all kinds of testbeds (LXC, LXD, QEMU images, cloud instances) and both for preparing an image with adt-buildvm-ubuntu-cloud, adt-build-lx[cd] or nova, and with preparing just the current ephemeral testbed via --setup-commands.

While this is mostly an internal refactorization, it does impact users who previously used the adt-setup-vm script for e. g. building Debian images with vmdebootstrap. This script is now gone, and the generic setup-testbed entirely replaces it.

Misc

Aside from the above, every new version has a handful of bug fixes and minor improvements, see the git log for details. As always, if you are interested in helping out or contributing a new feature, don’t hesitate to contact me or file a bug report.

Colin King: Incorporating and accessing binary data into a C program

Planet Ubuntu - Thu, 12/17/2015 - 10:17
The other day I needed to incorporate a large blob of binary data in a C program. One simple way is to use xxd; for example, on the binary data in the file "blob", one can do:

xxd --include blob

unsigned char blob[] = {
0xc8, 0xe5, 0x54, 0xee, 0x8f, 0xd7, 0x9f, 0x18, 0x9a, 0x63, 0x87, 0xbb,
0x12, 0xe4, 0x04, 0x0f, 0xa7, 0xb6, 0x16, 0xd0, 0x70, 0x06, 0xbc, 0x57,
0x4b, 0xaf, 0xae, 0xa2, 0xf2, 0x6b, 0xf4, 0xc6, 0xb1, 0xaa, 0x93, 0xf2,
0x12, 0x39, 0x19, 0xee, 0x7c, 0x59, 0x03, 0x81, 0xae, 0xd3, 0x28, 0x89,
0x05, 0x7c, 0x4e, 0x8b, 0xe5, 0x98, 0x35, 0xe8, 0xab, 0x2c, 0x7b, 0xd7,
0xf9, 0x2e, 0xba, 0x01, 0xd4, 0xd9, 0x2e, 0x86, 0xb8, 0xef, 0x41, 0xf8,
0x8e, 0x10, 0x36, 0x46, 0x82, 0xc4, 0x38, 0x17, 0x2e, 0x1c, 0xc9, 0x1f,
0x3d, 0x1c, 0x51, 0x0b, 0xc9, 0x5f, 0xa7, 0xa4, 0xdc, 0x95, 0x35, 0xaa,
0xdb, 0x51, 0xf6, 0x75, 0x52, 0xc3, 0x4e, 0x92, 0x27, 0x01, 0x69, 0x4c,
0xc1, 0xf0, 0x70, 0x32, 0xf2, 0xb1, 0x87, 0x69, 0xb4, 0xf3, 0x7f, 0x3b,
0x53, 0xfd, 0xc9, 0xd7, 0x8b, 0xc3, 0x08, 0x8f
};
unsigned int blob_len = 128;

...and redirecting the output from xxd into a C source file and compiling it is simple and easy to do.

However, for large binary blobs, the C source can be huge, so an alternative way is to use the linker ld as follows:

ld -s -r -b binary -o blob.o blob

...and this generates the blob.o object code. To reference the data in a program one needs to determine the symbol names of the start, end and perhaps the length too. One can use objdump to find this as follows:

objdump -t blob.o
blob.o: file format elf64-x86-64
SYMBOL TABLE:
0000000000000000 l d .data 0000000000000000 .data
0000000000000080 g .data 0000000000000000 _binary_blob_end
0000000000000000 g .data 0000000000000000 _binary_blob_start
0000000000000080 g *ABS* 0000000000000000 _binary_blob_size

To access the data in C, use something like the following:

cat test.c

#include <stdio.h>

int main(void)
{
    extern void *_binary_blob_start, *_binary_blob_end;
    void *start = &_binary_blob_start,
         *end = &_binary_blob_end;

    printf("Data: %p..%p (%zu bytes)\n",
           start, end, end - start);
    return 0;
}

...and link and run as follows:

gcc test.c blob.o -o test
./test
Data: 0x601038..0x6010b8 (128 bytes)

So for large blobs, I personally favour using ld to do the hard work for me since I don't need another tool (such as xxd) and it removes the need to convert a blob into C and then compile this.

Daniel McGuire: The Ubuntu Firefox Startpage

Planet Ubuntu - Thu, 12/17/2015 - 06:53
The Users Speak Out

A user over on  Reddit posted a complaint about the default Ubuntu Firefox start-page.

User kristbaum writes:

I don’t mean to complain, but everyone who uses Ubuntu, sees this page[1] , and it really isn’t that useful, or good looking.

It is one other thing that you have to replace… Would it be possible to integrate links in there, to set Google or Duckduckgo as your start page? Or maybe also make it scale a little bit better?

It really isn’t that bad, but it is not a awesome first impression. What are your views on this?

I agree with his statement: it needs updating, and this is something I feel I could help with. Since we are Ubuntu and we love convergence, I don’t see a reason why we can’t have a scalable, beautiful and responsive homepage that looks like part of the operating system.

There is now a bug report on Launchpad about this issue.

What are the “others” doing?

When redesigning something that already exists on other platforms, it’s always good to look at how others do it; we are not going to reinvent the wheel here.

Safari on Apple OSX

The above image is the start-screen of Safari on Apple OSX. There are a few key things:

  • Favourites – these are preassigned favourites of links to popular sites.
  • Frequently visited – List of sites visited by the user.
  • No search bar – this would be redundant, since Safari shows one at the top of the application at all times.

 

Vivaldi on Ubuntu 15.10

The above image is the start-screen of Vivaldi on Ubuntu 15.10. There are a few key things:

  • Speed Dial – these are preassigned favourites of links to popular sites.
  • No search bar – like Safari, this is redundant as there is always one at the top of the application.
Conclusion – Design roundup

Looking at current browsers on different platforms shows similar patterns: no search bar on the page, a list of popular or most frequently visited websites, and both beautifully designed and structured. I’ll be taking this into consideration when I start work on some mockups for Ubuntu’s Firefox Startpage.

