Feed aggregator

Stephen Kelly: CMake Daemon for user tools

Planet Ubuntu - Sun, 01/24/2016 - 15:33

I’ve been working for quite some time on a daemon mode for CMake in order to make it easier to build advanced tooling for CMake. I made a video about this today:

The general idea is that CMake is started as a long-running process, and can then be communicated with via a JSON protocol.

So, for example, a client sends a request like

{ "type": "code_completion_at",   "line": 50,   "path": "/home/stephen/dev/src/cmake-browser/CMakeLists.txt",   "column": 7 }

and the daemon responds with

{    "completion":{       "commands":[          "target_compile_definitions",          "target_compile_features",          "target_compile_options",          "target_include_directories",          "target_link_libraries",          "target_sources"       ],       "matcher":"target_"    } } Many more features are implemented such as semantic annotation, variable introspection, contextual help etc, all without the client having to implement it themselves. Aside from the daemon, I implemented a Qt client making use of all of the features, and a Kate plugin to use the debugging features in that editor. This is the subject of my talk at FOSDEM, which I previewed in Berlin last week. Come to my talk there to learn more!

Costales: Wake up! Musings on Ubuntu and Ubuntu Phone

Planet Ubuntu - Fri, 01/22/2016 - 10:51
I had my first contact with Ubuntu in 2007. A lot of water has passed under the bridge since then :P In 2008 I published my first application: Gufw :) In 2012 I released Folder Color and ANoise. In 2015, with the arrival of Ubuntu Phone, I published uNav :)
Why create all these programs? Good question: as a hobby, as a challenge, because this is what community is about: showing that, together, we can build (in my opinion) the best operating system. But through these programs I noticed a very big change within Ubuntu...

One day Unity was shoehorned in. Remember? How could we forget, right? A misunderstood and much-maligned Unity, which I liked but which consumed far too many resources for my beloved trashware. So I migrated to the Xfce flavour and later to MATE :)

Meanwhile, there was a continuous loss of users from the main edition to its flavours, or even to other distros. Just when Ubuntu most needed to fight by innovating, it fell into a numbing routine on the desktop; a worrying one, excessively worrying. I myself criticized the lack of changes between releases. It seemed like the only difference between Ubuntu 14.04 and 14.10 was the wallpaper :((

And while the desktop languished with Unity 7, Canonical was pouring titanic efforts into the phone, but... why? What was the phone? Of course! Unity 8! We started connecting the dots :) Naive me, I wasn't seeing the big picture.
Ubuntu was taking very (I repeat, very) well-judged steps which, because it was Ubuntu, flew under the radar (if the apple company had done it, it would have been a very different story...). Only if you looked very closely could you see the forest... What forest? The future.


A Unity 8 that flies on modest hardware like the BQ E4.5, to which I connected a mouse and keyboard yesterday, and what happened? Everything happened. It went from a phone interface to a desktop interface just by plugging in a peripheral. If I export the phone's output to a monitor over HDMI, my phone is my CPU! Impressive!

Convergence. Photo by Daniel Wood.
What would happen if we migrated the desktop to Unity 8? Exactly! A phone, a tablet and a PC would run the very same operating system, with no variations. A convergent Ubuntu. Hats off! The circle is closed ;)

And as a developer, what does it give me? I develop only once and, depending on where the application runs, it adapts to that form factor. Forget about programming for dozens of platforms, dozens of desktops, dozens of operating systems. A company can afford that; a lone developer cannot.

And there's still more :D Ubuntu has built an ecosystem like nobody has before: an extremely powerful and stable Server (there's my cubieboard, chugging along for years like a champ), a convergent Ubuntu for the phone/tablet/PC user that is very fast, easy, simple and good-looking, Snappy, Juju and several more tools that are catapulting Ubuntu to undisputed number one in the cloud and in the Internet of Things.

And here, tying back to the start of the post, I'll tell you that I began with Gufw, which has millions of users. But uNav, practically a newborn, is surprising me with the great feedback and engagement from thousands of users. Why is that? Because Ubuntu Touch is new, it is vibrant, it is the future.
And Unity is the key to it all: it stands out as an extraordinary interface because it is unified, and it is one and the same across devices.

The community has woken up again, and it has woken up like never before. The future is closer than you think :)

Martin Pitt: Results from proposed-migration virtual sprint

Planet Ubuntu - Fri, 01/22/2016 - 04:20

This week from Tuesday to Thursday four Canonical Foundations team members held a virtual sprint about the proposed-migration infrastructure. It’s been a loooong three days and nightshifts, but it was absolutely worth it. Thanks to Brian, Barry, and Robert for your great work!

I started the sprint on Tuesday with a presentation (slides) about the design and some details about the involved components, and showed how to deploy the whole thing locally in juju-local. I also prepared a handful of bite-size improvements which were good finger-exercises for getting familiar with the infrastructure and testing changes. I’m happy to report that all of those got implemented and are running in production!

The big piece of work which we all collaborated on was providing a web-based test retry for all Ubuntu developers. Right now this is limited to a handful of Canonical employees, but we want Ubuntu developers to be able to retry autopkgtest regressions (which stop their package from landing in Ubuntu) by themselves. I don’t know the first thing about web applications and OpenID, so I’m really glad that Barry and Robert came up with a “hello world” kind of Flask webapp which uses Ubuntu SSO authentication to verify that the requester is an Ubuntu Developer. I implemented the input variable validation and sending the actual test requests over AMQP.

Now we have a nice autopkgtest-retrier git with the required functionality and 100% (yes, complete!) test coverage. With that, requesting tests in a local deployment works! So what’s left to do for me now is to turn this into a CGI script, configure apache for it, enable SSL on autopkgtest.ubuntu.com, and update the charms to set this all up automatically. So this moved from “ugh, I don’t know where to start” to “should land next week” in these three days!
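
For anyone curious what the apache/SSL part roughly involves, here is a minimal, purely illustrative sketch of a virtual host serving a CGI script over SSL -- this is not the actual autopkgtest.ubuntu.com configuration, and the certificate paths and script name are made up:

# enable the needed modules first: sudo a2enmod ssl cgi
<VirtualHost *:443>
    ServerName autopkgtest.ubuntu.com
    SSLEngine on
    # hypothetical certificate locations
    SSLCertificateFile /etc/ssl/certs/autopkgtest.pem
    SSLCertificateKeyFile /etc/ssl/private/autopkgtest.key
    # hand request URLs to a hypothetical CGI wrapper around the webapp
    ScriptAlias /request /usr/lib/cgi-bin/request.cgi
</VirtualHost>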

We are going to have similar sprints for Brian’s error tracker, Robert’s CI train, and Barry’s system-image builder in the next weeks. Let’s increase all those bus factors from the current “1” to at least “4” ☺ . Looking forward to these!

Jonathan Riddell: Blue Systems Sprint in Trieste

Planet Ubuntu - Thu, 01/21/2016 - 12:09


I like these Italians, they make good food and use canoes as plant pots


Oh it’s gorgeous


Lubuntu Blog: Box theme 0.58

Planet Ubuntu - Thu, 01/21/2016 - 09:03
A new re-merge with Ubuntu Themes has been done, providing Lubuntu with the following features in the GTK3 theme:
  • revamped Unity controls
  • improved theme file structure
  • removed unnecessary objects
  • added missing elements (e.g. translucent endings, etc)
  • modified scroll sliders (now with autohide)
  • fixed button curvature (overall improvements)
  • other minor bug fixes
Remember that this is only a feature […]

Scott Kitterman: Python Packaging Build-Depends

Planet Ubuntu - Wed, 01/20/2016 - 14:33

As a follow-up to my last post where I discussed common Python packaging-related errors, I thought it would be worth having a separate post on how to decide on build-depends for Python (and Python3) packages.

The python ecosystem has a lot of packages built around supporting multiple versions of python (really python3 now) in parallel.  I’m going to limit this post to packages you might need to build-depend on directly.

Python (2)

Since Jessie (Debian 8), python2.7 has been the only supported python version.  For development of Stretch and backports to Jessie there is no need to worry about multiple python versions.  As a result, several ‘all’ packages are (and will continue to be) equivalent to their non-‘all’ counterparts.  We will continue to provide the ‘all’ packages for backward compatibility, but they aren’t really needed any more.

python (or python-all)

This is the package to build-depend on if your package is pure Python (no C extensions) and does not for some other reason need access to the Python header files (there are a handful of packages this latter caveat applies to; if you don’t know if it applies to your package, it almost certainly doesn’t).

You should also build-depend on dh-python.  It was originally shipped as part of the python package (and there is still an old version provided), but to get the most current code with new bug fixes and features, build-depend on dh-python.

python-dev (or python-all-dev)

If your package contains compiled C or C++ extensions, this package either provides or depends on the packages that provide all the header files you need.

Do not also build-depend on python.  python-dev depends on it and it is just an unneeded redundancy.

python-dbg (or python-all-dbg)

Add this if you build a -dbg package (not needed for -dbgsym).

Other python packages

There is not, AFAICT, any reason to build-dep on any of the other packages provided (e.g. libpython-dev).  It is common to see things like python-all, python, python-dev, libpython-dev in build-depends.  This could be simplified just to python-all-dev since it will pull the rest in.
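
As a purely hypothetical illustration (the debhelper version and the exact package set here are made up for the example), a stanza like

Build-Depends: debhelper (>= 9),
               dh-python,
               python-all,
               python,
               python-dev,
               libpython-dev

could be trimmed down to

Build-Depends: debhelper (>= 9),
               dh-python,
               python-all-dev

since python-all-dev pulls in the rest, and dh-python remains wanted for the reasons given earlier.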

Python3

Build-depends selection for Python 3 is generally similar, except that we continue to want to be able to support multiple python3 versions (as we currently support python3.4 and python3.5).  There are a few differences:

All or not -all

Python3 transitions are much easier when C extensions are compiled for all supported versions.  If you use pybuild, in many cases all that’s needed is to build-depend on python3-all-dev.  While this is preferred, in some cases it would be technically challenging and not worth the trouble; this is mostly true for python3-based applications.

Python3-all is mostly useful for running test suites against all supported python3 versions.

Transitions

As mentioned in the python section above, build-depends on python3-{all-}dev is generally only needed for compiled C extensions.  For python3 these are also the packages that need to be rebuilt for a transition.  Please avoid -dev build-depends whenever possible for non-compiled packages.  Please keep your packages that do need rebuilding binNMU safe.

Transitions happen in three stages:

  1. A new python3 version is added to supported python3 versions and packages that need rebuilding due to compiled code and that support multiple versions are binNMUed to add support for the new version.
  2. The default python3 is changed to be the new version and packages that only support a single python3 version are rebuilt.
  3. The old python3 version is dropped from supported versions and packages with multiple-version support are binNMUed to remove support for the dropped version.

This may seem complex (OK, it is a bit), but it enables a seamless transition for packages with multi-version support since they always support the default version.  For packages that only support a single version there is an inevitable period when they go uninstallable once the default version has changed and until they can be rebuilt with the new default.

Specific version requirements

Please don’t build-depend against specific python3 versions.  Those don’t show up in the transition tracker.  Use X-Python3-Version (see python policy for details) to specify the version you need.
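
For example, a source package that needs Python 3.4 or newer would carry a line like this in the source stanza of debian/control (the version here is only illustrative):

X-Python3-Version: >= 3.4

That keeps the requirement declarative and visible to the tooling without hard-coding a build-dependency on a specific python3.X package.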

Summary

Please check your packages and only build-depend on the -dev packages when you need them.  Check for redundancy and remove it.  Try to build for all python3 versions.  Don’t build-depend on specific python3 versions.

Serge Hallyn: Containers – inspect, don’t introspect

Planet Ubuntu - Wed, 01/20/2016 - 12:52

You’ve got a whatzit daemon running in a VM. The VM starts acting suspiciously – a lot more cpu, memory, or i/o than you’d expect. What do you do? You could log in and look around. But if the VM’s been 0wned, you may be running trojaned tools in the VM. In that case, you’d be better off mounting the VM’s root disk and looking around from your (hopefully) safe root context.

The same of course is true in containers. lxc-attach is a very convenient tool, as it doesn’t even require you to be running ssh in the container. But you’re trusting the container to be pristine.

One of the cool things about containers is that you can inspect them pretty flexibly from the host. While the whatzit daemon is still running, you can strace it from the host, you can look for instance at its proc filesystem through /proc/$(pidof whatzit)/root/proc, and you can see its process tree by just doing ps (e.g. pstree, ps -axjf).
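
As a rough sketch of that kind of host-side inspection -- assuming the daemon really is named whatzit and that you have root on the host:

# find the daemon's pid as seen from the host
pid=$(pidof whatzit)

# watch its system calls live, without entering the container
sudo strace -f -p "$pid"

# peek at the container's own /proc through the process's root directory
sudo ls /proc/"$pid"/root/proc

# view the container's processes within the host's process tree
sudo pstree -p "$pid"
ps -axjf | less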

So, the point of this post is mainly to recommend doing so :) Importantly, I’m not claiming here “and therefore containers are better/safer” – that would be nonsense. (The trivial counter-argument would be that the container shares – and can easily exploit – the shared kernel.) Rather, the point is to use the appropriate tools and, then, to use them as well as possible by exploiting their advantages.


Xubuntu: Xubuntu at The Working Centre’s Computer Recycling Project

Planet Ubuntu - Wed, 01/20/2016 - 10:00

The Xubuntu team hears stories about how it is used in organizations all over the world. In this “Xubuntu at..” series of interviews, we seek to interview organizations who wish to share their stories. If your organization is using Xubuntu and you want to share what you’re doing with us please contact Elizabeth K. Joseph at lyz@ubuntu.com to discuss details about your organization.

Recently Charles McColm and Paul Nijjar took time to write to us about their work at The Working Centre’s Computer Recycling Project in Kitchener, Ontario, Canada where they’re using Xubuntu. So without further ado, here’s what they are working on, in their own words.

Please tell us a bit about yourself, The Working Centre and your Computer Recycling Project

The Working Centre was originally established back in the early 1980s as a response to growing unemployment and poverty in downtown Kitchener, Ontario, Canada. Joe and Stephanie Mancini saw the potential for building a community around responding to unemployment and poverty through creative and engaging action. Different projects have arisen from this vision over the years, divided into six areas: the Job Search Resource Centre, St. John’s Kitchen, Community Tools, Access to Technology, Affordable Supportive Housing and the Waterloo School for Community Development.

The Computer Recycling Project started as the result of creative thinking by an individual who had some serious obstacles to employment. The person couldn’t work, but wanted to help others find work. So in the late 1980s the individual put together a few computers for people to create resumes on. Other people became interested in helping out and the Computer Recycling Project was born.

Computer Recycling and the other projects have the following qualities:

  • They take donated or low cost materials, in our case computers and computer parts.
  • They apply volunteer labour to convert (repair/test) these materials into valuable goods. These goods are offered to the community at accessible prices (our computers generally range from $30 to $140).
  • Projects provide opportunities for people unable to work to contribute back to the community and help those looking to find jobs and upgrade their skills.
  • Projects also focus on creating open, friendly, and inclusive environments where everyone can contribute.

What influenced your decision to use Open Source software in your organization?

Computer Recycling didn’t always use Xubuntu and wasn’t originally a full time venture. Linux adoption started slowly. In 2001, thanks in part to our local Linux User Group, several volunteers (Paul Nijjar, Charles McColm and Daniel Allen) started working on a custom Linux distribution called Working Centre Linux Project (WCLP). At the time Computer Recycling had no legal options for distributing computers with Windows, so free and open source software was very appealing.

A couple of years later the project became a Microsoft Authorized Refurbisher. However restrictions on licensing (computers had to have a pre-existing Certificate of Authenticity and could only be sold to qualified low-income individuals) prevented us from installing Windows on all computers.

Linux didn’t have these kinds of restrictions so we continued to use it on a large number of computers. Over the years the Microsoft program changed and we became a Microsoft Registered Refurbisher (MRR). Microsoft dropped the “must have a pre-existing COA” on the computers we refurbish for low income individuals and provided another (more expensive) license for commercial sales, but we’ve continued to install both Windows and Linux. Last month Xubuntu Linux machines accounted for 63% of the computers we sold (due in part to the fact that we only sell Notebooks/Laptops with Xubuntu).

Price was definitely a factor for us in preferring open source software over other options. Most of the people we work with don’t have a lot of money, so accessing a low-cost ecosystem of reliable software was important. Each closed source license (Windows/Office) we buy costs the project money and this in turn translates into cost we have to pass on to the person buying a computer. We do a fair amount of quality assurance on systems, but if a system suffers some catastrophic failure (as we’ve had with a certain line of systems) we end up spending money on 4 licenses (2 for the original Windows/Office system and 2 for the replacement system). With Xubuntu we can often just pull the hard drive, put it in a different system and it’ll “just work” or need a little bit of work to get going. With Xubuntu the only cost to us is the effort we put into refurbishing the hardware.

Malware and viruses have also driven up the demand for Xubuntu systems. In the past several years we’ve seen many more people adopting Xubuntu because they’re fed up with how easy Windows systems get infected. Although centralized reliable software repositories are starting to become more popular in the proprietary software world, for years the availability and trustability of APT repositories was a big selling point.

What made you select Xubuntu for your deployments?

We mentioned earlier that Computer Recycling didn’t originally use Xubuntu. Instead, we started with The Working Centre Linux Project, our mix of IceWM on Debian designed to look like Windows 9x. Maintaining WCLP proved to be challenging because all 3 of the volunteers had separate jobs and a lot of other projects were starting to appear that made putting an attractive Linux desktop on donated hardware a lot easier. One of those projects was the Ubuntu project.

For a few years Computer Recycling adopted Ubuntu as its Linux OS of choice… right up until Ubuntu 10.10 (Unity) arrived on the scene. At this point we started looking at alternatives and we chose Xubuntu because it doesn’t make heavy demands of video processing or RAM. We also liked the fact that it had an interface that is relatively familiar to Windows users. We considered other desktop environments like LXDE, but small things matter and ease of use features like automounting USB keys tipped our choice in favour of Xubuntu.

Can you tell us a bit about your Xubuntu setup?

Paul has done all the work on our current form of installation: preseed files and some customization scripts. Computers are network booted and depending on the system we’ll install either a 32 or 64 bit version of Xubuntu with the following customizations:

  • Proprietary Flash Player
  • We show the bottom panel with an icon tray and don’t auto hide it
  • We use a customized version of the Whisker menu to show particular items
  • We include LibreOffice instead of depending on Abiword and Gnumeric
  • We include some VNC remote connection scripts so our volunteers can securely (and with the consent of the person at the other end) connect and provide remote help
  • We’ve set VLC as the default media player instead of Parole
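
As a purely hypothetical sketch of what a couple of lines in such a preseed file might look like (this is not The Working Centre's actual configuration), package selection tweaks like the ones above could be expressed roughly as:

# illustrative only: pull in LibreOffice and VLC at install time
d-i pkgsel/include string libreoffice vlc
# illustrative only: drop the default lightweight office apps afterwards
d-i preseed/late_command string in-target apt-get -y remove abiword gnumeric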

We are very grateful for the existence of high-quality free and open source software. In addition to using it for desktop and laptop computers, we use it for information and testing (using a Debian-Live infrastructure) and for several in-house servers. About 8 years ago we hired a programmer looking for work to turn an open source eCommerce project into the point of sale software we still use today. The flexibility of open source was really important and has made a big difference when we needed to adjust to changes (merging of GST and PST in Canada to HST for example). Closed source solutions were carefully considered, but ultimately we decided to go open source because we knew we could adapt it to our needs without depending on a vendor.

Is there anything else you wish to share with us about your organization or how you use Xubuntu?

Paul and I are also grateful for the opportunities open source afforded us. In 2005 The Working Centre hired me (Charles) to advance the Computer Recycling Project into a more regular venture. This opportunity wouldn’t have come if I hadn’t first worked with Paul as a volunteer on the Working Centre Linux Project and worked on several other open source projects (a PHP/MySQL inventory system and a SAMBA server). Around 2007 The Working Centre hired Paul to help with system administration and general IT. Since then he’s spearheaded many FLOSS projects at The Working Centre. Daniel Allen, who also worked with us on WCLP, currently works as a Software Specialist for the Science department at the University of Waterloo.

People can find out more about The Working Centre’s Computer Recycling Project by visiting the web site at http://www.theworkingcentre.org/cr/

Ubuntu Studio: Ubuntustudio 16.04 Wallpaper Contest!

Planet Ubuntu - Wed, 01/20/2016 - 04:44
Contest Entries are here!! >>> https://flic.kr/s/aHsksiP3by <<< Where is YOUR entry? Ubuntustudio 16.04 will be officially released in April 2016. As this will be a Long Term Support version, we are most anxious to offer an excellent experience on the desktop. Therefore, the Ubuntustudio community will host a wallpaper design contest! The contest is open to […]

Mattia Migliorini: Best Ways to Save While You are Starting Your Business

Planet Ubuntu - Wed, 01/20/2016 - 03:44

Starting your own business can put you on the path to success and financial freedom. You become the master of your own destiny, and you work toward creating your own wealth rather than being at the whim of someone else. Your work enriches you – not someone else.

But starting your own business can also be very expensive. Not everyone can afford to just quit their jobs, rent an office space, hire a staff, and invest in a top-tier marketing campaign, among other expenses. Instead, most people start working for themselves at home, sometimes while they are still working on their full-time jobs.

Here are a few things you can do to save money while you are getting your business started, whether you are working at it full-time or part-time or just working out of your home:

Choose the Most Affordable Service Providers

Depending on where you live, you likely have your choice of service providers. You don’t have to resign yourself to paying the prices you see on your bill for your electricity, Internet, or cable.

Shop around to find out what providers are available in your area and what kind of rates they are currently offering. For example, you can check out Frontier Internet to get the best rates on your Internet service, which is one of the most important services you will need to run your business.

Buy the Best Computer You Can

While you will technically spend more money on a high-powered and sophisticated computer, it will actually save you money in the long run.

If you try to skimp on your computer purchase, you will end up with a machine that runs slowly, is more vulnerable to viruses, and crashes frequently. You will end up losing money on lost productivity, lost data, and lost contracts because of missed deadlines or customer dissatisfaction with your service.

Spring for the high-end computer that will help you get your work done more quickly and deliver better service for your clients. You’ll also keep your data safer and protect yourself against theft.

Invest in the Right Software

The right software can be like a service professional in your computer. For example, if you invest in professional tax software, you can reliably do your own taxes without hiring an accountant. Invest in payroll software, and you can work without a bookkeeper.

Other software will help you improve the efficiency of your operation. For example, the right invoicing software can help you put together accurate invoices in a timely manner, ensuring that you don’t ever miss a payment.

Focus on the Most Effective Marketing Tools

The right marketing campaign is key to your success when you are just starting out with your business. The right marketing campaign will help you build brand awareness and reach your target audience.

However, once you get started, you will quickly realize that there are hundreds of marketing tools available. You can quickly spend a lot of money on these tools without necessarily getting any results. Instead of trying to market with as many of these tools as you can, focus on only the most effective tools as identified by your research.

You will want to invest in a quality email marketing service, social media manager, and advertising network to start. Depending on your business, you may identify a few other tools that are necessary to help you get started.

Whatever you can do to save money while you are getting started with your business can help keep you afloat while you are still trying to get customers and sales. You can also set aside money to invest in growing your business, such as moving into a dedicated space or even starting to hire a staff.

The post Best Ways to Save While You are Starting Your Business appeared first on deshack.

Dustin Kirkland: Data Driven Analysis: /tmp on tmpfs

Planet Ubuntu - Tue, 01/19/2016 - 21:16
tl;dr
  • Put /tmp on tmpfs and you'll improve your Linux system's I/O, reduce your carbon footprint and electricity usage, stretch the battery life of your laptop, extend the longevity of your SSDs, and provide stronger security.
  • In fact, we should do that by default on Ubuntu servers and cloud images.
  • Having tested 502 physical and virtual servers in production at Canonical, 96.6% of them could immediately fit all of /tmp in half of the free memory available and 99.2% could fit all of /tmp in (free memory + free swap).
Try /tmp on tmpfs Yourself
$ echo "tmpfs /tmp tmpfs rw,nosuid,nodev" | sudo tee -a /etc/fstab
$ sudo reboot
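
After the reboot, a quick check along these lines should confirm that /tmp is now a tmpfs (output will vary by machine):

$ findmnt /tmp
$ df -h /tmp
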
Background

In April 2009, I proposed putting /tmp on tmpfs (an in memory filesystem) on Ubuntu servers by default -- under certain conditions, like, well, having enough memory. The proposal was "approved", but got hung up for various reasons.  Now, again in 2016, I proposed the same improvement to Ubuntu here in a bug, and there's a lively discussion on the ubuntu-cloud and ubuntu-devel mailing lists.

The benefits of /tmp on tmpfs are:
  • Performance: reads, writes, and seeks are insanely fast in a tmpfs; as fast as accessing RAM
  • Security: data leaks to disk are prevented (especially when swap is disabled), and since /tmp is its own mount point, we should add the nosuid and nodev options (and motivated sysadmins could add noexec, if they desire).
  • Energy efficiency: disk wake-ups are avoided
  • Reliability: fewer NAND writes to SSD disks
In the interest of transparency, I'll summarize the downsides:
  • There's sometimes less space available in memory than in your root filesystem, where /tmp may traditionally reside
  • Writing to tmpfs could evict other information from memory to make space (a size limit on the tmpfs, sketched just below, bounds how much memory /tmp can consume)
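
If the eviction concern above worries you, tmpfs accepts a size= mount option; a minimal sketch of a capped /tmp entry for /etc/fstab (the 2G figure is only an example) would be:

tmpfs /tmp tmpfs rw,nosuid,nodev,size=2G
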
You can learn more about Linux tmpfs here.

Not Exactly Uncharted Territory...

Fedora proposed and implemented this in Fedora 18 a few years ago, citing that Solaris has been doing this since 1994. I just installed Fedora 23 into a VM and confirmed that /tmp is a tmpfs in the default installation, and ArchLinux does the same. Debian debated doing so, in this thread, which starts with all the reasons not to put /tmp on a tmpfs; do make sure you read the whole thread, though, and digest both the pros and cons, as both are represented throughout the thread.

Full Data Treatment

In the current thread on ubuntu-cloud and ubuntu-devel, I was asked for some "real data"...

In fact, across the many debates for and against this feature in Ubuntu, Debian, Fedora, ArchLinux, and others, there is plenty of supposition, conjecture, guesswork, and presumption.  But seeing as we're talking about data, let's look at some real data!

Here's an analysis of a (non-exhaustive) set of 502 of Canonical's production servers that run Ubuntu.com, Launchpad.net, and hundreds of related services, including OpenStack, dozens of websites, code hosting, databases, and more. The servers sampled are slightly biased toward physical machines over virtual machines, but both are present in the survey, and a wide variety of uptimes is represented, from less than a day to 1306 days (with live-patched kernels, of course).  Note that this is not an exhaustive survey of all servers at Canonical.

I humbly invite further study and analysis of the raw, tab-separated data, which you can find at:
The column headers are:
  • Column 1: The host names have been anonymized to sequential index numbers
  • Column 2: `du -s /tmp` disk usage of /tmp as of 2016-01-17 (ie, this is one snapshot in time)
  • Column 3-8: The output of the `free` command, memory in KB for each server
  • Column 9-11: The output of the `free` command, swap in KB for each server
  • Column 12: The number of inodes in /tmp
I have imported it into a Google Spreadsheet to do some data treatment. You're welcome to do the same, or use the spreadsheet of your choice.
For the numbers below, 1 MB = 1000 KB, and 1 GB = 1000 MB, per Wikipedia. (Let's argue MB and MiB elsewhere, shall we?) The mean is the arithmetic average. The median is the middle value in a sorted list of numbers. The mode is the number that occurs most often. If you're confused, this article might help. All calculations are accurate to at least 2 significant digits.

Statistical summary of /tmp usage:
  • Max: 101 GB
  • Min: 4.0 KB
  • Mean: 453 MB
  • Median: 16 KB
  • Mode: 4.0 KB
Looking at all 502 servers, there are two extreme outliers in terms of /tmp usage. One server has 101 GB of data in /tmp, and the other has 42 GB. The latter is a very noisy django.log. There are 4 more servers using between 10 GB and 12 GB of /tmp. The remaining 496 servers surveyed (98.8%) are using less than 4.8 GB of /tmp. In fact, 483 of the servers surveyed (96.2%) use less than 1 GB of /tmp. 454 of the servers surveyed (90.4%) use less than 100 MB of /tmp. 414 of the servers surveyed (82.5%) use less than 10 MB of /tmp. And actually, 370 of the servers surveyed (73.7%) -- the overwhelming majority -- use less than 1 MB of /tmp.
Statistical summary of total memory available:
  • Max: 255 GB
  • Min: 1.0 GB
  • Mean: 24 GB
  • Median: 10.2 GB
  • Mode: 4.1 GB
All of the machines surveyed (100%) have at least 1 GB of RAM. 495 of the machines surveyed (98.6%) have at least 2 GB of RAM. 437 of the machines surveyed (87%) have at least 4 GB of RAM. 255 of the machines surveyed (50.8%) have at least 10 GB of RAM. 157 of the machines surveyed (31.3%) have more than 24 GB of RAM. 74 of the machines surveyed (14.7%) have at least 64 GB of RAM.
Statistical summary of total swap available:
  • Max: 201 GB
  • Min: 0.0 KB
  • Mean: 13 GB
  • Median: 6.3 GB
  • Mode: 2.96 GB
485 of the machines surveyed (96.6%) have at least some swap enabled, while 17 of the machines surveyed (3.4%) have zero swap configured. One of these swap-less machines is using 415 MB of /tmp; that machine happens to have 32 GB of RAM. All of the rest of the swap-less machines are using between 4 KB and 52 KB of /tmp (inconsequential), and have between 2 GB and 28 GB of RAM. 5 machines (1.0%) have over 100 GB of swap space.
Statistical summary of swap usage:
  • Max: 19 GB
  • Min: 0.0 KB
  • Mean: 657 MB
  • Median: 18 MB
  • Mode: 0.0 KB
476 of the machines surveyed (94.8%) are using less than 4 GB of swap. 463 of the machines surveyed (92.2%) are using less than 1 GB of swap. And 366 of the machines surveyed (72.9%) are using less than 100 MB of swap.  There are 18 "swappy" machines (3.6%), using 10 GB or more swap.
Modeling /tmp on tmpfs usage

Next, I took the total memory (RAM) in each machine, divided it in half (which is the default allocation to /tmp on tmpfs), and subtracted the total /tmp usage on each system, to determine "if" all of that system's /tmp could actually fit into its tmpfs using free memory alone (ie, without swap or without evicting anything from memory).

485 of the machines surveyed (96.6%) could store all of their /tmp in a tmpfs, in free memory alone -- i.e. without evicting anything from cache.

Now, if we take each machine, and sum each system's "Free memory" and "Free swap", and check its /tmp usage, we'll see that 498 of the systems surveyed (99.2%) could store the entire contents of /tmp in tmpfs free memory + swap available. The remaining 4 are our extreme outliers identified earlier, with /tmp usages of [101 GB, 42 GB, 13 GB, 10 GB].
Performance of tmpfs versus ext4-on-SSD

Finally, let's look at some raw (albeit rough) read and write performance numbers, using a simple dd model.

My /tmp is on a tmpfs:
kirkland@x250:/tmp⟫ df -h .
Filesystem Size Used Avail Use% Mounted on
tmpfs 7.7G 2.6M 7.7G 1% /tmp

Let's write 2 GB of data:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=/tmp/zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.56469 s, 1.4 GB/s

And let's write it completely synchronously:
kirkland@x250:/tmp⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.47235 s, 869 MB/s

Let's try the same thing to my Intel SSD:
kirkland@x250:/local⟫ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 217G 106G 100G 52% /

And write 2 GB of data:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 7.52918 s, 285 MB/s

And let's redo it completely synchronously:
kirkland@x250:/local⟫ dd if=/dev/zero of=./zero bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 11.9599 s, 180 MB/s

Let's go back and read the tmpfs data:
kirkland@x250:~⟫ dd if=/tmp/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 1.94799 s, 1.1 GB/s

And let's read the SSD data:
kirkland@x250:~⟫ dd if=/local/zero of=/dev/null bs=2G count=1
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB) copied, 2.55302 s, 841 MB/s

Now, let's create 10,000 small files (1 KB) in tmpfs:
kirkland@x250:/tmp/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real 0m15.518s
user 0m1.592s
sys 0m7.596s

And let's do the same on the SSD:
kirkland@x250:/local/foo⟫ time for i in $(seq 1 10000); do dd if=/dev/zero of=$i bs=1K count=1 oflag=dsync ; done
real 0m26.713s
user 0m2.928s
sys 0m7.540s

For better or worse, I don't have any spinning disks, so I couldn't repeat the tests there.

So on these rudimentary read/write tests via dd, I got 869 MB/s - 1.4 GB/s write to tmpfs and 1.1 GB/s read from tmpfs, and 180 MB/s - 285 MB/s write to SSD and 841 MB/s read from SSD.

Surely there are more scientific ways of measuring I/O to tmpfs and physical storage, but I'm confident that, by any measure, you'll find tmpfs extremely fast when tested against even the fastest disks and filesystems.
Summary
  • /tmp usage
    • 98.8% of the servers surveyed use less than 4.8 GB of /tmp
    • 96.2% use less than 1.0 GB of /tmp
    • 73.7% use less than 1.0 MB of /tmp
    • The mean/median/mode are [453 MB / 16 KB / 4 KB]
  • Total memory available
    • 98.6% of the servers surveyed have at least 2.0 GB of RAM
    • 88.0% have at least 4.0 GB of RAM
    • 57.4% have at least 8.0 GB of RAM
    • The mean/median/mode are [24 GB / 10 GB / 4 GB]
  • Swap available
    • 96.6% of the servers surveyed have some swap space available
    • The mean/median/mode are [13 GB / 6.3 GB / 3 GB]
  • Swap used
    • 94.8% of the servers surveyed are using less than 4 GB of swap
    • 92.2% are using less than 1 GB of swap
    • 72.9% are using less than 100 MB of swap
    • The mean/median/mode are [657 MB / 18 MB / 0 KB]
  • Modeling /tmp on tmpfs
    • 96.6% of the machines surveyed could store all of the data they currently have stored in /tmp, in free memory alone, without evicting anything from cache
    • 99.2% of the machines surveyed could store all of the data they currently have stored in /tmp in free memory + free swap
    • 4 of the 502 machines surveyed (0.8%) would need special handling, reconfiguration, or more swap
Conclusion
  • Can /tmp be mounted as a tmpfs always, everywhere?
    • No, we did identify a few systems (4 out of 502 surveyed, 0.8% of total) consuming inordinately large amounts of data in /tmp (101 GB, 42 GB), and with insufficient available memory and/or swap.
    • But those were very much the exception, not the rule.  In fact, 96.6% of the systems surveyed could fit all of /tmp in half of the freely available memory in the system.
  • Is this the first time anyone has suggested or tried this as a Linux/UNIX system default?
    • Not even remotely.  Solaris has used tmpfs for /tmp for 22 years, and Fedora and ArchLinux for at least the last 4 years.
  • Is tmpfs really that much faster, more efficient, more secure?
    • Damn skippy.  Try it yourself!
:-Dustin

Serge Hallyn: Cgroups are now handled a bit differently in Xenial

Planet Ubuntu - Tue, 01/19/2016 - 10:54

In the past, when you logged into an Ubuntu system, you would receive and be logged into a cgroup which you owned, one per controller (e.g. memory, freezer, etc). The main reason for this is so that unprivileged users can use things like lxc.

However this caused some trouble, especially through the cpuset controller. The problem is that when a cpu is plugged in, it is not added to any existing cpusets (in the legacy cgroup hierarchy, which we use). This is true even if you previously unplugged that cpu. So if your system has two cpus, when you first login you have cpus 0-1. 1 gets unplugged and replugged, now you only have 0. Now 0 gets unplugged…

The cgroup creation previously was done through a systemd patch, and was not configurable. In Xenial, we’ve now reduced that patch to only work on the name=systemd cgroup. Other controllers are to be handled by the new libpam-cgm package. By default it only creates a cgroup for the freezer controller. You can change the list by editing /etc/pam.d/common-session. For instance, to add memory, you would change the line

optional pam_cgm.so -c freezer

to

optional pam_cgm.so -c freezer,memory
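
After logging back in, a quick way to see which cgroups (and controllers) your session actually ended up in is:

cat /proc/self/cgroup

Each line shows a controller hierarchy and the cgroup path your login session was placed in, so you can confirm whether the controllers you listed for pam_cgm.so took effect.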

One more change expected to come to Xenial is to switch libpam-cgm to using lxcfs instead of cgmanager (or, just as likely, create a new conflicting libpam-cgroup package which does so). Since Xenial and later systems use systemd, which won’t boot without lxcfs anyway, we’ll lose no functionality by requiring lxcfs for unprivileged container creation on login.

On a side note, reducing the set of user-owned cgroups also required a patch to lxc. This means that in a mixture of nested lxcs, you may run into trouble if using nested unprivileged containers in older releases. For instance, if you create an unprivileged Trusty container on a Xenial host, you won’t own the memory cgroup by default, even if you’re root in the container. At the moment Trusty’s lxc doesn’t know how to handle that yet to create a nested container. The lxc patches should hopefully get SRUd, but in the meantime you can use the ubuntu-lxc ppas to get newer packages if needed. (Note that this is a non-issue when running lxd on the host.)


The Fridge: Ubuntu Weekly Newsletter Issue 450

Planet Ubuntu - Tue, 01/19/2016 - 00:27

Jonathan Riddell: In the Mansion House

Planet Ubuntu - Mon, 01/18/2016 - 15:35

Here in deepest Padania a 4-storey mansion provides winter cover to KDE developers working to free your computers.


I woke up this morning and decided I liked it


The mansion keeps a close lock on the important stuff


The pool room has a table with no pockets, it must be posh


Front door


The not so important stuff


Jens will not open the borgen to the Danish


David prefers Flappy Birds to a 1000€ renaissance painting


Engineers fail to implement continuous integration


Bring on the 7 course meal!


In the basement Smaug admires the view


Seif Lotfy: Skizze progress and REPL

Planet Ubuntu - Mon, 01/18/2016 - 10:41




Over the last 3 weeks, based on feedback, we proceeded to flesh out the concepts and the code behind Skizze.
Neil Patel suggested the following:

So I've been thinking about the server API. I think we want to choose one thing and do it as well as possible, instead of having six ways to talk to the server. I think that helps to keep things sane and simple overall.

Thinking about usage, I can only really imagine Skizze in an environment like ours, which is high-throughput. I think that is its 'home' and we should be optimising for that all day long.

Taking that into account, I believe we have two options:

  1. We go the gRPC route, provide .proto files and let people use the existing gRPC tooling to build support for their favourite language. That means we can happily give Ruby/Node/C#/etc devs a real way to get started up with Skizze almost immediately, piggy-backing on the gRPC docs etc.

  2. We absorb the Redis Protocol. It does everything we need, is very lean, and we can (mostly) easily adapt it for what we need to do. The downside is that to get support from other libs, there will have to be actual libraries for every language. This could slow adoption, or it might be easy enough if people can reuse existing REDIS code. It's hard to tell how that would end up.

gRPC is interesting because it's built already for distributed systems, across bad networks, and obviously is bi-directional etc. Without us having to spend time on the protocol, gRPC lets us easily add features that require streaming. Like, imagine a client being able to listen for changes in count/size and be notified instantly. That's something that gRPC is built for right now.

I think gRPC is a bit verbose, but I think it'll pay off for ease of third-party lib support and as things grow.

The CLI could easily be built to work with gRPC, including adding support for streaming stuff etc. Which could be pretty exciting.

That being said, we gave Skizze a new home, where based on feedback we developed .proto files and started rewriting big chunks of the code.

We added a new wrapper called "domain" which represents a stream. It wraps around Count-Min-Log, Bloom Filter, Top-K and HyperLogLog++, so when feeding it values it feeds all the sketches. Later we intend to allow attaching and detaching sketches from "domains" (We need a better name).

We also implemented a gRPC API which should allow easy wrapper creation in other languages.

Special thanks go to Martin Pinto for helping out with unit tests and Soren Macbeth for thorough feedback and ideas about the "domain" concept.
Take a look at our initial REPL work there:



Canonical Design Team: Ubuntu Clock Refresh

Planet Ubuntu - Mon, 01/18/2016 - 07:47

In the coming months, users will start noticing things looking more and more different on Ubuntu. As we gradually roll out our new and improved Suru visual language, you’ll see apps and various other bits of the platform take on a lighter, more clean and cohesive look. The latest app to undergo a visual refresh is the Clock app and I’ll take you through how we approached its redesign.

The Redesign

Our Suru visual language is based on origami, with graphic elements containing meaningful folds and shadows to create the illusion of paper and draw focus to certain areas. Using the main clock face’s current animation (where the clock flips from analog to digital on touch) as inspiration, it seemed natural to place a fold in the middle of the clock. On touch, the clock “folds” from analog to digital.

To further the paper look, drop shadows are used to give the illusion of layers of paper. The shadow under the clock face elevates it from the page, adding a second layer. The drop shadows on the clock hands add yet another layer.

As for colours, the last clock design featured a grey and purple scheme. In our new design, we make use of our very soon-to-be released new color palette. We brightened the interface with a white background and light grey clock face. On the analog clock, the hour and second hands are now Ubuntu orange. With the lighter UI, this subtle use of branding is allowed to stand out more. Also, the purple text is gone in favor of a more readable dark grey.

The bottom edge hint has also been redesigned. The new design is much more minimal, letting users get used to the gesture without interrupting the content too much.

In the stopwatch section, the fold is absent from the clock face since the element is static. We also utilize our new button styling. In keeping with the origami theme, the buttons now have a subtle drop shadow rather than an inward shadow, to again create a more “paper” feel.

This project has been one of my favorites so far. Although it wasn’t a complete redesign (the functionality remains the same), it was fun seeing how the clock would evolve next. Hope you enjoy the new version of the clock; it’s currently in development, so look out for it on your phones soon and stay tuned for more visual changes.

Visual Design: Rae Shambrook

UX Design: James Mulholland

 

Elizabeth K. Joseph: Color me Ubuntu at UbuCon Summit & SCALE14x

Planet Ubuntu - Sun, 01/17/2016 - 11:32

This week I’ll be flying down to Pasadena, California to attend the first UbuCon Summit, which is taking place at the the Fourteenth Annual Southern California Linux Expo (SCALE14x). The UbuCon Summit was the brain child of meetings we had over the summer that expressed concern over the lack of in person collaboration and connection in the Ubuntu community since the last Ubuntu Developer Summit back in 2012. Instead of creating a whole new event, we looked at the community-run UbuCon events around the world and worked with the organizers of the one for SCALE14x to bring in funding and planning help from Canonical, travel assistance to project members and speakers to provide a full two days of conference and unconference event content.

As an attendee of and speaker at these SCALE UbuCons for several years, I’m proud to see the work that Richard Gaskin and Nathan Haines have put into this event over the years turn into something bigger and more broadly supported. The event will feature two tracks on Thursday, one for Users and one for Developers. Friday will begin with a panel and then lead into an unconference all afternoon with attendee-driven content (don’t worry if you’ve never done an unconference before, a full introduction after the panel will be provided on how to participate).

As we lead up to the UbuCon Summit (you can still register here, it’s free!) on Thursday and Friday, I keep learning that more people from the Ubuntu community will be attending, several of whom I haven’t seen since that last Developer Summit in 2012. Mark Shuttleworth will be coming in to give a keynote for the event, along with various other speakers. On Thursday at 3PM, I’ll be giving a talk on Building a Career with Ubuntu and FOSS in the User track, and on Friday I’ll be one of several panelists participating in an Ubuntu Leadership Panel at 10:30AM, following the morning SCALE keynote by Cory Doctorow. Check out the full UbuCon schedule here: http://ubucon.org/en/events/ubucon-summit-us/schedule/

Over the past few months I’ve been able to hop on some of the weekly UbuCon Summit planning calls to provide feedback from folks preparing to participate and attend. During one of our calls, Abi Birrell of Canonical held up an origami werewolf that she’d be sending along instructions to make. Turns out, back in October the design team held a competition that included origami instructions and gave an award for creating an origami werewolf. I joked that I didn’t listen to the rest of the call after seeing the origami werewolf, I had already gone into planning mode!

With instructions in hand, I hosted an Ubuntu Hour in San Francisco last week where I brought along the instructions. I figured I’d use the Ubuntu Hour as a testing ground for UbuCon and SCALE14x. Good news: We had a lot of fun, it broke the ice with new attendees and we laughed a lot. Bad news: We’re not very good at origami. There were no completed animals at the end of the Ubuntu Hour!


The xerus helps at werewolf origami

At 40 steps to create the werewolf, one hour and a crowd inexperienced with origami, it was probably not the best activity if we wanted animals at the end, but it did give me a set of expectations. The success of how fun it was to try it (and even fail) did get me thinking though, what other creative things could we do at Ubuntu events? Then I read an article about adult coloring books. That’s it! I shot an email off to Ronnie Tucker, to see if he could come up with a coloring page. Most people in the Ubuntu community know Ronnie as the creator of Full Circle Magazine: the independent magazine for the Ubuntu Linux community, but he’s also a talented artist whose skills were a perfect match for this task. Lucky for me, it was a stay-home snowy day in Glasgow yesterday and within a couple of hours he had sent a werewolf draft to me. By this morning he had a final version ready for printing in my inbox.

You can download the creative commons licensed original here to print your own. I have printed off several (and ordered some packets of crayons) to bring along to the UbuCon Summit and Ubuntu booth in the SCALE14x expo hall. I’m also bringing along a bunch of origami paper, so people can try their hand at the werewolf… and unicorn too.

Finally, lest we forget that my actual paid job is a systems administrator on the OpenStack Infrastructure team, I’m also doing a talk at DevOpsDayLA on Open Source tools for distributed systems administration. If you think I geek out about Ubuntu and coloring werewolves, you should see how I act when I’m talking about the awesome systems work I get to do at my day job.

Dustin Kirkland: Intercession -- Check out this book!

Planet Ubuntu - Sun, 01/17/2016 - 09:57
https://www.inkshares.com/projects/intercessionA couple of years ago, a good friend of mine (who now works for Canonical) bought a book recommended in a byline in Wired magazine.  The book was called Daemon, by Leinad Zeraus.  I devoured it in one sitting.  It was so...different, so...technically correct, so....exciting and forward looking.  Self-driving cars and autonomous drones, as weapons in the hands of an anonymous network of hackers.  Yikes.  A thriller, for sure, but a thinker as well.  I loved it!

I blogged about it here in September of 2008, and that blog was actually read by the author of Daemon, who reached out to thank me for the review.  He sent me a couple of copies of the book, which I gave away to readers of my blog, who solved a couple of crypto-riddles in a series of blog posts linking eCryptfs to some of the techniques and technology used in Daemon.

I can now count Daniel Suarez (the award winning author who originally published Daemon under a pseudonym) as one of my most famous and interesting friends, and I'm always excited to receive an early draft of each of his new books.  I've enjoyed each of Daemon, Freedom™, Kill Decision, and Influx, and gladly recommend them to anyone interested in cutting edge, thrilling fiction.

Knowing my interest in the genre, another friend of mine quietly shared that they were working on their very first novel.  They sent me an early draft, which I loaded on my Kindle and read in a couple of days while on a ski vacation in Utah in February.  While it took me a few chapters to stop thinking about it as-a-story-written-by-a-friend-of-mine, once I did, I was in for a real treat!  I ripped through it in 3 sittings over two snowy days, on a mountain ski resort in Park City, Utah.

The title is Intercession.  It's an adventure story -- a hero, a heroine, and a villain.  It's a story about time -- intriguingly non-linear and thoughtfully complex.  There's subtle, deliberate character development, and a couple of face-palming big reveals, constructed through flashbacks across time.

They have published it now, under a pseudonym, Nyneve Ransom, on InkShares.com -- a super cool self-publishing platform (I've spent hours browsing and reading stories there now!).  If you love sci-fi, adventure, time, heroes, and villains, I'm sure you'll enjoy Intercession!  You can read a couple of chapters for free right now ;-)

Happy reading!
:-Dustin

Marcin Juszkiewicz: Running 32-bit ARM virtual machine on AArch64 hardware

Planet Ubuntu - Sun, 01/17/2016 - 03:24

It was a matter of days and finally all pieces are done. Running 32-bit ARM virtual machines on 64-bit AArch64 hardware is possible and quite easy.

Requirements
  • AArch64 hardware (I used APM Mustang as usual)
  • ARM rootfs (fetched a Fedora 22 image with the “virt-builder” tool)
  • ARM kernel and initramfs (I used the Fedora 24 ones)
  • Virt Manager (can be done from shell too)
Configuration

Start “virt-manager” and add new machine:

Select the rootfs, kernel and initramfs (the dtb will be provided internally by qemu) and tell the kernel where the rootfs is:

Then set amount of memory and cores. I did 10GB of RAM and 8 cores. Save machine.
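
For reference, since the same setup can be done from the shell, here is a rough virt-install sketch; the image path, kernel/initramfs file names and the root= argument below are only placeholders for wherever your rootfs and Fedora kernel actually live:

virt-install \
  --connect qemu:///system \
  --name fedora-arm32 \
  --arch armv7l --machine virt \
  --memory 10240 --vcpus 8 \
  --disk path=/var/lib/libvirt/images/fedora-arm32.raw \
  --boot kernel=/var/lib/libvirt/boot/vmlinuz-armv7hl,initrd=/var/lib/libvirt/boot/initramfs-armv7hl.img,kernel_args="console=ttyAMA0 root=/dev/vda4 rw" \
  --import --graphics none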

Let’s run

Open created machine and press Play button. It should boot:

I upgraded F22 to F24 to have latest development system.

Is it fast?

If I had just booted it and written about it, there would be questions about performance. So I did a build of gcc 5.3.1-3 using mock (the standard Fedora way). On the arm32 Fedora builder it took 19 hours, on the AArch64 builder only 4.5 hours. On my machine the AArch64 build took 9.5 hours, and in this VM it took 12.5 hours (a slow hdd was used). So a builder with plenty of memory and some fast storage will boost arm32 builds a lot.

Numbers from “openssl speed” show performance similar to the host cpu:

The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
md2               1787.41k     3677.19k     5039.02k     5555.88k     5728.94k
mdc2                 0.00         0.00         0.00         0.00         0.00
md4              24846.05k    81594.07k   226791.59k   418185.22k   554344.45k
md5              18881.79k    60907.46k   163927.55k   281694.58k   357168.47k
hmac(md5)        21345.25k    69033.83k   177675.52k   291996.33k   357250.39k
sha1             20776.17k    65099.46k   167091.03k   275240.62k   338582.71k
rmd160           15867.02k    42659.95k    88652.54k   123879.77k   140571.99k
rc4             167878.11k   186243.61k   191468.46k   192576.51k   193112.75k
des cbc          35418.48k    37327.19k    37803.69k    37954.56k    37991.77k
des ede3         13415.40k    13605.87k    13641.90k    13654.36k    13628.76k
idea cbc         36377.06k    38284.93k    38665.05k    38864.71k    39032.15k
seed cbc         42533.48k    43863.15k    44276.22k    44376.75k    44397.91k
rc2 cbc          29523.86k    30563.20k    30763.09k    30940.50k    30857.44k
rc5-32/12 cbc        0.00         0.00         0.00         0.00         0.00
blowfish cbc     60512.96k    66274.07k    67889.66k    68273.15k    68302.17k
cast cbc         56795.77k    61845.42k    63236.86k    63251.11k    63445.82k
aes-128 cbc      61479.48k    65319.32k    67327.49k    67773.78k    66590.04k
aes-192 cbc      53337.95k    55916.74k    56583.34k    56957.61k    57024.51k
aes-256 cbc      46888.06k    48538.97k    49300.82k    49725.44k    50402.65k
camellia-128 cbc 59413.00k    62610.45k    63400.53k    63593.13k    63660.03k
camellia-192 cbc 47212.40k    49549.89k    50590.21k    50843.99k    50012.16k
camellia-256 cbc 47581.19k    49388.89k    50519.13k    49991.68k    50978.82k
sha256           27232.09k    64660.84k   119572.57k   151862.27k   164874.92k
sha512            9376.71k    37571.93k    54401.88k    74966.36k    84322.99k
whirlpool         3358.92k     6907.67k    11214.42k    13301.08k    14065.66k
aes-128 ige      60127.48k    65397.14k    67277.65k    67428.35k    67584.00k
aes-192 ige      52340.73k    56249.81k    57313.54k    57559.38k    57191.08k
aes-256 ige      46090.63k    48848.96k    49684.82k    49861.32k    49892.01k
ghash           150893.11k   171448.55k   177457.92k   179003.39k   179595.95k
                  sign      verify    sign/s  verify/s
rsa  512 bits  0.000322s  0.000026s   3101.3   39214.9
rsa 1024 bits  0.001446s  0.000073s    691.7   13714.6
rsa 2048 bits  0.008511s  0.000251s    117.5    3987.5
rsa 4096 bits  0.058092s  0.000945s     17.2    1058.4
                  sign      verify    sign/s  verify/s
dsa  512 bits  0.000272s  0.000297s   3680.6    3363.6
dsa 1024 bits  0.000739s  0.000897s   1353.1    1115.2
dsa 2048 bits  0.002762s  0.002903s    362.1     344.5
                              sign     verify    sign/s  verify/s
 256 bit ecdsa (nistp256)   0.0005s    0.0019s   1977.8     538.3
 384 bit ecdsa (nistp384)   0.0015s    0.0057s    663.0     174.6
 521 bit ecdsa (nistp521)   0.0035s    0.0136s    286.8      73.4
                              op       op/s
 256 bit ecdh (nistp256)    0.0016s    616.0
 384 bit ecdh (nistp384)    0.0049s    204.8
 521 bit ecdh (nistp521)    0.0115s     87.2

Related posts:

  1. Fedora 21 RC5 released for AArch64
  2. AArch64 can build OpenEmbedded
  3. Running VMs on Fedora/AArch64
