Feed aggregator

Jonathan Riddell: KDE Store With QtCon Wallpaper

Planet Ubuntu - Sat, 09/03/2016 - 02:29

KDE Store has now launched.  The first product on it is my QtCon wallpaper.  An experiment in community content and donations: will anyone give me money?


Seif Lotfy: Log levels described as sh*t hitting the fan

Planet Ubuntu - Fri, 09/02/2016 - 14:14
  • DEBUG: poop contains corn
  • TRACE: poop currently moving into colon
  • WARN: sh*t is approaching the fan
  • ERROR: sh*t has hit the fan
  • SEVERE: sh*t is spraying into the fan
  • CRITICAL: sh*t is spraying all over the room
  • ALERT: sh*t is piling up
  • FATAL: the room is flooding with sh*t
  • EMERGENCY: the fan has stopped spinning
  • VERBOSE: censored


Julian Andres Klode: apt 1.3 RC4 – Tweaking apt update

Planet Ubuntu - Fri, 09/02/2016 - 12:50

Did that ever happen to you: you run apt update, it fetches a Release file, then starts fetching DEP-11 metadata, then any pdiff index stuff, and then applies them, one after another? Or this: you don’t see any update progress until very near the end? Worry no more: I tweaked things a bit in 1.3~rc4 (git commit).

Prior to 1.3~rc4, acquiring the files for an update worked like this: we create an object for the Release file; once a Release file is done, we queue the next objects (DEP-11 icons, .diff/Index files, etc.). There is no prioritizing, so usually we fetch the 5MB+ DEP-11 icons and components files first, and only then start working on other indices which might use pdiffs.

In 1.3~rc4 I changed the queues to be priority queues: Release files and .diff/Index files have the highest priority (once we have them all, we know how much to fetch). The second priority level goes to the .pdiff files, which are later passed to the rred process to patch an existing Packages, Sources, or Contents file. The third priority level is taken by all other index targets.
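
The gist can be sketched with a plain priority queue (hypothetical types and priority values, not apt’s actual acquire code): items carry a numeric priority, and the queue always hands out the highest-priority pending item first.

#include <iostream>
#include <queue>
#include <string>
#include <vector>

// Hypothetical acquire item; not apt's real classes.
struct Item {
    std::string uri;
    int priority;   // 3 = Release / .diff/Index, 2 = .pdiff, 1 = everything else
};

struct ByPriority {
    bool operator()(const Item &a, const Item &b) const {
        return a.priority < b.priority;   // max-heap: highest priority comes out first
    }
};

int main() {
    std::priority_queue<Item, std::vector<Item>, ByPriority> pending;
    pending.push({"dists/unstable/InRelease", 3});
    pending.push({"main/dep11/icons-64x64.tar.gz", 1});
    pending.push({"main/binary-amd64/Packages.diff/Index", 3});
    pending.push({"main/binary-amd64/Packages.2016-09-01.pdiff", 2});

    while (!pending.empty()) {   // Release/Index files first, pdiffs next, big blobs last
        std::cout << pending.top().uri << "\n";
        pending.pop();
    }
}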

Actually, I implemented the priority queues back in June. There was just one tiny problem: pipelining. We might be inserting elements into our fetching queues in order of priority, but with pipelining enabled, lower-priority items might already have their HTTP requests sent before we even get to queue the higher-priority stuff.

Today I had an epiphany: we fill the pipeline up to a number of items (the depth, currently 10). So, let’s just fill the pipeline with items whose priority is the same as (or higher than) the maximum priority of the already-queued ones, and pretend it is full when we only have lower-priority items.

And that works fine: first the Release and .diff/Index stuff is fetched, which means we can start showing accurate progress info from there on. Next, the pdiff files are fetched, meaning that we can apply them while the remaining targets (think DEP-11 icon tarballs) are still downloading.
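
Here is a minimal C++ sketch of that fill rule (hypothetical names, not the actual apt HTTP code): a request is only admitted into the pipeline when its priority is at least the maximum priority of the requests already queued; otherwise the pipeline pretends to be full.

#include <algorithm>
#include <cstddef>
#include <deque>
#include <iostream>

// Hypothetical request type; not apt's real acquire/HTTP classes.
struct Request {
    const char *uri;
    int priority;   // higher number = more important
};

class Pipeline {
    std::deque<Request> inflight;
    static const std::size_t depth = 10;   // mirrors the "depth, currently 10" above

public:
    // Admit a request only if nothing of higher priority is already queued;
    // otherwise report "full" so lower-priority work waits its turn.
    bool TryEnqueue(const Request &r) {
        if (inflight.size() >= depth)
            return false;                   // genuinely full
        int maxQueued = 0;
        for (const Request &q : inflight)
            maxQueued = std::max(maxQueued, q.priority);
        if (!inflight.empty() && r.priority < maxQueued)
            return false;                   // pretend to be full for lower-priority items
        inflight.push_back(r);
        return true;
    }
};

int main() {
    Pipeline p;
    std::cout << p.TryEnqueue({"InRelease", 3}) << "\n";           // 1: accepted
    std::cout << p.TryEnqueue({"icons-64x64.tar.gz", 1}) << "\n";  // 0: held back behind priority-3 work
    std::cout << p.TryEnqueue({"Packages.diff/Index", 3}) << "\n"; // 1: accepted
}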

This has a great effect on performance: For the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update of Debian unstable and testing with Contents and appstream for amd64 and i386, update time reduced from 37 seconds to 24-28 seconds.

 

In other news

I recently cleaned up the apt packaging which renamed /usr/share/bug/apt/script to /usr/share/bug/apt. That broke on overlayfs, because dpkg could not rename the old apt directory to a backup name during unpack (only directories purely on the upper layer can be renamed). I reverted that now, so all future updates should be fine.

David re-added the Breaks against apt-utils I recently removed by accident during the cleanup, so no more errors about overriding dump solvers. He also added support for fingerprints in gpgv’s GOODSIG output, which apparently might come at some point.

I also fixed a few CMake issues, fixed the test suite for gpgv 2.1.15, allowed building with a system-wide gtest library (we really ought to add back a pre-built one in Debian), and modified debian/rules to pass -O to make. I wish debhelper would do the latter automatically (there’s a bug for that).

Finally, we fixed some uninitialized variables in the base256 code, out-of-bound reads in the Sources file parser, off-by-one errors in the tagfile comment stripping code[1], and some memcpy() with length 0. Most of these will be cherry-picked into the 1.2 (xenial) and 1.0.9.8 (jessie) branches (releases 1.2.15 and 1.0.9.8.4). If you forked off your version of apt at another point, you might want to do the same.

[1] those were actually causing the failures and segfaults in the unit tests on hurd-i386 buildds. I always thought it was a hurd-specific issue…
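
A subtle one in that list is the memcpy() with length 0 case: the C standard requires the pointer arguments of memcpy() to be valid even when the length is zero, so a zero-length copy from a null pointer is undefined behaviour. A generic illustration of the usual guard (not apt’s actual code):

#include <cstddef>
#include <cstring>

// Copy 'len' bytes, tolerating a null 'src' when there is nothing to copy.
// A plain memcpy(dst, nullptr, 0) would be undefined behaviour, because the
// standard requires valid pointers regardless of the length argument.
static void copy_bytes(void *dst, const void *src, std::size_t len) {
    if (len != 0)
        std::memcpy(dst, src, len);
}

int main() {
    char buf[4];
    copy_bytes(buf, nullptr, 0);   // fine: the copy is skipped entirely
}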

PS. Building for Fedora on OBS has a weird socket fd #3 that does not get closed during the test suite despite us setting CLOEXEC on it. Join us in #debian-apt on oftc if you have ideas.


Filed under: Debian, Ubuntu

Michael Hall: Sharing is caring, with Snaps!

Planet Ubuntu - Fri, 09/02/2016 - 08:03

Snaps are a great way to get the most up-to-date applications on your desktop without putting the security or stability of your system at risk. I’ve been snapping up a bunch of things lately and the potential this new paradigm offers is going to be revolutionary. Unfortunately nothing comes for free, and the security of snaps comes with some necessary tradeoffs like isolation and confinement, which reduce some of the power and flexibility we’ve become used to as Linux users.

But now the developers of the snappy system (snapd, snap-confine and snapcraft) are giving us back some of that missing flexibility in the form of a new “content” interface which allows you to share files (executables, libraries, or data) between the snap packages that you develop. I decided to take this new interface for a test drive using one of the applications I had recently snapped: Geany, my editor of choice. Geany has the ability to load plugins to extend its functionality, and in fact has a set of plugins available in a separate GitHub repository from the application itself.

I already had a working snap for Geany, so the next thing I had to do was create a snap for the plugins. Like Geany itself, the plugins are hosted on GitHub and already have a nice build configuration, so turning them into a snap was pretty trivial. I used the autotools plugin in Snapcraft to pull the git source and build all of the available plugins. Because my Geany snap was built with Gtk+ 3, I had to build the plugins for the same toolkit, but other than that I didn’t have to do anything special.

parts:
  all-plugins:
    plugin: autotools
    source: git@github.com:geany/geany-plugins.git
    source-type: git
    configflags: [--enable-gtk3=yes --enable-all-plugins]

Now that I had a geany.snap and geany-plugins.snap, the next step was to get them working together. Specifically I wanted Geany to be able to see and load the plugin files from the plugins snap, so it was really just one-way sharing. To do this I had to create both a slot and a plug using the content interface. Usually when you’re building a snap you only use plugs, such as network or x11, because you are consuming services provided by the core OS. In those cases you also just have to provide the interface name in the list of plugs, because the interface and the plug have the same name.

But with the content interface you need to do more than that. Because different snaps will provide different content, and a single snap can provide multiple kinds of content, you have to define a new name that is specific to what content you are sharing. So in my geany-plugins snapcraft.yaml I defined a new kind of content that I called geany-plugins-all (because it contains all the geany plugins in the snap), and I put that into a slot called geany-plugins-slot which is how we will refer to it later. I told snapcraft that this new slot was using the content interface, and then finally told it what content to share across that interface, which for geany-plugins was the entire snap’s content.

slots:
  geany-plugins-slot:
    content: geany-plugins-all
    interface: content
    read:
      - /

With that I had one half of the content interface defined: a geany-plugins.snap that was able to share all of its content with another snap. The next step was to implement the plug half of the interface in my existing geany.snap. This time instead of a slots: section I would define a plugs: section, with a new plug named geany-plugins-plug, again specifying the interface to be content just like in the slot. Here again I had to specify the content by name, which had to match the geany-plugins-all used in the slot. The names of the plug and slot are only relevant to the user who needs to connect them; it is the content name that snapd uses to make sure they can be connected in the first place. Finally I had to give the plug a target directory where the shared content will be put. I chose a directory called plugins, and when the snaps are connected the geany-plugins.snap content will be bind-mounted into this directory in the geany.snap.

plugs:
  geany-plugins-plug:
    content: geany-plugins-all
    default-provider: geany-plugins
    interface: content
    target: plugins

Lastly I needed to tell snapcraft which app would use this interface. Since the Geany snap only has one, I added it there.

apps:
  geany:
    command: gtk-launch geany
    plugs: [x11, unity7, home, geany-plugins-plug]

Once the snaps were built, I could install them, and the new plug and slot were automatically connected:

$ snap interfaces
Slot                              Plug
geany-plugins:geany-plugins-slot  geany:geany-plugins-plug

Now the plugins were in the application’s snap space, but that wasn’t enough for Geany to actually find them. To fix that I used Geany’s Extra plugin path preference to point it to the location of the shared plugin files.

After doing that, I could open the Plugin manager and see all of the newly shared plugins. Not all of them work, and some assume specific install locations or access to other parts of the filesystem that they won’t have being in a snap. The Geany developers warned me about that, but the ones I really wanted appear to work.

Daniel Holbach: Snapcraft workshop at Akademy

Planet Ubuntu - Fri, 09/02/2016 - 06:09

I’m looking forward to next week, as on Wednesday I’m going to give this workshop.

So if you are interested in learning how to publish software easily and directly to users, this might be just for you.

Snaps are self-contained, confined apps which run across a variety of Linux systems. The process of snapping software is very straightforward, and publishing snaps is very quick as well. The whole process offers many things that upstreams and app publishers have been asking for for years.

The workshop is interactive; all that’s required is that you either have VirtualBox or qemu installed, or run any flavour of Ubuntu 16.04 or later. I’m going to bring USB sticks with images.

The workshop will consist of three very straightforward parts:

  • Using the snap command to find, install, remove, update and revert software installations.
  • Using snapcraft to build and publish software.
  • Taking a look at KDE/Qt software and seeing how it’s snapped.

A few words about your host of the session: I’m Daniel Holbach, I have been part of the Ubuntu community since its very early days and work for Canonical on the Community team. Right now I’m working closely with the Snappy team on making publishing software as much fun as it should be.

See you next Wednesday!

Daniel Pocock: Arrival at FSFE Summit and QtCon 2016, Berlin

Planet Ubuntu - Fri, 09/02/2016 - 01:46


The FSFE Summit and QtCon 2016 are getting under way at bcc, Berlin. The event comprises a range of communities, including KDE and VideoLAN, and there is also a wide range of people present who are active in other projects, including Debian, Mozilla, GSoC and many more.

Talks

Today, some time between 17:30 and 18:30 I'll be giving a lightning talk about Postbooks, a Qt and PostgreSQL based free software solution for accounting and ERP. For more details about how free, open source software can make your life easier by helping keep track of your money, see my comparison of free, open source accounting software.

On Saturday at 15:00 I’ll give a talk about Free Communications with Free Software. We’ll look at some exciting new developments in this area and, once again, contemplate the question: can we hope to use completely free and private software to communicate with our friends and families this Christmas? (Apologies to those who don’t celebrate Christmas; the security of your communications is just as important.)

A note about the entrance fee...

There is an entry fee for the QtCon event; however, people attending the FSFE Summit are invited to attend by making a donation. Contact FSFE for more details and consider joining the FSFE Fellowship.

Sebastian Kügler: Plasma at QtCon

Planet Ubuntu - Fri, 09/02/2016 - 01:41
QtCon opening keynote

QtCon 2016 is a special event: it combines KDE’s Akademy, the Qt Contributor Summit, the FSFE Summit, the VideoLAN Dev Days and KDAB’s training day into one big conference. As such, the conference is buzzing with developers and Free software people (often both traits combined in one person).

Naturally, the Plasma team is there with agendas filled to the brim: we want to tell more people about what Plasma has to offer, answer their questions, listen to their feedback and rope them in to work with us on Plasma. We have also planned a bunch of sessions to discuss important topics; let me give some examples:

  • Release Schedule — Our current schedule was based on the needs of a freshly released “dot oh” version (Plasma 5.0), but Plasma 5 is now much more mature. Do we want to adjust our release schedule of 4 major versions a year to reflect that?
  • Convergence — How can we improve integration of touch-friendly UIs in our workflows?
  • What are our biggest quality problems right now, and what are we going to do about it?
  • How do we make Plasma Mobile available on more devices, what are our next milestones?
  • How can we improve the Plasma session start?
  • What’s left to make Plasma on Wayland ready for prime-time?
  • How can we improve performance further?
  • etc.

You see, we’re going to be really busy here. You won’t see results of this next week, but this kind of meeting is important to flesh out our development for the next months and years.

All good? – All good!

Ubuntu Insights: A Tale of Two Architecture Models: A Peek into Canonical Cloud Architecture Design Rationale

Planet Ubuntu - Thu, 09/01/2016 - 14:48
A Tale of Two Architecture Models

When designing clouds for customers and partners, Canonical Consultants and Architects follow a standard process that creates designs based upon use-cases and requirements. Over the years working with many thousands of customers and partners, we have zeroed in on two Cloud Architecture Models that serve as the base for many of our designs.

The Hyperconverged Model is the default model used in our Canonical Cloud Reference Architecture and is where all technical sales conversations start.

But, as we delve deeper into those discussions, either through pre-sales activities or via paid consulting engagements, it may be determined that another model, the Disaggregated Model, should be used.

The primary difference between the two, as you will see in the diagrams below, is service placement on physical hardware. Other key differences abound and are also important to help drive the final outcome of design workshops with customers.

While both models have pros and cons, the use cases and requirements we gather while working with customers ultimately decide which model we will go forward with.

Hyperconverged Model

Hyperconvergence is a type of infrastructure  with a software-based architecture that tightly integrates compute, storage, networking and virtualization resources and other technologies on commodity hardware.

Typical Use Cases of the Hyperconverged Model

The primary use cases for the Hyperconverged design model are public clouds, general-purpose clouds, or starter clouds on premises. The savings in management and operations costs alone can make it a great choice for many uses:

  • Public Clouds
  • General Purpose Clouds
  • Proof of Concepts
  • Development Labs
  • Non-specific workloads
  • Customers who want to test vast amounts of different services and technology
  • Customers who expect large services growth rates and need to scale out quickly
  • Customers unsure or undecided of application stack

This model is generally more efficient, requiring fewer physical servers to provide the required capacity; however, it can carry additional cost as it’s not optimized for any particular type of workload. For clouds where the diversity of workloads is very low, alternate architectures may offer greater potential for performance tuning.

Disaggregated Model

Disaggregation takes cloud architecture a step further and targets the processing, memory, networking, and storage subsystems which make up every system. Disaggregation is really attractive among hyperscale providers, which see it as a way to achieve more flexible systems and fewer underutilized resources.

Typical Use Cases of the Disaggregated Model

The Disaggregated Model suits customers who know what workloads will be deployed and understand how to tune the environment for those applications. A customer may have one workload that is CPU bound while another demands low-latency networking or high-performance storage. When choosing between the Disaggregated Model and the Hyperconverged Model, the latter is less efficient for growing a cloud that is optimized for these workloads, especially if they grow at different rates.

Use cases where the Disaggregated Model works best are:

  • NFVi Clouds
  • Storage Clouds
  • HPC
  • Other Specialized Workloads

The Disaggregated Model gives the customer flexibility to grow the cloud in a more optimized way based on specific workloads. It requires more detailed knowledge of those workloads and takes more thought to scale out, but the payoff is a more efficient cloud for the price.

When to choose which model?

The flexibility of cloud workloads, in general, makes the hyper-converged model a great starting point. For clouds where workloads are less diverse and more specialized, the disaggregated architecture model may offer greater potential for performance tuning.

NFVi cloud architecture designs, for example, are best served by the disaggregated model. In the case of NFVi, many vendors of VNFs require the cloud infrastructure to be specialized and tuned to give the highest performance and to get as close to line-rate speeds as possible. Some machines may be tuned and/or built for VNFs that require SR-IOV and DPDK, while others may be used for the VNF management backplane and be less powerful. Again, it will all depend on the use case and requirements of the customer.

Changing Models on the Fly

But what if my use-case or requirements change? How do I reuse hardware that I have already purchased? I do not want to go out and spend hundreds of thousands of dollars on another purpose-built cloud.  Juju and MAAS to the rescue!

If you haven’t heard about Juju, it is Canonical’s tool for modeling, deploying, and operating Big Software. One such Big Software stack is Canonical OpenStack (OpenStack, Ceph, Swift, etc.). It has the innate ability to re-deploy a cloud architecture at any time. In parallel, MAAS has a complete picture of my physical infrastructure, which allows me to redeploy operating systems, reconfigure networks, and do other cool stuff that I would otherwise have to do manually.

We find Juju and MAAS extremely useful when we are testing and validating designs while working with customers. Our labs only contain a limited amount of hardware with a very generic hardware configuration. Juju and MAAS combined give me the ability to repurpose that hardware, redeploying either model quickly and efficiently.

It’s as simple as

juju deploy customerdesign_hyperconverged_version.yaml

or

juju deploy customerdesign_disaggregated_version.yaml

where the YAML file has a complete description of how services should be placed, configured, and deployed on our lab hardware. This means I can redeploy without having to run through a large number of steps to repurpose that hardware, or keep two sets of hardware lying around for testing.

Conclusion

Simply put, cloud designs and deployments can be boiled down to two discrete models: Hyperconverged for general-purpose computing, where workloads are generic and forgiving, or Disaggregated, where workloads are more specialized and require higher levels of performance.

Using a use-case- and requirements-driven approach to design has simplified and shortened the delivery of clouds to our customers. Ultimately, it has allowed us to streamline and reduce our architectures down to two models which drive designs that match customers’ vision and business case more closely.

Special  Thanks

This blog post was co-authored with Chris DeYoung, who is one of my co-workers and team members on the Canonical Consulting Architect Team that I lead. A very special thanks to Chris for his contributions to this post and for his help with creating the cleaner diagrams. He, along with the other members of my team, is one of the best High Touch Domain Architects in the world. Check him out here: https://www.linkedin.com/in/christiandeyoung

Original article

Jono Bacon: The Psychology of Report/Issue Templates

Planet Ubuntu - Thu, 09/01/2016 - 13:50

This week HackerOne, who I have been working with recently, landed Report Templates.

In a nutshell, a report template is a configurable chunk of text that can be pre-loaded into the vulnerability submission form instead of a blank white box.

The goal of a report template is two-fold. Firstly, it helps security teams to think about what specific pieces of information they require in a vulnerability report. Secondly, it provides a useful way of ensuring a hacker provides all of these different pieces of information when they submit a report.

While a simple feature, this should improve the overall quality of reports submitted to HackerOne customers, improve the success of hackers by ensuring their vulnerability reports match the needs of the security teams, and result in better-quality engagement on the platform.

Similar kinds of templates can be seen in platforms such as Discourse, GitLab, GitHub, and elsewhere. While it is a simple feature, there are some subtle underlying psychological components that I thought would be interesting to share.

The Psychology Behind the Template

When I started working with HackerOne, the first piece of work I did was to (a) understand the needs/concerns of hackers and customers and then, based on this, (b) perform a rigorous assessment of the typical community workflow to ensure that it mapped to these requirements. My view is simple: if you don’t have a simple and effective workflow, it doesn’t matter how much outreach you do, people will get confused and give up.

This view fits into a wider narrative that has accompanied my work over the years that at the core of great community leadership is intentionally influencing the behavior we want to see in our community participants.

When I started talking to the HackerOne team about Report Templates (an idea that had already been bounced around), building this intentional influence was my core strategic goal. Customers on HackerOne clearly want high quality reports. Low quality reports suck up their team’s time, compromise the value of the platform, and divert resources from other areas. Similarly, hackers should be set up for success. A core metric for a hacker is Signal, and signal threshold is a metric for many of the private programs that operate on HackerOne.

In my mind Report Templates were a logical area to focus on for a few reasons.

Firstly, as with almost everything in life, the root of most problems is misaligned expectations. Think about spats with your boss/spouse, frustrations with your cable company, and other annoyances as examples of this.

A template provides an explicit tool for the security team to state exactly what they need. This reduces ambiguity, which in turn reduces uncertainty, which has proven to be a psychological blocker and is particularly dangerous in communities.

There has also been some interesting research into temptation, and one of the findings has been that people often make irrational choices when they are in a state of temptation or arousal. Thus, when people are in a state of temptation, it is critical for us to build systems that can responsibly deliver positive results for them. Otherwise, people feel tempted, initiate an action, do not receive the rewards they expected (e.g. validation/money in this case), and then feel discomfort at the outcome.

Every platform plays to this temptation desire. Whether it is being tempted to buy something on Amazon, temptation to download and try a new version of Ubuntu, temptation to respond to that annoying political post from your Aunt on Facebook, or a temptation to submit a vulnerability report in HackerOne, we need to make sure the results of the action, at this most delicate moment, are indeed positive.

Report Templates (or Issue/Post Templates in other platforms) play this important role. They are triggered at the moment the user decides to act. If we simply give the user a blank white box to type into, we run the risk of that temptation not resulting in said suitable reward. Thus, the Report Template greases the wheels, particularly within the expectations-setting piece I outlined above.

Finally, and as it relates to temptation, I have become a strong believer in influencing behavioral patterns at the point of action. In other words, when someone decides to do something, it is better to tune that moment to influence the behavior you want rather than try to prime people to make a sensible decision before they do so.

In the Report Templates example, we could have alternatively written oodles and oodles of documentation, provided training, delivered webinars/seminars and other content to encourage hackers to write great reports. There is though no guarantee that this would have influenced their behavior. With a Report Template though, because it is presented at the point of action (and temptation) it means that we can influence the right kind of behavior at the right time. This generally delivers better results.

This is why I love what I do for a living. There are so many fascinating underlying attributes, patterns, and factors that we can learn from and harness. When we do it well, we create rewarding, successful, impactful communities. While the Report Templates feature may be a small piece of this jigsaw, it, combined with similar efforts can join together to create a pretty rewarding picture.

The post The Psychology of Report/Issue Templates appeared first on Jono Bacon.

Xubuntu: SRU for 16.04: Intel cursor bug fix released

Planet Ubuntu - Thu, 09/01/2016 - 11:58

When we announced the release of Xubuntu 16.04 back in April there were a few known issues, but none has been more frustrating to users than this one:

When returning from lock, the cursor disappears on the desktop, you can bring the cursor back with Ctrl+Alt+F1 followed by Ctrl+Alt+F7

Most of the other bugs were fixed by the time 16.04.1 was released in July, but this one lingered while developers tested the xserver-xorg-video-intel package that had the fix in the proposed repository in August.

Thanks to the work of those developers and the members of our community who tested the package upon our request in August, we’re delighted to announce that the fix has been included as a Stable Release Update (SRU)!

This update will be applied to your system with all other regular updates, no special action is needed on your part.

Ubuntu Podcast from the UK LoCo: S09E27 – Bit O’ Posh

Planet Ubuntu - Thu, 09/01/2016 - 07:00

It’s Episode Twenty-Seven of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Marcin Juszkiewicz: PowerVR is another way to say headless

Planet Ubuntu - Thu, 09/01/2016 - 06:10

Yesterday Acer announced the convertible Chromebook R13, the first MediaTek-powered Chromebook. With AArch64 CPU cores. And a PowerVR GPU.

As it was the evening, I did not notice the PowerVR part and got excited. Finally an AArch64 Chromebook which people will be able to buy and do some development on. The specs were nice: 4GB of memory, 16/32/64GB of eMMC storage, a 13.3″ Full HD touchscreen display. But why did they use that GPU :((

There are a few graphics processing units in the ARM/AArch64 world. Some of them have FOSS drivers (Adreno, Tegra, Vivante), some are used with 2D units (Mali) and some just suck (PowerVR).

Mali is kind of a lost cause, as no one works on a free driver for it (the so-called “lima” looks like an ARM Ltd secret project to keep people from trying to do anything), but as it is paired with a 2D unit, users have a working display. And there are binary blobs from ARM Ltd to get 3D acceleration working.

But PowerVR? I never heard about anyone working on a free driver for it. I remember that it was used in the Texas Instruments OMAP line. And that after a few kernel releases it just stopped working, because once TI fired the whole OMAP4 team no one worked on keeping it working with the binary blobs.

So now MediaTek has used it to make a CPU for a Chromebook… Sure, it will work under ChromeOS, as Google is good at keeping one kernel version for ages (my 2012 Samsung Chromebook still runs a 3.8.11 one), so the blobs will work. But good luck with it under other distributions and a mainline kernel.

Heh, even the Raspberry Pi has a working free driver nowadays…

Related posts:

  1. Hardware acceleration on Chromebook
  2. How open Chromebook is?
  3. Chromebook support for Ubuntu

Pasi Lallinaho: Setting up new systems

Planet Ubuntu - Thu, 09/01/2016 - 03:47

In May, I bought a new laptop. In this article, I go through a few of the most essential tweaks I set up with the new laptop.

When possible, I like to customize my system to support my workflow and to make working faster. Once you have customized something and got used to it, there’s no going back. This means every time I set up a new system (or an old system again), I have to set up the custom configuration as well.

Locales

I prefer my interface to be completely in English, but apart from that, I want some locale-related things to be set to Finnish standards.

In ~/.pam_environment, I have the following:

LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8
LC_NUMERIC=fi_FI.UTF-8
LC_COLLATE=fi_FI.utf8
LC_MONETARY=fi_FI.UTF-8
LC_PAPER=fi_FI.UTF-8
LC_ADDRESS=fi_FI.UTF-8
LC_TELEPHONE=fi_FI.UTF-8
LC_MEASUREMENT=fi_FI.UTF-8
LANGUAGE=en
LC_NAME=fi_FI.UTF-8
LC_IDENTIFICATION=fi_FI.UTF-8
PAPERSIZE=a4

Thunderbird configuration

I use Thunderbird for all of my mail and feed related activities. However, I don’t like the default set of shortcuts. The Keyconfig extension helps me set up my preferred shortcuts and disable shortcuts I don’t want to use at all. The most important shortcuts are as follows:

  • B for Address Book
  • C for Calendar
  • E for Edit As New Message
  • F for Forward
  • W for (Write) New Message

I use the Hide Local Folders and Manually sort folders extensions for some fine-grained control over what is shown in my sidepane – and how. I also use some calendars with Thunderbird. The Lightning (integrated calendar), Lightbird (standalone calendar UI) and Provider for Google Calendar extensions let me sync my calendars easily.

Finally, I have customized the UI with a userChrome.css file, currently holding the following CSS:

/* do not color folders/servers with new messages blue */
#folderTree > treechildren::-moz-tree-cell-text(isServer-true, biffState-NewMail),
#folderTree > treechildren::-moz-tree-cell-text(folderNameCol, newMessages-true) {
color: inherit !important;
}

Display sizes for fonts

The laptop sports a 13.3″ screen with a full HD 1920×1080 resolution. This makes some of the text a bit too small and hard to read, and thus I’ve done some adjustments to DPI related stuff.

I’ve set the Xfce desktop DPI to 108.

For Firefox and Thunderbird, setting the value of layout.css.devPixelsPerPx to 1.1 makes both the UI a bit more spacious and the text a bit more readable. I usually like small text though, so you might want to increase the value even more.

Scratching the surface

Ultimately, these tweaks are just scratching the surface of the modifications I have done already. Not to mention the modifications and custom workflows I’m using on my desktop…

What kind of modifications do you use?

Will Cooke: Unity 7 Low Graphics Mode

Planet Ubuntu - Thu, 09/01/2016 - 03:43

Unity 7 has had a low graphics mode for a long time but recently we’ve been making it better.

Eleni has been making improvements to reduce the number of visual effects that are shown while running in low graphics mode. At a high level this includes things like:

  • Reducing the amount of animation in elements such as the window switcher, launcher and menus (in some cases down to zero)
  • Removing blur and fade in/out
  • Reducing shadows

The result of these changes will be beneficial to people running Ubuntu in a virtual machine (where hardware 3D acceleration is not available) and for remote-control of desktops with VNC, RDP etc.

Low graphics mode should enable itself when it detects that certain GL features are not available (e.g. in a virtualised environment), but there are times when you might want to force it on. Here’s how you can force low graphics mode on 16.04 LTS (Xenial):

  1. nano ~/.config/upstart/lowgfx.conf
  2. Paste this into it:

start on starting unity7
pre-start script
    initctl set-env -g UNITY_LOW_GFX_MODE=1
end script

  3. Log out and back in

If you want to stop using low graphics mode, comment out the initctl line by placing a ‘#’ at the start of the line.

This hack won’t work in 16.10 Yakkety because we’re moving to systemd for the user session.  I’ll write up some instructions for 16.10 once it’s available.

Here’s a quick video of some of the effects in low graphics mode:

 

 


Canonical Design Team: August’s reading list

Planet Ubuntu - Thu, 09/01/2016 - 01:10

August has been a quiet month, as many of the team have taken some time off to enjoy the unusually lovely London summer weather, but we have a few great links shared by the design team this month:

  1. Developing a Crisis Communication Strategy
  2. Accessibility Guidelines
  3. An Evening with Cult Sensation – Randall Munroe
  4. Clearleft Public Speaking Workshop in Brighton
  5. Hello Color
  6. The best and worst Olympic logo designs throughout the ages, according to the man who created I <3 NY
  7. Readability Test Tool
  8. Breadcrumbs For Web Sites: What, When and How

Thank you to Joe, Peter, Steph and me for the links this month!

Adolfo Jayme Barrientos: Don’t be misled: Mexicans haven’t approved Trump’s “visit”…

Planet Ubuntu - Wed, 08/31/2016 - 12:23

… and we are as confused as everybody else by this news. It makes absolutely no sense, either for Peña or for Trump.*

Even Moby is flabbergasted as to why this visit was even devised:

He says: “dear Mexican friends, why are you inviting Trump to your country? He’s called you ‘rapists’ and ‘criminals’. Trump is no friend of Mexico”.

Of course that orange motherfucker is no friend of ours. We know it very well, and we have never approved of him stepping on our soil. We also have never taken any of his crap to begin with! But you should also know that our most honorable president, Mr. Enrique Peña, is as shitty as Trump is. Even worse: Spurious Peña is servile, egotistic, very ignorant, a complete cheat and too much of an idiot to realize how much he’s damaging the country — as if he ever gave a fuck about directing Mexico at all, ha ha. In his idiocy he’s followed all the wrong leads he could: he’s alienated the middle class with half-assed constitutional reforms that have only served to increase the income gap and inequality even more, by caring only about the vested interests of his allies. If we ever had an international image of being a banana republic (which we are not), Peña has only reinforced it, scandal after scandal… This PRI mandate has been a curse, and most of us didn’t even want it in the first place: the 2012 election was, as usual, rigged.

So yeah, I think I can say we Mexicans are, unfortunately, very experienced when it comes to unjust, corrupt politics. I’m afraid United Staters are also going to know first-hand how absurd and unbelievable national politics can get. It seems that too much of a good administration is always going to lead ultimately to hogwash-loving actors hacking their way into the mainstream political scene. It was bad enough that somebody like Trump could get away with a major party nomination — that was an assault against reason. Now, we are all feeling the chills because this circus is getting really, really scary.

* Well, but Trump is incapable of maintaining a consistent posture on anything for more than a minute.

Ubuntu 16.10 Free Culture Showcase – Call for wallpapers

The Fridge - Wed, 08/31/2016 - 08:32

It’s time once again for the Ubuntu Free Culture Showcase!

The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We’re looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 16.10 users.

More information about the Free Culture Showcase is available at https://wiki.ubuntu.com/UbuntuFreeCultureShowcase

This cycle, we’re looking for beautiful wallpaper images that will literally set the backdrop for new users as they experience Ubuntu 16.10 for the first time. Submissions will be handled via Flickr at https://www.flickr.com/groups/ubuntu-fcs-1610/

I’m looking forward to seeing the next round of entrants, and to the difficult task of picking the final choices to ship with Ubuntu 16.10.

Originally posted to the ubuntu-community-team mailing list on Mon Aug 29 11:16:02 UTC 2016 by Nathan Haines

Bryan Quigley: Once your organization gets big enough…

Planet Ubuntu - Tue, 08/30/2016 - 13:19

it’s harder to keep everyone on the same page.  These are two emails I got from Mozilla in the last month.

Short Story:
MDN (their wiki) is requiring everyone to use a GitHub account now,
while add-ons.mozilla.org (add-on authors/reviewers) is requiring everyone to use a Firefox Account now.
(Bugzilla can use a local account, a Persona account, or GitHub.)

Just to be clear, this isn’t an issue specific to Mozilla, but I’d have expected them to support OpenID more after their Persona initiative failed.

Aug 18
“Dear MDN contributor,

You are getting this message because you use Persona to log in to your account on MDN.

We are discontinuing Persona as a sign-in method. If you want to keep access to your account, you must link your profile to a GitHub account.

If you do not have a GitHub account, you will need to create one.

If you do not link your profile to a GitHub account by Oct. 31, you will not be able to log in to MDN using your current profile, create or update pages, or update your profile. We recognize that this is an inconvenience, and we apologize.

If you have questions, please let us know. You can also read more on MDN about this change.

Thank you,
The MDN Team”

July 28th
“In February 2016 we turned on Firefox Accounts as an authentication source for addons.mozilla.org (AMO). Since then, 80% of the developers who have visited AMO have migrated their account to a Firefox Account. We are writing to remind you to migrate your account as well.

We urge you to do so in the next few weeks, when the migration wizard will close and you will no longer be able to log in using your old AMO credentials. You can start the migration flow at https://addons.mozilla.org/users/login today.

After migration closes, you can still log in to your AMO account, but first you’ll have to create a Firefox Account (https://accounts.firefox.com/) using the same email address you use for your AMO account.

Sincerely,
The AMO Team”

Rhonda D'Vine: Thomas D

Planet Ubuntu - Tue, 08/30/2016 - 09:12

It's not often that an artist touches you deeply, but Thomas D managed to do so, to the point that I am (only half) jokingly saying that if there were a church of Thomas D I would absolutely join it. His lyrics always stood out for me in the context of the band through which I found out about him, and the way he lives his life is definitely outstanding. And additionally there are these special songs that give so much and share a lot. I feel sorry for the people who don't understand German and so can't fully appreciate him.

Here are three songs that I suggest you listen to closely:

  • Fluss: This song gave me a lot of strength in a difficult time of my life. And when I feel down, it still works wonders to get my ass up off the floor again.
  • Gebet an den Planeten: This song gives me shivers. Let the lyrics touch you. And take the time to think about it.
  • An alle Hinterbliebenen: This song might be a bit difficult to deal with. It's about loss and how to deal with suffering.

Like always, enjoy!


Jono Bacon: My Reddit AMA Is Live – Join Us

Planet Ubuntu - Tue, 08/30/2016 - 08:43

Just a quick note that my Reddit Ask Me Anything discussion is live. Be sure to head over to this link and get your questions in!

All and any questions are absolutely welcome!

The post My Reddit AMA Is Live – Join Us appeared first on Jono Bacon.
