The phrase "The Year of the Linux Desktop" is one we see being used by hopefuls, to describe a future in which desktop Linux has reached the masses.
But I'm more pragmatic, and would rather describe the past, tweaking this phrase to (I believe) accurately summarize 2011 as "The Year of the Linux Desktop Schism".
So let me tell you a little story about this schism.

A Long Time Ago in 2011
The Linux desktop user base was happily enjoying the status quo. We had (arguably) two major desktops: GNOME and KDE, with a few smaller, less popular desktops as well (mostly named with initialisms).
It was the heyday of GNOME 2 on the desktop, which was the default desktop in many of the major distributions. But bubbling out of the ether of the GNOME Project was an idea for a new shell and an overhaul of GNOME, so GNOME 2 was brought to a close and GNOME Shell was born as the future of GNOME.

The Age of Dissent, Madness & Innovation
GNOME 3 and its new Shell did not sit well with everyone, and many in the great blogosphere saw it as disastrous for GNOME and its users.
Much criticism was spouted, much controversy raised, and many started searching for alternatives. But there were those who stood by their faithful project, seeing the new version for what it was: a new beginning for GNOME. They knew that beginnings are not perfect.
Nevertheless, with this massive split in the desktop market we saw much change. There came a rapid flurry of several new projects and a moonshot from one for human beings.
Ubuntu upgraded its fledgling "netbook interface" and promoted it to the desktop, calling it Unity, and it took off down a path to unite the desktop with other emerging platforms yet to come.
There was also much dissatisfaction with the abandonment of GNOME 2, and the community decided to lower their figurative pitchforks and use them to do some literal forking. They took up the remnants of this legacy desktop and used them to forge a new project. This project was to be named MATE and was to continue in the original spirit of GNOME 2.
The Linux Mint team, unsure of their future with GNOME under the Shell, created the "Mint GNOME Shell Linux Mint Extension Pack of Extensions for GNOME Shell". This addon to the new GNOME experience would eventually lead to the creation of Cinnamon, which itself was a fork of GNOME 3.
Despite being a relatively new arrival, the ambitious elementary team was developing the Pantheon desktop in relative secrecy for use in future versions of their OS, having previously relied on a slimmed-down GNOME 2. They were to become one of the most polished of them all.
And they have all lived happily ever since.
The end.

The Moral of the Story
All of these projects have been thriving in the three years since. Why? Because of their communities.
All that has occurred is what the Linux community is about, and it is exemplary of the freedom that it and the whole of open source represents. We have the freedom in open source to effect our own change or act upon what we may not agree with. We are not confined to a set of strictures; we are able to do what we feel is right and to find other people who feel the same.
To deride and belittle others for acting in their freedom, or because they may not agree with you, is just wrong and not in keeping with the ethos of our community.
To ensure the quality of the Juju charm store, there are automatic processes that test charms on multiple cloud environments. These automated tests help identify the charms that need to be fixed. This has become so useful that charm tests are a requirement to become a recommended charm in the charm store for the trusty release.

What are the goals of charm testing?
For Juju to be magic, the charms must always deploy, scale and relate as they were designed. The Juju charm store contains over 200 charms, and those charms can be deployed to more than 10 different cloud environments. That is a lot of environments in which to ensure charms work, which is why tests are now required!

Prerequisites
The Juju ecosystem team has created different tools to make writing tests easier. The charm-tools package has code that generates tests for charms. Amulet is a Python 3 library that makes it easier to programmatically work with units and whole deployments. To get started writing tests, you will need to install the charm-tools and amulet packages:

sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update
sudo apt-get install -y charm-tools amulet
Now that the tools are installed, change directory to the charm directory and run the following command:

juju charm add tests
This command generates two executable files, 00-setup and 99-autogen, in the tests directory. The tests are prefixed with a number so they are run in the correct lexicographical order.

00-setup
The first file is a bash script that adds the Juju PPA repository, updates the package list, and installs the amulet package so the later tests can use the Amulet library.

99-autogen
This file contains Python 3 code that uses the Amulet library. The class extends a unittest class, the standard unit testing framework for Python. The charm-tools test generator creates a skeleton test that deploys related charms and adds relations, so most of the work is done already.
This automated test is almost never a good enough test on its own. Ideal tests do a number of things:
- Deploy the charm and make sure it deploys successfully (no hook errors)
- Verify the service is running as expected on the remote unit (sudo service apache2 status).
- Change configuration values to verify users can set different values and the changes are reflected in the resulting unit.
- Scale up. If the charm handles the peer relation make sure it works with multiple units.
- Validate the relationships to make sure the charm works with other related charms.
Most charms will need additional lines of code in the 99-autogen file to verify the service is running as expected. For example, if your charm implements the http interface, you can use the Python 3 requests package to verify that a valid web page (or API) is responding:

def test_website(self):
    unit = self.deployment.sentry.unit['<charm-name>/0']
    url = 'http://%s' % unit['public-address']
    response = requests.get(url)
    # Raise an exception if the url was not a valid web page.
    response.raise_for_status()

What if I don't know python?
Charm tests can be written in languages other than Python. The automated test program, called bundletester, will run the test target in a Makefile if one exists. Including a 'test' target allows a charm author to build and run tests from the Makefile.
Bundletester will run any executable files in the tests directory of a charm. There are example tests written in bash in the Juju documentation. A test fails if the executable returns a value other than zero.

Where can I get more information about writing charm tests?
There are several videos on youtube.com about charm testing:
Documentation on charm testing can be found here:
Documentation on Amulet:
Check out the lamp charm as an example of multiple amulet tests:
With this knowledge in mind, a common mistake by folks new to Erlang is to think these performance characteristics will be applicable to their own particular domain. This has often resulted in failure, disappointment, and the unjust blaming of Erlang. If you want to process huge files, do lots of string manipulation, or crunch tons of numbers, Erlang's not your bag, baby. Try Python or Julia.
But then, you may be thinking: I like supervision trees. I have long-running processes that I want to be managed per the rules I establish. I want to run lots of jobs in parallel on my 64-core box. I want to run jobs in parallel over the network on 64 of my 64-core boxes. Python's the right tool for the jobs, but I wish I could manage them with Erlang.
(There are sooo many other options for the use cases above, many of them really excellent. But this post is about Erlang/LFE :-)).
Traditionally, if you want to run other languages with Erlang in a reliable way that doesn't bring your Erlang nodes down with badly behaved code, you use Ports. (more info is available in the Interoperability Guide). This is what JInterface builds upon (and, incidentally, allows for some pretty cool integration with Clojure). However, this still leaves a pretty significant burden for the Python or Ruby developer for any serious application needs (quick one-offs that only use one or two data types are not that big a deal).
erlport was created by Dmitry Vasiliev in 2009 in an effort to solve just this problem, making it easier to use and integrate Erlang with more common languages like Python and Ruby. The project is maintained, and in fact has just received a few updates. Below, we'll demonstrate some usage in LFE with Python 3.
If you want to follow along, there's a demo repo you can check out:
Change into the repo directory and set up your Python environment:
Next, switch over to the LFE directory, and fire up a REPL:
Note that this will first download the necessary dependencies and compile them (that's what the [snip] is eliding).
Now we're ready to take erlport for a quick trip down to the local:
And that's all there is to it :-)
Perhaps in a future post we can dive into the internals, showing you more of the glory that is erlport. Even better, we could look at more compelling example usage, approaching some of the functionality offered by such projects as Disco or Anaconda.
I gave everyone a shoutout on social media, but since the planet looks best overrun with thank-you posts, I shall blog it as well!
Thank you to:
David Planella for being the rock that has anchored the team.
Leo Arias for being super awesome and making testing what it is today on all the core apps.
Carla Sella for working tirelessly on many, many different things in the years I've known her. She never gives up (even when I've tried to!), and has many successes to her name for that reason.
Nekhelesh Ramananthan for always being willing to let the clock app be the guinea pig.
Elfy, for rocking the manual tests project. Seriously awesome work. Every time you use the tracker, just know elfy has been a part of making that testcase happen.
Jean-Baptiste Lallement and Martin Pitt for making some of my many wishes come true over the years with quality community efforts. Autopkgtest is but one of these.
And many more. Plus some I've forgotten. I can't give hugs to everyone, but I'm willing to try!
To everyone in the ubuntu community, thanks for making ubuntu the wonderful community it is!
Why lambda? Why not gamma or delta? Or Siddham ṇḍha?
To my great relief, this question was finally answered when I was reading one of the best Lisp books I've ever read: Peter Norvig's Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. I'll save my discussion of that book for later; right now I'm going to focus on the paragraph at location 821 of my Kindle edition of the book. 
The story goes something like this:
- Between 1910 and 1913, Alfred Whitehead and Bertrand Russell published three volumes of their Principia Mathematica, a work whose purpose was to derive all of mathematics from basic principles in logic. In these tomes, they cover two types of functions: the familiar descriptive functions (defined using relations), and then propositional functions. 
- Within the context of propositional functions, the authors make a typographical distinction between free variables and bound variables or functions that have an actual name: bound variables use circumflex notation, e.g. x̂(x+x). 
- Around 1928, Church (and then later, with his grad students Stephen Kleene and J. B. Rosser) started attempting to improve upon Russell and Whitehead regarding a foundation for logic. 
- Reportedly, Church stated that the use of x̂ in the Principia was for class abstractions, and he needed to distinguish that from function abstractions, so he used ⋀x  or ^x  for the latter.
- However, these proved to be awkward for different reasons, and an uppercase lambda was used: Λx. .
- More awkwardness followed, as this was too easily confused with other symbols (perhaps uppercase delta? logical and?). Therefore, he substituted the lowercase λ. 
- John McCarthy was a student of Alonzo Church and, as such, had inherited Church's notation for functions. When McCarthy invented Lisp in the late 1950s, he used the lambda notation for creating functions, though unlike Church, he spelled it out. 
Somehow, this endears lambda to me even more ;-)
 As you can see from the rest of the footnotes, I've done some research since then and have found other references to this history of the lambda notation.
 Norvig, Peter (1991-10-15). Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp (Kindle Locations 821-829). Elsevier Science - A. Kindle Edition. The paragraph in question is quoted here:
The name lambda comes from the mathematician Alonzo Church’s notation for functions (Church 1941). Lisp usually prefers expressive names over terse Greek letters, but lambda is an exception. A better name would be make-function. Lambda derives from the notation in Russell and Whitehead’s Principia Mathematica, which used a caret over bound variables: x̂(x + x). Church wanted a one-dimensional string, so he moved the caret in front: ^x(x + x). The caret looked funny with nothing below it, so Church switched to the closest thing, an uppercase lambda, Λx(x + x). The Λ was easily confused with other symbols, so eventually the lowercase lambda was substituted: λx(x + x). John McCarthy was a student of Church’s at Princeton, so when McCarthy invented Lisp in 1958, he adopted the lambda notation. There were no Greek letters on the keypunches of that era, so McCarthy used (lambda (x) (+ x x)), and it has survived to this day.
http://plato.stanford.edu/entries/pm-notation/#4
 Norvig, 1991, Location 821.
 History of Lambda-calculus and Combinatory Logic, page 7.
 Norvig, 1991, Location 821.
 Looking at Church's works online, he uses lambda notation in his 1932 paper A Set of Postulates for the Foundation of Logic. His preceding papers, upon which the seminal 1932 paper is based, On the Law of Excluded Middle (1928) and Alternatives to Zermelo's Assumption (1927), make no reference to lambda notation. As such, A Set of Postulates for the Foundation of Logic seems to be his first paper that references lambda.
 Norvig indicates that this is simply due to the limitations of the keypunches in the 1950s that did not have keys for Greek letters.
 Alex Martelli is not a fan of lambda in the context of Python, and though a good friend of Peter Norvig, I've heard Alex refer to lambda as an abomination :-) So, perhaps not beloved for everyone. In fact, Peter Norvig himself wrote (see above) that a better name would have been make-function.
Notably, another trend I've recognized is that in a large group of devs, there are often a committed few who really know their field and its history. That is always so amazing to me and I have a great deal of admiration for the commitment and passion they have for their art. Let's have more of that :-)
As for myself, these days I have many fewer hours a week which I can dedicate to programming compared to what I had 10 years ago. This is not surprising, given my career path. However, what it has meant is that I have to be much more focused when I do get those precious few hours a night (and sometimes just a few per week!). I've managed this in an ad hoc manner by taking quick notes about fields of study that pique my curiosity. Over time, these get filtered and a few pop to the top that I really want to give more time.
One of the driving forces of this filtering process is my never-ending curiosity: "Why is it that way?" "How did this come to be?" "What is the history behind that convention?" I tend to keep these musings to myself, exploring them at my leisure, finding some answers, and then moving on to the next question (usually this takes several weeks!).
However, given the observations of the recent years, I thought it might be constructive to ponder aloud, as it were. To explore in a more public forum, to set an example that the vulnerability of curiosity and "not knowing" is quite okay, that even those of us with lots of time in the industry are constantly learning, constantly asking.
My latest curiosity has been around recursion: who first came up with it? How did it make its way from abstract maths to programming languages? How did it enter the consciousness of so many software engineers (especially those who are at ease in functional programming)? It turns out that an answer to this is actually quite closely related to a previous post I wrote on the secret history of lambda. A short version goes something like this:
Giuseppe Peano wanted to establish a firm foundation for logic and maths in general. As part of this, he ended up creating consistent axioms around the hard-to-define natural numbers, counting, and arithmetic operations (which utilized recursion). While visiting a conference in Europe, Bertrand Russell was deeply impressed by the dialectic talent of Peano and his unfailing clarity; he queried Peano as to his secret for success (Peano told him) and then asked for all of his published works. Russell proceeded to study these quite deeply. With this in his background, he eventually co-wrote the Principia Mathematica. Later, Alonzo Church (along with his grad students) sought to improve upon this, and in the process ended up developing the lambda calculus. His student, John McCarthy, later created the first functional programming language, Lisp, utilizing concepts from the lambda calculus (recursion and function composition).
In the course of reading 40 to 50 mathematics papers (including various histories) over the last week, I have learned far more than I had originally intended. So much so, in fact, that I'm currently working on a very fun recursion tutorial that not only covers the usual practical stuff, but steps the reader through programming implementations of the Peano axioms, arithmetic definitions, the Ackermann function, and parts of the lambda calculus.
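To give a flavour of the sort of thing the tutorial covers, here is my own quick sketch (not taken from the tutorial itself) of Peano-style addition and the Ackermann function in Python:

```python
def add(m, n):
    """Peano-style addition: add(m, 0) = m; add(m, succ(n)) = succ(add(m, n))."""
    if n == 0:
        return m
    return add(m, n - 1) + 1


def ackermann(m, n):
    """The Ackermann function: total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Even tiny inputs make the depth of the recursion obvious: add(2, 3) is 5 after a handful of calls, while ackermann(3, 3) already works out to 61 through a startling number of nested calls.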
I've got a few more blog post ideas cooking that dive into functions, their history and evolution. We'll see how those pan out. Even more exciting, though, was having found interesting papers discussing the evolution of functions and the birth of category theory from algebraic topology. This, needless to say, spawned a whole new trail of research, papers, and books... and I've got some great ideas for future blog posts/tutorials around this topic as well. (I've encountered category theory before, but watching it appear unsearched and unbidden in the midst of the other reading was quite delightful).
In closing, I enjoy reading not only the original papers (and correspondence between great thinkers of a previous era), but also the meanderings and rediscoveries of my peers. I've run across blog posts like this in the past, and they were quite enchanting. I hope that we continue to foster that in our industry, and that we see more examples of it in the future.
Keep on questing ;-)
The quotes below are real(ish).
"Hi honey, did you just call me? I got a weird message that sounded like you were in some kind of trouble. All I could hear was traffic noise and sirens..."
"I'm sorry. I must have dialed your number by mistake. I'm not in the habit of dialing my ex-boyfriends, but since you asked, would you like to go out with me again? One more try?"
"Once a friend called me and I heard him fighting with his wife. It sounded pretty bad."
"I got a voicemail one time and it was this guy yelling at me in Hindi for almost 5 minutes. The strange thing is, I don't speak Hindi."
"I remember once my friend dialed me. I called back and left a message asking whether it was actually the owner or...
It's called "butt dialing" in my parts of the world, or "purse dialing" (if one carries a purse), or sometimes just pocket dialing: that accidental event where something presses the phone and it dials a number in memory without the knowledge of its owner.
After hearing these phone stories, I'm reminded that humanity isn't perfect. Among other things, we have worries, regrets, ex's, outbursts, frustrations, and maybe even laziness. One might be inclined to write these occurrences off as natural or inevitable. But, let's reflect a little. Were the people that this happened to any happier for it? Did it improve their lives? I tend to think it created unnecessary stress. Were they to blame? Was this preventable?
"Smart" phones. I'm inclined to call you what you are: The butt of technology.
We're not living in the 90's anymore. Sure, there was a time when phones had real keys and possibly weren't lockable and maybe were even prone to the occasional purse dial. Those days are long gone. "Smart" phones, you know when you're in a pocket or a purse. Deal with it. You are as dumb as my first feature phone. Actually, you are dumber. At least my first feature phone had a keyboard cover.
Folks, I hope that in my lifetime we'll actually see a phone that is truly smart. Perhaps the Ubuntu Phone will make that hope a reality.
I can see the billboards now. "Ubuntu Phone. It Will Save Your Butt." (Insert your imagined inappropriate billboard photo alongside the caption. ;)
Do you have a great butt dialing story? Please share it in the comments.
No people were harmed in the making of this article. And not one person who shared their story is or was a "user". They are real people that were simply excluded from the decisions that made their phones dumb.
Image: Gwyneth Anne Bronwynne Jones (The Daring Librarian), CC BY-SA 2.0
So I deleted my whole website by accident.
Yep, it wasn't very fun. Luckily, Linode's Backup Service saved the day. Though they backup the whole machine, it was easy to restore to the linode, change the configuration to use the required partition as a block device, reboot, and then manually mount the block device. At that point, restoration was a cp away.
The reason why this all happened is because I was working on the final piece to my ideal blogging workflow: putting everything under version control.
The problem came when I tried to initialize my current web folder. I mean, it worked, and I could clone the repo on my computer, but I couldn't push. Worse yet, I got something scary back:

remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error:
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error:
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
So in the process of struggling with this back and forth between local and remote, I killed my files. Don't you usually panic when you get some long error message that doesn't make a darn bit of sense?
Yeah, well, I guess I kind of got the idea, but it wasn't entirely clear. The key point is that we're trying to push to a non-bare repository (i.e. one that includes all the tracked files) on its currently checked-out branch.
So let's move to the solution: don't do this. You could push to a different branch and then manually merge on the remote, but merges aren't always guaranteed to be clean. Why not do something entirely different? Something more proper.
First, start with the remote:

# important: make a new folder!
git init --bare ~/path/to/some/new/folder
Then local:

git clone user@server:path/to/the/aforementioned/folder
# make some changes
git add -A
git commit -am "initial commit or whatever you want to say"
If you check out what's in that folder on the remote, you'll find it has no tracked files. A bare repo is basically just an index: a place to pull from and push to. You're not going to go there and start changing files, getting git all confused.
Now here's the magic part: in the hooks subfolder of your folder, create a new executable file called post-receive containing the following:

#!/usr/bin/env sh
git checkout -f master
# add any other commands that need to happen to rebuild your site, e.g.:
# blogofile build
Assuming you've already committed some changes, go ahead and run it and check your website.
Pretty cool, huh? Well, it gets even better. The next push you do will automatically update your website for you. So now for me, an update to the website is just a local push away. No need to even log in to the server anymore.
There are other solutions to this problem but this one seems to be the most consistent and easy.
This is a technical post about PulseAudio internals and the protocol improvements in the upcoming PulseAudio 6.0 release.

PulseAudio memory copies and buffering
PulseAudio is said to have a “zero-copy” architecture. So let’s look at what copies and buffers are involved in a typical playback scenario.

Client side
When the PulseAudio server and client run as the same user, PulseAudio enables shared memory (SHM) for audio data. (In other cases, SHM is disabled for security reasons.) Applications can use pa_stream_begin_write to get a pointer directly into the SHM buffer. When using pa_stream_write or going through the ALSA plugin, there will be one memory copy into the SHM.

Server resampling and remapping
On the server side, the server might need to convert the stream into a format that fits the hardware (and potential other streams that might be running simultaneously). This step is skipped if deemed unnecessary.
First, the samples are converted to either signed 16 bit or float 32 bit (mainly depending on resampler requirements).
In case resampling is necessary, we make use of external resampler libraries for this, the default being speex.
Second, if remapping is necessary, e.g. if the input is mono and the output is stereo, that is performed as well. Finally, the samples are converted to a format that the hardware supports.
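As a toy illustration of those conversion steps (my own sketch in Python, nothing like PulseAudio's actual optimised C code), here is mono 16-bit input being converted to a float work format, upmixed to stereo, and converted back for the hardware:

```python
def s16_to_float(samples):
    # Work format: signed 16-bit integers become floats in [-1.0, 1.0).
    return [s / 32768.0 for s in samples]


def upmix_mono_to_stereo(samples):
    # Remapping: duplicate each mono sample into left and right channels.
    out = []
    for s in samples:
        out.extend((s, s))
    return out


def float_to_s16(samples):
    # Hardware format: clamp and convert back to signed 16-bit.
    out = []
    for s in samples:
        s = max(-1.0, min(1.0, s))
        out.append(int(s * 32767))
    return out


# Each stage reads one buffer and writes another, which is why the real
# pipeline can involve several intermediate buffers.
hw = float_to_s16(upmix_mono_to_stereo(s16_to_float([16384, -16384])))
```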
So, in the worst case, there might be up to four different buffers involved here (first: after converting to the “work format”, second: after resampling, third: after remapping, fourth: after converting to the hardware-supported format), and in the best case, this step is entirely skipped.

Mixing and hardware output
PulseAudio’s built-in mixer multiplies each channel of each stream with a volume factor and writes the result to the hardware. In case the hardware supports mmap (memory mapping), we write the mix result directly into the DMA buffers.

Summary
The best we can do is one copy in total, from the SHM buffer directly into the DMA hardware buffer. I hope this clears up any confusion about what PulseAudio’s advertised “zero copy” capabilities mean in practice.
However, memory copies are not the only thing you want to avoid to get good performance, which brings us to the next point:

Protocol improvements in 6.0
PulseAudio does pretty well CPU-wise for high-latency loads (e.g. music playback), but a bit worse for low-latency loads (e.g. VoIP, gaming). Or to put it another way, PulseAudio has a low per-sample cost, but there is still some optimisation that can be done per packet.
For every playback packet, there are three messages sent: from server to client saying “I need more data”, from client to server saying “here’s some data, I put it in SHM, at this address”, and then a third from server to client saying “thanks, I have no more use for this SHM data, please reclaim the memory”. The third message is not sent until the audio has actually been played back.
For every message, it means syscalls to write, read, and poll a unix socket. This overhead turned out to be significant enough to try to improve.
So instead of putting just the audio data into SHM, as of 6.0 we also put the messages into two SHM ringbuffers, one in each direction. For signalling we use eventfds. (There is also an optimisation layer on top of the eventfd that tries to avoid writing to the eventfd in case no one is currently waiting.) This is not so much for saving memory copies but to save syscalls.
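The underlying idea, a single-producer/single-consumer ring buffer indexed by free-running read and write counters, can be sketched in Python (a deliberately simplified toy, not PulseAudio's actual implementation, which lives in shared memory and uses atomic operations):

```python
class MessageRing:
    """Toy single-producer/single-consumer message ring (illustrative only)."""

    def __init__(self, slots):
        self.buf = [None] * slots
        self.read_count = 0   # advanced only by the consumer
        self.write_count = 0  # advanced only by the producer

    def push(self, msg):
        if self.write_count - self.read_count >= len(self.buf):
            return False  # ring full: a real implementation must block or fall back
        self.buf[self.write_count % len(self.buf)] = msg
        # In shared memory this increment would be an atomic store, and the
        # producer would write the eventfd only if the peer is waiting.
        self.write_count += 1
        return True

    def pop(self):
        if self.read_count == self.write_count:
            return None  # ring empty: wait on the eventfd instead of spinning
        msg = self.buf[self.read_count % len(self.buf)]
        self.read_count += 1
        return msg
```

One such ring carries requests in each direction between server and client, so in the common case no syscall is needed at all.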
From my own unscientific benchmarks (i.e., running “top”), this saves us ~10-25% of CPU power in low-latency use cases, half of that being on the client side.
I'm not anyone's Nonna but I can still make good biscotti, which is something I enjoy with a nice espresso.
If you don't know what biscotti is, it's a sweet Italian twice-baked cookie/biscuit ("biscotti" literally means "twice cooked"; so does "biscuit", for that matter), typically made with a strong spice, like cinnamon or anise, and with a nut or seed, typically almonds, in it.
There are a bunch of variations of biscotti, with different flavour combinations –like chocolate– but I don't like things to be too sweet so I keep it simple with a few flavours.
- 1 cup sugar
- 1 cup butter
- 1 cup whole or coarsely chopped almonds –toasted*, if you like.
- 1 teaspoon ground anise
- 1 teaspoon lemon zest –about as much as you would get from zesting half a lemon
- 1 teaspoon vanilla extract
- 3 eggs
- 2 teaspoons baking powder
- 2 3/4 cups flour –you can cut this with 1/4 cup almond flour, if you like or have some.
*to toast the almonds (or any nut), simply heat a dry pan until quite hot and toss in the nuts; shake the pan to prevent burning and cook until they are fragrant.
- In a large bowl, combine the flour and baking powder
- Combine the sugar with the ground anise and lemon zest, then mash it into the butter –here, I deviate from tradition in the method.
- Add the eggs and mix those in.
- Add the egg-butter-sugar mixture to the dry ingredients and bring together into a smooth dough.
- Wrap in plastic wrap and chill in the fridge for at least 30 minutes.
- Preheat an oven to 350 degrees Fahrenheit.
- Remove the dough from the fridge and divide in half.
- Moisten your hands with water and form each half into a long thin log –approx. 8x32x4 centimeters.
- Place the logs on a clean baking sheet, spaced well apart, and bake for 30 minutes until golden brown.
- Remove from the oven and let cool on a wire rack for 15 minutes or so.
- Cut** each log into 2 cm slices and arrange them on the baking sheet with the cut side face down/up.
- Return to the oven and bake for another 25 to 30 minutes, until golden brown.
- Remove finished biscotti from the oven and let cool on a wire rack before eating.
**to get that angled cut like seen here, slice the biscotti log on a ~45 degree angle, what is typically called "slicing on a bias".
I recently updated the PostBooks packages in Debian and Ubuntu to version 4.7. This is the version that was released in Ubuntu 14.10 (Utopic Unicorn) and is part of the upcoming Debian 8 (jessie) release.

Better prospects for Fedora and RHEL/CentOS/EPEL packages
As well as getting the packages ready, I've been in contact with xTuple helping them generalize their build system to make packaging easier. This has eliminated the need to patch the makefiles during the build. As well as making it easier to support the Debian/Ubuntu packages, this should make it far easier for somebody to create a spec file for RPM packaging too.

Debian wins a prize
Steve Hackbarth, Director of Product Development at xTuple, myself and the impressive Community Member of the Year trophy
This is a great example of the productive relationships that exist between Debian, upstream developers and the wider free software community and it is great to be part of a team that can synthesize the work from so many other developers into ready-to-run solutions on a 100% free software platform.
Receiving this award really made me think about all the effort that has gone into making it possible to apt-get install postbooks and all the people who have collectively done far more work than myself to make this possible:
- The Debian PostgreSQL packaging team making the PostgreSQL server, client libraries and related packages available to install and upgrade easily on Debian and Ubuntu.
- The Debian Qt/KDE packaging team providing the Qt libraries.
- Andrew Shadura, who originally started the PostBooks packaging and prepared patches for a clean build on Debian.
- Juliana Louback, who created the JSCommunicator / WebRTC extension for xTuple's new web interface while participating in Google Summer of Code.
- xTuple themselves, who have an ongoing and enthusiastic commitment to free software and are actively developing their new web platform on Github.
Here is a screenshot of the xTuple web / JSCommunicator integration, which was one of the highlights of xTupleCon:
It gives a preview of the wide range of commercial opportunities that WebRTC is creating for software vendors to displace traditional telecommunications providers.
xTupleCon also gave me a great opportunity to see new features (like the xTuple / Drupal web shop integration) and hear about the success of consultants and their clients deploying xTuple/PostBooks in various scenarios. The product is extremely strong in meeting the needs of manufacturing and distribution and has gained a lot of traction in these industries in the US. Many of these features are equally applicable in other markets with a strong manufacturing industry such as Germany or the UK. However, it is also flexible enough to simply disable many of the specialized features and use it as a general purpose accounting solution for consulting and services businesses. This makes it a good option for many IT freelancers and support providers looking for a way to keep their business accounts in a genuinely open source solution with a strong SQL backend and a native Linux desktop interface.
I swear, I find out about some new event Ubuntu does every day. How is it that I've been around Ubuntu for as long as I have and I've only now heard about this?
Well, in any case, today is Ubuntu Community Appreciation Day, where we give thanks to the humans (remember, ubuntu means humanity!) that have so graciously donated their time to make Ubuntu a reality.
I have a lot of people to thank in the community. We have some really exceptional people about. I really feel like I could make the world's longest blog post just trying to list them all. Several folks already have!
Instead, I'll point out a major player in the community who is pretty unseen these days.
Phill Westside was a major contributor to Lubuntu. He was there when I first came to #lubuntu so many moons ago. His friendly, inviting demeanour was one of the things that kept me sticking around after my support request was met. Phill took it upon himself to encourage me just as he had with others and slowly I came to contribute more and more.
Sadly, some people in high-ranking positions in the community failed to see Phill's value, for whatever reason. I'm not sure I totally understand it, but I think the barrage of opinions that came from Jono Bacon's call for reform in Ubuntu governance may offer some hint. Phill is no longer an Ubuntu member and is rarely seen in the typical places in the community.
Yet he still helps out on #lubuntu, still helps with Lubuntu ISO testing, still reposts Lubuntu news on Facebook, still contributes to the Lubuntu mailing lists, still tries to help herd the cats as it were, though he's handed off titles to others (that's how I'm the Release Manager and Head of QA!). tl;dr, Phill is still a major contributor to Ubuntu.
Did I mention he's a great guy to hang out with, too? I've never met him face to face, but I'm sure if I did, I'd give him one heck of a big ole hug.
Today is Ubuntu Community Appreciation Day and I wanted to recognize several people who have helped me along my journey within the Ubuntu Community.
Elizabeth Krumbach Joseph
Lyz has been a friend for years. We met when I was just transitioning from using Windows to using Linux. The Ubuntu New York LoCo was holding its biannual release party at the Holiday Inn located in Waterloo, NY on November 8th, 2009. Lyz gave a presentation, “Who Uses and Contributes to Open Source Projects (And how you can too!)”, that day and helped serve as a guide for the New York LoCo team as it sought to become an approved LoCo team. Lyz is an amazing person who has given me advice over the last five years. She contributes her energies to the Ubuntu project with a commitment and passion that have both my respect and admiration.
Thank you for all you have done Lyz!
Jorge is the first ‘Ubuntu celebrity’ that I interacted with. When I was helping to organize FOSSCON at RIT in Rochester, NY I contacted Jorge to ask if he would attend and present at the conference. I think Jorge’s participation helped us attract attendees the first year and I was grateful that he was willing to attend. FOSSCON has become a successful conference under the guidance of my friend Jonathan Simpson. Jorge also encouraged me to apply for sponsorship to an Ubuntu Developer Summit which culminated in my being sponsored and attending my first UDS. Jorge is a person that is always willing to help others with great energy and a smile. He is an awesome contributor to the Ubuntu Community and I am thankful that I have met him in person.
Jorge you inspire us with your advice to Just Do It!
At my first UDS I was in awe of the people around me. They were brilliant, high-energy people committed to Ubuntu and open source. There was a fantastic energy and passion in every session I attended. While I had offered what thoughts I had and signed up to undertake work items in many sessions, I felt like a small fish in a sea of very big fish. It was Jono who took the time to let me know that he was impressed with my willingness to speak up, volunteer to undertake work and get things done. He made me feel as though my contributions were appreciated. It is an awesome feeling I will remember for the rest of my life. He inspired me that day to continue to contribute and to help others do the same.
Jono, you have my utmost respect for your ability to inspire people to take on important work and make the world a better place.
While many would thank Mark for his unique vision for Ubuntu or his massive contribution of money to fund the project, I would like to thank him for the personal touch he exhibits to members of the community. Mark took the time to autograph a picture for my young son who was impressed that I knew a person who had been in space. To this day my son tells his peers at school about the picture and keeps it on his night stand. I also remember a young man at his first UDS that had a great idea and wanted to present it to Mark. I mentioned this to Mark and he immediately made time to meet the young man and listened intently to his idea. The young man felt he had a limited ability to impact the project as a college student from Poland, but after speaking with Mark he was inspired and felt that he could make a difference in his local community and in the Ubuntu Project. To this day I am amazed at the passion to do good that I have seen Mark exhibit.
Thanks for creating the project Mark; you are truly amazing.
I have worked with Laura on the LoCo Council and on the Community Council, and she is a fantastically dedicated, hard-working person who is very passionate about Ubuntu LoCo Teams. She is an advocate for women in technology and open source. Laura has helped move many projects along and is one of the hardest-working people I have ever met. It is amazing how much work she does behind the scenes without ever seeking recognition or thanks.
Thank you, Laura, for all your hard work and dedication to the Ubuntu Community.
Brian is one of the first New York Ubuntu LoCo members I met. We met at Wegman’s in Rochester, NY on November 6th, 2008 with the intention of reviving the NY LoCo team. Over the next several years Brian played a key role in helping me expand the activities of the team. He helped organize the launch parties, presentations, irc meetings and other activities. Brian helped man many booths at local technology events and was instrumental in getting the team copies of CDs before we were eligible to receive them from Canonical.
Thank you Brian!
What a truly amazing person! Daniel is very thoughtful and understanding when dealing with important issues in the Ubuntu community. He takes on multiple tasks with ease and is always cheerful and energetic. He helps to keep the Community Council organized and on task. When Daniel contributes his thoughts they are always well thought out and of high value.
Daniel you are awesome my friend!
The Ubuntu Community is filled with unique, intelligent and amazing people. There is not enough space to mention everyone, but I truly feel enriched for having met many of you either in-person or online. Each and every one of you help make the Ubuntu Community amazing!
And again, I don’t know how to start a blog post. I believe that one of my weak points is that I don’t know how to start writing stuff. But meh, we’re here because it’s the Ubuntu Community Appreciation Day. And here I am, part of this huge community for more than three years. It’s been an awesome experience ever since I joined. And I am grateful to a whole bunch of people.
I know it may sound like a cliché, but seriously, listing all the people who I have met and contributed with in the community would be basically impossible for me. It would be a never-ending list! All I can say right now is that I am so, so thankful for crossing paths with so many of them. From developers, translators, designers and more, the Ubuntu community is such a diverse one, with people united by one thing: Ubuntu.
When I joined the community I was a kind-of disoriented 14-year-old guy. As time passed, the community helped me develop skills, from improving my English (Spanish is my native language, for those who didn’t know) to getting me started in programming (something I didn’t know anything about a couple of years ago!). And I’ve formed great friendships along the way.
Again, all I can say is I am forever grateful to all those people who I have worked with, and to those who I haven’t too. We are working on what’s the future of open computing, and none of this would be possible without you. Whether you contributed in the past or are still contributing to the community, rest assured that you have helped build this huge community.
Thank you. Sincerely, thank you.
Thank you maco/Mackenzie Morgan for getting me involved in Ubuntu Women and onto freenode.
Thank you akk/Akkana Peck, Pleia2/Lyz Joseph, Pendulum/Penelope Stow, belkinsa/Svetlana Belkin and so many more of the Ubuntu Women for being calm and competent, and energizing the effort to keep Ubuntu welcoming to all.
Thank you to my Kubuntu team, Riddell/Jonathan Riddell, apachelogger/Harald Sitter, shadeslayer/Rohan Garg, yofel/Philip Muscovak, ScottK/Scott Kitterman and sgclark/Scarlett Clark for your energy, intelligence and wonderful work. Your packaging and tooling makes it all happen. The great people who help users on the ML and in IRC and on the forums keep us going as well. And the folks who test, who are willing to break their systems so the rest of us don't have to: thank you!
There are so many people (some of the same ones!) to thank in KDE, but that's a separate blogpost. Your software keeps me working and online.
What a great community working together for the betterment of humanity.
Ubuntu: human kindness.
Elizabeth Krumbach Joseph
Elizabeth is a stellar community contributor who has provided solid leadership and mentorship to thousands of Ubuntu Contributors over the years. She is always available to lend an ear to a Community Contributor and provide advice. Her leadership through the Community Council has been amazing and she has always done what is in the best interest of the Community.
Charles is a friend of the Community and a long-time contributor who always provides excellent and sensible feedback as we have discussions in the community. He is among the few who will always call it how he sees it and always has the community’s best interest in mind. For me he was very helpful when I first started building communities in Ubuntu, sharing his own experiences and how to get through bureaucracy and do awesome.
Michael is a Canonical employee who started as a Community Contributor, and of all the employees I have met who work for Canonical, it is Michael who seems best able to balance his role at Canonical with his community contributions. He is always fair when dealing with contributors and has an uncanny ability to see things through a Community lens, which I think many at Canonical cannot. I appreciate his leadership on the Community Council.
Thanks again to all those who make Ubuntu one of the best Linux distros available for Desktop, Server and Cloud! You all rock!
In the light of Community Appreciation Day, I would like to thank everyone in the Ubuntu Community for doing a great job contributing to Ubuntu – from promoting it to fixing bugs, and from leading events to teaching others. There are two people and one group that I would like to really, really thank.
The first one is Elizabeth Krumbach Joseph of Ubuntu Women. She was the first one who interacted with me when I started last year. From that point on, she mentored me (not formally, but in an organic way) on how to do things within the Community, like how to reply on mailing lists in a way that is readable. She also supported me with the various ideas that I came up with.
Ubuntu Ohio Team’s very fine leader, Stephen Michael Kellat, is the next one. He mentored me on how to deal with the state of our LoCo and how to think in a different way on certain topics.
P.S. I would like to thank Michael Hall for his blog post.
When things are moving fast and there’s still a lot of work to do, it’s sometimes easy to forget to stop and take the time to say “thank you” to the people that are helping you and the rest of the community. So every November 20th we in Ubuntu have a Community Appreciation Day, to remind us all of the importance of those two little words. We should of course all be saying it every day, but having a reminder like this helps when things get busy.
Like so many who have already posted their appreciation have said, it would be impossible for me to thank everybody I want to thank. Even if I spent all day on this post, I wouldn’t be able to mention even half of them. So instead I’m going to highlight two people specifically.
First I want to thank Scarlett Clark from the Kubuntu community. In the lead up to this last Ubuntu Online Summit we didn’t have enough track leads on the Users track, which is one that I really wanted to see more active this time around. The track leads from the previous UOS couldn’t do it because of personal or work schedules, and as time was getting scarce I was really in a bind to find someone. I put out a general call for help in one of the Kubuntu IRC channels, and Scarlett was quick to volunteer. I really appreciated her enthusiasm then, and even more the work that she put in as a first-time track lead to help make the Users track a success. So thank you Scarlett.
Next, I really really want to say thank you to Svetlana Belkin, who seems to be contributing in almost every part of Ubuntu these days (including ones I barely know about, like Ubuntu Scientists). She was also a repeat track lead last UOS for the Community track, and has been contributing a lot of great feedback and ideas on ways to make our amazing community even better. Most importantly, in my opinion, is that she’s trying to re-start the Ubuntu Leadership team, which I think is needed now more than ever, and which I really want to become more active in once I get through with some deadline-bound work. I would encourage anybody else who is a leader in the community, or who wants to be one, to join her in that. And thank you, Svetlana, for everything that you do.
It is both a joy and a privilege to be able to work with people like Scarlett and Svetlana, and everybody else in the Ubuntu community. Today more than ever I am reminded about how lucky I am to be a part of it.
This is my first time participating in the Ubuntu Community Appreciation Day. I think it is a great idea to publicly acknowledge the work of others and thank them for their work to improve Ubuntu. After all, Ubuntu is a community where people come together to collaborate, have fun and bring technology to the masses in a humanly fashion.
Anyway, the person I would like to thank is Riccardo Padovani, whose contributions span several apps, like Reminders, Ubuntu Browser, Clock and Calculator, as well as various other personal projects. In particular, his work shows how one can get involved with the applications one uses daily and improve them. Riccardo has become a beacon of inspiration for others, including myself.
It is definitely a challenge to juggle University and open-source work, and by the looks of it, he seems to have achieved a perfect equilibrium.
Thanks Riccardo for everything and keep up the good work!