The year is coming to an end and I would encourage you all to consider making a tax-deductible donation (if you live in the U.S.) to one of the following great non-profits:
The Mozilla Foundation is a non-profit organization that promotes openness, innovation and participation on the Internet. We promote the values of an open Internet to the broader world. Mozilla is best known for the Firefox browser, but we advance our mission through other software projects, grants and engagement and education efforts.
The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development.
The ACLU is our nation’s guardian of liberty, working daily in courts, legislatures and communities to defend and preserve the individual rights and liberties that the Constitution and laws of the United States guarantee everyone in this country.
The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content, and to providing the full content of these wiki-based projects to the public free of charge. The Wikimedia Foundation operates some of the largest collaboratively edited reference projects in the world, including Wikipedia, a top-ten internet property.
Feeding America is committed to helping people in need, but we can’t do it without you. If you believe that no one should go hungry in America, take the pledge to help solve hunger.
ACF International, a global humanitarian organization committed to ending world hunger, works to save the lives of malnourished children while providing communities with access to safe water and sustainable solutions to hunger.
These six non-profits are just a few of the many causes worth supporting, but each plays a pivotal role in protecting the internet, defending civil liberties, educating people around the globe, or helping to reduce hunger.
Even if you cannot support one of these causes, consider giving this post a share to add visibility to your friends and family and help support these causes in the new year!
Last month I took up fishing as a hobby. It's a wonderful outdoor pastime and a great way to relax and unwind. One of the best things about fishing is that you don't need expensive equipment; I bought some amateur fishing gear (from Avito) to start with (three rods: 5 m, 270 cm and 240 cm).
I haven't caught any fish yet, but I have learned a lot since I started practicing, and I am still trying to find a good spot where I can fish and recharge my batteries on the weekend.
Recently, I have been involved in several discussions about leadership, community and projects: how they work together and how things get done.
I have also received a very long private email from a good friend I met online a month or two ago. That email was about the same topics.
So, as you can see, I've recently been engaged in discussions of this kind with more than one person.
I was thinking of sharing my own vision and thoughts about all this, and how I do things myself in the projects I'm part of:
- Kibo – see my tweet about it.
- ToriOS – saving very old machines from the trash.
- Ubuntu GNOME – an official flavour of Ubuntu I'm proud to be part of; I have earned my GNOME Membership and Ubuntu Membership while contributing voluntarily to the project.
- StartUbuntu – here is my latest post about it.
- Linux Padawan – a free service I'm willing to offer alongside Kibo, and a new project I couldn't resist and couldn't refuse to be part of.
- Other Secondary Projects.
I was a bit confused about how and where to start; this topic needs more than one post to cover the important aspects and provide the full picture.
Then I realized I should go back to my rules to help me figure out how to write about it and where to begin. And indeed, I got the idea.
One of the rules I do my best to live by is:
KISS – Keep It Simple and Short
And, to make life easier and save time and energy for everyone, I can put everything I have in mind into 18 super helpful, super useful, super inspirational and motivating minutes and share this video from YouTube:
If you cannot view the above video, click here to watch it on YouTube.
And, that is indeed My Ultimate Goal with my own projects (Kibo, ToriOS and StartUbuntu) and the projects I’m heavily contributing to (Ubuntu GNOME). Most likely, that would be My Ultimate Goal with anything in life. Did I mention that video was my endless inspiration and unlimited motivation?
Mission accomplished. Now, that is my answer for anyone who might be asking or wondering:
“What is your plan(s) or goal(s) about … project?”
Keep in mind, though, that we hold no magic wand. Things will never be done or built overnight. It takes time, and it needs a lot of effort and energy. It is not easy to reach that goal; that is why it is called The Ultimate Goal. However, it is not impossible to reach. It just takes time. And more important than time is what you want or set for yourself as a target or aim to reach.
Last but not least, another rule that I do like to follow and live by:
“Don’t aim for success if you want it; just do what you love and believe in, and it will come naturally.” – David Frost
Thank you for reading
On Saturday, December 13, 2014, I had the opportunity to attend a University of Minnesota Computer Science and Engineering presentation by David Parnas on the topic of Software Engineering: Why and What. Dr. Parnas has been a pioneer of Software Engineering since the 1960s. He presented "what Software Engineers need to know and what they need to be doing". His presentation covered the differences between Software Engineering and Computer Science. These are two terms whose differences I previously had trouble articulating, so to help me on my journey of mastery, I thought I would write about this topic.
Science and Engineering are fundamentally different activities. Science produces knowledge and Engineering produces products.
Computer Science is a body of knowledge about computers and their use. The Computer Science field is the scientific approach to computation and its applications. A Computer Scientist specializes in the theory of computation and design of computational systems.
Software Engineering is the multi-person development of multi-version software programs. Software Engineering is about the application of engineering to the design, development and maintenance of software. Software engineers produce large families of programs that require not only a mastery of programming but several other skills as well.
Dr. Parnas presented his list of skills a Software Engineer must know and challenged the audience to use it and extend it.

Software Engineering checklist
- Communicate precisely between developers and stakeholders.
- Communicate precisely among developers and others who will use the program.
- Design human-computer interfaces.
- Design and maintain multi-version software.
- Separate concerns.
- Use parameterization.
- Design software for portability.
- Design software for extension or contraction.
- Design software for reuse.
- Revise old programs.
- Perform software quality assurance.
- Develop secure software.
- Create and use models in system development.
- Specify, predict, analyze and evaluate performance.
- Be disciplined in development and maintenance.
- Use metrics in system development.
- Manage complex projects.
- Deal with concurrency.
- Understand and use non-determinacy.
- Apply mathematics to increase quality and efficiency.
All the capabilities on the list have several things in common. They are all subjects that require a deep level of understanding to get right. All of the skills involve some Computer Science and Mathematics. They are fundamental skills, not tied to any specific technology. The technology changes, but the core concepts of Software Engineering do not.
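Two of the checklist items, separating concerns and using parameterization, can be shown in miniature. The sketch below is my own illustration (the function names and data are made up, not from Dr. Parnas' talk): data access and presentation live in separate functions, and the presentation is parameterized rather than hard-coded.

```python
# Illustrative sketch (mine, not from Dr. Parnas' talk): two checklist items
# in miniature. Data access and presentation are separate concerns, and the
# presentation is parameterized instead of hard-coding formatting choices.

def get_scores():
    """Concern 1: data access. Could later read a file or a database
    without touching the presentation code."""
    return {"alice": 90, "bob": 85}

def format_scores(scores, sep=": ", line_end="\n"):
    """Concern 2: presentation, parameterized by separator and line ending."""
    return line_end.join(
        "%s%s%s" % (name, sep, score) for name, score in sorted(scores.items())
    )

print(format_scores(get_scores()))           # default presentation
print(format_scores(get_scores(), sep=","))  # same data, another format
```

Because the two concerns never touch each other's internals, either one can be revised (a new data source, a new output format) without rewriting the other.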
What resonated the most with me was the need for discipline. Engineers are not born with disciplined work habits, they have to be taught. Writing good software requires discipline in the entire software lifecycle.
Software maintenance requires even more discipline than the original development. One of Dr. Parnas' techniques for teaching Software Engineering is to have students analyze, optimize and maintain code that someone else wrote. Did any other Software Engineers get that kind of training in school?

To learn to do, you must do. Sometimes experience is the greatest teacher.
Maintaining a large software project (that I did not write) was one of the most difficult projects I worked on in my professional career. Maintaining software you did not write is incredibly difficult because you have to learn what the software is supposed to do, which is often not what it actually does. With little documentation to go on, my team was forced to read the mostly uncommented code. I often had the impulse (and pressure from management) to “ship” a quick fix to a customer problem, but learned that all changes, no matter how small, had to be carefully considered and tested before they could be released. Bugs or errors in the field are not just bad; they can be very costly for a company to fix. It takes discipline to maintain large software products because a fix to one problem could create another somewhere else. This maintenance project changed how I write software, because I did not want other people to have the same difficult experience that we had. To this day I obsessively comment any code I write for software projects.
Thanks to Dr. David Parnas for this list. I will try to use this information about Software Engineering on my continuing journey toward mastery.
Your phone's handy weather app depends upon the goodwill of a for-profit data provider, and their often-opaque API (14 degrees? Where was it observed? When?). That's a shame, because most data collection is paid for by you, the taxpayer.
Let's take the profit-from-data out of that system. Several projects have tried to do this before (including libgweather), but each tried to do too much and replicate the one-data-provider-to-rule-them-all model. And most ran aground on that complexity.
Here's where you come in

One afternoon, look up your weather service's online data sources and knock together a script to publish them in a uniform format.
Here's the one I did for the United States:
Worldwide METAR observation sites
US DOD and NOAA weather radars
US Forecast/Alert zones
- I'm looking for data on non-METAR (non-airport) observation stations, weather radar sites, and whatever forecast and alert areas your country uses.
- Use the same format I did: Lat (deg.decimal), Lon (deg.decimal), Location Code, Long Name. Use the original source's data, even if it's wrong. Areas and zones should use the lat/lon of the centroid.
- The format is simple CSV, easy to parse and publish.
- Publish on GitHub, for easy version control, permalinking, free storage, and uptime.
- Here's the key: your data must be automatically updated. Your program must regularly check the original source and update your published version (I did this with a cron job). Publish both the data and your method on GitHub.
- When you have published, drop me an e-mail so I can link to your data and source.
If you do it right, it takes one afternoon to set up your country's self-updating database. Not a bad little project: you learn a little, and you help make the world a better place.
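As a sketch of what such a publishing script might look like, here is a minimal Python version that emits the four-column CSV format described above. The station list is made up for illustration; a real script would download and parse your weather service's own source file instead.

```python
# Sketch of a publishing script for the CSV format described above:
# Lat (deg.decimal), Lon (deg.decimal), Location Code, Long Name.
# The station list is illustrative; a real script would fetch and parse
# the national weather service's source file instead.
import csv
import io

def write_stations(stations, fileobj):
    """Write (lat, lon, code, name) tuples as simple CSV rows."""
    writer = csv.writer(fileobj)
    for lat, lon, code, name in stations:
        writer.writerow(["%.4f" % lat, "%.4f" % lon, code, name])

stations = [
    (44.8831, -93.2289, "KMSP", "Minneapolis-St Paul International Airport"),
    (51.4775, -0.4614, "EGLL", "London Heathrow Airport"),
]

buf = io.StringIO()
write_stations(stations, buf)
print(buf.getvalue())
```

Run from a cron job, the same function can rewrite the published file whenever the upstream source changes, which keeps the GitHub copy self-updating.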
My country doesn't have online weather data

Sorry to hear that. You're missing some great stuff.
If you live in a country with a reasonably free press and reasonably fair elections, make a stink about it. You are probably already paying for it through taxes, why can't you have it?
If you live somewhere else, then next time you have a revolution or coup, add 'open data' to the long list of needed reforms.
This is one fundamental tool that other free-weather projects have lacked. And any weather project can use this.
The global database of locations is really small by most database standards. Small enough to easily fit on a phone. Small enough to be bundled with apps that can request data directly from original sources...once they can look up the correct source to use.
How will this change the world?

It's about simple tools that make it easy to create free, cool software.
And it's about ensuring free access to data you already paid for.
Not bad for one afternoon's contribution.
Use the issue tracker. Having an open issue means there is always something to point to, with all the history of the change, that won't get lost. Submit patches using the appropriate method (merge proposals, pull requests, attachments in the issue tracker, etc.).
Sell your idea. The change is important to you, but the maintainers may not think so. You may be a 1% use case that doesn't seem worth supporting. If the change fixes a bug, describe exactly how to reproduce the issue and how serious it is. If the change is a new feature, show how it is useful.
Always follow the existing coding style. Even if you don't like it. If the existing code uses tabs, then use them too. Match brace style. If the existing code is inconsistent, match the code nearest to the changes you are making.
Make your change as small as possible. Put yourself in the mind of the reviewer. The longer the patch the more time it will take to review (and the less appealing it will be to do). You can always follow up later with more changes. First time contributors need more review - over time you can propose bigger changes and the reviewers can trust you more.
Read your patch before submitting it. You will often find bits you should have removed (whitespace, unrelated variable name changes, debugging code).
Be patient. It's OK to check back on progress - your change might have been forgotten about (everyone gets busy). Ask if there's anything more you can do to make it easier to accept.
We install the RAVEfinity theme and then set our preferred color. If we also want to change the colors of specific folders, we install Folder Color.
The installation/configuration method differs depending on whether you use Ubuntu, Ubuntu GNOME or Ubuntu MATE:
- Installation instructions for Ubuntu (with Unity)
- Instructions for Ubuntu MATE
- Instructions for Ubuntu GNOME
On Ubuntu MATE
On Ubuntu GNOME
I’m very happy that folks took notes during and after the meeting to bring up their ideas, thoughts, concerns and plans. It got a bit unwieldy, so Elfy put up a pad which summarises it and is meant to discuss actions and proposals.
Today we are going to have a meeting to discuss what’s on the “actions” pad. That’s why I thought it’d be handy to put together a bit of a summary of what people generally brought up. They’re not my thoughts, I’m just putting them up for further discussion.
- Feeling that people innovate *with* Ubuntu, not *in* Ubuntu.
- Perception of contributor drop in “older” parts of the community.
- Less activity at UDS/vUDS/UOS events (this was discussed at UOS too; maybe we need a committee to find a new vision for Ubuntu Community Planning?)
- Less activity in LoCos (lacking a sense of purpose?)
- No drop in members/developers.
- Less activity in Canonical-led projects.
- We don’t spend marketing money on social media. Build a pavement online.
- Downloading a CD image is too much of a barrier for many.
- Our “community infrastructure” did not scale with the amount of users.
- Some discussion about it being hard becoming a LoCo team. Bureaucracy from the LoCo Council.
- We don’t have enough time to train newcomers.
- Language barriers make it hard for some to get involved.
- Canonical does a bad job announcing their presence at events.
- Why are fewer people innovating in Ubuntu? Is Canonical driving too much of Ubuntu?
- Why aren’t more folks stepping up into leadership positions? Mentoring? Lack of opportunities? More delegation? Do leaders just come in and lead because they’re interested?
- Lack of planning? Do we re-plan things at UOS events, because some stuff never gets done? Need more follow-through? More assessment?
- community.ubuntu.com: More clearly indicate Canonical-led projects? Detail active projects, with point of contact, etc? Clean up moribund projects.
- Make Ubuntu events more about “doing things with Ubuntu”?
- Ubuntu Leadership Mentoring programme.
- Form more of an Ubuntu ecosystem, allowing people to earn money with Ubuntu.
I have chaired many meetings and attended many more, but the Kibo Team's meeting this morning was great and very useful.
We actually had an IRC meeting before this one (1st of Dec, 2014), but it wasn't on Google Hangouts on Air. Those who have worked with me know for a fact that I much prefer visual, face-to-face meetings to IRC meetings. Why? Because we can see and talk to each other; that is very important IMHO.
Today's meeting made me even more excited about Kibo. I have always dreamed of working with the people I have met online, those who are part of the huge Ubuntu Family. However, I had no idea when or how to do that. Finally, it is happening, and I'm very thankful and super happy.
“Keep your dream alive because it might come true one day.”
And my dream wasn't just about working with the people of the Ubuntu Family, but about working on something I love and believe in. Even better, Kibo is a business project inspired by Ubuntu's philosophy. What could be better than all that?
So, I'd like to thank all those who attended the meeting; you guys made my day, so thanks a lot.
Things we have discussed:
(1) Introducing the Kibo Board and selecting its members.
The structure of the board will be like this:
Founder + Regional Coordinators (Managers) + Department Coordinators (Managers) = Kibo’s Board
In the next meeting, hopefully soon, we shall distribute the main tasks to each member of the board.
Mainly, the board – for now – is in charge of:
- Organizing and Coordinating
- Voting and Decision Making
Because Kibo is inspired by:
“I am because we are”
“All of us are smarter than anyone of us”
I decided not to lead the project alone but share the leadership with my team, even though Kibo is my own project.
(2) We have narrowed down the services that Kibo will offer from 12 to only 6, of which one is free and one is secondary. So, four main services to begin with; in the future, we could add more.
Previously, we had:
- Development and Coding
- Web Design
- Graphics Design
- Software QA (Quality Assurance)
- System Administration and Servers
- Technical Support
- Marketing and Social Media
- Human Resources and Recruitment
- Call Center and Customer Service
- Project Management and Planning
Now, we have:
- Web Design
- Technical Support
- Marketing and Social Media
- Project Management and Planning
- Training – Linux Padawan = Free Service
- Documentation = secondary
(3) Kibo's website should be fine now; I got access back (to add people) and hopefully there will be no more issues.
(4) A folder has been created on Google Drive for Kibo's website, containing 6 documents, each a draft of the text content we shall put on the pages of our website. People need to be invited to these documents to add their suggestions; then all we need to do is review the drafts and prepare the final version, which will be published on our website.
(5) Alfredo (Ubuntu GNOME Artwork Lead) has sent drafts of Kibo's logo; myself (Ali), Svetlana Belkin and Gustavo Silva liked one of them, and an email was sent to the list:
This is not really the final logo yet, but it looks like we are very close to having one!
(6) Two new email accounts have been created today:
marketing AT kibo DOT computer
To be used to contact 3rd parties and communicate to the world (send emails)
hr AT kibo DOT computer
Which will be used for HR and Recruitment (receive emails)
And, of course we previously had:
into AT kibo DOT computer
The board members will share the details of these emails.
(7) There were other ideas we discussed off the record (because we didn't want the recorded meeting to run longer than 60 minutes); these will be discussed in more detail in upcoming meetings.
(8) We loved the idea of holding Google Hangout meetings, so we shall do that more often, maybe 3-4 times per week.
(9) We could also have the Meetingbot on our IRC channel (#kibo on freenode) to keep a logged text meeting, just in case someone can't make it to the hangout for whatever reason but can join the IRC channel. That's a suggestion for the next meetings.
(10) Social Media Channels have been created:
That is all for now, I guess
Looking forward to more productive meetings soon!
My door and Kibo's door will always be open to anyone who would like to join.
Thank you for reading!
In this week’s show:-
- We take a look at what’s been happening in the news:
- We take a look at what’s been happening in the community:
We’ll be back next week for the last episode of the series, when we’ll be talking to Michael Hall and reviewing last year’s predictions!
Please send your comments and suggestions to: email@example.com
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
After I implemented infinite scrolling in uReadIt 2.0, I found that after a couple of page loads the UI would start to be sluggish. It’s not surprising, considering the number of components it kept adding to the ListView. But in order to keep the UI consistent, I couldn’t get rid of those items, because I wanted to be able to scroll back through old ones. What I needed was a way to make QML ignore them when they weren’t actually being displayed.
Today I found myself reading about the QML Scene Graph, which led me to realize that QML wouldn't spend time and resources trying to render an item if it knew ahead of time that there wasn't anything to render. So I made a one-line change to my MultiColumnListView to set the opacity of off-screen components to 0.
I also found these cool ways to visualize what QML is doing in terms of drawing, which are very helpful when it comes to optimizing, and let me verify that my change was doing what I expected. I’m pretty sure Florian Boucault has shown me this before, but I had forgotten how he did it.
Since its announcement, there appears to have been some confusion and concern about lxd, how it relates to lxc, and whether it will be taking away from lxc development.
When lxc was first started around 2007, it was mainly a userspace tool – some C code and some shell scripts – to exercise the in-development kernel features intended for container and checkpoint-restart functionality. The lxc command line experience, after all these years, is quite set in stone. While it is not ideal (the mandatory -n annoys a lot of people), it has served us very well for a long time.
A few years ago, we took all of the main container-related functions which could be done with various commands, and exported them through the new ‘lxc API’. For instance, lxc-create had been a script, and lxc-start and lxc-execute were separate C programs. The new lxc ‘API’ was premised around a container object with methods, including ‘create’ and ‘start’, for the common operations.
From the start we had in mind at least python bindings to the API, and in quick order bindings came into being for C, python3, python2, go, lua, haskell, and more, allowing container administration from these languages without having to shell out to the lxc commands. So now code running on the same machine can manipulate containers. But we still have the arguably crufty command line language, and the API is local only.
lxd addresses those two issues. First, it presents a REST API for manipulating containers, thereby exporting container management over the network. Secondly, it offers a command line client using the REST API to administer containers across remote hosts. The command line API is basically what we came up with when we asked “what, after years of working with containers, would be the perfect, intuitive, most concise and still flexible CLI we could imagine?” For handling remote containers it borrows some good parts of the git remote API. (I say “we” here, but really the inestimable stgraber designed the CLI). This allows us to leave the legacy lxc API as-is for administering local containers (“lxc-create”, “lxc-start”, etc), while giving us a nicer API and easier administration using the new CLI (“lxc start c1”, “lxc start images:ubuntu/trusty/amd64 host2:new-container”).
Above all, lxd exports a new interface over the network, but entirely wrapped around lxc. So lxc will not be going away, and focus on lxd will mean further improvements for lxc, not a shift away from lxc.
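To make the REST idea concrete, here is a hypothetical Python sketch of the client side. The JSON shape below (a 'metadata' list of container URLs) is an assumption for illustration; lxd's actual wire format may differ.

```python
# Hypothetical sketch of the client side of a REST container API.
# The JSON shape below (a "metadata" list of container URLs) is an
# assumption for illustration; lxd's actual wire format may differ.
import json

# A canned response body standing in for what a server might return.
sample_response = json.dumps({
    "type": "sync",
    "metadata": ["/1.0/containers/c1", "/1.0/containers/web"],
})

def container_names(body):
    """Extract container names from a list-containers response body."""
    data = json.loads(body)
    return [path.rsplit("/", 1)[-1] for path in data["metadata"]]

print(container_names(sample_response))  # ['c1', 'web']
```

The point is that any language with an HTTP client and a JSON parser can now administer containers remotely, without shelling out to the lxc commands.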
Now that we've got that covered, let’s look deeper into a very cool feature - the ability to customize the instance and automate its startup and configuration. For example, at instance creation time you can specify a snappy application to be installed. cloud-init is what allows you to do this, and it is installed inside the Snappy image. cloud-init receives this information from the user in the form of 'user-data'.
One of the formats that can be fed to cloud-init is called ‘cloud-config’: yaml-formatted data that is interpreted and acted on by cloud-init. For Snappy, we’ve added a couple of specific configuration values, included under the top-level 'snappy' key.
- ssh_enabled: determines whether the 'ssh' service is started. By default, ssh is not enabled.
- packages: A list of snappy packages to install on first boot. Items in this list are snappy package names.
- runcmd: A list of commands run after boot has been completed. Commands are run as root. Each entry in the list can be a string or a list. If the entry is a string, it is interpreted by 'sh'. If it is a list, it is executed as a command and arguments without shell interpretation.
- ssh_authorized_keys: This is a list of strings. Each key present will be put into the default user's ssh authorized keys file. Note that ssh authorized keys are also accepted via the cloud’s metadata service.
- write_files: this allows you to write content to the filesystem. The module still works, but be aware that much of the filesystem is read-only; specifically, writing to locations that are not writable will fail.
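The string-versus-list distinction in runcmd mirrors the familiar shell-versus-exec split. As an illustration (in Python rather than cloud-init itself), subprocess shows the same behaviour:

```python
# Python illustration (not cloud-init itself) of runcmd's two entry forms.
# A string entry is handed to 'sh', so shell syntax like $HOME is expanded;
# a list entry is executed directly, with no shell interpretation.
import subprocess

# String form: the shell expands $HOME.
as_string = subprocess.run(
    "echo $HOME", shell=True, capture_output=True, text=True
).stdout.strip()

# List form: no shell involved, so the argument stays literally "$HOME".
as_list = subprocess.run(
    ["echo", "$HOME"], capture_output=True, text=True
).stdout.strip()

print(repr(as_string))  # the actual home directory path
print(repr(as_list))    # '$HOME'
```

Use the list form when your arguments contain spaces or shell metacharacters you do not want interpreted.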
Example Cloud Config

It's always easiest to start from a working example. Below is one that demonstrates the usage of the config options listed above. Please note that user data intended to be consumed as cloud-config must contain the first line '#cloud-config'.
#cloud-config
snappy:
  ssh_enabled: True
  packages:
    - xkcd-webserver
write_files:
  - content: |
      #!/bin/sh
      echo "==== Hello Snappy! It is now $(date -R) ===="
    path: /writable/greet
    permissions: '0755'
runcmd:
  - /writable/greet | tee /run/hello.log
$ uvt-kvm create --wait --add-user-data=my-config.yaml snappy1 release=devel
Our user-data instructed cloud-init to do a number of different things. First, it wrote a file via 'write_files' to a writable space on disk, and then executed that file with 'runcmd'. Let's verify that was done:
$ uvt-kvm ssh snappy1 cat /run/hello.log
==== Hello Snappy! It is now Thu, 11 Dec 2014 18:16:34 +0000 ====
It also instructed cloud-init to install the Snappy 'xkcd-webserver' application.
$ uvt-kvm ssh snappy1 snappy versions
Part Tag Installed Available Fingerprint Active
ubuntu-core edge 141 - 7f068cb4fa876c *
xkcd-webserver edge 0.3.1 - 3a9152b8bff494 *
There we can see that xkcd-webserver was installed; let's check that it is running:
$ uvt-kvm ip snappy1
$ wget -O - --quiet http://192.168.122.80/ | grep "<title>"
Launching on Azure

The same user-data listed above also works on Microsoft Azure. Follow the instructions for setting up the azure command line tools, then launch the instance and provide the '--custom-data' flag. A full command line might look like:
$ azure vm create snappy-test $imgid ubuntu \
--location "North Europe" --no-ssh-password \
--ssh-cert ~/.ssh/azure_pub.pem --ssh \
Have fun playing with cloud-init!
On the behalf of the Community Council I would like to congratulate and welcome our newly appointed (and newly renewed) members to the LoCo Council:
- Bhavani Shankar – https://launchpad.net/~bhavi (incumbent)
- Nathan Haines – https://launchpad.net/~nhaines
Thanks to everyone who participated in this recent call for nominees and continue to provide support for LoCo teams worldwide.
Originally posted to the loco-contacts mailing list on Thu Dec 11 19:10:02 UTC 2014 by Elizabeth K. Joseph
In November, 42.5 work hours were equally split among 3 paid contributors. Their reports are available:
- Thorsten Alteholz did his share as usual.
- Raphaël Hertzog worked 18 hours (catching up the remaining 4 hours of October).
- Holger Levsen did his share but did not manage to catch up with the backlog of the previous months. As such, his unused work hours have been redistributed among the other contributors for the month of December.
Last month we mentioned the possibility of recruiting more paid contributors to better share the workload, and this has already happened: Ben Hutchings and Mike Gabriel have joined the list of paid contributors.
Ben, as a kernel maintainer, will obviously take care of releasing Linux security updates. We are glad to have him on board, because backporting kernel fixes requires skills that nobody else within the team of paid contributors had.

Evolution of the situation
Compared to last month, the number of paid work hours has barely increased (we are at 45.7 hours per month), but we are in the process of adding a few more sponsors: Roche Diagnostics International AG, Misal-System, and Bitfolk LTD. We are also still in contact with a couple of other companies which have announced their willingness to contribute but are waiting for the new fiscal year.
But even with those new sponsors, we still have some way to go to reach our minimal goal of funding the equivalent of a half-time position. So consider asking your company representative to join this project!
In terms of security updates waiting to be handled, the situation looks better than last month: the dla-needed.txt file lists 27 packages awaiting an update (6 fewer than last month), and the list of open vulnerabilities in Squeeze shows about 58 affected packages in total. Like last month, we’re a bit behind on CVE triaging, and there are still many packages using SSLv3 for which we have no clear plan (in response to the POODLE issues).
The good side is that even though the kernel update cost Holger and Raphaël a large chunk of time, we still managed to further reduce the backlog of security issues.

Thanks to our sponsors
- Gold sponsors:
- Silver sponsors:
- AD&D – David Ayers – IntarS Austria
- Domeneshop AS
- Trollweb Solutions
- Université Lille 3
- Bronze sponsors:
A couple of months ago, I re-introduced an old friend -- Ubuntu JeOS (Just enough OS) -- the smallest (merely 63MB compressed!) functional OS image that we can still call “Ubuntu”. In fact, we call it Ubuntu Core.
That post was a prelude to something we’ve been actively developing at Canonical for most of 2014 -- Snappy Ubuntu Core! Snappy Ubuntu combines the best of the ground-breaking image-based Ubuntu remix known as Ubuntu Touch for phones and tablets with the base Ubuntu server operating system trusted by millions of instances in the cloud.
Snappy introduces transactional updates and atomic, image based workflows -- old ideas implemented in databases for decades -- adapted to Ubuntu cloud and server ecosystems for the emerging cloud design patterns known as microservice architectures.
The underlying, base operating system is a very lean Ubuntu Core installation, running on a read-only system partition, much like your iOS, Android, or Ubuntu phone. One or more “frameworks” can be installed through the snappy command, which is an adaptation of the click packaging system we developed for the Ubuntu Phone. Perhaps the best sample framework is Docker. Applications are also packaged and installed using snappy, but apps run within frameworks. This means that any of the thousands of Docker images available in DockerHub are trivially installable as snap packages, running on the Docker framework in Snappy Ubuntu.
Take Snappy for a Drive
You can try Snappy for yourself in minutes!
You can download Snappy and launch it in a local virtual machine like this:
$ wget http://cdimage.ubuntu.com/ubuntu-core/preview/ubuntu-core-alpha-01.img
$ kvm -m 512 -redir :2222::22 -redir :4443::443 ubuntu-core-alpha-01.img
Then, SSH into it with password 'ubuntu':
$ ssh -p 2222 ubuntu@localhost
At this point, you might want to poke around the system. Take a look at the mount points, and perhaps try to touch or modify some files.
$ sudo rm /sbin/init
rm: cannot remove ‘/sbin/init’: Permission denied
$ sudo touch /foo
touch: cannot touch ‘/foo’: Permission denied
$ apt-get install docker
apt-get: command not found
Rather, let's have a look at the new snappy package manager:
$ sudo snappy --help
And now, let’s install the Docker framework:
$ sudo snappy install docker
At this point, we can do essentially anything available in the Docker ecosystem!
We’ve created some sample Snappy apps using existing Docker containers. As an example, let’s install OwnCloud:
$ sudo snappy install owncloud
This will take a little while to install, but eventually, you can point a browser at your own private OwnCloud image, running within a Docker container, on your brand new Ubuntu Snappy system.
We can also update the entire system with a simple command and a reboot:
$ sudo snappy versions
$ sudo snappy update
$ sudo reboot
And we can roll back to the previous version!
$ sudo snappy rollback
$ sudo reboot
Here's a short screencast of all of the above...
While the downloadable image is available for your local testing today, you will very soon be able to launch Snappy Ubuntu instances in your favorite public (Azure, GCE, AWS) and private clouds (OpenStack).
And the thing is, ladies and gentlemen, you only have to look at the Numix community on Google+ and see how it buzzes with a multitude of screenshots of desktops that look like science fiction: modern, attractive and functional :)
Numix + Unity
Awesome! Want to try it on Ubuntu? Open a Terminal.
To install Numix:
sudo add-apt-repository ppa:numix/ppa
sudo apt-get update
sudo apt-get install numix-gtk-theme numix-icon-theme numix-icon-theme-circle
To enable Numix:
gsettings set org.gnome.desktop.interface gtk-theme "Numix"
gsettings set org.gnome.desktop.wm.preferences theme "Numix"
gsettings set org.gnome.desktop.interface icon-theme "Numix-Circle"
gsettings set com.canonical.desktop.interface scrollbar-mode normal
To revert and re-enable the default theme:
gsettings set org.gnome.desktop.interface gtk-theme "Ambiance"
gsettings set org.gnome.desktop.wm.preferences theme "Ambiance"
gsettings set org.gnome.desktop.interface icon-theme "ubuntu-mono-dark"
gsettings set com.canonical.desktop.interface scrollbar-mode overlay-auto
Perhaps Ubuntu has found a market niche in which Unity may well be king :)
Setting up a local Snappy Ubuntu Core environment with uvtool
As I’ve already mentioned, Ubuntu has a very simple set of tools for creating virtual machines from cloud images, called 'uvtool'. Uvtool offers an easy way to bring up images on your system in a KVM environment. Before we use uvtool to get Snappy on your local environment, you’ll need to install the special version that has Snappy support added to it:
$ sudo apt-add-repository ppa:snappy-dev/tools
$ sudo apt-get update
$ sudo apt-get install uvtool
$ newgrp libvirtd
You only need to run 'newgrp libvirtd' during the initial setup, and only if you were not already in the libvirtd group, which you can check by running the 'groups' command. A reboot or logout would have the same effect.
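If you want to script that membership check rather than eyeball the 'groups' output, a small sketch (using only standard tools; the group name is the libvirtd group from above):

```shell
#!/bin/sh
# Print whether the current user is already in the libvirtd group.
# 'id -nG' lists the user's group names separated by spaces.
if id -nG | tr ' ' '\n' | grep -qx libvirtd; then
    echo "already in libvirtd group"
else
    echo "not in libvirtd group; run: newgrp libvirtd (or log out and back in)"
fi
```

The `grep -qx` ensures we match the whole group name, so a group like `libvirtd-admin` would not count as a false positive.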
uvtool uses ssh key authorization so that you can connect to your instances without being prompted for a password. If you do not have an ssh key in '~/.ssh/id_rsa.pub', you can create one now with:
$ ssh-keygen -t rsa
We’re ready to roll. Let’s download the images:
$ uvt-simplestreams-libvirt sync --snappy flavor=core release=devel
This will download a pre-made cloud image of the latest Snappy Core release from http://cloud-images.ubuntu.com/snappy/. It will download about 110 MB, so be prepared to wait a little bit.
Now let’s start up an instance called 'snappy-test':
$ uvt-kvm create --wait snappy-test flavor=core
This will do the magic of setting up a libvirt domain, starting it, and waiting for it to boot (via the --wait flag). Time to SSH into it:
$ uvt-kvm ssh snappy-test
You now have a Snappy image that you’re SSH’d into.
If you want to SSH in manually, or test that a service you installed with snappy (such as the xkcd-webserver example) works, you can get the IP address of the system with:
$ uvt-kvm ip snappy-test
When you're done playing, just destroy the instance with:
$ uvt-kvm destroy snappy-test