Newperson speaks up, I think for the first time, wondering when a new project result will be put to use, and offering a possible sample.
Longtime Devel speaks up, asking rather angry questions about how the old symbol came to be displaced.
Another Oldtimer speaks up defending the symbol, accusing Longtime Devel of being out of touch.
And on. And on. The listowners don't redirect the discussion, and when questions are asked, they are answered angrily.
Newperson probably has departed by this point.
This seems like a small occurrence, but it is bad for every single participant, and each bystander has the power to change the conversation at each point.
This blog is a call for each of us to think about our power to influence the community spaces we inhabit, to exercise leadership, to become a catalyst for dialog, to open up trust. When I was first asked to become an IRC channel operator, I was asked to read the Freenode Philosophy: Catalysts. Whether or not you use IRC, I recommend reading this page to change your thinking about how you interact with others in your free software project. In fact, these ways of thinking about personal interaction would transform business, education and politics if put into wide use.
We know that bullying in schools can be brought to a stop by bystanders who show the courage to immediately speak up on behalf of the victim, and walk away from the confrontation. While I don't want to label those who use abusive language as bullies, we can transform tense situations in similar ways by speaking up in a positive, calm manner, as outlined in the Catalyst page.
Labeling people as trolls doesn't defuse the situation, or create an atmosphere of trust and dialog.
Please folks, if you are in an IRC channel, on a list, or help out on a forum: read the Catalyst page, and remind yourself often to be the change you want to see in the world. You don't need to be an op, a listowner, or a moderator, to be a leader; bloom where you are! Our Codes of Conduct aren't bludgeons to be used against evildoers; rather they are guides to our everyday interaction with one another.
Yippee guys, I'm off from Ubuntu contributions from now until 14th June for exams. My exams start on 3rd June.
I've been hacking on some static analysis stuff for debuild.me, and I've been involved in a multi-year-long yak shaving exercise. As today's fun part, I wrote python-schroot to help run commands in a schroot chroot (say that 10 times fast!)
After a while, I got some neat stuff working. Here's an honest example:

    from schroot import schroot

    with schroot('unstable-amd64') as chroot:
        chroot.copy('/etc/issue', '/etc/issue', user='root')
This will copy a file (/etc/issue) from the “host” system into the schroot chroot. Neat!
Now, to run something:

    with schroot('unstable-amd64') as chroot:
        out, err, ret = chroot.run("whoami")
        print(out)
Then, in an effort to make a DSL, I set out to create the following syntax:

    with schroot('unstable-amd64') as chroot:
        "apt-get update" in chroot
but, hit some issues with implementing it, and got the following to work:

    with schroot('unstable-amd64') as chroot:
        "apt-get update" > chroot // "root"  # apt-get update as root
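For the curious, here is one way that kind of syntax can be wired up with operator overloading. To be clear, this is just a sketch of the general trick, not the actual python-schroot implementation, and the call out to the schroot command-line tool is only there to make the example self-contained:

    import subprocess


    class UserBoundChroot(object):
        """A chroot paired with the user a command should run as."""

        def __init__(self, chroot_name, user):
            self.chroot_name = chroot_name
            self.user = user

        # '"cmd" > bound' falls back to the reflected comparison bound < "cmd",
        # so __lt__ is where the command actually gets executed.
        def __lt__(self, command):
            argv = ['schroot', '-c', self.chroot_name]
            if self.user:
                argv += ['-u', self.user]
            argv += ['--', 'sh', '-c', command]
            return subprocess.call(argv)


    class Chroot(object):
        def __init__(self, name):
            self.name = name

        # 'chroot // "root"' binds a user; // is evaluated before >, so the whole
        # expression reads '"cmd" > (chroot // "root")'.
        def __floordiv__(self, user):
            return UserBoundChroot(self.name, user)

        # '"cmd" > chroot' with no user runs as the default schroot user.
        def __lt__(self, command):
            return UserBoundChroot(self.name, None) < command


    "apt-get update" > Chroot('unstable-amd64') // "root"

The precedence is what makes it read naturally: // binds tighter than >, and because str doesn't know how to compare itself to these objects, Python hands the comparison over to their __lt__ methods.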
In this week’s show:-
- We take a look at what’s been happening in the news:
- Ubuntu will be supporting the 3.8 Linux kernel until August 2014
- The International Space Station is migrating its laptops from Windows to Linux
- There have been some security problems in the Linux kernel that are being fixed
- Yahoo! have acquired Tumblr
- It seems that Microsoft is accessing Skype chats
- We catch up with what’s happening in the Ubuntu community:
- Ubuntu.com is getting an update to ensure it allows access to all parts of the project
- You can win a Dell XPS 13 Ubuntu Edition ultrabook…if you go to Russia or the Ukraine and take a photo of a billboard
- Ubuntu Brainstorm will be shut down
- Jono Bacon has blogged a summary of what happened at the most recent UDS
- Ubuntu SDK should be ready to use by October this year
- And we mention some events:
We’ll be back in one week with an interview with Sean Tilley of Diaspora. In the meantime, send us your feedback!
Please send your comments and suggestions to: email@example.com
Join us on IRC in #ubuntu-uk-podcast on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow our twitter feed http://twitter.com/uupc
Find our Facebook Fan Page
Follow us on Google Plus
Leave us some segment ideas on the Etherpad
Centre for Internet and Society celebrated their 5-year anniversary with an exhibition at their Bangalore and Delhi offices and a series of talks in Bangalore. I was there on Tuesday and managed to spend some time at the exhibition and attend the talks.
The exhibition showed off some of the work that CIS has been doing along with the work of several independent artists. The bits that have stuck in my memory are Tara Kelton's work as well as Sharath's.
Later in the day, Lawrence Liang talked about the Encyclopedia of Indian cinema. It was a very interesting talk, especially for me, since it encompasses open data, open source software, and copyright issues! A convergence of a lot of my interests :) Lawrence talked about what they've built, the problems they've faced, and how the internet as a medium for a film encyclopedia is very powerful, but is limited by the legal issues surrounding copyright laws.
On that note, I’ll close with this video about copyright.
I know Disney is great, but I’m not sure I like them as much after this video.
I have been to London a few times, but always on business without time for sightseeing, so I decided to change that.
Day one
After a few hours' trip I landed at London Gatwick airport. Some say it's the worst of the five, but it was not so bad. Why there? Because I could, and I had already used Stansted, Luton and Heathrow (I plan to use the City one next time). A short trip to the city and hello Victoria station — long time no see…
I bought an Oyster card to use public transport the easy way and took a metro train to the hotel. Nothing fancy — just a cheap (£65 per night) hotel without any extras (but with working free WiFi).
I unpacked only the things I needed and went to the city centre. The Victoria monument, Buckingham Palace, the Mall etc. I more or less followed the most popular trip from the "Trip Advisor" application.
I went to the Thames, crossed one of the bridges, looked at the London Eye (and decided to skip it), and then Big Ben and Westminster Abbey were next. I considered returning to the Abbey the next day but later decided against it.
I grabbed some food and went to sleep early, as it was the third day in a row that I had woken up around 5:00.
Day two
Thursday… I skipped Westminster Abbey and went on foot to the British Museum. I met Mark Brown on the way and we had a good time looking at all those things which the British Empire had stolen from all around the world. We didn't manage to find the Britain sections.
After lunch I went to the Forbidden Planet store. And sank there for quite a long time. Then I went back to buy two more books. This place was amazing…
They had stuff related to movies, games, TV series – figurines, key chains, t-shirts, toys, rings/jewellery, helmets, weapons and more… Some from limited editions. But when I wondered "is that all?" I went to the basement. And sank.
Comics, books, movies, TV series, manga, anime, photo albums and more. Books about movies, books which movies were based on (and vice versa). "Darth Vader's princess" and "Darth Vader and his son" were there (£9 each), "Simon's cat" books which my daughter would love (so I bought one), and a lot of SF and fantasy books in nice editions (Asimov, for example).
“Big book of butts” looked funny. Next one was “Big book of legs” next to “pin-up girls” and other photo/erotic ones.
A nice place to go to, but I warn you – you can leave a lot of cash there and have problems packing…
The lack of Britain sections in the museum made me go to their website to check the floor plan. And back to the building to see a few more exhibitions. When I finally found what I was searching for, they told us to leave :-(
But no need to be sad, I thought, because I was going to meet some long-time-no-see friends at a pub not far away. I went there, ordered some "organic lager" and sat down to wait for them. A few minutes later I was chatting with some guys around 60 years old about random stuff. The good part was their recommendations on which beer to try next. As you can probably guess, it was not lager but rather ale or something more English.
YaaL and Pornel arrived and we had a nice chat about life, work etc. Time passed too fast :-( But it was good to meet after so many years.
Day three
This was supposed to be a no-museums day. First I went to visit Canonical's office, as I had never been there…
Finding the building was quite easy; the discussion with security took a bit longer before they finally realised that I was already on the visitors list. I got a tour of the office, looked at a wall full of Ubuntu Touch interface mockups, discussed a few of them with someone, made some coffee and left the building.
The next step was the Tate Modern art gallery. I spent a few hours there looking at all those sculptures, paintings and installations which were counted as art in the previous century. I did not even try to understand them…
Due to the cold I had caught during the previous days, I went back to the hotel. But why stay there when there are so many places to visit and so little time?
So I decided to make use of the longer opening hours at the British Museum and went there. This time I managed to see the Britain sections and the European Medieval ones. It was a good evening.
Day four
As this was my last day in London I decided not to go far from the hotel. I checked out, left my luggage and went on foot to the Science Museum.
A lovely place. I went quickly through the Space exhibition (because I had already seen most of it at Cape Canaveral) but the other ones were worth seeing. The Age of Steam with all those engines and descriptions; vehicles like bikes (starting with the "safety bicycle" by Rover), motorcycles, cars (including JET 1, powered by a gas turbine), but also planes (with a replica of the Wright brothers' one) and helicopters.
I enjoyed the "Materials" exhibition — especially the body model with some artificial add-ons and the long list next to it showing which materials can be used for which implants and other inserted parts.
There was also a special exhibition about Alan Turing and his work.
After the visit I went for food, took my luggage from the hotel, then took the Underground to King's Cross train station and went on to Cambridge. But that will be the next post.
Recently a company I know chose a very arrogant person as the new leader of one of its most important projects.
I had the opportunity to work with him some time ago, and all the people who have met him agree with me about his arrogance.
This man undoubtedly has great know-how; he's brilliant and talented at his work (though maybe less so than others), but he is very good at building his self-brand.
Over time he has built the image of a solid professional, built not on his 20 years of experience but on his bad temperament, on his arrogance, and on his language, often colourful when not openly rude.
The question is: was he chosen for, or despite, his bad temperament?
Some time ago I read an interesting story: the leader of a great company asked a marketing guru whether the fact that his company wasn't as big as Apple depended on him being a humble leader.
The answer was that Apple was a big company in spite of Jobs's bad temperament.
In the highly controversial book Good to Great, the author, James C. Collins, examines the performance over 40 years of 11 companies that became great.
The first of seven characteristics of companies that went “from good to great” is to have an inspired but humble leader.
Although many companies and many projects have a strong leader, in my mind the "my way or the highway" approach sits just a step away from the Godfather's style.
I believe that a leader ought to be flexible and a good listener, not only a screaming monkey; he should be ready to learn from his mistakes, and he should be aware that he is not perfect, but perfectible.
In a nutshell, a good leader is charismatic and inspiring but refuses to be bossy.
A good example of a charismatic, humble leader is without doubt Mr Barack Obama; a bossy leader is – unfortunately – Mr Silvio Berlusconi.
Being driven to do what's best for the company, being enthusiastic and a crowd enchanter, is quite different from asserting one's own authority with arrogance: in my humble opinion, a bad temperament can often hide skills and talents or – worse – cover a lack of them.
Conversely, an overweening attitude very often reveals an inner weakness and an intimate need to be reassured, which immediately ceases when that leader loses his or her power.
That said, if most research demonstrates that good-to-great leaders are, it turns out, humble, why are there so many bully leaders around?
Be inspiring. Don’t be overwhelming. Be a leader.
You know that famous British punk band called the "Sex Pistols"?
Yes – Ace!
No – Well, get to know them! :>
Anyway, for those of you who are not familiar with them, here’s a little something:
As you can see, there is a torn-up Union Jack and a badge saying "Anarchy in the U.K." next to the band's name.
But where am I going with that? Well, the answer is actually rather simple – swap "the U.K." (nothing personal, Britain, honestly) with "Free Software" and you will get cookies. … All right, you are not getting cookies (or cake for that matter!!!!!!), but rather "Anarchy in Free Software".
See, I've been in the whole Free Software world for quite a long time and have worked (and am still working) on enough projects to learn that, sadly, in no small number of them there is a sort of anarchy going on, because no one actually steps up to say "I will lead this project and I will take the decisions", especially when it comes to design.
You know, some people tend to accuse Mark Shuttleworth of being too much of a dictator and Canonical of being too … strict, but the fact is that Ubuntu would never have been as successful as it is today without that strong leadership.
There is a person or a team like that behind every application out there (well … duh), and unless you are part of such a team and want the project to fail miserably, your team has to make decisions. Sometimes they might be tough, sometimes they may be easy, sometimes they might require a discussion with the userbase of your application, but you have to take them; otherwise you risk accepting ALL the contributions, which results in what we here in Bulgaria call (caution – might be really hard to pronounce for non-Slavic speakers) "Mnou babi, hilavo djate", which translates as "A lot of grandmoms, spoiled kiddoh." … and just be glad I didn't write that in Cyrillic … :>
You know, it's ace to see community contributions to your application (no matter whether they are design suggestions, branches with code or feature requests) – no, I lied, actually it's brill – but you can't just accept them all without going through each of them, evaluating it and, if you have to, discussing it, simply because saying no might hurt someone's feelings.
You have to evaluate every design and code contribution or feature request, discuss it with the other team members and, if you have to, with the contributor himself, before making the call.
Being an ignorant jerk who says "To hell with everything, I'm gonna go giddy on power" is a big no-no, but accepting everything is just as bad and sometimes even worse.
So, to sum it up:
OMG, Union Jack everywhere!!!! (no clue whatsoever why, though … oh well … )
Last week at vUDS we had a discussion about erasing the current national- (and sometimes state-) border-centric organization of Ubuntu (LoCo) teams.
Here's a detailed (but rough) summary of the first half of the discussion. It's faster than watching.
Enabling teams that are not in the current geography that one would associate with loco teams
There has been talk on Planet Ubuntu and Community Roundtables about the notion of creating teams for any geography, to form freely and to potentially remove barriers on team formation based on that.
Wants to create a team for his part of Minnesota, because the state is divided — two major cities right next to each other, Minneapolis and St. Paul — and it would be easier for him if he could lead a west metro team instead of having one big Minnesota team, because most likely all the people over in St. Paul aren't going to attend his events and vice versa.
This situation likely applies to a lot of places, in the US, in Canada, and other countries
Extremely large geographic regions could benefit from having multiple teams, e.g. Texas and Alaska — there are huge spaces between cities. Another example is his state of NY, where they have the 8th most populous city in the world, which probably would be a good location to have a team focussed just on NYC. Speaking with them has revealed they have far different challenges than does Rochester NY, so Charles feels out of his element in helping them find places to meet, and other details that are different due to them being so large. He understands there are concerns with wanting to control things but feels that we shouldn't inhibit people from growing teams in places like that. There is currently no Community Council opinion at this point — each member probably has their own thoughts, and in general the CC would want to work with the LoCo Council on those kinds of things.
Good and bad both
Good: may help some teams like the Russian team - super big and many cities far apart
in Peru - many events during the year, people can easily reach out for support from existing team
Long term, he doesn't want to see a lot of inactive teams in the LoCo directory — e.g. in Peru, if there were 20 teams, some big teams working and other small ones not, but all listed and no one willing to revive the inactive teams, this will get people confused; many people just want to join, not lead.
India is very large, with very big cities; let's have city-based teams like Calcutta, Bangalore, Chennai and Mumbai, where populations are huge (10M+), so users can get local support as quickly as possible.
Loco Council has been brainstorming and trying to come up with best practices on this for three months - they were looking at larger countries and breaking them into a similar situation to what the United States has done and also what Brazil has done on their own (breaking themselves into smaller provinces or states) which seems to have worked pretty well
Comes with advantages and disadvantages - there will be states that are so large that they are the size of a small country, but if you break down a large state into smaller pieces what you might find is that you have a very small loco in one town and then the next city is where all the activity is, making people feel disheartened
If you break it down even further like they are thinking of doing with India then it can come down to being even language-specific as well within certain cities and provinces so where do you draw the line?
Acknowledges the issue. Every place has its own language. There is no common colloquial language
We can't use the same criteria for every country but can come up with a set of guidelines and best practices and India's more unique than others
They (the LCC) do get requests, even this week, for a sub-LoCo team. They currently don't have a process, so they have been telling people to join the main (parent) LoCo and work with that — events can still be added to the LoCo Team Portal. It's not like we're saying "Just because you don't exist as a sub-team you can't add your event on the LoCo Team Portal"; there's nothing like that happening, events still get added and promoted.
We cannot have the same criteria; at the community roundtable we were also thinking of dropping the term "Approved" from LoCo teams. What would be the effect of this on teams that request materials — is the material going to be enough for the sub-teams? Probably yes, but we need to remember that the materials cost money (they're not free).
If people want to set up sub-teams they should have to go to the LoCo Council to see if it is right; otherwise people may do their own thing and it won't be good — it will be too random.
Recounted Steve Kellat's (Ohio) blog post — Ohio used to have smaller active teams, more active than the state-level team. Over time those teams became less active as people moved on to other projects. Now it appears Ohio is fairly dormant both on a state level and a more local level, and could possibly benefit from a super-state. Steve mentioned regions that are close to Ohio (across state borders) that have activity, and it would be useful to partner with them. Though we tend to talk about dividing countries, we may also want to talk about consolidating where necessary.
Another thought that was raised yesterday (from Ben Kerensa) was around the Portland team, or Oregon team, at one point in time there was a PNW team that was proposed as a super-state perhaps consisting of Washington, Oregon and parts of Northern California. He can envision highly dense areas like NYC (as Charles pointed out) that might be candidates for a team, where in more rural (sparsely populated) areas might be great candidates for super-state teams or regional teams that span several states or provinces or perhaps even a collection of cities e.g. NYC and NJ where they are right up against each other, there may be cities that are adjacent to one another that could form a super-city team which could potentially create a lot more activity than a city alone.
I think of San Francisco, where there are three major cities within a one-hour bus ride — Oakland, San Francisco, and San Jose — with Silicon Valley in between. This is a very large region with an unfortunately small team, so I am looking in both directions: we can go small geography, we can go big geography. What I would like to see is a situation where we expand and contract the geography in any manner that gets the teams as active and as large as possible, so we wouldn't necessarily be ruling out any level of team. Country teams are useful in some places and less useful in other places.
In other words, remove any specific requirements of geography or population density. He still thinks we need to have some minimum, to use some word to acknowledge them as a team, and not have a situation where we get a one-person team that stays one person and declares itself a team, versus having them grow an actual community; that would have to be worked out: when do we actually say they're a team, and then there's another level — when do we extend the privilege of letting them have a mailing list? Perhaps the mailing list exists at a state level and then multiple teams can share it. From one perspective it's great to have organic growth, and he does think that that's the way to go, whether it be two cities that are close to each other or a case like Cincinnati that is in two states (Ohio and Kentucky), but what do we do on the back end and how do we deal with that as far as getting them resources like a forum and a mailing list etc.? Jono has said that there are multiple ways for people to do that themselves now and it's not quite as strong a requirement for the Canonical Ubuntu community team to provide that, because they can make their own Google Plus — these things need to be worked out.
There is a threshold where a team becomes a team. That threshold is bigger than one person. We could arbitrarily set a threshold for resources saying that if your team is more than 50 people then you have eligibility for something - whatever that something is - maybe we go orders of magnitude 50, 500, 5000, then you have eligibility for the next tier, whatever the next tier is - that's one way to slice it
That would be very hard/unfair on smaller countries, no? E.g. Ireland has just under 4 million people (essentially the population of London), and it is not possible/feasible to break it down further.
A small country with a small population would have to catch a larger area; the threshold would be reached at some geography, e.g. Iceland. There would be workarounds — we wouldn't be penalizing a country for a small population, we'd be acknowledging that their geographical catch area would need to be larger to catch enough people to get whatever resources they need.
Need to keep in mind: Canonical and the LoCo Council have mentioned that if we keep increasing the number of LoCo teams (which is brilliant to see happening) by breaking into sub-teams, that puts an increase on any kind of gift you get from Canonical, and that's the reason we've kept to a structure. If we say anyone can go off and create a LoCo team and then tomorrow you have 400 new LoCo teams, you might think it's great to see that happening from a growth perspective, but it means there's an extra burden on finding conference packs. It's not an issue now, but it could be an issue 6 months or two years from now, and if we don't have structures in place then we can't deal with that.
Also, the LoCo Council informed the CC last week that they intend to get rid of the terms "Approved" and "Unapproved" and replace them with one word, "verified"; then everyone will be a LoCo team. The LCC does work on issues in the background, but it can be a slow process at times.
I have blogged about the notion of conducting an experiment, and think that we are in a spot right now where there are valid concerns about letting people form teams in any geography for any reason. There are resource concerns, process concerns and labour concerns, but we've never tried it, and if we've never tried it, I think those concerns are fears that are out there without any data to help make decisions or to build process around. So what I was advocating is that we run it as an experiment for one UDS cycle, then do a checkpoint to see what has happened, what processes need to be changed, or whether we scrap the whole idea or revert.
I think that unless we are willing to assume a little bit of risk to the project and the people that participate in it, we are caught in this spot where we're afraid to move. That was my proposal — I'm big on data and I'm big on making decisions based on data, so if three months down the road we saw 6000 teams and Canonical people were screaming at us because they can't send out conference packs fast enough, then that would trigger a set of decisions that would fix the problem, either through process, or additional money, or some other means, or a threshold, or an increase in the barrier to becoming a team that gets goodies.
Another scenario — Vancouver. Say someone tomorrow feels they are unhappy with Ubuntu Vancouver and decides they're going to create "Ubuntu Vancouver Prime", so now there are two teams in Vancouver; what would that mean for a team like Vancouver? I've thought about that quite a lot, and I think what it might mean is that there's a team in Vancouver that is an alternative for people who aren't getting what they need from the existing team. For example, the Vancouver team is very heavy on marketing, advocacy and social events and very light on development, programming and technical events, so I could see a second team in a city like Vancouver or somewhere else being heavily technically-oriented and thriving in that context — two city teams that thrive for different reasons because they are meeting different needs. Vancouver's a fairly large and dense city, so this may not work in smaller towns and villages.
I mention that because some cities, countries or states may have a fear of duplication, but I also look at it as a useful thing, because it could also mean segmentation and serving a community better than a single team could... so I'll throw that out there as a thought experiment....
... to be continued.
Are you a person that wants to start a block, village, town, city, super-city, super-state or super-national team? How about a planetary team? Have you been discouraged?
Word to the wise: Sometimes storms that seem small enough to fit in vessels that some would use to brew tea are symptoms of larger issues that need to be examined and changed lest the vessel crack. And sometimes when one is not in the vessel, one cannot see the importance of a tiny swirl that begins the tornado... There's a funny story and quote from some English politician about a tea thing that happened in America a while ago, but I'll leave that for another post. ;) Stated non-cryptically, I'm glad we've begun this discussion even if some may not understand why.
Ubuntu knows no borders.
Hello World! I've been wanting to start my own blog for a while now. Time and other constraints have been preventing me, but I finally sat down over the last weekend and hacked together what you actually see.
I have decided to license the content of this website under a Creative Commons license; CC-BY-SA 3.0 Unported, to be exact. This means that you can take what I will write on this blog, reuse it for whatever you see fit, or combine it with other content under the same license, e.g. from Wikipedia. The only requirements are that:
- you quote where you got it from, i.e. from me (Adnane Belmadiaf) and this website (daker.me)
- you share the result under the same license
So feel free to leave a comment or tweet me @AdnaneBelmadiaf.
Here’s all the goodies for the week:
- Charm Auditing! Marco will be providing a list of things and charms not up to snuff, posted to the list in order to get fixes (or eventual removal from the store)
Please check out the blueprints, there's a ton of detail there!
Charm Tools
- Fixed dependency to recommend juju-core or juju, fixes Jono’s bug.
- charm-helpers being split into its own project: https://launchpad.net/charm-helpers
- Rewriting a bunch of them in Python instead of a mishmash of bash and Python gives us cross-OS compatibility, better templating, and easier testing.
- Rolling out a single charm-helpers package
- Pre-beta live site (Nick hates it when we link it. :))
- Current docs not generating, filed RT, IS to complete by the end of this week.
- Code: lp:~evilnick/juju/go-juju-docs
- [arosales] Todo to make a better docs staging site
- Rewriting the jitsu test code so it works as a juju plugin to enable easier testing.
- node.js - Jeff Pihach linked up with Mims, experienced node.js dev. Good things on the roadmap here. We’ll get a better status when Mims returns from Gluecon
- rails/rack - Follow up with Pavel?
- Django, someone mentioned in UDS that it’s nearly ready to be submitted to the store.
- Mark Mims is at Gluecon! Go get em!
- Submitted to Strata in NYC. (mims)
- Strata in London submission in progress (jamespage)
- TexasLinuxFest, arosales to present.
- Next Friday is Part 2 of “How to write a charm”
- We’d like to have a roadmap for charm schools
- Over the next day or so Jorge to publish a schedule for charm schools, will be on the Events.
- We’d like to be responsive to user needs.
- Keep biweekly cadence, be flexible enough to do on-the-spot charm schools.
- Jorge to add more detail to charm schools on the web page, show what topics were covered in more detail.
- Capture the IRC logs (duh!).
- Topics people want: Puppet/Juju, Charming from Scratch, Improving an existing charm (including the workflow to submit it back)
Today, I am happy to announce the release of Homerun 1.0.0. This new version comes with a few new features.
Let's start with the biggest one: favorite reordering by drag and drop. This is one of the most wanted feature requests for Homerun. It lets you reorder your favorite applications and places by holding down the left mouse button and dragging items around.
This short video demonstrates how it works:
This was surprisingly difficult to get right with QtQuick 1, so I am glad it's now done.
Note that while this feature is currently only available for the "Favorite Applications" and "Favorite Places" Homerun sources, it is actually possible for any source to provide reordering via drag and drop if it makes sense for this source to do so.
Another new feature is the ability to customize shortcuts. This started with the idea of creating a cheatsheet of Homerun shortcuts, but I was worried the list in the cheatsheet would not be kept up to date with the actual shortcuts, so I looked into generating the content of the cheatsheet from the code handling the shortcuts. At one point I realized kdelibs already provided what I wanted and more in the form of the standard shortcut dialog, so I scrapped my code and went for exposing the standard KDE shortcut dialog. You can reach it from the configure menu in the top-right of the screen.
Finally, other minor improvements have been made:
- The context menu of the "Trash" folder now has an "Empty Trash" entry,
- When an application or place is marked as a favorite, a short message appears on the top of the screen, reassuring you that your request has been taken into account.
As usual, this new release is available on download.kde.org.
Moving On
As for me, I am going to return to what I enjoy most: working on applications. In the coming months I plan to get more involved in KDEPIM, starting with what I do best: obsessing beyond reason about widget layouts and margins. Once I feel familiar enough with the codebase, I'll try to get a bit out of my comfort zone and help fix underlying bugs.
Silvia Bindelli and Cheri Francis worked to prepare the Ubuntu Women session at the virtual Ubuntu Developer Summit last week where the following was covered:
Plans for an information-based online scavenger-hunt competition that the team will be running in the coming months. We're currently seeking volunteers to assist in coming up with questions related to women in tech and Ubuntu and to work with us when "grading" the answers that come in.
A user poll to see how the team could be most effective in serving our audience of women interested in Ubuntu. We have found that the project needs a bit of an adjustment every couple of years to refocus on our current targets as Ubuntu and the open source ecosystem evolves.
Finally, much of the session was spent discussing our intention to further collaborate with other groups seeking to encourage women in open source (and in technology in general). A couple of our members will be attending Ada Camp in San Francisco next month and hope to make some connections there. We’re also reaching out to our current community members who are involved in other groups.
Thanks to everyone who participated and we’re looking forward to continuing discussions and work on all these items in the coming months.
Video from our session is available here:
Blueprint for the next few months can be found here: https://blueprints.launchpad.net/ubuntu-women.org/+spec/community-1305-ubuntu-women
Recently we had our online Ubuntu Developer Summit where we discussed a range of topics, defined next steps, and documented work items. The very last session at the event was an overall summary of the tracks (you can watch the video here), but I wanted to blog an overall summary too. These notes are quick and to the point, but they should give an overall idea of decisions made.
Client
Content Handling -
- Siloing apps.
- Main applications will define a “main repo” and provide an API to deliver, share and access the data in the main repo.
- Want to update to 1.14 or even 1.15 if the video ABI doesn’t change.
- Focus on the phone settings defined here.
- Scopes that didn’t land in 13.04 should land within 2 weeks.
- Several scopes will be migrated from Python to either C++ or Go for memory purposes.
- Expressed interest in moving to Chromium as default for a better user experience. Gathered feedback on the possible move. Next steps are to take discussion to the mailing list.
Unity 8/Mir Preview in 13.10
- Want to have a preview of Unity 8 (Phablet UI) running on Mir as an optional session (installable from universe or a PPA, most likely).
- Reviewed the current 13.10 release schedule and found several changes made in 13.04 that had mistakenly not been carried over, such as later freeze dates and one fewer alpha; Adam Conrad will be syncing all this up and sending mail to the ubuntu-release list for review.
- We discussed the positioning of the development release in light of some conversations last cycle, and put some more flesh on the design for making it easier for people to follow along with the development release all the time.
- This cycle, we’ll be bringing up a new 64-bit ARM architecture based on cross-building work done last cycle, and we’ll update developers on that once we get closer to the point of starting up builds in Launchpad.
- Moving forward with click packages. Fleshed out ideas on source package provision, integrating with existing client package management stacks, and clarifying some other things like the security model.
- For image based upgrades, the team held a demo and Q&A for the current proposed solution, which is split into client, server, and upgrader; client is going well and expected to land by the end of June, server is currently blocked on infrastructure but should be ready around the same time, and Ondrej Kubik has been making good progress at tweaking the CyanogenMod recovery environment for the upgrader.
- Firmed up the plan for packaging Android components for Ubuntu Touch images.
- Upstart will be used as the standard way of spawning desktop apps for Unity on touch devices and ideally on desktop too (Unity 7 and 8). This will let us make sure we only have one instance per app, and will make it easy to apply AppArmor, seccomp and cgroup confinement consistently to all apps.
- Defined a goal to reduce the amount of time it takes to prepare, test and make a Checkbox release, automating more of the process. This will benefit people who use the Checkbox tool as part of their daily work. It’s possible that Checkbox may move to Universe, although this needs some more discussion.
- The server certification tools are being reengineered to use the new plainbox engine as their core. This will preserve the existing UI, but we’ll have co-installable packages with the new core, and will eventually switch over to the new tools.
- The cert tools and test suite are being upgraded to work well on ARM for our hyperscale and mobile work, fixing any issues so we can get full, clean test runs on ARM servers. MaaS will be used for provisioning, and tested as a part of the ARM server solution.
- We will be basing the primary kernels for 13.10 on Linux 3.10, but will strongly consider 3.11 depending on timing. For Ubuntu Touch devices, we already have kernels available for Nexus 4 and 7, and plan to also bring up kernels for Galaxy Nexus and Nexus 10. We’ll provide a 13.04 hardware enablement kernel in the 12.04.3 point release.
- In terms of Ubuntu Touch power management, we have some preliminary baselines against Android on Nexus 4 and will replicate those on other devices, although they won’t be entirely meaningful until things like Mir land. We’ve written some new utilities such as eventstat to track down problems here. On power management policy, we agreed key requirements for the system power manager and we’ll extend powerd to serve our immediate needs.
- Approved LoCo teams are no more, will be verified teams.
- Bringing back fortnightly leadership meetings.
- Ubuntu Advocacy Kit is driving to 1.0.
- Gathered UDS feedback.
Ubuntu Community Website
- Great discussion which clarified everybody’s involvement in the project.
- Clear roadmap for completing the content and design in the next few weeks.
- Design and web team have the templates we need to finish the work.
- No discussion with IS yet around deployment – this will be coordinated next week.
Ubuntu Women Session
- Several events planned to get more people involved and the word out (Career Days, UOW, etc.).
- Discussion about a women in technology themed event at CLS.
Ubuntu Status Tracker
- The status tracker is many things to many different teams, but we managed to figure out a number of issues we can tackle, which should make everybody’s lives easier.
- Removal of kanban view.
- Add links from team pages to milestones pages.
- Set up a meeting to discuss setting up an “ongoing” dev series.
Ubuntu On Air! Discussion
- Issues with supporting multiple hosts.
- Discussion about building support into summit and re-using vUDS components to support more shows and multiple hosts.
- We want to open it up to more contributors, so we get more variety into the shows.
Development Onramp for Touch / Unity Next
- Goals to improve docs.
- We will track contributions to all the projects to see how we improve.
- Increased focus on testing, coordination with the SDK team.
- Update Getting Started Page, review current docs and previous mailing list feedback.
- Regular doc review cadence and more health check meetings.
- Focus on content in the UAK.
Ubuntu Enterprise Desktop Discussions
- Another meeting will be planned to get more input from users of enterprise desktops.
- Some common issues were identified and discussed:
- outdated cfengine package
- access to Firefox/Thunderbird packages before publication (resolved)
- support for livemeeting/linc
More Ubuntu Touch images
- We identified the current blockers and will simplify things by using an image without firmware blobs, so they can be added by a local tool afterwards.
- After the rebase to saucy we will also update the docs accordingly.
- Kernel images for devices will first live in PPA, afterwards probably in universe.
Regular Ubuntu Development Updates
- Organise regular Ubuntu on Air Hangouts to which we invite people from news sites as moderators.
- Briefly summarise work from the last week(s).
- Ask engineers to demo/showcase interesting developments.
- Do Q&A sessions.
- Also invite members of governance teams along.
Openstack Next Steps
- Looked at some high-level areas for this cycle, avoiding digging into areas covered by other sessions. We decided that, at present, moving over to Git for our packaging work doesn't add value. We also agreed to clean up some cruft within the packaging branches.
Cloud Archive Status Check
- Decided we had to support Grizzly for 12 months, which exposes a 3 month support gap from the backing Raring release. Need to discuss with the security team about how to fill this gap. Reviewed proposal for SRU cadence and tentatively rubber stamped.
- Better co-ordination around trumping by Security dates, specifically if it covers more than one project.
- Looked at using updates as a reason to increase our messaging.
12.04.x images with LTS Enablement Kernel
- The cloud images currently only contain the Precise (3.2) kernel. Discussed adding the kernel HWE stack to cloud images. We need to document how to enable backports, clearly state the support, and possibly tool cloud-init to handle updating the kernel on boot if folks need a more recent kernel on boot.
- We will not be creating new images with the HWE kernel for the default images. The HWE kernel will be used for Clouds that have a high velocity of change in the Hypervisor (i.e. Windows Azure). For the regular images, we will investigate tooling in cloud-init and other places to make the ingestion of the HWE kernels easier, such as enhancing the documentation, allowing for easier enablement of backports, and making it easier to enable the HWE kernel at boot time.
Cloud-Init for Vagrant
- We will create a good Ubuntu development workflow for Vagrant users (cross platform OSX, Windows). Ben Howard will investigate cloud-init tooling as well as the best method to enable the DKMS modules.
Cloud Init & Cloud Image Development for Saucy
- We will define the development work to improve cloud-init and cloud images for the saucy cycle.
- Discussed work on the pre-cloud-init phase, vendor hooks, cloud-init plugins, and rebuilding tools.
Juju Core Development
- 1.10 version of juju available in backports for 13.04, and should be available in precise backports soon.
- Look for releases in juju/dev ppa updating weekly, juju PPA monthly, and have stable release go into backports (couple of times per cycle).
- HyperV support is currently untested.
- VMWare support in charms, but not primary supported charms.
- We need a matrix to demonstrate interoperability and support of each variation.
- Need to work out additional hardening support.
- Building on our great history, moving away from per-commit hardware testing to a more fluid, multi-virtualised, separated environment, allowing greater interoperability testing. The Hardware Cert team showed interest in getting more involved. The scope of this will be ratified when the interop matrix is created.
Flag Bearer Charms
- Will improve flag bearer charm integration testing.
- Implement list of reference charms.
- Develop Percona backup XtraBackup flag bearer charm.
- Document flag bearer and reference charm criteria in best practices.
- Discuss flag bearer charms on mailing list.
Charm Policy Review
- Add to the policy that a charm should provide a config option to specify the version. The other items, such as installation location (i.e. /srv), implementation of common subordinates, and backups, are to be added to best practices. The 3-ack requirement on charm reviews is still under discussion.
- Split Juju docs best practices and policy sections.
- Discussed re-reviewing the current charms in the charm store to ensure accurate readmes, tests, functionality, rating, categories, and icons. The workflow was discussed for queues, and which charms to tackle first.
Charm Development Tooling
- Discussed gathering all the different charm development tools into one central package. These charm development tools include charm-tools, charmsupport, juju-gui, and openstack-charm-helpers. Folks also discussed how the tools could be improved, and used as a singular set.
Juju Framework Charm for Server Application Technologies
- Discussed further building out the Django, Rails, Node.js, and possibly Java framework charms.
Improve Juju Documentation
- Make a better user and charm developer experience for juju.ubuntu.com/docs. Discussed getting a permanent beta site going and methods to get documentation contributions. Hopefully the revamped docs will be in production in the next couple of weeks, and if not we'll have a beta site very shortly (days).
Juju Charm Testing
- Currently jenkins.qu.u.c has graph testing showing reliable results. Marco will be landing integration soon (days), with a more formal testing framework to follow (weeks).
- Some ideas discussed were to gate charm store commits on testing, showing testing status in the GUI, and pre-deployment testing. Test examples will be made available along with a charm testing school.
Add User Feedback loops and Social Networking to Charm Store Charm Pages
- Discussed making sure users have a method to give and receive feedback on a per charm basis. We currently have social networking (+1s, Likes, Tweets) in addition to downloads, quality rating, bug links, and testing status. Some ideas were to get clarification from design on showing social networking numbers, as well as a ‘leave feedback’ form.
Juju GUI Development
- Discussed development done, and upcoming work. Covered ideas such as design, bundles, diagnostics, user data, juju feature parity, maintenance and support.
Improving QA for seeded server packages
- Established three distinctive areas of testing, these are upstream test suites which typically run at build-time, integration tests via dep8 and service level testing which often requires multiple nodes and is conducted using juju.
- We established that there is the regression test suite that can be included in many of the packages directly, with the requirement that we package some of the common ubuntu testing libraries.
- Discussed some areas of standardisation for dep8 testing.
Fastpath installer work for 13.10
- Established what FPI is, and the processes which are part of it.
- Fast Path installer will be delivered as an installable package in Ubuntu, most likely written in Python. The interface to it will be YAML-formatted configuration.
OpenStack Charm work for Saucy/Havana
- Migrate all charms to be python based.
- Consolidation into new charm-helpers nextgen library.
- Complete SSL/HTTPS support into charms.
- Integration of wiki and help documentation, upstream series aligned with upgrading notes.
- Design around how proprietary+1 plugins will be integrated into core charms for Cinder and Quantum.
Investigate alternatives to mysql
- Agreed that the best course of action was to maintain mysql for this cycle, and try and support other flavours of mysql getting into Ubuntu via Debian.
Ceph activities for Saucy
- The Dumpling release will be out in August (coincides with Feature Freeze for Saucy) so it will be the target for this release.
- Lots of incremental improvements in efficiency and performance, full RESTful API for RADOS Gateway admin features, block device encryption for data at rest.
- ceph-deploy (upstream cross platform deploy tooling) will be packaged.
- Implementation of more automated testing during Saucy.
HA Openstack charms V2
- Reviewed the current state of HA support in Openstack charms. Percona has volunteered to charm their offering, allowing great coverage by their mysql HA variant for active/active clustering.
- Work also on active/active and brokerless messaging options (ZeroMQ) and incremental improvements for service check monitoring in load balancers.
- Cluster stack (Corosync/Pacemaker) to be reviewed and upgraded for Saucy in preparation for 14.04.
MongoDB activities for Saucy
- File a Main Inclusion Report for MongoDB to support the Ceilometer and Juju use cases. Raise a Micro Release Exemption (MRE) with the Tech Board, as point releases are bug-fix only.
- Upstream armhf enablement patches. Re-sync with Debian. OpenSSL license exception.
Virtualization Stack Work for Saucy
- If the Debian libcgroup maintainer finds time, we'd like to merge cgroup-lite into libcgroup. For per-user configuration, first make it default-off optional, conditional on the userns sysctl being enabled.
- LXC work is going well, on track for the 14.04 (and lxc 1.0) roadmaps. For this cycle, we'd like to get user namespaces enabled in the saucy kernel with a new off-by-default sysctl controlling unprivileged use, and complete the ability to create and start basic containers without privilege; add console, attach and snapshot to the API, complete the create API function, and convert all of the lxc-* programs to use the API; write a libvirt driver based upon the API, and a patch to enable testing it with OpenStack; write loopback and qcow2 block device drivers; get the subuid (user namespace enablement) patches into the shadow package; discuss with the community the maintenance of stable trees; improve the API thread safety; and work our distro lxc tests into the upstream package (akin to how it is done in netcf).
- In edk2, we want to contribute to the implementation of the ability to save and restore nvvars from the ovmf bios from qemu. We’ll fix the apparmor bug preventing the block device mounting in libvirt-lxc, which is blocking use of that feature by openstack.
- We intend to merge libvirt at least at version 1.0.6, qemu at 1.5, and hopefully xen 4.3. We'll follow up on Citrix's plans for XCP. The blueprint lists additional Xen work planned. We'll also look into default use of openvswitch bridges in libvirt.
- Autopilot testcases written for ubuntu core applications will be checked to ensure they pass before auto-landing updates in the ubuntu touch images.
- The quality community team will help core application developers develop a suite of manual testcases for each ubuntu core application. These will be run as part of the verification process for the 1.0 stable release of each application.
- Add testcases so all default desktop applications for each flavor are covered.
- Expand and improve server testcases to allow more participation by those who might lack domain specific knowledge and/or hardware.
- Make available documentation more accessible by linking to it from the tools we use for testing, like the qatracker.
- Continue holding testing ‘how-to’ and knowledge sharing sessions during UDW, UOW, as part of UGJ, and on ubuntu on air.
- Add testing achievements to the ubuntu accomplishments project.
- Ubuntu Touch images will be smoke tested using the pending/current model already in use for other images. This ensures no image is published for general consumption that doesn’t pass a set of tests ensuring basic functionality of the image.
- Current Ubuntu Touch autopilot tests for the core applications will be investigated for use as part of these smoke tests.
- The concept of smoke test is going to be expanded to cover a no regression build.
- Autopilot 1.3 is now released and will be available in raring and saucy (a minimal example test is sketched after this list). No quantal support is planned. Precise support is being examined, but requires further investigation and backporting work.
- Autopilot developers will now be available on #ubuntu-autopilot — no need to always ping thomi!
- Planned tests for stressing mir to ensure good behavior during stressed conditions for things like OOM, memory leaks and race conditions.
- Stress tests targeted to be run as often as possible, but might be limited due to time constraints of wanting to run the tests over a longer period of time.
- UTAH will be expanded to include automated upgrade testing capabilities. UTAH jobs will be created for bootstrapping base images, for performing upgrades, and running post-upgrade tests. The old auto-upgrade-testing tool can still be used by flavors if desired.
- Create high-level views of the state of quality in ubuntu by aggregating results of test runs. This will allow for 'problem' areas within ubuntu to be more easily identified and targeted for further testing or investigation by interested parties. You can follow this work on the QA dashboard here.
- autopilot-gtk will now be maintained by the upstream QA team. Bug fixes and outstanding issues will be solved in order to allow for the autopilot desktop tests to run
- Once running properly, the autopilot desktop tests will become a part of daily image testing
- Continue development on umockdev to add support for more exotic networking tests (eg, 3G) and research sound testing
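To make the Autopilot 1.3 item above a little more concrete, here is roughly what a minimal Autopilot test case looks like. The application name, widget type and expected title below are only illustrative placeholders, so treat this as a sketch rather than a test taken from any of the core apps:

    from autopilot.matchers import Eventually
    from autopilot.testcase import AutopilotTestCase
    from testtools.matchers import Equals


    class CalculatorWindowTests(AutopilotTestCase):

        def setUp(self):
            super(CalculatorWindowTests, self).setUp()
            # Launch the application under test and keep the introspection proxy.
            self.app = self.launch_test_application('gnome-calculator', app_type='gtk')

        def test_main_window_title(self):
            window = self.app.select_single('GtkWindow')
            # Eventually() keeps retrying the assertion, so UI state that takes a
            # moment to appear does not cause spurious failures.
            self.assertThat(window.title, Eventually(Equals('Calculator')))

Tests in this style are the kind being considered for the smoke testing described above.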
As ever, you can track progress on work items on status.ubuntu.com and we hope to see you at the next UDS in three months.
Q/master: lp1176977 ("XFS instability on armhf under load") – working with upstream on this one: I already backported a fix that turns the vmalloc() exhaustion and fs shutdown into an -ENOSPC error, and this second error seems to be triggered by the tiny fs used in these tests (~2GB). Still working to get it resolved.
R/master: lp1171582 ("[highbank] hvc0 getty causes random hangs") – the jtag console has a 1-char producer-consumer buffer, and if there's no real hw attached to the board, any subsequent write turns into an endless loop waiting for a consumer. The situation is worsened by the fact that before writing to this register a tty spinlock is taken, and any subsequent attempt to take this spinlock makes the thread hang. I got confirmation of the problem and some info about the hw, and I'm working on this.
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Milestone Targeted Work Items
The burn down charts have not yet been reset for 13.10, so disregard the second link posted above for now. I'll be cleaning up and adding work items for 13.10 so that the +upcomingwork link will be more accurate.
Next week I’ll have the usual nag table available.
Status: Saucy Development Kernel
For now, we'll plan on targeting the v3.10 kernel for Saucy but will
strongly re-evaluate a move to v3.11 in the coming months. We’ve just
rebased Saucy to v3.10-rc2 and are still cleaning up some of the
carnage. I don’t anticipate we’ll upload until a later -rc which will
hopefully provide more stability.
Important upcoming dates:
Thurs June 20 – Alpha 1 (opt in)
== 2013-05-21 (28 days) ==
Currently we have 63 CVEs on our radar, with 8 CVEs added and 17 CVEs retired in the last 28 days.
See the CVE matrix for the current list:
Overall the backlog has decreased slightly this week:
Status: Stable, Security, and Bugfix Kernel Updates – Raring/Quantal/Precise/Oneiric/Lucid/Hardy
Support for Oneiric and Hardy expired on May 9th.
Status for the main kernels, until today (May. 21):
- Lucid – In Testing;
- Precise – In Testing; 2 upstream releases;
- Quantal – In Testing; 2 upstream releases;
- Raring – In Testing; 3 upstream releases;
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
Future stable cadence cycles:
Open Discussion or Questions? Raise your hand to be recognized
For a while, I have been wanting to write about MAAS and how it can easily deploy workloads (especially OpenStack) with Juju, and the time has finally come. This will be the first post in a series where I'll provide an overview of how to quickly get started with MAAS and Juju.
What is MAAS?
I think that MAAS does not require an introduction, but if people really need one, this awesome video will provide a far better explanation than the one I can give in this blog post.
Components and Architecture
MAAS has been designed in such a way that it can be deployed in different architectures and network environments. MAAS can be deployed in either a Single-Node or a Multi-Node architecture. This allows MAAS to be a scalable deployment system that meets your needs. It has two basic components, the MAAS Region Controller and the MAAS Cluster Controller.
The MAAS Region Controller is the component that users interface with, and it is the one that controls the Cluster Controllers. It is where the WebUI and API live. The Region Controller is also where the MAAS meta-data server for cloud-init runs, as well as the DNS server. The Region Controller also configures a rsyslogd server to log the installation process, and a proxy (squid-deb-proxy) that is used to cache the Debian packages. The preseeds used for the different stages of the process are also stored here.
The MAAS Cluster Controller only interfaces with the Region Controller and is in charge of provisioning in general. The Cluster Controller is where the TFTP and DHCP server(s) are located, and where both the PXE files and ephemeral images are stored. It is also the Cluster Controller's job to power the managed nodes on and off (if configured to do so).
As you can see in the image above, MAAS can be deployed on either a single node or multiple nodes. The way MAAS has been designed makes it highly scalable, allowing you to add more Cluster Controllers, each managing a different pool of machines. A single-node scenario can become a multi-node scenario by simply adding more Cluster Controllers.
How Does It Work?
MAAS has 3 basic stages. These are Enlistment, Commissioning and Deployment which are explained below:
The enlistment process is the process by which a new machine is registered with MAAS. When a new machine is started, it will obtain an IP address and PXE boot from the MAAS Cluster Controller. The PXE boot process will instruct the machine to load an ephemeral image that will run and perform an initial discovery process (via a preseed fed to cloud-init). This discovery process will obtain basic information such as network interfaces, MAC addresses and the machine's architecture. Once this information is gathered, a request to register the machine is made to the MAAS Region Controller. Once this happens, the machine will appear in MAAS in the Declared state.
The commissioning process is the process where MAAS collects hardware information, such as the number of CPU cores, RAM, disk size, etc., which can later be used as constraints. Once the machine has been enlisted (Declared state), the machine must be accepted into MAAS in order for the commissioning process to begin and for it to become ready for deployment. For example, in the WebUI an "Accept & Commission" button will be present. Once the machine gets accepted into MAAS, it will PXE boot from the MAAS Cluster Controller and will be instructed to run the same ephemeral image (again). This time, however, the commissioning process will be instructed to gather more information about the machine, which will be sent back to the MAAS Region Controller (via cloud-init, from the MAAS meta-data server). Once this process has finished, the machine information will be updated and it will change to the Ready state. This status means that the machine is ready for deployment.
Once the machines are in the Ready state, they can be used for deployment. Deployment can happen with either juju or the maas-cli (or even the WebUI). The maas-cli will only allow you to install Ubuntu on the machine, while juju will not only allow you to deploy Ubuntu on it, but will also let you orchestrate services. When a machine has been deployed, its state will change to Allocated to <user>. This state means that the machine is in use by the user who requested its deployment.
Once a user doesn’t need the machine anymore, it can be released and its status will change from Allocated to <user> back to Ready. This means that the machine will be turned off and will be made available for later use.
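To make the lifecycle above a little more concrete, here is a rough sketch of driving it over the MAAS 1.0 REST API with the python-maas-client bindings. The server URL and API key are placeholders, and the operation names ('list', 'acquire', 'start', 'release') should be double-checked against the API documentation for your MAAS version:

    import json

    from apiclient.maas_client import MAASClient, MAASDispatcher, MAASOAuth

    API_URL = 'http://maas.example.com/MAAS/api/1.0/'      # placeholder Region Controller URL
    API_KEY = '<consumer_key>:<token_key>:<token_secret>'  # from your MAAS preferences page

    client = MAASClient(MAASOAuth(*API_KEY.split(':')), MAASDispatcher(), API_URL)

    # Nodes that have finished commissioning show up in the Ready state.
    nodes = json.loads(client.get('nodes/', 'list').read())
    for node in nodes:
        print('%s -> status %s' % (node['system_id'], node['status']))

    # Acquire a Ready node (it becomes "Allocated to <user>") and power it on.
    node = json.loads(client.post('nodes/', 'acquire').read())
    client.post('nodes/%s/' % node['system_id'], 'start')

    # When you are done with it, release it so it goes back to Ready.
    client.post('nodes/%s/' % node['system_id'], 'release')

This is essentially the same dance juju performs for you behind the scenes when you bootstrap and deploy against a MAAS environment.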
But… How do Machines Turn On/Off?
Now, you might be wondering how the machines are turned on and off, or who is in charge of that. MAAS can manage power devices, such as IPMI/iLO, Sentry Switch CDUs, or even virsh. By default, we expect that all the machines being controlled by MAAS have IPMI/iLO cards. So if your machines do, MAAS will attempt to auto-detect and auto-configure your IPMI/iLO cards during the Enlistment and Commissioning processes. Once the machines are accepted into MAAS (after enlistment) they will be turned on automatically and they will be commissioned (that is, if IPMI was discovered and configured correctly). This also means that every time a machine is deployed, it will be turned on automatically.
Note that MAAS not only handles physical machines, it can also handle Virtual Machines, hence the virsh power management type. However, you will have to manually configure the details in order for MAAS to manage these virtual machines and turn them on/off automatically.
When doing Openmoko hacking, one always first plugged in the USB cable and forwarded networking over it, or, as I did later, forwarded networking over Bluetooth. That was mostly because WiFi was quite unstable with many of the kernels.
I recently found myself using a chroot on a Nexus 4 without working WiFi, so instead of my usual WiFi usage I needed networking over USB... trivial, of course, except that there's Android in the way and I'm an Android newbie. Thanks to ZDmitry on Freenode, I got the bits for the Android part, so I got it working.
On the device, have e.g. data/usb.sh with the following contents:
ip addr add 192.168.137.2/30 dev usb0
ip link set usb0 up
ip route delete default
ip route add default via 192.168.137.1;
setprop net.dns1 184.108.40.206
echo 'nameserver 220.127.116.11' >> $CHROOT/run/resolvconf/resolv.conf
On the host, execute the following:
adb shell setprop sys.usb.config rndis,adb
adb shell data/usb.sh
sudo ifconfig usb0 192.168.137.1
sudo iptables -A POSTROUTING -t nat -j MASQUERADE -s 192.168.137.0/24
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -P FORWARD ACCEPT

This works at least with an Ubuntu saucy chroot. The main difference in some other distro might be whether resolv.conf has moved to /run or not. You should now be all set up to browse / apt-get stuff from the device again.