Feed aggregator

Charles Profitt: Ubuntu Community Health

Planet Ubuntu - Sat, 11/15/2014 - 10:01

Recently Jono Bacon, Senior Director of Community at the XPRIZE Foundation, talked about an Ubuntu Governance reboot. In his blog post he questioned the “purpose and effectiveness” of the governance structure; specifically the Community Council and Technical Board.

Ubuntu governance has, as a general rule, been fairly reactive. In other words, items are added to a governance meeting by members of the community and the boards sit, review the topic, discuss it, and in some cases vote. In this regard I consider this method of governance not really leadership, but instead idea, policy, and conflict arbitration.

Let us look at the word governance:

Governance:
noun
1. government; exercise of authority; control.
2. a method or system of government or management.

What Jono described fits the definition. The Ubuntu Governance structure is exercising authority and control and trying to manage a community. Jono notices that ‘leadership’ is missing, but by definition that is not part of governance.

What saddens me is that when I see some of these meetings, much of the discussion seems to focus on paperwork and administrivia, and many of the same topics pop up over and over again. With no offense meant to the members of these boards, these meetings are rarely inspirational and rarely challenge the status quo of the community. In fact, from my experience, challenging the status quo with some of these boards has invariably been met with reluctance to explore, experiment, and try new ideas, and instead a drive to continue to enforce and protect existing procedures. Sadly, the result of this is more bureaucracy than I feel comfortable with.

I can understand what Jono is saying in this quote, as I have experienced putting forth ideas that I believed would provide transformational change leading to a better community. Oddly, Jono was one of the people who resisted the idea and showed a reluctance to ‘explore, experiment and try new ideas’. My purpose here is not to challenge Jono’s observations, but to point out that with the presentation of any ‘great idea’ there are two perspectives. If you believe the idea is a poor one and will not help the community, you are not being reluctant, but prudent.

As a person who has both challenged the Ubuntu Governance structure and been a member of two councils, I can tell you that my perspective changed once I was sitting on a council. The vast majority of ‘disputes’ I was part of resolving involved two parties that had not come to a fundamental agreement that there was a problem to be fixed. Every potential change was painfully examined to ensure that it had a high chance of improving the community and a low chance of causing damage. Often, multiple effect paths were explored that no one had envisioned when the change was first proposed. As a member of the Community Council I am much more cautious, because I know the decisions that I help to make can have unintended consequences. I feel it is my duty to consider things carefully and not ‘leap to conclusions’.

I also appreciate the cultural differences, such as the fact that many people from Europe do not truly appreciate how large Texas is or how spread out Alaska is. On the flip side, not many Americans understand some of the regional issues in Europe; they are unaware of the independence votes in Catalonia or Scotland. These differences, and others, make it challenging for the people who sit on Ubuntu Governance boards. Suggested changes that solve a problem for one group may create more problems for another.

I believe we need to transform and empower these governance boards to be inspirational vessels that our wider community looks to for guidance and leadership, not for paper-shuffling and administrivia.

I agree with Jono that there is a need for leadership and inspiration. I felt a malaise slip over a large portion of the existing Ubuntu community when Canonical’s focus shifted to the Ubuntu phone. I think a significant portion of the community feels at odds with Canonical’s direction, as evidenced by some of the recent tension with Kubuntu and discussions about copyright and trademark.

I think part of the issue is that the community has primarily looked to Canonical employees (Jono and Mark) for inspiration and leadership. Another issue is that the current Ubuntu Governance depends on Canonical to provide answers to a great many questions. For example, Mark promised that Canonical was going to publish clarifications on trademark, copyright and patent agreements. In June the Community Council was asked for an update and sent a quick message to Canonical asking for one; it received confirmation that Canonical was working on an update. Each month the Community Council has reached out to the same contact, and the only information we have is that they are working on it and do not have an estimate as to completion. It is difficult to provide leadership or inspire when there is no way to get better information than ‘trust us, we are working on it’. This particular issue has great importance to the community, and while I understand that the current Community Council does not have the legal background to craft an official statement, I do think it is reasonable that we should be able to see the work in progress and be involved in crafting the clarifications.

We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success.

This statement raises a few questions for me:

  • Do the Community Council and Technical Board require change, or should there be a different structure for leadership and inspiration?
  • Is the current environment, produced by the relationship between Canonical and the community, conducive to fostering inspirational leaders?
  • Are there issues with the way Ubuntu events are taking place that inhibit or discourage the community?
  • Does the press cover the community or just Canonical?

Change in Structure:
Governance is not leadership. I do not think the need for governance and arbitration will go away, so one should consider whether a single group should lead, inspire, and judge all at once. As an example, think of government structures where there is separation of powers (executive, judiciary, legislative). I do not have an answer, but I think it should be considered and discussed.

Inspirational Leadership:
Do Ubuntu community members have the ability to make inspirational statements that exert leadership? When Mark announces something ‘big and exciting’, it has often been planned and worked on over an extended period of time. The current community leadership often finds out about these announcements at the same time the rest of the community does. The community is also focused on items that are less glamorous, but no less important, like documentation and end user support. (Let us not get hung up on the use of the word user; OK Randall?)

Ubuntu Events:
UDS used to take a great deal of planning and effort when it was both physical and virtual. Now that it is virtual it seems to be less organized and people have less time to plan for the event. Most members of the community would benefit from having more time to plan for involvement with vUDS. Events like the Ubuntu Global Jam need to be designed to be more beneficial and more accessible to local teams. LoCo teams that are comprised of people with school work, jobs and families need time to secure a venue, advertise the event and ensure they have the necessary support to hold a quality event.

Examples of Press Coverage:

Headline: Canonical Drops Ubuntu 14.10 Dedicated Images for Apple Hardware
Body: The Ubuntu devs marked this interesting evolution in the official announcement for Ubuntu 14.10, but it went largely unnoticed.

Was the community involved in this decision? Was there technical leadership from the community involved? I do not know the answers to those questions, but this does illustrate how press coverage can impact how people perceive things.

Moving Forward:
The first step is to agree there is an issue; once there is agreement on that, work towards a solution. You cannot jump to a solution without agreeing on the issue first. If you would like to help lead change in the Ubuntu Community, please add your thoughts to the ongoing discussion on the Ubuntu Community Team email list. Let us all focus on positive outcomes and actions over words without action.


Jo Shields: mono-project.com Linux packages – an update

Planet Ubuntu - Sat, 11/15/2014 - 09:21

It’s been pointed out to me that many people aren’t aware of the current status of Linux packages on mono-project.com, so here’s a summary:

Stable packages

Mono 3.10.0, MonoDevelop 5.5.0.227, NuGet 2.8.1 and F# 3.1.1.26 packages are available. Plus related bits. MonoDevelop on Linux does not currently include the F# addin (there are a lot of pieces to get in place for this to work).

These are built for x86-64 CentOS 7, and should be compatible with RHEL 7, openSUSE 12.3, and derivatives. I haven’t set up a SUSE 1-click install file yet, but I’ll do it next week if someone reminds me.

They are also built for Debian 7 – on i386, x86-64, and IBM zSeries processors. The same packages ought to work on Ubuntu 12.04 and above, and any derivatives of Debian or Ubuntu. Due to ABI changes, you need to add a second compatibility extension repository for Ubuntu 12.04 or 12.10 to get anything to work, and a different compatibility extension repository for Debian derivatives with Apache 2.4 if you want the mod-mono ASP.NET Apache module (Debian 8+, Ubuntu 13.10+, and derivatives, will need this).

MonoDevelop 5.5 on Ubuntu 14.04

In general, see the install guide to get these going.
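For Debian and Ubuntu, the setup at the time boiled down to trusting the Xamarin package signing key and adding the repository. A rough sketch follows; the key ID and repository URL are taken from the 2014-era mono-project install guide and should be verified against the current guide before use (it requires root and network access):

```shell
# Import the Xamarin/Mono package signing key
# (key ID from the 2014-era install guide; verify before use)
sudo apt-key adv --keyserver keyserver.ubuntu.com \
    --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF

# Add the main Mono repository (the Debian "wheezy" packages
# also target Ubuntu 12.04 and above)
echo "deb http://download.mono-project.com/repo/debian wheezy main" \
    | sudo tee /etc/apt/sources.list.d/mono-xamarin.list

sudo apt-get update
sudo apt-get install mono-complete monodevelop
```

The compatibility extension repositories mentioned above for Ubuntu 12.04/12.10 and Apache 2.4 systems are added the same way, as extra `deb` lines from the install guide.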

Docker

You may have seen Microsoft recently posting a guide to using ASP.NET 5 on Docker. Close inspection shows that this Docker image is based on our shiny new Xamarin Mono Docker image, which is based on Debian 7. The full details are on Docker Hub, but the short version is that “docker pull mono:latest” gets you an image with the very latest Mono.

directhex@desire:~$ docker pull mono:latest
Pulling repository mono
9da8fc8d2ff5: Download complete
511136ea3c5a: Download complete
f10807909bc5: Download complete
f6fab3b798be: Download complete
3c43ebb7883b: Download complete
7a1f8e485667: Download complete
a342319da8ea: Download complete
3774d7ea06a6: Download complete
directhex@desire:~$ docker run -i -t mono:latest mono --version
Mono JIT compiler version 3.10.0 (tarball Wed Nov 5 12:50:04 UTC 2014)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
	TLS:           __thread
	SIGSEGV:       altstack
	Notifications: epoll
	Architecture:  amd64
	Disabled:      none
	Misc:          softdebug
	LLVM:          supported, not enabled.
	GC:            sgen

The Dockerfiles are on GitHub.
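As a sketch of how you might build on this image, here is a minimal Dockerfile. It is illustrative only, not from the official image docs: the Hello.cs file is something you would supply yourself, and it assumes the mcs compiler ships in the image:

```dockerfile
# Illustrative sketch: compile and run a small C# program
# on top of the official mono image.
FROM mono:latest
WORKDIR /app
# Hello.cs is a file you supply alongside this Dockerfile
COPY Hello.cs .
# mcs is the Mono C# compiler (assumed present in the image)
RUN mcs Hello.cs -out:Hello.exe
CMD ["mono", "Hello.exe"]
```

Build with `docker build -t hello-mono .` and run with `docker run hello-mono`.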

Ronnie Tucker: Canonical Drops Ubuntu 14.10 Dedicated Images for Apple Hardware

Planet Ubuntu - Sat, 11/15/2014 - 00:16

Ubuntu 14.10 (Utopic Unicorn) has been available for a couple of weeks and the reception has been positive for the most part, but there is one small piece of interesting information that didn’t get revealed. It looks like the Ubuntu devs no longer need to build specific images for Apple hardware.

Many Ubuntu users will remember that, until the launch of Ubuntu 14.10, there was an image of the OS available labeled amd64+mac, which was technically aimed at Apple hardware.

The Ubuntu devs marked this interesting evolution in the official announcement for Ubuntu 14.10, but it went largely unnoticed.

Source:

http://linux.softpedia.com/blog/Canonical-Drops-Ubuntu-14-10-Dedicated-Images-for-Apple-Hardware-464174.shtml

Submitted by: Silviu Stahie

Benjamin Kerensa: Ubuntu Governance: Empower It

Planet Ubuntu - Fri, 11/14/2014 - 21:25

I was really saddened to see Jono Bacon’s post today, because it really seems like he still doesn’t get the Ubuntu Community that he managed for years. In fact, the things he is talking about are problems that the Community Council and Governance Boards really have no influence over, because Canonical and Mark Shuttleworth limit the Community’s ability to participate in those kinds of issues.

As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.

We need Canonical to start caring about the Community again and investing in things like a physical Ubuntu Developer Summit, where contributors can come together for a really valuable event, do work, and build relationships that really cannot be built over Google Hangouts or IRC alone.

We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone.

If this is what we need, then Canonical and Mark need to make it so Community Members and Ubuntu Governance have some real say in the project. Sure, right now the Governance Boards can give advice to Canonical or Mark but it should be more than advice. There should be a scenario where the Contributors and Governance are stakeholders.

I will add that one Ubuntu Community Council member’s remark to Jono on IRC about his blog post really made the most sense:

the board have no power to be inspirational and forging new directions, Canonical does

I really like that this council member spoke up on this and I agree with that assessment of things.

I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing; this is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter. We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success. This, I believe, will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.

Honestly, if this is the way Jono felt, then I think he should have been going to bat for the Community and Ubuntu Governance when he was Community Manager. Right now the Community and Governance cannot be inspirational leaders, because Canonical controls the future of Ubuntu, and the Community Council, Governance Boards, and Ubuntu Members have very little say in the direction of the project.

I encourage folks to go read Jono’s post and share your thoughts with him, but also read the comments on his blog post from current and former members of Ubuntu’s Governance and contributors to Ubuntu. In closing, I would like to applaud the work of the current and former Community Councils and Governance Boards: you all do great work!

Randall Ross: On Building Intentional Culture, With Words - A Small Refinement

Planet Ubuntu - Fri, 11/14/2014 - 18:17

Earlier, I wrote about how words shape our thoughts, and our culture. (You can read the original post here: http://randall.executiv.es/words-build-culture)

In that post, I introduced a graphic that I have since needed to revise. After further thought, I realized that there are not only words that originated in the "Dark Ages of Computing" but also ones that are rooted in the "Good Old Days" of Ubuntu. Those days of yore when the project was smaller, simpler, and less diverse.

Here it is:

Words from the "Good Old Days of Ubuntu" are also worthy of a firewall. Those words (or phrases) have either lost their original meaning, have become irrelevant, or have been subverted over time. In some cases they were just bad choices in the first place. So, let's leave them in the past too.

Here are some examples:

  • loco
  • ubuntah
  • linux for humans
  • distro
  • newbies

Do you have suggestions for others? I'm happy to add to the list.

Svetlana Belkin: UOS 14.11

Planet Ubuntu - Fri, 11/14/2014 - 12:26

The Ubuntu Online Summit (UOS) 14.11 took place during November 12–14, 2014.  I didn’t go to as many sessions as I did last time, which made it less tiring, and I also had classes that I had to go to.  The only session that I really focused on was the one that I led myself, the Ubuntu Women Vivid Goals session.  I posted the summary HERE.

The track summaries video:

I have learned two lessons during this one:

  • Test hardware before the first session if you want to be in a Hangout.  My netbook wasn’t ready for doing Hangouts.
  • If no one can do the Hangout or host it, doing the session in IRC only is allowed.

Randall Ross: On Building Intentional Culture, With Words

Planet Ubuntu - Fri, 11/14/2014 - 12:20

Our languages and the words they contain define us.

You don't have to believe me. You can go and convince yourself first. Here's an excerpt:

New cognitive research suggests that language profoundly influences the way people see the world...

  • Russian speakers, who have more words for light and dark blues, are better able to visually discriminate shades of blue.
  • Some indigenous tribes say north, south, east and west, rather than left and right, and as a consequence have great spatial orientation.
  • The Piraha, whose language eschews number words in favor of terms like few and many, are not able to keep track of exact quantities.
  • In one study, Spanish and Japanese speakers couldn't remember the agents of accidental events as adeptly as English speakers could. Why? In Spanish and Japanese, the agent of causality is dropped: "The vase broke itself," rather than "John broke the vase."

So where are you going with this, Randall?...

I blogged about my strong distaste for the term "user" a few days back, and it generated a lively discussion (see the comments). It also triggered some further thinking and I have now realized that my initial post was just the tip of a large iceberg. Please allow me to describe what lurks beneath the water line.

We're building something new with Ubuntu. We're building a participatory culture adjacent to a place (the computer industry) that has been the antithesis of participatory. Think parched desert: a place where inclusiveness is forbidden. If that industry were to include all humans, it would break their business model. You see, the old model requires that more than 90% of humans be "obedient subjects" and "consumers". I call this the "Dark Ages of Computing".

Remember Mark's question and answer session this week at the Ubuntu Online Summit (UOS)? He opened with and emphasized these points:

  • We are a project for human beings, and that's a strong part of our ethos.
  • Ubuntu benefits our communities.
  • People care about helping humanity get over its challenges and griefs.

That's exactly what I admire about Ubuntu, and about Mark.

Yet, as we try to build this new world some of us are bringing elements of a language that forbids, or at least inhibits the realization of a dream. Words leak in.

So you might be asking, "What's to be done?" Here is my proposal:

The above diagram is meant to represent a flow (or transition) from the old to the new. See that block in the middle? That’s a wall, a firewall to be precise. Imagine the language (words) from the "Dark Ages of Computing" (the cloud on the left) trying to get to the world we are trying to build with Ubuntu (the cloud on the right). Think of the wall as the thing that keeps the language of the past firmly in the past: words that at best are no longer useful, and at worst are harmful. Think of that wall as one that can help you select words that help build Ubuntu.

So, what words are part of the language of the past? Here is my initial list:

  • user
  • consumer
  • permission
  • unapproved
  • linux (in certain contexts)

(Don’t worry, I have many, many more… I’ll share them soon. I may even pick on a few of them.)

As you talk about or write about Ubuntu, I hope that you will always remember my drawing. Are the words that you are using today helping or hurting the world that Ubuntu is trying to build?

Did you come from the cloud on the left? Don't feel bad. Many of us did.

But please, for the love of humanity, it's time to leave that world and those words behind. We are not there any more. Let's let words from the dark ages remain there.

Ted Gould: Tracking Usage

Planet Ubuntu - Fri, 11/14/2014 - 11:35

One of the long standing goals of Unity has been to provide an application focused presentation of the desktop. Under X11 this proves tricky as anyone can connect into X and doesn't necessarily have to give information on what applications they're associated with. So we wrote BAMF, which does a pretty good job of matching windows to applications, but it could never be perfect because there simply wasn't enough information available. When we started to rethink the world assuming a non-X11 display server we knew there was one thing we really wanted, to never ever have something like BAMF again.

This meant designing, from startup to shutdown, a complete tracking of an application before it started creating windows in the display server. We were then able to use the same mechanisms to create a consistent and secure environment for the applications. This is good for both developers and users, as applications start in a predictable way each and every time they’re started. We also set up the per-application AppArmor confinement that the application lives in.

Enough backstory. What’s really important to this blog post is that we also get a reliable event when an application starts and stops. So I wrote a little tool that takes those events out of the log and presents them as usage data. It is cleverly called:

$ ubuntu-app-usage

And it presents a list of all the applications that you've used on the system along with how long you've used them. How long do you spend messing around on the web? Now you know. You're welcome.

It's not perfect in that it uses all the time that you've used the device, it'd be nice to query the last week or the last year to see that data as well. Perhaps even a percentage of time. I might add those little things in the future, if you're interested you can beat me too it.

Svetlana Belkin: Tip: Inviting People on Google Hangouts

Planet Ubuntu - Fri, 11/14/2014 - 11:22

There are three main ways to invite people into a Google Hangout and Google Hangout on Air: e-mail invite, invite via link, or invite within the Hangout.  I will be talking about the third one.  There are some people that only have a phone or a tablet and doing the other two ways doesn’t really work in my experience. But the third way works!

It’s easy to do when you are in a Hangout, and it can be done by anyone in it, not just the host.  There is an “add person” button alongside the controls to (un)mute the mic, turn the cam on or off, etc., as in the screenshot below:

 


Jono Bacon: Ubuntu Governance: Reboot?

Planet Ubuntu - Fri, 11/14/2014 - 11:16

For many years Ubuntu has had a comprehensive governance structure. At the top of the tree are the Community Council (community policy) and the Technical Board (technical policy).

Below those boards are sub-councils such as the IRC, Forum, and LoCo councils, and developer assessment boards.

The vast majority of these boards are populated by predominantly non-Canonical folks. I think this is a true testament to the openness and accessibility of governance in Ubuntu. There is no “Canonical needs to have people on half the board” shenanigans…if you are a good leader in the Ubuntu community, you could be on these boards if you work hard.

So, no-one is denying the openness of these boards, and I don’t question the intentions or focus of the people who join and operate them. They are good people who act in the best interests of Ubuntu.

What I do question is the purpose and effectiveness of these boards.

Let me explain.

From my experience, the charter and role of these boards has remained largely unchanged. The Community Council, for example, is largely doing much of the same work it did back in 2006, albeit with some responsibility delegated elsewhere.

Over the years though Ubuntu has changed, not just in terms of the product, but also the community. Ubuntu is no longer just platform contributors, but there are app and charm developers, a delicate balance between Canonical and community strategic direction, and a different market and world in which we operate.

Ubuntu governance has, as a general rule, been fairly reactive. In other words, items are added to a governance meeting by members of the community and the boards sit, review the topic, discuss it, and in some cases vote. In this regard I consider this method of governance not really leadership, but instead idea, policy, and conflict arbitration.

What saddens me is that when I see some of these meetings, much of the discussion seems to focus on paperwork and administrivia, and many of the same topics pop up over and over again. With no offense meant to the members of these boards, these meetings are rarely inspirational and rarely challenge the status quo of the community. In fact, from my experience, challenging the status quo with some of these boards has invariably been met with reluctance to explore, experiment, and try new ideas, and instead a drive to continue to enforce and protect existing procedures. Sadly, the result of this is more bureaucracy than I feel comfortable with.

Ubuntu is at a critical point in its history. Just look at the opportunity: we have a convergent platform that will run across phones, tablets, desktops and elsewhere, with a powerful SDK, secure application isolation, and an incredible developer community forming. We have a stunning cloud orchestration platform that spans all the major clouds, making the ability to spin up large or small scale services a cinch. In every part of this the code is open and accessible, with a strong focus on quality.

This is fucking awesome.

The opportunity is stunning, not just for Ubuntu but also for technology freedom.

Just think of how many millions of people can be empowered with this work. Kids can educate themselves, businesses can prosper, communities can form, all on a strong, accessible base of open technology.

Ubuntu is innovating on multiple fronts, and we have one of the greatest communities in the world at the core. The passion and motivation in the community is there, but it is untapped.

Our inspirational leader has typically been Mark Shuttleworth, but he is busy flying around the world working hard to move the needle forward. He doesn’t always have the time to inspire our community on a regular basis, and it is sorely missing.

As such, we need to look to our leadership…the Community Council, the Technical Board, and the sub-councils for inspiration and leadership.

I believe we need to transform and empower these governance boards to be inspirational vessels that our wider community looks to for guidance and leadership, not for paper-shuffling and administrivia.

We need these boards to not be reactive but to be proactive…to constantly observe the landscape of the Ubuntu community…the opportunities and the challenges, and to proactively capitalize on protecting the community from risk while opening up opportunity to everyone. This will make our community stronger, more empowered, and have that important dose of inspiration that is so critical to focus our family on the most important reasons why we do this: to build a world of technology freedom across the client and the cloud, underlined by a passionate community.

To achieve this will require awkward and uncomfortable change. It will require a discussion to happen to modify the charter and purpose of these boards. It will mean that some people on the current boards will not be the right people for the new charter.

I do though think this is important and responsible work for the Ubuntu community to be successful: if we don’t do this, I worry that the community will slowly degrade from lack of inspiration and empowerment, and our wider mission and opportunity will be harmed.

I am sure this post may offend some members of these boards, but it is not meant to. This is not a reflection of the current staffing; this is a reflection of the charter and purpose of these boards. Our current board members do excellent work with good and strong intentions, but within that current charter.

We need to change that charter though, staff appropriately, and build an inspirational network of leaders that sets everyone in this great community up for success.

This, I believe, will transform Ubuntu into a new world of potential, a level of potential I have always passionately believed in.

I have kicked off a discussion on ubuntu-community-team where we can discuss this. Please share your thoughts and solutions there!

Eric Hammond: AWS Lambda Walkthrough Command Line Companion

Planet Ubuntu - Fri, 11/14/2014 - 11:15

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.
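If you haven’t set up such a profile yet, a minimal sketch follows; the profile name is illustrative, and `aws configure` will prompt interactively for the access key, secret key, and default region:

```shell
# Illustrative: create a named admin profile for aws-cli
aws configure --profile lambda-admin

# Then make it the default for the walkthrough commands below
# (AWS_DEFAULT_PROFILE was the env var aws-cli honored at the time)
export AWS_DEFAULT_PROFILE=lambda-admin
```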

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -O HappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

Create the IAM role that will be used by the Lambda function when it runs:

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": { "Service": "lambda.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
  }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

Specify what the Lambda function is allowed to do and access. This policy is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [ "logs:*" ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [ "s3:GetObject" ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [ "s3:PutObject" ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{
  "Records":[
    {
      "eventVersion":"2.0",
      "eventSource":"aws:s3",
      "awsRegion":"us-east-1",
      "eventTime":"1970-01-01T00:00:00.000Z",
      "eventName":"ObjectCreated:Put",
      "userIdentity":{
        "principalId":"AIDAJDPLRKLG7UEXAMPLE"
      },
      "requestParameters":{
        "sourceIPAddress":"127.0.0.1"
      },
      "responseElements":{
        "x-amz-request-id":"C3D13FE58DE4C810",
        "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
      },
      "s3":{
        "s3SchemaVersion":"1.0",
        "configurationId":"testConfigRule",
        "bucket":{
          "name":"$source_bucket",
          "ownerIdentity":{
            "principalId":"A3NL1KOZZKExample"
          },
          "arn":"arn:aws:s3:::$source_bucket"
        },
        "object":{
          "key":"HappyFace.jpg",
          "size":1024,
          "eTag":"d41d8cd98f00b204e9800998ecf8427e",
          "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
        }
      }
    }
  ]
}
EOM
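Before invoking the function, it can be worth confirming locally that the generated event file is valid JSON and that the shell actually expanded the bucket variable. This is a sketch, not part of the walkthrough; it builds a minimal stand-in event with assumed variable values and validates it with Python's stdlib (assumes `python3` is available):

```shell
# Hypothetical sanity check: generate a minimal event the same way
# (unquoted heredoc so variables expand) and validate the result.
source_bucket=alestic-lambda-example   # assumed value from earlier setup
function=CreateThumbnail               # assumed value from earlier setup
cat > "$function-data.json" <<EOM
{ "Records": [ { "s3": { "bucket": { "name": "$source_bucket" },
                         "object": { "key": "HappyFace.jpg" } } } ] }
EOM
# json.tool exits non-zero on malformed JSON
python3 -m json.tool < "$function-data.json" > /dev/null && echo "event JSON OK"
```

The same check applies unchanged to the full event file above.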

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"

for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

Create the IAM role that may be assumed by S3:

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

Grant the role permission to invoke the Lambda function:

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [ "lambda:InvokeFunction" ],
        "Resource": [ "*" ]
      }
    ]
  }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

Original article: http://alestic.com/2014/11/aws-lambda-cli

Chuck Short: nova-compute-flex: Introduction and getting started

Planet Ubuntu - Fri, 11/14/2014 - 09:45
What is nova-compute-flex?

For the past couple of months I have been working on an OpenStack PoC called nova-compute-flex. Nova-compute-flex allows you to run native LXC containers using the python-lxc bindings to liblxc. It creates small, fast, and reliable LXC containers on OpenStack. The main features of nova-compute-flex are the following:

  • Secure by default (unprivileged containers, apparmor, etc)
  • LXC 1.0.x
  • python-lxc (python2 version)
  • Uses btrfs for instance creation.

Nova-compute-flex (n-c-flex) is a new way of running native LXC containers on OpenStack. It is currently designed with Juno in mind, since Juno is the latest release of OpenStack. This tutorial for getting nova-compute-flex up and running assumes that you are using the Ubuntu 14.04 release and running devstack on it.

How does n-c-flex work?

N-c-flex works the same way as the other virt drivers in OpenStack: it stops and starts containers, uses neutron for networking, and so on. However, it does not use qcow2 or raw images; it uses an image format that we call “root-tar”.

“Root-tar” images are simply a tarball of the container, similar to the ubuntu-cloud templates in LXC. They are relatively small and contain just enough to get an LXC container running. These images are published by Ubuntu as well, and they can be found here. If you wish to use other distros, you can simply tar up the directories found on a given qcow2 image, or use the templates found in LXC. It's just that simple.
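As a concrete sketch of what “tar up the directories” means, here is a hypothetical root-tar built from a stand-in directory tree (rootfs/ stands in for the contents of your extracted qcow2 image; the file names are assumptions for illustration):

```shell
# Build a "root-tar" style tarball from an existing root filesystem tree.
# rootfs/ is a hypothetical stand-in for an extracted qcow2 image.
mkdir -p rootfs/etc rootfs/bin
echo "demo-container" > rootfs/etc/hostname
# -C rootfs roots the archive at "." so it unpacks as a filesystem
tar -C rootfs -czf my-root.tar.gz .
# List the archive contents to confirm the layout
tar -tzf my-root.tar.gz
```

The real images from Ubuntu follow the same shape, just with a complete root filesystem inside.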

The way that nova-compute-flex works is the following:

  1. Download the tar ball from the glance server.
  2. Create a btrfs snapshot.
  3. Use lxc-usernsexec to un-tar the tar ball into the snapshot.
  4. When the instance starts create a copy of the snapshot.
  5. Create the LXC configuration files.
  6. Create the network for the container.
  7. Start the container.

Creating a new instance takes just seconds, since it only copies the btrfs snapshot that was made when the image was downloaded from the glance server.

When the instance is created, the container is an unprivileged LXC container. This means that nova-compute-flex uses user namespaces, with apparmor enabled if you are using Ubuntu. The instance behaves like a container, but it looks and feels like a normal OpenStack instance.

Getting Started with n-c-flex

This assumes that you already have btrfs-tools installed and that you don't have a free partition. You will need to create the instances directory where your n-c-flex instances are going to live. To do that, simply run the following:


dd if=/dev/zero of=<name of your large file> bs=1024k count=2000
sudo mkfs.btrfs <name of your large file>
sudo mount -t btrfs -o user_subvol_rm_allowed <name of your large file> <mount point>

To make the changes permanent, modify your /etc/fstab accordingly.
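A hypothetical /etc/fstab line matching the mount above (the file path and mount point are placeholders; adjust them to your own):

```
/home/user/flex-instances.img  /opt/stack/data  btrfs  loop,user_subvol_rm_allowed  0  0
```

The `loop` option is what lets a plain file be mounted as a block device at boot.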

Installing devstack and n-c-flex

In your “/opt” directory,  run the following commands:


mkdir -p /opt/stack
git clone https://github.com/zulcss/nova-compute-flex
git clone https://github.com/openstack-dev/devstack
/opt/stack/nova-compute-flex/contrib/devstack/prepare_devstack.sh

This will prepare your devstack to install software, such as LXC, that has been backported to the Ubuntu Cloud Archive. The reason for the backport is that some of the features needed by nova-compute-flex are not found in the trusty version of LXC.

After running the above commands you will have the following in your localrc:

virt_type=flex

To make your devstack more useful you should have the following in your localrc as well:


disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

GIT_BASE=https://github.com
DATA_DIR=<mount point>

ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=devstack
SERVICE_TOKEN=token

NOVA_BRANCH=stable/juno
CINDER_BRANCH=stable/juno
GLANCE_BRANCH=stable/juno
HORIZON_BRANCH=stable/juno
KEYSTONE_BRANCH=stable/juno
NEUTRON_BRANCH=stable/juno

This will allow you to use the stable branches of juno with neutron support. After modifying your localrc, you can proceed to install by running the “./stack.sh” script.

Running your first instance

As said before nova-compute-flex uses a different kind of image compared to regular nova. To upload the image to the glance server you have to do the following:


source openrc
wget http://cloud-images.ubuntu.com/utopic/current/utopic-server-cloudimg-amd64-root.tar.gz
glance image-create --name='lxc' --container-format=root-tar --disk-format=root-tar < utopic-server-cloudimg-amd64-root.tar.gz

After uploading the image you can run the regular way of creating instances by either using the python-novaclient tools or the euca2ools.

Looking forward

At the OpenStack Developers Summit last week, Mark Shuttleworth announced lxd (lex-dee). LXD is a container “hypervisor” built on top of the LXC project. LXD is meant to be used for system containers, rather than application containers like Docker.

I will be taking the knowledge we have gained from working on nova-compute-flex and applying it to nova-compute-lxd. LXD will have a REST API for interacting with LXD containers, so nova-compute-lxd will use the lxd API to stop/start containers and provide the other functions one expects to find in Nova. More discussion will take place on the lxc-devel mailing list over the next couple of months.

However, if you want to use nova-compute-flex now, go for it! If you wish to submit patches, the github project can be found at https://github.com/zulcss/nova-compute-flex; it also has an issue tracker where you can submit bugs. The work will be fed back into the nova-compute-lxd project as well.

If you run into road blocks please let me know, and I will be happy to help.


Oli Warner: Python isn't always slower than C

Planet Ubuntu - Fri, 11/14/2014 - 09:08

If you ask developers to describe Python in a few words you'll probably hear easy, dynamic and slow but a recent impromptu game of Code Golf showed me that Python can actually be pretty competitive, even against compiled languages like C and C++ when you use the right interpreter: Pypy.

I use Python for its libraries. Django and friends make building powerful websites really very simple, but I've never considered Python operationally fast. It's not really a requirement for me; as long as I can generate and return a page within 300ms of the request, it's fast enough. That's common of most modern server-side languages.

But yesterday a Unix.SE text-processing question popped up. The problem was fairly simple. Read a file with variable length, numbered lines of DNA sequences:

1 ATTGACTAGCATGCTAGCTAGCTGACGATGCGA
2 GCTGACTGACTAGCTAGCATCGACTG
3 TAGCTGCTAGCTGCTGACTGACTAGCTAGC

And write the DNA part from each line to a file named using the first number, adding a .seq extension.

All the usual suspects (awk, sed and bash loop) were already there making trouble so I decided to add some non-conventional implementations and a benchmark. The hypothesis being that when you're chunking through thousands of lines and making just as many write operations, it helps to stick to one environment and fork out less. Amongst my implementations was one for C and one for Python.
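For anyone wanting to reproduce the benchmark's shape, a synthetic input file can be generated like so. This is a sketch (the original test data isn't published); it uses a fixed DNA string and a smaller line count for brevity:

```shell
# Generate a numbered, tab-separated input file like the one benchmarked.
# 1000 lines here; the benchmark used 100,000.
for i in $(seq 1 1000); do
  printf '%d\tATTGACTAGCATGCTAGCTAGCTGACGATGCGA\n' "$i"
done > infile
wc -l < infile   # prints 1000
```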

Python: It's compact and fairly self explanatory. Open the file, iterate the lines, split the line on whitespace and write accordingly.

with open("infile", "r") as f:
    for line in f:
        id, chunk = line.split()
        with open(id + ".seq", "w") as fw:
            fw.write(chunk)

C: It's been about a decade since I wrote any serious amount of C in anger. And even then it was silly University level coding. This opens and loops but instead of being able to split, we're using strtok to grab tokens from the line. And because C is C, just appending .seq becomes its own little memory reallocation nightmare.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

FILE *fp;
FILE *fpout;
char line[100];
char *id;
char *token;
char *fnout;

int main() {
    fp = fopen("infile", "r");
    while (fgets(line, 100, fp) != NULL) {
        id = strtok(line, "\t");
        token = strtok(NULL, "\t");
        fnout = malloc(strlen(id) + 5);
        strcpy(fnout, id);      /* strcat onto fresh malloc'd memory is */
        strcat(fnout, ".seq");  /* undefined; copy first, then append   */
        fpout = fopen(fnout, "w");
        fprintf(fpout, "%s", token);
        fclose(fpout);
        free(fnout);
    }
    fclose(fp);
    return 0;
}

So is Python or C faster?

C obviously. Over a 100,000 line input file, C was 1.3x faster than Python...
And by Python, I mean CPython.

It was after I'd written all the other implementations that I remembered CPython wasn't the only Python I have installed on my desktop. I have Pypy. This is a highly optimised reimplementation of almost everything in the Python specs. For most people this means it's a drop-in alternative. Back to the benchmark...

Pypy was 1.03x faster than C

For all intents and purposes, the same speed but this is simple-to-write, easy to debug Python running at the speed of C. It's amazing.

I did go on to write a nice C++ option that was slightly faster again and wasn't too bad on the eye but it's still a lot more involved than the Python is. I know I'm obviously biased toward Python but it's something well worth going to as a first choice, especially for simple, scrappy text-processing jobs like this.

And if you already have Python code that you're considering switching to C (modules or full-on), give Pypy a shot first. Worst thing that'll happen is it won't work or it's still not fast enough.

Ronnie Tucker: Dropbox 2.11.34 Experimental Features a Rewritten UI for Linux Client

Planet Ubuntu - Fri, 11/14/2014 - 00:15

Dropbox, a client for an online service that lets you bring all your photos, docs, and videos anywhere, has been promoted to version 2.11.34 for the experimental branch.

The Dropbox developers don’t usually provide too many changes for the Linux platform and the latest update is not all that promising either. In fact, there is nothing specific for Linux, but the branch is an entirely different discussion. This will be a very interesting release when it becomes stable, but until then we can take a closer look at what’s coming.

Source:

http://linux.softpedia.com/blog/Dropbox-2-11-34-Experimental-Features-a-Rewritten-UI-for-Linux-Client-464468.shtml

Submitted by: Silviu Stahie

Bryan Quigley: Adobe Flash on Firefox/Linux EOL – Summary

Planet Ubuntu - Thu, 11/13/2014 - 20:51
We just ran a session [1] on what to do about the upcoming EOL for Flash on Firefox/Linux in 2017. In short, we're not planning to diverge from Mozilla's direction. The goal is to have Flash work today, and to become irrelevant over time, hopefully reaching the point of being irrelevant by 2017. There are ways for you to help! See below.

Distributing Firefox and Chrom/ium plugins now possible

A deal was reached with Adobe to distribute NPAPI and PPAPI Flash in the Canonical Partners repo! (No more grabbing PPAPI from Chrome to get it to work in Chromium. No more "downloader" packages necessary for Firefox either.) This should help make things easier.

How can you help make Flash go away?

On any browser, any platform (that has Flash of course)

Use less Flash.  See if you can do step 1.  If you can proceed to step 2, etc.

  1. Make Flash Click to Play.
  2. Disable Flash.
  3. Uninstall Flash.

To do these on Chrome, browse to chrome://plugins/,  On Firefox go to Add-ons -> Plugins.

If there is a site that doesn't work without Flash, see if you can load its mobile site on a mobile device. Either way, contact them and ask nicely about removing the Flash content to get more hits, or at least about enabling the mobile site for non-Flash users.

Run a Beta browser

Generally both Firefox and Chrome will push new web technologies in their Beta browser.  Many of them have the potential to help make Flash less relevant.   Help make them more stable by testing them!

https://www.google.com/chrome/browser/beta.html

https://www.mozilla.org/en-US/firefox/channel/

Run Firefox Nightly

Try running Firefox Nightly. We could always use more testers. Specifically, we might get a more aggressive Mozilla when MSE is done being implemented (which should make YouTube even more HTML5-video friendly).

Of course, there are a bunch of other useful features Mozilla is working on to make browsing better. Help would be welcome there too! Report bugs on issues you have.

https://nightly.mozilla.org/

[2][3][4]

Other options considered.
  • We default to Chromium  – nope, let’s specifically NOT switch browsers over Flash. 
    • Outcome: That would send the completely wrong message.
  • We default to a compatible Flash alternative (Shumway, Gnash, Lightspark)
    • Outcome: That would just be a stop gap measure.  And we’ll always be playing catchup.
  • We add PPAPI support to Firefox ourselves / Hack it in
    • Outcome:  Non-starter.  Unless Mozilla adds it we don’t want the maintenance burden.
My Todo List
  • Investigate why Youtube Live videos sometimes don’t work without Flash. (Even in Chromium).
  • Figure out why my Nightly install doesn’t have working H264.
    UPDATE – because it’s not designed to yet!  See here – http://andreasgal.com/2014/10/14/openh264-now-in-firefox/
    If you have H264 working in Firefox it’s likely due to GStreamer support included in the Ubuntu (and many other distros) builds.  Upstream Gst1.0 support is waiting on infrastructure [3].

Hopefully I captured everything right.. but if I didn’t please let me know!

[1] http://summit.ubuntu.com/uos-1411/meeting/22354/adobe-flash-on-firefoxlinux-eol/
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1083588 — testers wanted, run Firefox Nightly
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=973274 — have RHEL 6.2 experience? Might be useful there.
[4] https://groups.google.com/forum/#!topic/mozilla.dev.tech.plugins/PK237Yk1oWM — thread discussing how Firefox can be more aggressive against Flash.

Aurélien Gâteau: Lightweight Project Management

Planet Ubuntu - Thu, 11/13/2014 - 14:52

Hi, my name is Aurélien and I have a problem: I start too many side projects. In fact my problem is even worse: I don't plan to stop running them, or creating new ones.

Most of those projects are tools I created to fill a personal need, but a few of them evolved to the point where I believe they can be useful to others. I restrain from talking about them however because I don't have the time to turn them into proper projects: creating a home page for them, doing regular releases and so on. This means they only exist as git repositories and end up staying unknown, unless I bump into someone who could benefit from one of them, at which point I mention the git repository.

Running software from git repositories is not always a great experience though: depending on how a project is managed, upgrading to the latest content can be a frustrating game of hit-and-miss if one cannot rely on the "master" branch being stable. I don't want others to experience random regressions. To address this, I decided that starting today, I will now run such potentially-useful-to-someone-else side-projects using what I am going to pompously call my "Lightweight Project Management Policy":

  • The "master" branch is always stable. You are encouraged to run it.

  • There are no "release" branches and no manually created release archives, but there may be release tags if the need arises.

  • All development happens in the "dev" branch or in topic branches which are then merged into "dev".

  • To avoid regressions, code inside the "dev" branch does not get merged into "master" until it has received at least three days of real usage.

  • The project homepage is the README.md file at the root of the git repository.

  • The policy is mentioned in the README.md.

Any project managed with this policy should thus always be usable as long as you stick with the "master" branch, and it should not take me too much time to keep them alive.
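In day-to-day git terms, the policy above might look like the following sketch (repo and commit names are hypothetical; the identity settings are only there so the example runs in a fresh environment, and `git init -b` assumes git 2.28 or later):

```shell
# Master stays stable; all work happens on dev and is merged back
# only after it has proven itself in real usage.
git init -q -b master demo && cd demo
git config user.email "you@example.com"   # hypothetical identity
git config user.name "You"
git commit -q --allow-empty -m "initial stable state"

git checkout -q -b dev                    # all development happens here
echo "new feature" > feature.txt
git add feature.txt
git commit -q -m "add feature"

# ...after at least three days of real usage on dev...
git checkout -q master
git merge -q --no-ff -m "merge tested dev into master" dev
git log --oneline
```

Anyone following "master" only ever sees the merge, never the in-progress work.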

In the next weeks, I am going to "migrate" some of my projects to this policy. Once they are ready, I'll blog about them. Who knows, maybe you will find some of them useful?

Fabián Rodríguez: Essayer Firefox Hello sous Debian et ses dérivées

Planet Ubuntu - Thu, 11/13/2014 - 14:09

Mozilla recently announced, in collaboration with OpenTok, the availability of Firefox Hello in the Firefox beta currently available for download online.

This is an important announcement, in a context where much online communication depends on proprietary software such as Skype, which we know is restricted, censored, and spied upon without repercussions for its publishers.

Firefox Hello works thanks to WebRTC, defined on Wikipedia as “a software framework with early implementations in different web browsers to enable real-time communication” — real-time audio and video communication right in the web browser.

Here I explain how to install Firefox beta on Debian (I tested on Debian 7 and Debian Testing) so that it coexists with IceWeasel and can be used at the same time, in order to try Firefox Hello. These instructions should also work for Trisquel and Ubuntu, two GNU/Linux distributions derived from Debian. If you use one of these distributions and these instructions are missing a detail, let me know so I can correct them.

32-bit and 64-bit versions of Firefox Beta are available for download from Mozilla.org; in this example I use the 64-bit GNU/Linux version, in French.

Installation
  1. From the directory where the application was downloaded, extract the archive:
    magicfab@lap-x230:~/Téléchargements$ tar -xjf firefox-34.0b8.tar.bz2 -C ~

    • -xjf: x to extract the files, j for the bzip2 format, f to specify the archive name
    • -C ~: extract the files into the home directory

    This command extracts the files to ~/firefox (a firefox directory in your home directory). Best practice on GNU/Linux (per the Linux Standard Base) would be to put this installation under /usr/local/ or /opt, but given the short lifespan of my tests, I preferred my home directory.

  2. Launch the Main Menu application and choose Internet > New item:

  3. Fill in the information to choose the icon and the command to launch:

    The icon is in ~/firefox/browser/icons. The launcher is in ~/firefox, but you will need to add the -P option to create a separate user profile and --no-remote so it can run alongside IceWeasel:
    /home/magicfab/firefox/firefox -P --no-remote

  4. Once the information is complete, click Validate, then Close in the Main Menu application.
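Equivalently, for those who prefer to skip the menu editor, the same launcher can be written by hand as a .desktop file. This is a hypothetical example — the icon filename is an assumption, so check what ~/firefox/browser/icons actually contains:

```
[Desktop Entry]
Type=Application
Name=Firefox Beta
Exec=/home/magicfab/firefox/firefox -P --no-remote
Icon=/home/magicfab/firefox/browser/icons/mozicon128.png
Categories=Network;WebBrowser;
```

Saved under ~/.local/share/applications/, it shows up in the same menu the graphical tool edits.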

On first launch, Firefox Beta will ask which profile to use:

Choose Create a profile to make a new one, called “Firefox Beta” for example. Make sure that profile is selected and click Start Firefox.

Watch out for your personal data!

Firefox Beta sends performance reports and technical data about your usage to Mozilla.

You can enable or disable this behaviour at any time as follows:

  1. Click the menu button and select Preferences
  2. Select the Advanced panel.
  3. Select the Data Choices tab.
  4. Check or uncheck the box next to Enable Telemetry.
  5. Click Close to close the Preferences window

For my part, I leave these options enabled because Firefox Beta is not my default browser, but you can choose whichever behaviour is most appropriate for you.

Adding Firefox Hello to your toolbar

The Firefox Hello button is not visible in the toolbar or in the menu. To add it to the toolbar:

  1. Choose Customize in the menu
  2. Drag the Firefox Hello icon to the toolbar

  3. Click Exit Customize

That's it! You are ready to invite someone to try Firefox Hello!

You can also sign up for Firefox Accounts to manage a contact list, by clicking Sign up or Sign in:

Feel free to contact me if you want to give it a try; during the day (Eastern time) I am often online and available between 10am and 3pm.




 

Ubuntu Women: Ubuntu Women At Ubuntu 14.11 Session Summary

Planet Ubuntu - Thu, 11/13/2014 - 12:35

On Thursday, November 13 2014, the Ubuntu Women Project participated in the Ubuntu Online Summit. These were the topics that were covered:

  • Getting a final version of the Orientation Quiz matrix ready to test
  • Having a sprint to test Harvest Bugs
  • Creating a “Resource List” that will hold projects looking for women and resources for projects looking for women

Thanks to everyone who participated and we’re looking forward to continuing discussions and work on all these items in the coming months.

Blueprint: https://blueprints.launchpad.net/ubuntu-women.org/+spec/community-1411-ubuntuwomen

Logs: http://wiki.ubuntu-women.org/Meetings/13112014

Please note that there is no video for this since those who came decided on doing it all via IRC.

 

 

Randall Ross: Ubuntu Online Summit: Solving Big (Data) Problems With Juju

Planet Ubuntu - Thu, 11/13/2014 - 11:15

Amir Sanjar, our resident and charming big data guy, spoke to all humans today in his "Big Data and Juju" session.

Highlights? Why not?

  • We're generating data with everything we do.
  • The landscape of solutions is complex and becoming more so.
  • Juju vastly simplifies the deployment of big data solutions.
  • Juju extends the sidewalk of solutions, i.e. you can connect other (non-big-data) charms to your solution.
  • Amir presented a big data Charms status report and roadmap.
  • We need more help, especially charmers, to create solutions for missing pieces of the big data puzzle.

Would you like to help solve big (data) problems? The team would love to hear from you.

You can reach out to Amir on his Launchpad page, https://launchpad.net/~asanjar or join the discussion on the Juju mailing list.

You can also contact me. (Consider me your concierge.) I can be reached at randall AT ubuntu DOT com

Check out the whole session here:

https://www.youtube.com/watch?v=8ZJMJ931XHA

--

Banner cc-by-sa by author. Use it. Spread it everywhere.

Oli Warner: It's time we buried Internet Explorer 8

Planet Ubuntu - Thu, 11/13/2014 - 10:09

Every web developer loves to wail on Internet Explorer but we need to act now if we want to stop the history of IE6 repeating itself with IE8. The longer we don't, the longer we agree to limit ourselves to not using new and exciting features that make the internet better and our lives easier.

Internet Explorer 8 is 35 years old! Okay, okay, it's only 5½, but consider that in Internet Years. When IE8 was released in 2009, neither Android nor Twitter were mainstream... and people still found BlackBerrys desirable. In digital terms, IE8 was released a million years ago.

A large part of being a web developer is supporting crappy old browsers. We spend all day with Mozilla and Chrome splashing their fancy new features around in our faces before we remember that we're not allowed to have fun; we need cross-browser support.

Our clients only care about their users buying stuff from them. There's simply no time or will to fight over browsers. And that means we developers have to suck it all the way up and learn to avoid a whole raft of CSS3, SVG and HTML5 features. These are things that could ultimately make websites better and also save us time developing. Some of the cracks can be papered over with Javascript but you still have to raster out your vectors and anything complicated can quickly become an elaborate stack of hacks.

Even remembering that 5 years is forever in tech years, the volume of stuff IE8 doesn't support is astounding. It's not just fancy HTML5+CSS stuff. It doesn't support TLS Server Name Indication which means you need a unique IP for every SSL certificate you deploy. That sort of stuff just makes me furious because IPv4 isn't infinite and its addresses are rationed. Even stupidly simple things like box-shadow aren't supported, and made doubly infuriating because proprietary crap like DXImageTransform.Microsoft.DropShadow exists... That's right, IE8 is able to render a drop-shadow but Microsoft were too arrogant to make it available through [draft] standard CSS.
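For illustration, the contrast described above looks like this in stylesheet terms (a sketch; the class name is made up and exact filter parameters vary):

```
/* Standard CSS3, unsupported by IE8: */
.card { box-shadow: 2px 2px 6px #666; }

/* IE8's proprietary equivalent: */
.card { filter: progid:DXImageTransform.Microsoft.DropShadow(OffX=2, OffY=2, Color='#666666'); }
```

One line of draft-standard CSS versus a vendor-specific filter that no other browser understands.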

I think we've established that IE8 is awful... And yet 5-10% of the Internet is still using it.

If IE9 has been around for 3 years, why hasn't everybody and their dog upgraded already?!

There are a couple of things going on.

IE8 was the last version of IE that Microsoft made for Windows XP, so XP users either stick with IE8, use another browser, or upgrade to Windows Vista, 7 or 8. Even if they know this is the case, some people will just wait for their computers to die of natural causes rather than upgrade. Things lag.

My assumption had always been that all IE8 users were on Windows XP. Windows Vista and 7 users could upgrade to IE9 and IE10 respectively, so they'd never be an issue... But looking at my web statistics, 45% of IE8 users are on Windows 7; they just haven't updated IE. This is all down to Microsoft's awful approach to upgrades. Despite stories to the contrary, if you want to upgrade IE, you have to go into Windows Update and manually select the upgrade. This same stupid "feature" is what left us with IE6 for all those years. People just didn't know they needed to upgrade. So they didn't.

That still leaves 55% of IE8 users down as computing dinosaurs who refuse to upgrade their machines. Since neither XP nor IE8 gets security updates any more, they're bumbling around the internet picking up every bit of malware known to man. They'll inevitably be recruited into a botnet at some point. Frankly, they're as much a risk to us as they are to themselves.

And there are two groups that buck all other trends: large enterprise IT departments, occasionally filled with lazy and largely incompetent jobsworths figuring out the paths of least resistance to make their lives easier; and China, because there's so much piracy there (and therefore so little working Windows Update). Seriously, ~5% of China is using IE6 and ~13% IE8.

But we're just web developers, what could we possibly do to influence the great unwashed?

Enterprise IT and China can both go swivel on it. There's nothing we or Microsoft can do to influence them; they're lost causes. If you need to support them, I guess that's that. We'll see you in 2020 when they've finally upgraded. For everybody else...

Education is going to be the easiest issue to solve. Unfortunately, Windows computers don't explode when something on them leaves its support period. This is a major design flaw which keeps oblivious people using ancient software indefinitely. Let's just tell them what the score is. Knowing is half the battle, right?

So if you own a website, you can make your IE8 users aware their browser is ancient. There are numerous solutions out there but it can be as simple as just adding a conditional header to your main template. You can highlight a few things:

  • IE8 is ancient in technology terms.
  • IE8 will become vulnerable to exploits.
  • Developers have to work extra hard to support IE8. They'd like to use that time to make other things better.
  • Enumerate or link to the steps they need to take to upgrade or change browser.

Here's my simple example using conditional comments. It's fairly low impact, so it shouldn't annoy too many people.

<!--[if lt IE 9]>
  <p class="alert">Thanks for visiting my website. The browser you're using is <em>ancient</em>. Even Microsoft has stopped supporting it which means it's quite <strong>dangerous</strong> to keep using it. It also means I have to take extra steps to make sure this site works on your ancient browser.</p>
  <p class="alert">Do the Internet a favour and <a href="http://windows.microsoft.com/internet-explorer/">upgrade to the latest Internet Explorer</a>... or use another browser like Chrome or Firefox. If you're on Windows XP, your whole operating system is putting you and others at risk. Please upgrade.</p>
<![endif]-->

I use a shorter one in my template but my IE8 users are pretty few already. The next step is to actively stop supporting IE8 and start using the things you've been holding back from. When people notice their browser isn't offering them the experience they want, they'll get the message.
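If you want to measure how many visitors are still on old IE before pulling that trigger, a tiny user-agent check does the job. This is a rough sketch — parseIEVersion is my own hypothetical helper, relying on the fact that IE 10 and below announce themselves as "MSIE <version>":

```javascript
// Hypothetical helper: extract the IE major version from a user-agent string.
// IE 10 and below identify as "MSIE <version>"; anything else (including
// modern browsers) returns null.
function parseIEVersion(userAgent) {
  var match = /MSIE (\d+)/.exec(userAgent);
  return match ? parseInt(match[1], 10) : null;
}

// Example: flag a visitor as "ancient IE" (anything below IE9)
var ua = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)";
var version = parseIEVersion(ua);
var isAncient = version !== null && version < 9;
```

In the browser you'd feed it navigator.userAgent; on the server, the User-Agent request header. Either way you get a number you can log, so the decision to drop support is based on your own traffic rather than global averages.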

But this didn't work for IE6, did it? Why would it work for IE8?

It did work but it took us forever to notice there was a problem. We expected IE6 to die naturally. It didn't.

Our rationale was that new versions of Windows were generally considered good upgrades and computer hardware was also improving at pace. Every couple of years, everybody would chuck out their old desktop and bring in a shiny new thing with a new copy of Windows. When Vista came out and was slated, people stopped upgrading. The pace of hardware development has also slowed, and with it (and a shift to mobile) people's enthusiasm for expensive hardware-plus-Windows refreshes has evaporated.

Once developers realised XP and IE6 would never die organically and started to lose their shit, banners and tools like sevenup started popping up to tell IE6 users of their obsolescence. Within a few years, IE6 dropped under 5%. Then big sites like Google started dropping support and usage fell below 2% within a year. IE6 usage is currently around 1% and largely unusable on the modern internet; the way it should be.

We need to start the wheel turning for IE8. Just hoping it'll go away doesn't work because that's just not what happens with software.

What's next? What will the deaths of IE9 and IE10 bring?

Until Microsoft fix how they upgrade Internet Explorer —so major versions are little more than an automatic update like in Chrome, or packaged up separately from the core OS as in Ubuntu— we're destined to go through this cycle of uptake, decline and venomous hatred every couple of years. I already hate IE9, I just hate IE8 more.

Can I Use has a great search function that allows you to compare dinosaurs like IE9 and IE10 against the development builds of Chrome and Firefox. I'm personally looking forward to using FlexBox for layouts but I know a lot of people want decent 3D transforms.
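As a taste of what we're waiting for, here's the kind of layout FlexBox makes trivial — a sketch using the final syntax, which IE8 will never understand and which would otherwise take floats, fixed widths and a clearfix to approximate:

```css
/* Three equal columns with consistent gaps — one line of layout logic each.
   In IE8 this means floats, hand-calculated widths and a clearfix hack. */
.row {
  display: flex;
}
.row > .column {
  flex: 1;            /* share the available width equally */
  margin-right: 1em;
}
.row > .column:last-child {
  margin-right: 0;    /* no trailing gap on the final column */
}
```

Add a column, remove a column, it all just reflows. That's the sort of thing we're giving up every day we keep catering to IE8.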

The internet of tomorrow is going to be a beautiful thing to develop on, but only if we can get rid of the crappy old browsers that hold us back.
