Feed aggregator

Ben Howard: Snappy images for Vagrant

Planet Ubuntu - Wed, 01/14/2015 - 08:26
I am pleased to announce initial Vagrant images [1, 2]. These images are bit-for-bit the same as the KVM images, but have a Cloud-init configuration that allows Snappy to work within the Vagrant workflow.

Vagrant enables a cross-platform developer experience on MacOS, Windows or Linux [3].

Note: due to the way that Snappy works, shared file systems within Vagrant are not possible at this time. We are working on getting shared file system support enabled, but it will take us a little while.

If you want to use the Vagrant packaged in the Ubuntu archives, run the following in a terminal:

  • sudo apt-get -y install vagrant
  • cd <WORKSPACE>
  • vagrant box add snappy http://goo.gl/6eAAoX
  • vagrant init snappy
  • vagrant up
  • vagrant ssh
If you use Vagrant from [4] (i.e. on Windows or Mac, or if you have installed the latest Vagrant), then you can run:
  • vagrant init ubuntu/ubuntu-core-devel-amd64
  • vagrant up
  • vagrant ssh

These images are a work in progress. If you encounter any issues, please report them to "snappy-devel@lists.ubuntu.com" or ping me (utlemming) on Launchpad.net

---

[1] http://cloud-images.ubuntu.com/snappy/devel/core/current/devel-core-amd64-vagrant.box
[2] https://atlas.hashicorp.com/ubuntu/boxes/ubuntu-core-devel-amd64
[3] https://docs.vagrantup.com/v2/why-vagrant/index.html
[4] https://www.vagrantup.com/downloads.html

Jorge Castro: Juju Quickstart 1.60 is out

Planet Ubuntu - Wed, 01/14/2015 - 07:14

Francesco Banconi has just announced Juju Quickstart 1.60.

Too many improvements to list here, so check out his blog post. Quickstart is available in ppa:juju/stable and via Homebrew for OS X users.

Oli Warner: Breaking the Internet won't stop terrorism

Planet Ubuntu - Wed, 01/14/2015 - 04:58

Governments want to intercept all terrorist communication but... they can't. You can talk online with perfect secrecy if you know what you're doing. Their solution to this is banning strong encryption. Most people have probably switched off but this will affect you (and won't stop terrorists).

Encryption is furiously dull but please give this ten minutes.

Since I started writing this, both Charlie Stross and Cory Doctorow have written excellent but fairly in-depth articles. For the sake of covering the important parts, I'll try to keep this high-level.

The government can read what you do on Facebook and read your email. They can get a rough idea what you do online and can almost certainly trigger alerts if you search for the wrong thing. It's generally accepted that for the UK and US, this is now an automated and warrantless process.

But which terrorists are using Facebook Events and Twitter to schedule attacks? The Charlie Hebdo attack has jolted numerous obnoxious politicians into realising that this PRISM-style access isn't enough to detect and track terrorism. Terrorists can use unmonitored or endpoint-encrypted services to stop GCHQ et al from spying on them.

Our Prime Minister, Dave WebCameron, has the answer: ban it all.

"If I am prime minister, I will make sure we do not allow terrorists safe space to communicate with each other." (BBC News)

"Are we going to allow a means of communications which it simply isn’t possible to read? My answer to that question is: ‘No, we must not.’" (New York Times)

On the surface, these might be sentiments you agree with, but once you sit down and work out what it would mean to implement this, things look a little murkier... We could ban every service GCHQ can't tap into, but even if you can't use Whatsapp, the technology it uses is widespread and easy to reimplement in another app. Ban that too? Okay what if we just ban the evil technology that GCHQ can't crack and only allow weak encryption they can meddle with?

The first point I'd like to make is that terrorists are renowned for breaking the law. Why don't we just ban terrorism? It'll be as effective. Terrorists will keep using the strongest encryption they can, and they can do that offline if need be. As Stross points out, you will already go to jail if you refuse to decrypt something when ordered to by a court.

What about the rest of us? Well, in order to comply with Dave's wishes, all our applications and all our services would have to switch to a weak encryption cipher. What's wrong with weak crypto? It breaks... And not just for governments. Hobbyists and criminals would have a field day with a known-weak system.

But what are criminals going to decrypt? Well, it's not just government or ISPs who can intercept your data... Criminals, terrorists and even teenagers could all probably intercept your connections. That's why strong encryption is so important. If they can intercept weakly encrypted data, they can probably decrypt it.

And the UK software industry would be toast. Who the hell's going to buy software from a company legally-bound to rig their software up with duff encryption? How will open source even work if large parts of it are illegal here? Are we going to ban Github like India did?

It needs restating: Banning or breaking encryption only harms the law-abiding citizens.

What happens when that doesn't work?

Don't think for a second it'll stop with encryption.

Other people have been talking about authoritarian-style "firewalls" that block many sites, but I think we're heading toward something much worse than anything suggested so far: a network-level Internet whitelist. I'm talking about completely prescriptive networking, a list of sites and networks your computer is allowed to talk to.

I'm being serious; this isn't just technically possible, it's probably easier than a blacklist. Government would be able to control what we read and where we talked. Peer-to-peer arrangements would be licensed (and otherwise blocked at network level). It would cut the UK off from most of the internet and leave what remained within the purview of GCHQ. If what they say is what they really want, this is the way they'll eventually do it.

It'd also be a wet-dream-come-true for media companies. Goodbye piracy.

It might sound fairly win-win until you consider that the Home Secretary (or whoever) had complete control. Don't like a political news story? It's gone. Don't want Scotland to be able to see London events? Blocked. You'll never know about it because nobody will ever be able to report about it. It would make the blacklists of China and Russia look like toys.

I'm not sure what's worse... That it's possible or that we're so close to it already...

But no, it still won't stop terrorism.

We all need to do something.

If we let our politicians continue panicking, discussing technology they don't understand to prevent people whose beliefs they cannot comprehend from performing actions they won't prevent, we're going to end up with more shitty laws that harm us more than terrorists.

The insidiousness of radicalisation and insular propaganda is truly something that needs to be fought, but we don't accomplish that by repeatedly breaking the Internet. Unless, that is, you want to control what people think.

While we still have free choice, make sure your MP (who is probably up for re-election in a few months) and MEPs all know that online security and freedom are important to you. Write To Them makes this whole process furiously simple. Put in your postcode and spend another five minutes making sure your views count.

If you've got any questions about this, leave a comment and I'll try my best to get you an answer.

Lead photo by George Rex.

Benjamin Kerensa: Support Collab House Indiegogo Campaign

Planet Ubuntu - Tue, 01/13/2015 - 16:04

I wanted to quickly post a simple ask to Mozillians: please share this Indiegogo campaign being run by Mozillian Rockstar Vineel Pindy,

A Mozilla event at Collab House

who has been a contributor for many years. Vineel is raising money for Collab House, a Collaborative Community Space in India which has been used for many Mozilla India events and other open source projects.

By sharing the link to this campaign or contributing some money to the campaign, you will not only support the Mozilla India community but will further Mozilla’s Mission by enabling communities around the globe that help support our mission.

Let's make this campaign a success and support our fellow Mozillians! If every Mozillian shared this or contributed $5, I bet we could have this funded before the deadline!

Daniel Pocock: Silent data loss exposed

Planet Ubuntu - Tue, 01/13/2015 - 13:06

I was moving a large number of image files around and decided to compare checksums after putting them in their new home.

Out of several thousand files, about 80GB of data, I found that seventeen of them had a checksum mismatch.

Running md5sum manually on each of those was showing a correct checksum, well, up until the sixth file and then I found this:

$ md5sum DSC_2624.NEF
94fc8d3cdea3b0f3479fa255f7634b5b  DSC_2624.NEF
$ md5sum DSC_2624.NEF
25cf4469f44ae5e5d6a13c8e2fb220bf  DSC_2624.NEF
$ md5sum DSC_2624.NEF
03a68230b2c6d29a9888d2358ed8e225  DSC_2624.NEF

Yes, each time I run md5sum on the same file it gives a different result. Out of the seventeen files, I found one other displaying the same problem and the others gave correct checksums when I ran md5sum manually. Definitely not a healthy disk, or is it?

This is the reason why checksumming filesystems like Btrfs are so important.

There are no errors or warnings in the logs on the system with this disk. Silent data loss at its best.

Is the disk to blame though?

It may be tempting to think this is a disk fault, most people have seen faulty disks at some time or another. In the old days you could often hear them too. There is another possible explanation though: memory corruption. The data read from disk is normally cached in RAM and if the RAM is corrupt, the cache would return bad data.

I dropped the read cache:

# echo 3 > /proc/sys/vm/drop_caches

and tried md5sum again and observed the file checksum is now correct.

It would appear the md5sum command had been operating on data in the page cache and the root cause of the problem is memory corruption. Time to run a memory test and then replace the RAM in the machine.
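The manual re-checking described in the post can be scripted. A minimal sketch (the function name and output format are my own, not from the post):

```shell
# Hash the same file several times; differing digests point at corruption
# between the disk and the page cache rather than a bad file on disk.
check_stable() {
    f=$1
    a=$(md5sum "$f" | awk '{print $1}')
    b=$(md5sum "$f" | awk '{print $1}')
    c=$(md5sum "$f" | awk '{print $1}')
    if [ "$a" = "$b" ] && [ "$b" = "$c" ]; then
        echo "stable: $a  $f"
    else
        echo "UNSTABLE: $f ($a / $b / $c)" >&2
        return 1
    fi
}

# To rule out the page cache between runs, drop it first (as root):
#   echo 3 > /proc/sys/vm/drop_caches
```

Re-running the check after dropping the cache, as done above, helps separate a disk fault from RAM corruption.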

Ronnie Tucker: First Details Of The Ubuntu Phone

Planet Ubuntu - Tue, 01/13/2015 - 11:40

Straight from the good people at Canonical:

The user experience for smartphones has revolved around apps and its icon grid since the very first iPhone. Key mobile services on iOS and Android are delivered via apps in a fragmented manner and controlled by platform owners such as Google, Apple and Microsoft, which has put OEMs and Operators into a secondary role.

Users deserve a richer, faster and unfragmented experience built around the things they do most on their devices.

With the Ubuntu phone we are moving away from the app grid towards integrated content and services. And we do this by providing a user experience that is centered on bringing the key mobile digital life services directly to the screen, which at the heart we call ‘scopes.’

Scopes are a way of delivering unified experiences for various service categories, front and centre to the user, without hiding them behind a sea of apps and app icons. They are created via a simple UI toolkit with much lower development and maintenance costs than apps. There are two types of scopes – aggregation and branded.

Aggregation scopes define the device’s default experience and what makes Ubuntu phones valuable to end users. They allow OEMs and Operators to create a user experience that is unique to their devices such as the NearBy scope that aggregates local services centered around what you’re doing. We’ll go into more detail on the other aggregated scopes in an upcoming Phone Glimpse mailer.

Branded scopes are app-like experiences delivered directly to the screen, fully branded. They are discoverable through the default store, from a feed in an aggregation scope, or as a favourited default screen. A faster way for developers to build a rich and easier-to-access branded experience on a device.

Ian Weisser: Introducing Ubuntu Find-A-Task

Planet Ubuntu - Tue, 01/13/2015 - 10:50


The Ubuntu Community website has an awesome new service: Find-A-Task

It's a referral service - it helps volunteers discover teams and tasks that match their interests.

  • Link to it!
  • Refer new enthusiasts toward it!
  • Advertise your teams and projects on it!

Give it a try and see how it can work for your team or project.


How do I get my team listed?
So easy and so fast.
  1. What volunteer role do you want to advertise?
  2. What's a very short, exciting description of the role?
  3. Which Find-A-Task paths do you think this role is appropriate for?
  4. Create a great landing page on the wiki. (example)
  5. Drop by #ubuntu-community-team and let us know. For example:
    • Role: Frobishers
    • Description: "Help Frobnicators add fabulous Frob!"
    • Path: One, in the Coding and Development submenu
    • Landing URL: http://wiki.ubuntu.com/FrobTeam/Frobishers

Your landing page:
This is a volunteer's first impression of your team. Make it shine.

When volunteers show up at your wiki page, they are already interested. They want to know how to set up, who to contact, and how to get started on their first easy work item. They want instructions and details.

If you don't provide what they want, they may move on to their next choice. Find-A-Task makes it easy for them to move on.



Credits
Tremendous thanks to:

Ubuntu Kernel Team: Kernel Team Meeting Minutes – January 13, 2015

Planet Ubuntu - Tue, 01/13/2015 - 10:42
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150113 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

• http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Vivid Development Kernel

Both the master and master-next branches of our Vivid kernel have been rebased to the v3.18.2 upstream stable kernel. This has also been uploaded to the archive, i.e. 3.18.0-9.10. Please test and let us know your results. We are also starting to track the v3.19 kernel on our unstable branch and have pushed preliminary packages to our PPA.
—–
Important upcoming dates:
Thurs Jan 22 – Vivid Alpha 2 (~1 week away)
Thurs Feb 5 – 14.04.2 Point Release (~3 weeks away)
Thurs Feb 26 – Beta 1 Freeze (~6 weeks away)


Status: CVEs

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Utopic/Trusty/Precise/Lucid

Status for the main kernels, until today:

• Lucid – Kernel prep week.
• Precise – Kernel prep week.
• Trusty – Kernel prep week.
• Utopic – Kernel prep week.

Current opened tracking bugs details:

• http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

For SRUs, the SRU report is a good source of information:

• http://kernel.ubuntu.com/sru/sru-report.html

Schedule:

cycle: 09-Jan through 31-Jan
====================================================================
09-Jan Last day for kernel commits for this cycle
11-Jan – 17-Jan Kernel prep week.
18-Jan – 31-Jan Bug verification; Regression testing; Release


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

The Fridge: Ubuntu Weekly Newsletter Issue 399

Planet Ubuntu - Tue, 01/13/2015 - 09:18

Welcome to the Ubuntu Weekly Newsletter. This is issue #399 for the week January 5 – 11, 2015, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

• Paul White
• Elizabeth K. Joseph
• Jose Antonio Rey
• And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Mohamad Faizul Zulkifli: Squid Proxy - clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (92) Protocol not available

Planet Ubuntu - Tue, 01/13/2015 - 05:33

Solution:

# modprobe ip_conntrack

and add ip_conntrack to /etc/modules so it loads at boot.
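A minimal sketch of both steps, wrapped in a function so it is safe to re-run (the function name and the duplicate-line guard are additions, not from the original post):

```shell
# Load a kernel module now and ensure it is listed in the modules file
# (default /etc/modules) exactly once, so it also loads at boot.
persist_module() {
    mod=$1
    file=${2:-/etc/modules}
    modprobe "$mod" 2>/dev/null || true   # ignore failure if built in
    grep -qx "$mod" "$file" 2>/dev/null || echo "$mod" >> "$file"
}

# Usage for the fix above (as root):
#   persist_module ip_conntrack
```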

Jono Bacon: Discourse: Saving forums from themselves

Planet Ubuntu - Mon, 01/12/2015 - 22:31

Many of us are familiar with discussion forums: webpages filled with chronologically ordered messages, each with a little avatar and varying degrees of cruft surrounding the content.

Forums are a common choice for community leaders and prove to be popular, largely due to their simplicity. The largest forum in the world, Gaia Online, an anime community, has 27 million users and over 2,200,000,000 posts. They are not alone: it is common for forums to have millions of posts and hundreds of thousands of users.

So, they are a handy tool in the armory of the community leader.

The thing is, I don’t particularly like them.

While they are simple to use, most forums I have seen look like 1998 vomited into your web browser. They are often ugly, slow to navigate, have suboptimal categorization, and reward users based on the number of posts as opposed to the quality of content. They are commonly targeted by spammers, and as they grow in size they invariably grow in clutter and decrease in usefulness.

I have been involved with and run many forums, and while some are better, most are just similar incarnations of the same dated norms of online communication.

So…yes…not a fan.

Enter Discourse

Fortunately a new forum is on the block and it is really very good: Discourse.

Created by Jeff Atwood, co-founder of Stack Overflow and the Stack Exchange Network, Discourse takes a familiar but uprooted approach to forums. They have re-thought everything that is normal in forums and improved online communication significantly.

If you want to see it in action, see the XPRIZE Community, Bad Voltage Community, and Community Leadership Forum sites that I have set up.

Discourse is neat for a few reasons.

Firstly, it is simple to use and read. It presents a simple list of discussions with suitable categories, as opposed to cluttered sub-forums that divide discussions. It provides an easy and effective way to highlight and pin topics and identify active discussions. Users can even hide certain categories they are not interested in.

Creating and replying to topics is a beautiful experience. The editor supports Markdown as well as GUI controls and includes a built-in preview where you can embed videos, images, tweets, quotes, code, and more. It supports multiple headings, formatting styles, and more. I find that posts really come to life with Discourse as opposed to the limited fragments of text shown on other forums.

Discourse is also clever in how it encourages good behavior. It has a range of trust levels that reward users for good and regular participation in the forum. This is gamified with badges which encourage users to progress, but more importantly from a community leadership perspective, it provides a simple at-a-glance view of who the rock stars in the forum are. This provides a list of people I can now encourage and engage to be leaders. Now, before you get too excited, this is based on forum usage, not content, but I find the higher trust level people are generally better contributors anyway.

Discourse also makes identity pleasant. Users can configure their profiles in a similar way to Twitter with multiple types of imagery and details about who they are. Likewise, referencing other users is simple by pressing @ and then their username. This makes replies easier to spot in the notifications indicator and therefore keeps the discussion flowing.

Administering and running the site is also simple. User and content management is a breeze, configuring the look and feel of most aspects of the forum is simple, and Discourse supports multiple login providers.

What’s more, you can install Discourse easily with Docker and there are many hosting providers. While Jeff Atwood’s company has its own commercial service, I ended up using DiscourseHosting, who are excellent and pretty cheap.

To top things off, the Discourse community are responsive, polite, and incredibly enthusiastic about their work. Everything is open source and everything works like clockwork. I have never, not once, seen a bug impact a stable release.

All in all, Discourse makes online discussions in a browser just better. It is better than previous forums I have used in pretty much every conceivable way. If you are running a community, I strongly suggest you check Discourse out; there simply is no competition.

Daniel Pocock: Lumicall: big steps forward, help needed

Planet Ubuntu - Mon, 01/12/2015 - 13:02

I've recently made more updates to Lumicall, the free, open source and secure alternative to Viber and Skype.

Here are some of the highlights:

• The dialing popup is now optional, so if you want to call your friends with Lumicall / SIP but they don't want to see the popup when making calls themselves, you can disable the popup on their phone.
• The dialer popup now shows results asynchronously so you can dial more quickly.
• SIP SIMPLE messaging is now supported; Lumicall is now taking on WhatsApp.
• Various bugs in the preferences/settings have been fixed and adding SIP accounts is now easier.
• Dialing with a TURN relay is now much more reliable.
• It is now possible to redial SIP calls in the call history without seeing the nasty Android bug / popup telling you that you don't have Internet calling configured.

F-Droid not updated yet

F-Droid is not yet carrying the latest version. The F-Droid team may need assistance, as they appear to be reviewing a lot of the third-party dependencies used by apps they distribute to make sure the full stack is 100% free software. If people want to continue having the option to get Lumicall and other free software through F-Droid instead of Google Play, then helping the F-Droid community is the number one way you can help Lumicall.

Other ways you can help, even without coding

You don't have to be a developer to help with Lumicall.

Taking on Viber, Skype and now WhatsApp as well may not sound easy. It isn't. Your help could make the difference though.

Here are some of the things that need assistance:

• Helping to get it on Wikipedia; they keep deleting the page while happily hosting pages about similar products like Sipdroid and CSipSimple.
• Helping get the latest dependencies and Lumicall version into F-Droid.
• UI design ideas.
• Web site assistance.
• Documentation and screenshots, e.g. for use with Asterisk and FreeSWITCH and various SIP proxies.
• Translation.
• Messaging: to be the default SMS app on an Android device, Lumicall would need a full messaging UI and the ability to replace all functions of the default SMS app. Can anybody identify any other free software that does this and is modular enough to share the relevant code with Lumicall?
• ZRTP: can you help improve the ZRTP stack used by Lumicall?

Try SIP SIMPLE messaging

When composing a message to a Lumicall user, the SIP address is written sip:number@sip5060.net. E.g. if the number is +442071234567, then the SIP address to use when calling or composing a message is sip:+442071234567@sip5060.net

Debian Developers should be able to interact with Lumicall users from rtc.debian.org using SIP over WebSocket messaging.
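The address scheme above is mechanical enough to script. A tiny sketch (the function name is illustrative):

```shell
# Map an E.164 phone number to the Lumicall SIP address used for
# calls and SIP SIMPLE messages.
lumicall_sip_address() {
    case $1 in
        +[0-9]*)
            echo "sip:$1@sip5060.net"
            ;;
        *)
            echo "expected an E.164 number like +442071234567" >&2
            return 1
            ;;
    esac
}
```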

Getting the latest Lumicall

It is available from:

Jono Bacon: Announcing the Community Leadership Summit 2015!

Planet Ubuntu - Mon, 01/12/2015 - 11:27

I am delighted to announce the Community Leadership Summit 2015, now in its seventh year! This year it takes place on the 18th and 19th July 2015, the weekend before OSCON, at the Oregon Convention Center. Thanks again to O’Reilly for providing the venue.

For those of you who are unfamiliar with the CLS, it is an entirely free event designed to bring together community leaders and managers and the projects and organizations that are interested in growing and empowering a strong community. The event provides an unconference-style schedule in which attendees can discuss, debate and explore topics. This is augmented with a range of scheduled talks, panel discussions, networking opportunities and more.

The heart of CLS is an event driven by the attendees, for the attendees.

The event provides an opportunity to bring together the leading minds in the field with new community builders to discuss topics such as governance, creating collaborative environments, conflict resolution, transparency, open infrastructure, social networking, commercial investment in community, engineering vs. marketing approaches to community leadership and much more.

The previous events have been hugely successful and a great way to connect different people from different community backgrounds to share best practice and make community management an art and science better understood and shared by us all.

I will be providing more details about the event closer to the time, but in the meantime be sure to register!

Mixing Things Up

For those who have been to CLS before, I want to ask your help.

This year I want to explore new ideas and methods of squeezing as much value as possible out of CLS for everyone. As such, I am looking for your input on areas in which we can improve, refine, and optimize CLS.

I ask that you head over to the Community Leadership Forum and share your feedback. Thanks!

Nicholas Skaggs: PSA: Community Requests

Planet Ubuntu - Sun, 01/11/2015 - 23:00
As you plan your ubuntu-related activities this year, I wanted to highlight an opportunity for you to request materials and funds to help make your plans reality. The funds are donations made by other ubuntu enthusiasts to support ubuntu and specifically to enable community requests. In other words, if you need help traveling to a conference to support ubuntu, planning a local event, holding a hackathon, etc., the community donations fund can help.

Check out the funding page for more information on how to apply and the requirements. In short, if you are an ubuntu member and want to do something to further ubuntu, you can request materials and funding to help. Global Jam is less than a month away; is your LoCo ready? Flavors, trying to plan events or hold other activities? I'd encourage all of you to submit requests if money or materials can help enable or enhance your efforts to spread ubuntu. Here's to sharing the joy of ubuntu this year!

Valorie Zimmerman: Drawing in new contributors and growing the community

Planet Ubuntu - Sun, 01/11/2015 - 22:23
It's a really exciting time to be active in both KDE and Kubuntu. So many new initiatives, projects, new collaborations. And yet.

A comment often heard over the past couple of years: we've lost $person / we're missing people to maintain/lead/do $project. This is understandable, and to be expected in a large, mature organization such as KDE; a dynamic project loses people as well as gains new contributors.

In contrast, almost daily in the #kde and #kde-devel IRC channels we have new people trying to find some way to get involved with KDE. In an effort to bring the solution and the problem together, at Akademy we brainstormed and came up with the Mission forum.

What that forum needs is postings! When a developer is thinking about giving up maintainership, please write a Maintainer Wanted post. When you are fixing bugs and see a valuable bit of code which needs porting, please write that up and put it on the forum.

Naturally we always need ideas for possible Google Summer of Code projects, and the forum is a good place to post and develop those ideas. Eventually they will be moved to our GSoC docs, but they can be discussed and refined on the forum.

New skills needed, documentation, internationalization, translation, artwork, promo, and web work tasks are also suitable. If you have written a "help wanted" blog post or mailing list email in the past, dig it out and post it on the forum. Be sure to clearly outline for people how to undertake your tasks.

In fact, once Google Code-in is over, how about putting some of those tasks which remain into the forum? Those teams who didn't have time to mentor during the contest can still write up small tasks and put them into the forum as well. Once you get into the habit of creating postings, you'll be prepared and want to participate in GCi next year!

I know developers often don't like forums, but guess who does like them? Beginners and people who are using search engines. We need these people to join our community and start helping out. That will happen when we ask in a public place, which is the forum.

One more time: https://forum.kde.org/viewforum.php?f=291

    Valorie Zimmerman: Students, Google Summer of Code is coming, but not quite yet....

    Planet Ubuntu - Sun, 01/11/2015 - 20:46
    We've been seeing more and more questions about GSoC and how to get involved.

    GSoC 2015 will be happening, but it is not yet time for orgs to even apply, much less be accepted. So we have no ideas page as yet for GSoC 2015.

    That said, the best way to have your GSoC proposal accepted is to join
    a team NOW and work with them on triaging and fixing bugs, and working
    on old and new projects. As you work with your team members and the
    codebase, you will learn how to create a proposal which fits the needs
    of the project, and find willing and able mentors to guide you. Remember, mentoring is hard work. Prospective mentors want to choose students they can trust to complete their proposals successfully.

    The time to prepare for GSoC is now -- but it is not the time for
    creating proposals yet. The important part of GSoC is embedding
    yourself into your chosen project. Your energy can transform a project
    from one lacking a spark into one brimming with enthusiasm. Go for it!

    How to join a team:
    • Find the best list(s) and subscribe, and scan the archives.
    • Join the relevant IRC channels and hang out there All The Time.
    • Search the forum for areas/posts where you can help out.
    • Start searching https://bugs.kde.org for bugs which you can test and fix.
    • Learn how to propose your code changes to Reviewboard.
    Other ways to prepare yourself:
    • Read the KDE Developers manual and prepare your development environment.
    • Read the GSoC Student Manual.
    • As you work your way through the documentation, please fix errors you find, or update it. Remember, documentation is part of the code we provide to both other developers and users.

    Mario Limonciello: Ambilight clone using Raspberry Pi

    Planet Ubuntu - Sun, 01/11/2015 - 10:33
    Recently I came across www.androidpolice.com/2014/04/07/new-app-huey-synchronizes-philips-hue-lights-with-your-movies-and-tv-shows-for-awesome-ambient-lighting-effects/ and thought it was pretty neat.  The lights were expensive, however, and it required your phone or tablet to be in use every time you wanted the effect, which seemed sub-optimal.

    I've been hunting for a useful project to do with my Raspberry Pi, and found out that there are two major projects centered around getting something similar set up.

    Ambi-TV: https://github.com/gkaindl/ambi-tv
    Hyperion: https://github.com/tvdzwan/hyperion/wiki

    With both software projects, you take an HDMI signal, convert it to analog, and then capture the analog signal to analyze.  Once the signal is analyzed, a string of addressable LEDs is programmed to match the colors at the borders of the picture.
    I did my initial setup using both software packages, but in the end preferred Hyperion for its ease of configuration and its results.

    Necessary Hardware
    I purchased the following (links point to where I purchased):
    Other stuff I already had on hand that was needed:
    • Soldering tools
    • Spare prototyping board
    • Raspberry pi w/ case
    • Extra HDMI cables
    • Analog Composite cable 
    • Spare wires

    Electronics Setup
    Once everything arrived, I soldered a handful of wires to a prototyping board so that I could house more of the pieces in the raspberry pi case.  I used a cut-up micro USB cord to provide power from the 5V rail and ground to the pi itself and then also to one end of the 4 pin JST adapter.
    Prototyping board; probably this size is overkill, but I have flexibility for future projects to add on now.
    The power comes into the board and is used to power both the LEDs and the raspberry pi from a single power source.  The clock and data lines for the LED string are connected to some header cable to plug into the raspberry pi.

    GPIO connectors
    The clock and data lines on the LPD8806 strip (DI/CI) matched up to these pins on the raspberry pi:
    • Pin 19 (MOSI) → LPD8806 DAT pin
    • Pin 23 (SCLK) → LPD8806 CLK pin
    Although it's possible to power the raspberry pi from the 5V and ground rails in the GPIO connector on the pi instead of micro USB, there is no over-current protection on those rails.  In case of any problems with a current spike, the pi would be toast.

    Case
    Once I got everything put into the pi properly, I double checked all the connections and closed up the case.
    My pi case with the top removed and an inset put in for holding the proto board
    Whole thing assembled
    TV mounted LEDs
    I proceeded to do the TV.  I have a 46" set, which works out to 18 LEDs on either side and 30 LEDs on the top and bottom.  I cut up the LED strips and used double-sided tape to affix them to the TV.  Once the LED strips are cut up, you have to solder 4 pins from the "out" end of one strip to the "in" end of another strip.  I'd recommend looking for some of the prebuilt L corner strips if you do this.  I didn't use them, and it was a pain to strip and hold such small wires in place to solder in the small corners.  All of the pins that are marked "out" on one end of the LED strip get connected to the "in" end on the next strip.

    Back of TV w/ LEDs attached
    Corner with wires soldered on from out to in
    External hardware Setup
    I connect the output of my receiver, which would otherwise go straight to my TV, to the input of the HDMI splitter.
    The HDMI splitter's primary output goes to the TV.
    The secondary output goes to the HDMI2AV adapter.
    The HDMI2AV adapter's composite video output gets connected to the video input of the USB grabber.
    The USB grabber is plugged directly into the raspberry pi.


    Software Setup
    Once all the HW was together I proceeded to get the software set up.  I originally had an up to date version of raspbian wheezy installed.  It included an updated kernel (version 3.10).  I managed to set everything up using it except the grabber, but then discovered that there were problems with the USB grabber I purchased.
    Plugging it in causes the box to kernel panic.  The driver for the USB grabber has made it upstream in kernel version 3.11, so I expected it should be usable in 3.10 with some simple backporting tweaks, but didn't narrow it down entirely.
    I did find out that kernel 3.6.11 did work with an earlier version of the driver however, so I re-did my install using an older snapshot of raspbian.  I managed to get things working there, but would like to iron out the problems causing a kernel panic at some point.
    USB Grabber instructions
    The USB grabber I got is dirt cheap but not based on the really common chipsets already supported by the kernel versions in raspbian, so it requires some extra work.
    1. Install Raspbian snapshot from 2013-07-26.  Configure as desired.
    2. git clone https://github.com/gkaindl/ambi-tv.git ambi-tv
    3. cd ambi-tv/misc && sudo sh ./get-kernel-source.sh
    4. cd usbtv-driver && make
    5. sudo mkdir /lib/modules/3.6.11+/extra
    6. sudo cp usbtv.ko /lib/modules/3.6.11+/extra/
    7. sudo depmod -a
    Hyperiond Instructions
    After getting the grabber working, installing hyperion is a piece of cake.  This will set up hyperiond to start on boot.
    1. wget -N https://raw.github.com/tvdzwan/hyperion/master/bin/install_hyperion.sh
    2. sudo sh ./install_hyperion.sh
    3. Edit /etc/modprobe.d/raspi-blacklist.conf using nano.  Comment out the line with blacklist spi-bcm2708
    4. sudo reboot
    Hyperion configuration file
    From another PC that has java (OpenJDK 7 works on Ubuntu 14.04):
    1. Visit https://github.com/tvdzwan/hyperion/wiki/configuration and fetch the jar file.
    2. Run it to configure your LEDs.
    3. From the defaults, I particularly had to change the LED type and the number of LEDs around the TV.
    4. My LEDs were originally listed at RGB but I later discovered that they are GRB.  If you encounter problems later with the wrong colors showing up, you can change them here too.
    5. Save the conf file and scp it into the /etc directory on your pi
    6. sudo /etc/init.d/hyperiond restart
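    When entering the number of LEDs in the config tool (step 3), it may help to sanity-check the total before typing it in. A minimal sketch, using the per-edge counts from my 46" set; substitute your own counts if your TV differs:

    ```shell
    # Total LED count for the Hyperion config, computed from per-edge counts.
    # 18 per side and 30 top/bottom are the counts from my 46" set.
    SIDE=18
    TOPBOTTOM=30
    TOTAL=$((2 * SIDE + 2 * TOPBOTTOM))
    echo "$TOTAL"    # 96 for my setup
    ```

    That total is the value the configuration tool needs; the individual per-edge counts also determine the LED layout it generates.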
    Test the LEDs
    1. Plug in the LEDs and install the test application at https://github.com/tvdzwan/hyperion/wiki/android-remote
    2. Try out some of the patterns and color wheel to make sure that everything is working properly.  It will save you time diagnosing grabber problems later if you know things are sound here (this is where I found my RGB/GRB problem).
    Test pattern
    Set up things for Hyperion-V4L2
    I created a script in ~ called toggle_backlight.sh.  It runs the V4L2 capture application (hyperion-v4l2) and sets the LEDs accordingly.  I can invoke it again to turn off the LEDs.  As a future modification I intend to control this with my harmony remote or some other method.  If someone comes up with something cool, please share.
    #!/bin/sh
    ARG=toggle
    if [ -n "$1" ]; then
            ARG=$1
    fi
    RUNNING=$(pgrep hyperion-v4l2)
    if [ -n "$RUNNING" ]; then
            if [ "$ARG" = "on" ]; then
                    exit 0
            fi
            pkill hyperion-v4l2
            exit 0
    fi
    hyperion-v4l2 --crop-height 30 --crop-width 10 --size-decimator 8 --frame-decimator 2 --skip-reply --signal-threshold 0.08 &
    That's the exact script I use to run things.  I had to modify the crop height from the defaults that were in the directions elsewhere to avoid flicker at the top.  To diagnose problems here, I'd recommend using the --screenshot argument of hyperion-v4l2 and examining the output.
    Once you've got it good, add it to /etc/rc.local to start up on boot:
    su pi -c /home/pi/toggle_backlight.sh
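    For context, that line goes into /etc/rc.local before the final exit 0. A sketch of what the relevant end of the file looks like (the stock Raspbian rc.local has more boilerplate above this; yours may differ):

    ```shell
    # /etc/rc.local (end of file, sketch)
    # Start the backlight toggle script as the pi user at boot:
    su pi -c /home/pi/toggle_backlight.sh

    exit 0
    ```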
    Test it all together
    Everything should now be working.
    Here's my working setup:
    https://www.youtube.com/watch?v=nSrGfh8asgg

    Stuart Langridge: Ubuntu phone screencasting, a minor tip

    Planet Ubuntu - Sat, 01/10/2015 - 23:14

    An Ubuntu phone has a command-line utility called mirscreencast which dumps screen frames to a file, meaning that in theory it’s possible to record a video of your phone’s screen. In practice, though, it doesn’t work for video; the phone is so busy (a) grabbing frames and (b) writing them to the phone’s storage that the recording is unusably jerky, and the resultant video includes about one frame in ten. I can’t fix this, but I did come up with a way to make it at least a bit better: instead of saving the video onto the phone’s storage, send it over the network to a real machine.

    On your computer: nc -l -p 1234 > out, which uses netcat to listen to port 1234 and send everything that comes in there to a file named out.

    On the phone: mirscreencast -n 600 -m /var/run/mir_socket -s 360 640 --stdout | nc mycomputer 1234, which uses mirscreencast to record frames (at a particular smaller size) and then send them with netcat to port 1234 on the computer. (You may need to put your computer’s IP address instead of mycomputer, especially since Ubuntu phone won’t resolve computername.local names.)

    Then, once recording finishes, mencoder -demuxer rawvideo -rawvideo fps=6:w=360:h=640:format=bgra -ovc x264 -o out.mp4 out makes a proper mp4. (Cheers Bill for the mirscreencast info.)
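    A rough back-of-the-envelope check shows why writing frames to the phone's storage chokes: mirscreencast's stream is uncompressed BGRA, 4 bytes per pixel, so even at the reduced 360x640 size the data rate is substantial. (The 6 fps figure is the rate assumed by the mencoder command above.)

    ```shell
    # Raw BGRA frame size at the capture resolution used above
    W=360; H=640; BPP=4
    FRAME_BYTES=$((W * H * BPP))
    echo "$FRAME_BYTES"            # 921600 bytes per frame

    # Approximate sustained data rate at 6 frames per second
    FPS=6
    echo "$((FRAME_BYTES * FPS))"  # 5529600 bytes/s, roughly 5.5 MB/s
    ```

    Sending that over the network offloads the write, which is why the netcat approach stutters less than saving locally.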

    It still isn’t great. But it’s a bit better.
