“Linux Mint 17.3 is a long term support release which will be supported until 2019. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use,” said Clement Lefebvre.
Highlights of the Linux Mint 17.3 “Rosa” Cinnamon Edition operating system include the latest stable Cinnamon 2.8 desktop environment, full UEFI (Unified Extensible Firmware Interface) support, and better support for NVIDIA GeForce GPUs.
Based on Ubuntu 14.04.3 LTS (Trusty Tahr), Linux Mint 17.3 (Rosa) will be powered by Linux kernel 3.19, which is supported by Canonical until Ubuntu 14.04 LTS reaches end of life in April 2019. Therefore, Linux Mint 17.3 will also be supported until 2019.
Users of the Linux Mint 17, Linux Mint 17.1, and Linux Mint 17.2 operating systems will be able to upgrade to Linux Mint 17.3 once the final release is published on the project’s website, sometime next month.
Submitted by: Arnfried Walbrecht
According to Mr. Hoogland, Bodhi Linux 3.1.1 is an unscheduled bugfix release that resolves a very annoying issue where users weren’t prompted to enter a password when attempting to connect to an encrypted wireless network.
“The 3.1.0 release we released back in August had an issue where users were not always prompted automatically for wireless passwords when connecting to encrypted networks. This lead to enough confusion / user frustration,” says Jeff Hoogland.
As a bonus, the Bodhi Linux 3.1.1 bugfix release includes all the software updates and security patches that have been published since the debut of the Bodhi Linux 3.1.0 operating system in August 2015.
Submitted by: Arnfried Walbrecht
It’s November 20th again, and as every year it’s time for the Ubuntu Community Appreciation Day :-)

The spirit of UCA
And, as the wiki page of the event says, Ubuntu is not just an Operating System, it is a whole community in which everybody collaborates with everybody to bring to the life a wonderful Human experience. When you download the iso, burn it, install it and start to enjoy it, you know that a lot of people made magnificent efforts to deliver the best Ubuntu OS possible.
For all the effort exerted during every minute of the year, some gave of their time, talent, or treasure. That’s what the Ubuntu Community Appreciation Day is for: everybody, whether user, developer, or non-developer contributor, anyone who gives a hand making Ubuntu what it is today, takes a moment to thank someone for their contribution. Every contribution counts! Take time to say, “Thank you!”
The words “Thank you” inspire and give a huge amount of motivation, encouraging people to become even better contributors and thus making Ubuntu and the community even better.

Choose someone to thank
While a global thanks to all contributors is a must, I prefer to thank someone in particular, to show him/her all my support.
Now, choosing someone is very difficult, because I have met a lot of amazing people in the Ubuntu Community, and a lot more contribute to Ubuntu in some way, so it’s a very hard task to pick just one.
Last year I said Thank you to mzanetti. I’m sure you know who he is, and he totally deserves our gratitude.
Like him, a lot of other Canonical employees deserve thanks: I was lucky enough to contribute side by side with popey and oSoMoN and dpm and dholbach and a lot of other guys, from a lot of different teams. To all of you, and you know who you are, THANKS.
And then there are a lot of people from the community, and I should thank each one of them.
But I want, for a day, to turn the spotlight on someone who is not so well known in the community.
I chose him because I think he’s the perfect example of a contributor: someone who works hard, day by day, without seeking fame and glory.
And like him there are a lot of contributors: people who work hard to make Ubuntu better every day. You never hear of them, but they are essential to creating this dream called Ubuntu.
To all of you unknown hard-working contributors, thanks.
And my biggest thanks goes to Bartosz!

Who’s Bartosz
I (virtually) met Bartosz working on the calculator app for Ubuntu for Phones. Together we crafted the calculator reboot. But since the end of the summer I haven’t had much time to contribute to Ubuntu, and he’s keeping the calculator updated and bug-free.
He’s a pleasure to work with: he’s talented and very patient. Sometimes he waits weeks before I review his code (shame on me!), but he never complains.
I know he also contributes to the clock and weather apps and reports bugs about the experience on the phone.
So THANKS Bartosz, and keep up the awesome work :-)
And now it’s your turn: choose someone in the Ubuntu Community, and write an email, a tweet, or a G+ post to say thanks. Volunteers do it for free, and your gratitude is what’s needed to make them happy!
Today is Ubuntu Community Appreciation Day, but this year I am going to expand my appreciation beyond the boundaries of the Ubuntu Community to include anyone in open source that has impacted my journey in open source.
Mark Shuttleworth

For his assistance in helping me stay calm and focused over the last two years despite the cacophony that arose from a multitude of issues. Mark provided me, and others, with friendly advice several times when the pressure was peaking. Mark does an excellent job of balancing the needs of Canonical and the Ubuntu Community. Every time I speak with Mark I gain a new perspective on the issue we are discussing.
Elizabeth Krumbach Joseph
Lyz continues to be an inspiration for me with regards to what a dedicated person can achieve in the world of open source. Despite her success I constantly see her reaching out to help others as well. She is devoted to open source and the Ubuntu Community.
Landon and I met early along my adventure into open source as we endeavoured to build an Ubuntu group in Syracuse, NY. Landon is another person who has proven that it is possible to succeed in the open source world, and he serves as an inspiration for me. Landon currently works for Rackspace.
Remy and I have known each other for a long time, and both of us are very active in the Rochester, NY open source community. Remy helped grow the FOSS movement at RIT and continues to be active to this day. The man is a legend in his own time. Remy is currently employed by Red Hat and serves as the Community Action and Impact Lead for the Fedora Project. Remy plays an integral role in helping universities include open source in their academic programs. Remy is also the co-founder of CIVX.us, a not-for-profit organization that focuses on access, openness, and transparency of public information.
Joe and I have known each other since 2008, when I first got involved with the New York State Ubuntu LoCo. Joe has a fantastically intelligent, witty, and dry sense of humor that comes out when discussing open source topics that normally devolve into holy wars (Emacs vs. Vim). Joe constantly inspires me to think more deeply about the open source movement.
We’re using NPM as Vanilla’s package manager, which gives us a number of advantages, such as an easy way to install and update the CSS framework. This all worked fine until we hit an issue with GitHub Pages: they do not support install scripts, so it is not possible to run npm install. This is highlighted in issue #4 on the Jekyll Vanilla theme project.
There are a number of ways to use Vanilla with Jekyll. Here are the methods we discussed, with their pros and cons.

Commit node_modules
This is not recommended, as it duplicates a lot of code. The repo will grow in size, as it will include all the framework code as well.

Clone and commit Vanilla without NPM
Again, this will include the entire framework in the repo’s code base. Another downfall would be the loss of the NPM update process.

Use Git submodules
This is the method we went with in the end. Creating a submodule in the git repo does not add all the code to the project, but includes a reference and path to the framework.
git submodule add -b v0.0.55 -- firstname.lastname@example.org:ubuntudesign/vanilla-framework.git _sass/vanilla-framework
Running the following command pulls the framework down into the correct location.
git submodule update
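After adding the submodule, git records the reference in a .gitmodules file at the root of the repo. As a sketch, the entry for the command above would look roughly like this (the URL is the repository address used in the git submodule add command; exact contents depend on your git version):

```ini
[submodule "_sass/vanilla-framework"]
	path = _sass/vanilla-framework
	url = <repository URL used in the "git submodule add" command>
	branch = v0.0.55
```

Committing .gitmodules alongside the submodule reference is what lets GitHub Pages (and anyone cloning the repo) resolve and fetch the framework at build time.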
We lose NPM’s functionality, but submodules are understood and resolved when a GitHub Pages site is built.

Conclusion
These methods were derived from a short exploration, but they solved our issue. Any better methods would be very much welcomed in the comments. You can see a demo of the Vanilla theme running on the project’s GitHub Page below:
It’s Episode Thirty-seven of Season Eight of the Ubuntu Podcast! With Mark Johnson, Laura Cowen, Martin Wimpress, and Alan Pope recording as normal over the internets which are suffering slightly from the storms outside…
In this week’s show:
We look at what’s been going on in the news:
- Valve launches a massive sale of SteamOS (Linux) titles to coincide with the launch of Steam Machines…
- The first Linux ransomware program, Linux.Encoder.1, has been cracked
- Nvidia has launched the Jetson TX1 module…
- Raspberry Pi Foundation and Code Club have merged…
- ISPs have told the Commons select committee that £175m budgeted by the UK government will not cover ‘massive costs’ of collecting everyone’s data…. Find out more on the Open Rights Group’s campaign page where you can email your MP.
- The first release candidate for QEMU v2.5 is out…
- Edward Snowden explains how to reclaim your privacy…
We also take a look at what’s been going on in the community:
There are even events:
- Ubuntu Community Appreciation Day – 20th November (every year!)
- Google Code-in – starts 7th December
- Pi Wars – the Raspberry Pi robotics challenge competition – Saturday 5 December, 10:00 to 18:30 (GMT) – University of Cambridge Computer Laboratory
- 10th Egham Raspberry Jam – Gamification – Sunday 17 January 2016, 14:00 to 17:00 (GMT) – Gartner, Surrey
- SCaLE 14x – January 21-24, 2016 – Pasadena Convention Center, California (including UbuCon Summit – 21st to 22nd)
- FOSDEM 2016 – 30 Jan 2016 to 31 Jan 2016 – Brussels, Belgium
That’s all for this week, please send your comments and suggestions to: email@example.com
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Follow us on Facebook
Follow us on Google+
Discuss us on Reddit
Intel have written a Platform Quality of Service Tool (pqos) to use these monitoring features and I've packaged this up for Ubuntu 16.04 Xenial Xerus.
To install, use:
sudo apt-get install intel-cmt-cat
The tool requires access to the Intel MSRs, so one also has to load the msr module if it is not already loaded:
sudo modprobe msr
To see the Last Level Cache (llc) utilisation on a system, listing the most used first, use:
sudo pqos -T
pqos running on a 48 thread Xeon based server
The -p option allows one to specify specific monitoring events for specific process IDs. Event types can be Last Level Cache (llc), Local Memory Bandwidth (mbl), and Remote Memory Bandwidth (mbr). For example, on a Xeon E5-2680 I have just Last Level Cache monitoring capability, so let’s view the llc for stress-ng while running some VM stressor tests:
sudo pqos -T -p llc:$(pidof stress-ng | tr ' ' ',')
pqos showing equally shared cache between two stressor processes
Cache and Memory Bandwidth monitoring is especially useful to examine the impact of memory/cache hogging processes (such as VM instances). pqos allows one to identify these processes simply and effectively.
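For scripted analysis, pqos output can be captured and post-processed. As a purely illustrative sketch, and assuming a whitespace-separated column layout of PID, core, RMID, and LLC occupancy in KB (the exact columns vary by hardware, event selection, and pqos version, so treat the layout here as an assumption), a few lines of Python can rank processes by cache footprint:

```python
# Toy parser for captured pqos process-monitoring output.
# Column layout (PID CORE RMID LLC[KB]) is an assumption; adjust to
# whatever your pqos version actually prints.
def top_llc(lines):
    rows = []
    for line in lines:
        parts = line.split()
        if len(parts) < 4 or not parts[0].isdigit():
            continue  # skip headers and banner lines
        pid, core, llc_kb = int(parts[0]), parts[1], float(parts[3])
        rows.append((pid, core, llc_kb))
    # largest cache occupancy first
    return sorted(rows, key=lambda r: r[2], reverse=True)

sample = [
    "PID   CORE   RMID   LLC[KB]",
    "1234  0      2      10240.0",
    "5678  1      3      20480.0",
]
print(top_llc(sample))
```

Feeding this the text captured from a monitoring run makes it easy to spot the cache hogs without watching the top-like display interactively.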
Future Intel Xeon processors will provide capabilities to configure cache resources for specific classes of service using Intel Cache Allocation Technology (CAT). The pqos tool allows one to modify the CAT settings; however, not having access to a CPU with these capabilities, I was unable to experiment with this feature. I refer you to the pqos manual for more details. The beauty of CAT is that it allows one to tweak and fine-tune the cache allocation for specific demanding use cases. Given that the cache is a shared resource that can be impacted by badly behaving processes, the ability to tune the cache behaviour is potentially a big performance win.
For more details of these features, see the Intel 64 And IA-32 Architecture Software Development manual, section 17.15 "Platform Share Resource Monitoring: Cache Monitoring Technology" and 17.16 "Platform Shared Resource Control: Cache Allocation Technology".
Tony Aubé writes interestingly about how No UI is the New UI:

“Out of all the possible forms of input, digital text is the most direct one. Text is constant, it doesn’t carry all the ambiguous information that other forms of communication do, such as voice or gestures. Furthermore, messaging makes for a better user experience than traditional apps because it feels natural and familiar. When messaging becomes the UI, you don’t need to deal with a constant stream of new interfaces all filled with different menus, buttons and labels. This explains the current rise in popularity of invisible and conversational apps, but the reason you should care about them goes beyond that.”
He’s talking here about “invisible apps”: Magic and Operator and to some extent Google Now and Siri; apps that aren’t on a screen. Voice or messaging or text control. And he’s wholly right. Point and click has benefits — it’s a lot easier to find a thing you want to do, if you don’t know what it’s called — but it throws away all the nuance and skill of language and reduces us to cavemen jabbing a finger at a fire and grunting. We’ve spent thousands of years refining words as a way to do things; they are good at communicating intent1. On balance, they’re better than pictures, although obviously some sort of harmony of the two is better still. Ikea do a reasonable job of providing build instructions for Billy bookcases without using any words at all, but I don’t think I’d like to see their drawings of what “honour” is, or how to run a conference.
The problem is that, until very recently, and honestly pretty much still, a computer can’t understand the nuance of language. So “use language to control computers” meant “learn the computer’s language”, not “the computer learns yours”. Echo, Cortana, Siri, Google Now, Mycroft are all steps in a direction of improving that; Soli is a step in a different direction, but still a valuable one. But we’re still at the stage of “understand the computer’s language”, although the computer’s language has got better. I can happily ask Google Now “what’s this song?”, or “what am I listening to?”, but if I ask it “who sang this?” then my result is a search rather than music identification. Interactive fiction went from “OPEN DOOR” to being able to understand a bewildering variety of more complex statements, but you still have to speak in “textadventurese”: “push over the large jewelled idol” is fine, but “gently push it over” generally isn’t. And tellingly IF still tends to avoid conversations, replacing them with conversation menus or “tell Eric about (topic)”.
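To make “textadventurese” concrete, here is a toy sketch (purely illustrative, not any real interactive-fiction engine) of the rigid verb-object grammar such parsers expect: fixed phrasings like “push over the large jewelled idol” parse fine, but anything with adverbs or pronouns, like “gently push it over”, falls on the floor:

```python
# Toy IF-style parser: rigid VERB [known filler words] NOUN grammar only.
# Vocabularies are invented for illustration.
KNOWN_VERBS = {"open", "push", "take"}
KNOWN_NOUNS = {"door", "idol", "lamp"}
FILLER = {"the", "a", "large", "jewelled", "over"}  # words the parser tolerates

def parse(command):
    words = command.lower().split()
    if not words or words[0] not in KNOWN_VERBS:
        return None  # "gently push..." fails here: "gently" is not a verb
    nouns = [w for w in words[1:] if w in KNOWN_NOUNS]
    leftovers = [w for w in words[1:] if w not in KNOWN_NOUNS and w not in FILLER]
    if len(nouns) != 1 or leftovers:
        return None  # pronouns, adverbs, anything unexpected: parse failure
    return (words[0], nouns[0])

print(parse("push over the large jewelled idol"))  # ('push', 'idol')
print(parse("gently push it over"))                # None
```

The point of the sketch is that the "understanding" lives entirely in hand-enumerated word lists: the player learns the computer's language, not the other way around.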
“User interface” doesn’t just mean “pixels on a screen”, though. “In a world where computer can see, listen, talk, understand and reply to you, what is the purpose of a user interface?”, asks Aubé. The computer seeing you, listening to you, talking to you, understanding you, and replying to you is the user interface.
In that list, currently:
- seeing you is hard and not very reliable (obvious example: Kinect)
- listening to you is either easy (if “listening” and “hearing” are the same thing) or very difficult (if “listening” implies active interest rather than just passively recording everything said around it)
- talking to you is easy, although (as with humans) working out what to say is not, and it’s still entirely obvious that a voice is a computer
- understanding you is laughably incomplete and is obviously the core of the problem, although explaining one’s ideas and being understood by people is also the core problem of civilisation and we haven’t cracked that one yet either
- replying to you requires listening to you, talking to you, and understanding you.
Replying by having listened, talked, and understood works fine if you’re asking “what’s this song?” But “Should I eat this chocolate bar?” is a harder question to answer. The main reason it’s hard is because of an important thing that isn’t even on that list: knowing you. Which is not the same thing as “knowing a huge and rather invasive list of things about your preferences”, and is also not something a computer is good at. In fact, if a computer were to actually know you then it wouldn’t collect the huge list of trivia about your preferences because it would know that you find it a little bit disquieting. If a friend of mine asks “should I eat this chocolate bar?”, what do I consider in my answer? Do I like that particular one myself? Do I know if they like it? Do lots of other people like it? Are they diabetic? Are they on a diet? Do they generally eat too much chocolate? Did they ask the question excitedly or resignedly? Have they had a bad day and need a pick-me-up? Do I care?
That list of questions I might ask myself before replying starts off with things computers are good at knowing — did the experts rate Fry’s Turkish Delight on MSN? And ends up with things we’re still a million, million miles away from being able to analyse. Does the computer care? What does it even mean to ask that question? But we can do the first half, so we do do it… and that leads inevitably to the disquieting database collection, the replacement of understanding with a weighted search over all knowledge. Like making a chess champion by just being able to analyse all possible games. Fun technical problem, certainly. Advancement in our understanding of chess? Not so much.

“When I was fifteen years old, I missed a period. I was terrified. Our family dog started treating me differently - supposedly, they can smell a pregnant woman. My mother was clueless. My boyfriend was worse than clueless. Anyway, my grandmother came to visit. And then she figured out the whole situation in, maybe, ten minutes, just by watching my face across the dinner table. I didn’t say more than ten words — ‘Pass the tortillas.’ I don’t know how my face conveyed that information, or what kind of internal wiring in my grandmother’s mind enabled her to accomplish this incredible feat. To condense fact from the vapor of nuance.”
That’s understanding, and thank you Neal Stephenson’s Snow Crash for the definition. Hell, we can’t do that, most of us, most of the time. Until we can… are apps controlled with words doomed to failure? I don’t know. I will say that point-and-grunt is not a very sophisticated way of communicating, but it may be all that technology can currently understand. Let’s hope Mycroft and Siri and Echo and Magic and Operator and Cortana and Google Now are the next step. Aubé’s right when he says this: “It will push us to leave our comfort zone and look at the bigger picture, bringing our focus on the design of the experience rather than the actual screen. And that is an exciting future for designers.” Exciting future for people generally, I think.
- and I leave completely aside here that French is not English is not Kiswahili, although this is indeed a problem for communication too ↩
In the last couple of weeks, we had to completely rework the packaging for the SDK tools and jump through hoops to bring the same experience to everyone, regardless of whether they are on the LTS or the development version of Ubuntu. It was not easy, but we are finally ready to put this beauty into developers’ hands.
The two new packages are called “ubuntu-sdk-ide” and “ubuntu-sdk-dev” (applause now please).
The official way to get the Ubuntu SDK installed is, from now on, by using the Ubuntu SDK Team release PPA:
https://launchpad.net/~ubuntu-sdk-team/+archive/ubuntu/ppa
Releasing from the archive with this new way of packaging is sadly not possible yet: in Debian and Ubuntu, Qt libraries are installed into a standard location that does not allow installing multiple minor versions next to each other. But since both the new QtCreator and the Ubuntu UI Toolkit require a more recent version of Qt than the one the last LTS has to offer, we had to improvise and ship our own Qt versions. Unfortunately, that also blocks us from using the archive as a release path.
If you have the old SDK installed, the default QtCreator from the archive will be replaced with a more recent version. However, apt refuses to automatically remove the packages from the archive, so that is something that needs to be done manually, ideally before the upgrade:
sudo apt-get remove qtcreator qtcreator-plugin*
The next step is to add the PPA and get the package installed:
sudo add-apt-repository ppa:ubuntu-sdk-team/ppa \
  && sudo apt update \
  && sudo apt dist-upgrade \
  && sudo apt install ubuntu-sdk
That was easy, wasn’t it :).
Starting the SDK IDE is just as before, either by running qtcreator or ubuntu-sdk directly and also by running it from the dash. We tried to not break old habits and just reused the old commands.
However, there is something completely new: an automatically registered Kit called the “Ubuntu SDK Desktop Kit”. That kit consists of the most recent UITK and the Qt used on the phone images, which means it offers a way to develop and run apps easily even on an LTS Ubuntu release. Awesome, isn’t it Stuart?
The old qtcreator-plugin-ubuntu package is going to be deprecated and will most likely be removed in one of the next Ubuntu versions. Please make sure to migrate to the new release path to always get the most recent versions.
Barry Kauler, the creator of the Puppy Linux computer operating system, has had the great pleasure of announcing today, November 17, the release and immediate availability for download of Puppy Linux 6.3 “Slacko.”
According to Mr. Kauler, Puppy Linux 6.3 “Slacko” has been built from the binary TXZ packages of the Slackware 14.1 GNU/Linux operating system, and it is available, for the first time, as two Live ISO images, one for each of the supported hardware architectures, 32-bit (x86) and 64-bit (x86_64).
“This is distinct from Puppy 6.0.x, which is built from Ubuntu Trusty Tahr binary packages, coordinated by Phil Broughton. Mick coordinated Puppy 5.7.x which is also built with Slackware packages,” says Barry Kauler. “For the first time, Puppy is released in both 32-bit and 64-bit versions.”
Submitted by: Arnfried Walbrecht
Sometimes it is just quicker to type a few commands on the CLI than opening your browser window, going to https://jujucharms.com, and typing out a search term to see what is available to you. So I wrote a tiny plugin to speed that up a bit.

Install the plugin
Currently supports Trusty, Vivid, and Wily.

$ sudo apt-add-repository ppa:adam-stokes/juju-query
$ sudo apt-get update
$ sudo apt-get install juju-query

Searching the charmstore
If you know the exact name of the charm:

$ juju search ghost
Results in:

Precise
  cs:precise/ghost-3
Example: juju deploy cs:precise/ghost-3
Get additional information: juju info cs:precise/ghost-3
Or part of a charm name:

$ juju search nova-cloud\*
Gives us:

Trusty
  cs:trusty/nova-cloud-controller-64
Precise
  cs:precise/nova-cloud-controller-55
Namespaced
  cs:~landscape/trusty/nova-cloud-controller-6
  cs:~landscape/trusty/nova-cloud-controller-next-49
  cs:~openstack-charmers-next/trusty/nova-cloud-controller-17
  cs:~cory-benfield/trusty/nova-cloud-controller-10
  cs:~andybavier/trusty/nova-cloud-controller-3
  cs:~mmenkhof/trusty/nova-cloud-controller-1
  cs:~plumgrid-team/trusty/nova-cloud-controller-0
  cs:~adam-collard/trusty/nova-cloud-controller-0
  cs:~bjornt/trusty/nova-cloud-controller-1
  cs:~chad.smith/trusty/nova-cloud-controller-0
  cs:~project-calico/trusty/nova-cloud-controller-0
  cs:~landscape/trusty/nova-cloud-controller-stable-integrityerror-1
  cs:~niedbalski/trusty/nova-cloud-controller-0
  cs:~celebdor/trusty/nova-cloud-controller-1
  cs:~landscape/trusty/nova-cloud-controller-leadership-election-0
  cs:~nuage-canonical/trusty/nova-cloud-controller-1
  cs:~le-charmers/trusty/nova-cloud-controller-0
  cs:~openstack-ubuntu-testing/precise/nova-cloud-controller-38
  cs:~charmers/precise/nova-cloud-controller-27
  cs:~springfield-team/precise/nova-cloud-controller-11
  cs:~gandelman-a/precise/nova-cloud-controller-2
  cs:~ivoks/precise/nova-cloud-controler-0
Example: juju deploy cs:~landscape/trusty/nova-cloud-controller-6
Get additional information: juju info cs:~landscape/trusty/nova-cloud-controller-6

Getting more information
This will give you the output of the README stored on https://jujucharms.com:

$ juju info ghost|less
And the output of the README right on your screen \o/

ghost
Ghost is a simple, powerful publishing platform.
README
# Overview
Ghost is an Open Source application which allows you to write and publish your own blog, giving you the tools to make it easy and even fun to do. It's simple, elegant, and designed so that you can spend less time making your blog work and more time blogging.
This is an updated charm originally written by Jeff Pihach and ported over to the charms.reactive framework and updated for Trusty and the latest Ghost release.
# Quick Start
After you have a Juju cloud environment running:
$ juju deploy ghost
$ juju expose ghost
To access your newly installed blog you'll need to get the IP of the instance.
$ juju status ghost
Visit `<your URL>:2368/ghost/` to create your username and password. Continue setting up Ghost by following the [usage documentation](http://docs.ghost.org/usage/). You will want to change the URL that Ghost uses to generate links internally to the URL you will be using for your blog.
$ juju set ghost url=<your url>
# Configuration
To view the configuration options for this charm open the `config.yaml` file or:
$ juju get ghost
This plugin utilizes the theblues Python library for interfacing with the charmstore's API. Check out the project on their GitHub page.
If you want to contribute to the plugin, you can find it on my GitHub page. Some other features I'd like to add are getting the configuration options, searching bundles, showing which relations are provided/required, etc.
Are you excited?
On December 7th, we'll be gaining a whole slew of potential contributors. Interested students will select from the tasks we as a community have put forth and start working them. That means we need your help to both create those tasks, and mentor incoming students.
I know, I know, it sounds like work. And it is a bit of work, but not as much as you think. Mentors need to provide a task description and be available for questions if needed. Once the task is complete, check the work and mark the task complete. You can be a mentor for as little as a single task. The full details and FAQ can be found on the wiki. Volunteering to be a mentor means you get to create tasks to be worked, and you agree to review them as well. You aren't expected to teach someone how to code, write documentation, translate, do QA, etc, in a few weeks. Breathe easy.
You can help!
I know there are plenty of potential tasks lying in wait for someone to come along and help out. This is a great opportunity for us as a community to both gain a potential contributor and get work done. I trust you will consider being a part of the process.
I'm still not sure
Please do have a look at the FAQ, as well as the mentor guide. If that's not enough to convince you of the merits of the program, I'd invite you to read one student's feedback about his experience participating last year. Being a mentor is a great way to give back to Ubuntu, get involved, and potentially gain new members.
I'm in, what should I do?
Contact me, popey, or José, who can add you as a mentor for the organization. This will allow you to add tasks and participate in the process. Here's to a great GCI!
Firmware updates usually end well. The previous (1.15.19) firmware failed to boot on some of the Mustangs at Red Hat but worked fine on the one under my desk. Yesterday I got 1.15.22 plus a slimpro update and managed to get the machine into a non-bootable state (the firmware works fine on other machines).
So how to get APM Mustang back into working state?
- Get an SD card and connect it to a Linux PC with card-reader support.
- Download the Mustang software from MyAPM (1.15.19 was the latest available there).
- Unpack “mustang_sq_1.15.19.tar.xz” and then “mustang_binaries_1.15.19.tar.xz” tarballs.
- Write the boot loader firmware to the SD card: “dd if=tianocore_media.img of=/dev/SDCARD“.
- Take FAT formatted USB drive and put there some files from “mustang_binaries_1.15.19.tar.xz” archive (all into root directory):
- Power off your Mustang
- Configure the Mustang to boot from the SD card via these jumper changes:
- Find HDR9 (close to HDR8, which is next to the PCIe port)
- Locate pins 11-12 and 17-18.
- Connect 11-12 and 17-18 with jumpers
- Insert the SD card into the Mustang SD port
- Connect a serial cable between the Mustang and your PC.
- Run minicom/picocom/screen/other-preferred-serial-terminal and connect to the Mustang serial port
- Power up the Mustang and it should boot with the SD UEFI firmware:
- Press any key to get into UEFI menu.
- Select the “Shell” option and you will be greeted with a list of recognized block devices and filesystems. Check which one is the USB drive (“FS6” in my case).
- Flash firmware using “UpgradeFirmware.efi apm_upgrade_tianocore.cmd” command.
- Power off
- Change jumpers back to normal (11-12 and 17-18 to be open).
- Eject SD card from Mustang
- Power on
And your Mustang should be working again. You can also try writing other firmware versions, of course, or grab the files from the internal HDD.
I maintain a membership database for my canoe club, which I implemented years ago using a PHP library called Phormation. It let me make an index page with simple code like:
query = "SELECT * FROM member WHERE year=2015"
and an entry editing page with something like this:
query = "SELECT * FROM member WHERE id=$id"
widgets.append([column1, “name”, Textfield])
widgets.append([column2, “joined”, Date])
and voilà, I had a basic UI to edit the database.
Now I want to move to a new server, but it seems PHP has made a backwards-incompatible change between 5.0 and 5.5, so Phormation no longer runs, and it's no longer maintained.
So, lazyweb, what's the best way to make a basic web database editor where you can add some basic widgets for different field types, and there are two tables with a 1:many relationship which both need editing?
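Not a full answer to the lazyweb question, but it may help to see how little the Phormation snippets above actually do: a widget list drives both the SELECT and the UPDATE. A minimal, illustrative Python sketch of that core (table and column names taken from the snippets, the rest invented; no HTML layer):

```python
import sqlite3

# Phormation-style editor core: a list of (column, widget type) pairs
# drives both the form and the generated SQL.
widgets = [("name", "Textfield"), ("joined", "Date")]

def load_row(conn, member_id):
    cols = ", ".join(w[0] for w in widgets)
    cur = conn.execute(f"SELECT {cols} FROM member WHERE id=?", (member_id,))
    return cur.fetchone()

def save_row(conn, member_id, values):
    sets = ", ".join(f"{w[0]}=?" for w in widgets)
    conn.execute(f"UPDATE member SET {sets} WHERE id=?", (*values, member_id))

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT, joined TEXT)")
conn.execute("INSERT INTO member VALUES (1, 'Alice', '2015-01-01')")
save_row(conn, 1, ("Bob", "2015-06-01"))
print(load_row(conn, 1))  # ('Bob', '2015-06-01')
```

The 1:many case is the same pattern with a second widget list keyed on the foreign key; the hard part, as ever, is the form rendering and validation that Phormation used to provide.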
DruCall is one of the easiest ways to get up and running with WebRTC voice and video calling on your own web site or blog. It is based on 100% open source and 100% open standards - no binary browser plugins and no lock-in to a specific service provider or vendor.
On Debian or Ubuntu, installing it is just a command such as:
# apt-get install -t jessie-backports drupal7-mod-drucall
Most of my experience is in server-side development, including things like the powerful SIP over WebSocket implementation in the reSIProcate SIP proxy repro.
In creating DruCall, I have simply concentrated on those areas related to configuring and bringing up the WebSocket connection and creating the authentication tokens for the call.
Those things provide a firm foundation for the module, but it would be nice to improve the way it is presented and optimize the integration with other Drupal features. This is where the projects (both DruCall and JSCommunicator) would really benefit from feedback and contributions from people who know Drupal and web design in much more detail.

Benefits for collaboration
If anybody wants to collaborate on either or both of these projects, I'd be happy to offer access to a pre-configured SIP WebSocket server in my lab for more convenient testing. The DruCall source code is a Drupal.org hosted project and the JSCommunicator source code is on Github.
When you get to the stage where you want to run your own SIP WebSocket server as well then free community support can also be provided through the repro-user mailing list. The free, online RTC Quick Start Guide gives a very comprehensive overview of everything you need to do to run your own WebRTC SIP infrastructure.
Elections are an important opportunity for people to select who will represent them. That is the case in national elections as well as those for the Ubuntu Community. Currently there is an ongoing election for the Ubuntu Community Council and if you are an Ubuntu Member you have an opportunity to select the people who will serve on the Community Council. Last election 299 votes were cast out of 732 eligible voters. That is an election turnout of 41%. I am posting this as a reminder to all Ubuntu Members to cast their vote. It would be great to have a better turnout this election.
- C de-Avillez (hggdh)
- Chris Crisafulli (itnet7)
- Daniel Holbach (dholbach) – Incumbent
- Jose Antonio Rey (jose)
- Laura Czajkowski (czajkowski) – Incumbent
- Marco Ceppi (marcoceppi)
- Michael Hall (mhall119) – Incumbent
- Phillip Ballew (philipballew)
- Scarlett Clark (sgclark)
- Svetlana Belkin (belkinsa)
- Walter Lapchynski (wxl)
Our current turnout is 32% and there are eight days left to vote. Remember to vote! Let's ROCK the election!
HP has announced a new release of its open source and freely distributed HPLIP (HP Linux Imaging and Printing) driver for GNU/Linux operating systems.
According to the release notes, HPLIP 3.5.11 adds support for the newly released Ubuntu 15.10 (Wily Werewolf), Fedora 23, and openSUSE Leap 42.1 GNU/Linux operating systems. It also includes support for custom AppArmor profiles and SELinux policy, along with support for automatic discovery of network scanners. A new knowledge base article has been added as well, and it includes information about unblocking ports and enabling mDNS and SLP services through the openSUSE Firewall tool.
Submitted by: Arnfried Walbrecht
According to the kernel bug fix advisory, two flaws have been discovered and patched in the Linux kernel 2.6.18 packages. The first one is related to the incorrect setting of a utrace flag, which caused the kernel to no longer handle the NULL pointer dereference in the utrace_unsafe_exec() function, leading to a system crash. The second concerns a delay in the reset execution of newly changed firmware.
“Updated kernel packages that fix two bugs are now available for Red Hat Enterprise Linux 5. The kernel packages contain the Linux kernel, the core of any Linux operating system. […] Users of kernel are advised to upgrade to these updated packages, which fix these bugs. The system must be rebooted for this update to take effect,” reads the announcement.
Submitted by: Arnfried Walbrecht
The next OTA, OTA 8, is due to land in the next day or two:
This is what we will find in it:
- New weather application
- Improved Contacts sync (implements a new synchronisation engine)
- The sound indicator now provides audio playback controls - currently play and pause only, skip forward/skip backward to follow
- New Twitter scope includes the ability to tweet, comment, follow and unfollow
- New Book aggregator scope, with lots of regional content
- The OTA version is now reported in Settings > About this phone
- Location service now additionally provides location and heading information
- Web browser now includes:
- Media access permissions for sites
- Top level bookmarks view
- Thumbnails and grid view for Top Sites page
- Ubuntu store: QtPurchasing-based in-app purchases (currently in pilot mode)
- Various bug fix details can be found here.
Over the past two-plus years, I started many projects within the Open * communities that I'm a part of. Most of these projects were meant to be worked on by two or more people (including me, of course), but I never had luck getting anyone to work together with me. Okay, once it succeeded, and two or three times it came close but still failed. That one time it succeeded was because I was on the Membership Board, where the members had to be committed.
Because many projects meant for collaboration failed, it means either that the communities don't have enough people willing to work with me (or on anything!), or enough time to commit, or that I have networking issues. The latter is within my control; the former is one of the problems that most of the Open * communities face.
Lacking support, and the feeling of not getting things done over these two-plus years, is making me lose motivation to volunteer within these communities. In fact, some of this has already affected four teams within the Ubuntu Community: Ubuntu Women, Ubuntu Ohio, Ubuntu Leadership Team, and Ubuntu Scientists, where no news or activity is shown. As for the others, I'm close to removing myself from the communities, something that I don't want to do, and this is why I wrote this. It's to answer my question of: Where's my support?! (“me” in the title, but it's for the lightheartedness that this post needs) I know of a few who may be feeling this also.
A thought that came to me as I wrote this post: what if I worked on a site that could serve as a volunteer board for projects within the Open * communities? Something like “Find a Task”, started by Mozilla (I think) and brought over to the Ubuntu Community by Ian W, but maybe as a Discourse forum or Stack Exchange. The only problem that I will face is, again, support from people who want to post and to read. I had issues getting Open Science groups/bloggers/people to add their blog's feed to Planet Open Science, hosted by OKFN's Open Science. But that might be different if almost all types of Open * movements are represented. Who knows.
Readers, please don't worry: although this post was written during the CC election in the Ubuntu Community, it will not affect my will to run for a chair. In fact, I think being on the CC could help me learn to deal with this issue if others are facing it but are afraid to talk about it in public.
I really, really don’t want to leave any of the Open Communities because of lack of support and I hope some of you can understand and help me. I would like your feedback/comments/advice on this one.
P.S. If this sounded like a rant, sorry, I had to get it out.