Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.
Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter had a value 26 kWh lower than when I started.
I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.
Now that I have the new digital meter, things are less exciting. It has one register measuring how much power I'm buying from the grid, and another measuring how much excess power I'm exporting back to the grid. So far, I've bought 32 kWh and exported 53 kWh of excess energy. Ideally I want to minimise the excess, because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to shift as much of my consumption as possible into the daylight hours, so that I'm using the energy rather than exporting it.
On a good day, it seems I'm generating about 10 kWh of energy.
At least, it is, to some degree, for me. And I do feel that given sufficient focus, calm and quiet (or perhaps background noise, depending on the mood I'm in), I can get "in the zone", and solutions to what I'm trying to do come somewhat naturally. Not to say that I'm necessarily writing good code, but at least it forms some sort of sense in my mind.
People have different ways to achieve focus. Some meditate; for some it comes more easily than for others. For some people, it works well to execute some kind of ritual to get in the right frame of mind: those can be as insignificant as getting out of bed in a certain way (for those fortunate enough to work from home), or as complicated as necessary. I believe many, if not most, integrate it into their routine, to the point that they perhaps forget what it is that they do to attain focus.
For me, it now happens to be shaving, and the associated processes. It used to be kind of a chore, until I picked up wet shaving, and in particular, straight razor shaving.
There's nothing quite like putting a naked, extremely sharp blade against your skin to get you to only think about one thing at a time :)
I won't lie, the first shave with that relic was a scary experience. I wasn't at all sure of myself, with only a few tips and some videos on Youtube as training. I had bought a straight razor from Le Centre du Rasoir near my house after stumbling on articles about barbershops on the web, and the subject had somehow caught my interest.
Since then, I've slowly taken up the different tasks that go with the actual act of shaving with a straight razor: honing the blade, stropping, shaving, etc.; picking up the different tools required (blade, strop, honing stones, shaving creams or soaps, etc.). It was while slowly honing and restoring four straight razors, which came to me from eBay and as a gift from my father, that I thought of writing this post, during a short break I took from the honing. Getting back home and putting the finishing touches on the four razors got me thinking, and I noticed I had again become much more relaxed just by taking the time to do one thing well, taking care in what I was doing.
I think every developer.... well, everyone can benefit from acquiring some kind of ritual like this, using our hands rather than our brains to achieve something. It's at least a great experience to get a little bit away from technology for a short while, revisiting old skills of earlier times.
As for the wet shaving itself, I'd be happy to respond to comments here, or blog again about it if there's enough interest in the subject; I'd love to hear that I'm not the only one in the Ubuntu and Debian communities crazy enough to take a blade to my face.
Fernando, Marta and I went back to the university where Ubuconla is being held.
This time I opened with a talk on building webapps. Practice is starting to show, and I was much calmer, although Naudy left me blank by interrupting me mid-conference to hand me an Ubuntu DVD (they are raffled off at the end) (!?).
Requirements for a webapp. Starting the workshop. A webapp's manifest. How a website asks to install the webapp. How to read a website's properties.
Meanwhile, Fernando Lanero gave his talk to the children, preparing them for the theatre play.
Fernando Lanero starts his talk to the pupils. So attentive!
The rest of the morning I barely had time to attend the other talks, because several attendees came to me with questions or problems with their laptops. That is always a good way to make new friends, like Elías, and the two hours until lunchtime flew by.
Solving a quick question. And a problem that took me longer than expected.
Fernando Amen giving his talk. About I-Linux.
Another of the morning talks. The pupils performing the first play. And starting the second. Fernando and Marta with the pupils' teacher. Francisco Javier Pérez. The audience packed every talk, from the first to the last.
Talks, talks and more talks
Fernando and Marta went back to the hotel, as they both had diarrhoea.
I really enjoyed lunch: the university hosted us in a room with all the organizers and speakers, and it was great to share a quiet moment with the rest of the group, especially Fernando García Amen, Sergio Meneses, Dante and Naudy.
A very pleasant lunch. The invitation from the Universidad Tecnológica Simón Bolivar was fantastic.
After lunch I managed to solve a problem for a young man who needed a server on his laptop, and attended a very interesting talk on Ubuntu in embedded systems.
Ubuntu on embedded systems. Hardware had its moment in the spotlight too.
The Cubieboard was the big winner. The talk was great. There was even a small demonstration.
I was very impressed by the power of SAGEMATH in the workshop given by Emmanuel.
Emmanuel with his SAGEMATH workshop. One of the best talks.
The passion Emmanuel conveys makes you passionate about mathematics.
Lu also started to have a fever and diarrhoea, so we went back to the hotel early so she could rest. She took an ibuprofen and fell asleep.
We had agreed to meet at 19:00 at the Torre del Reloj to go to dinner with whichever speakers turned up. I didn't want to leave her alone, but she convinced me, saying she was really going to sleep.
So at 18:50 I was already waiting at the Torre del Reloj. The first to arrive was Fernando García Amen, then Victoria, and finally Sergio at around 19:15 (cough, cough, ahem! ;) I teased him, asking whether we should wait a bit longer, since the day before they had already left by 19:03 :P And in no time we were eight.
We strolled for a while before stopping at an Italian restaurant for dinner. My opinion is that when travelling you should try to eat the local food, but what really matters in this case is the company. Over dinner we chatted and laughed a lot, especially about what one of the diners chose for dinner, which I can't even make public hahaha >;)
An unforgettable dinner
After dinner, a pleasant stroll to an ice-cream parlour where they give you a delicious two-scoop ice cream that must weigh 400 g :S That, on top of the spaghetti with tomato sauce... so much for the diet! :P
Then I went back to Lu, leaving the rest of the team to enjoy the Cartagena night :)
And tomorrow, the last day of Ubuconla.
Keep reading more about this trip.
The good news is that if you have a web browser, you can probably make successful WebRTC calls from one developer to another without any need to install or configure anything else.
The bad news is that not every permutation of browser and client will work. Here I list some of the limitations so people won't waste time on them.

The SIP proxy supports any SIP client
Just about any SIP client can connect to the proxy server and register. This does not mean that every client will be able to call each other. Generally speaking, modern WebRTC clients will be able to call each other. Standalone softphones or deskphones will call each other. Calling from a normal softphone or deskphone to a WebRTC browser, or vice-versa, will not work though.
Some softphones, like Jitsi, have implemented most of the protocols needed to communicate with WebRTC, but they have yet to put the finishing touches on them.

Chat should just work for any combination of clients
The new WebRTC frontend supports SIP chat messaging.
There is no presence or buddy list support yet.
You can even use a tool like sipsak to accept or send SIP chats from a script.
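As a sketch of how a scripted chat message might look with sipsak (the SIP URI below is a placeholder, not a real account, and the flags should be checked against your sipsak version):

```shell
# Send a SIP instant message (MESSAGE method) from a script.
# sip:alice@rtc.example.org is a placeholder address; -M selects the
# MESSAGE method and -B sets the message body.
sipsak -M -B "Nightly build finished OK" -s sip:alice@rtc.example.org
```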
Chat works for any client, new or old. Although a WebRTC user can't call a softphone user, for example, they can send chats to each other.

WebRTC support in Iceweasel 24 on wheezy systems is very limited
On a wheezy system, the most recent Iceweasel update is version 24.7.
This version supports most of WebRTC but does not support TURN relay servers to help you out of a NAT network.
If you call between two wheezy machines on the same NAT network it will work. If the call has to traverse a NAT boundary it will not work.
Interactive Connectivity Establishment (ICE, RFC 5245) is meant to prevent calls from being answered with missing audio or video streams.
ICE is a mandatory part of WebRTC.
JsSIP does not operate in this manner, though: it alerts the callee, waits for the callee to answer, and only then tells the browser to start the connectivity checks. This is not a fault in the ICE standard or the browser; it is an implementation problem.
Therefore, until this is fully fixed, people may still see some calls that appear to answer but don't have any media stream. After this is fixed, such calls really will be a thing of the past.

Debian RTC testing is more than just a pipe dream
Although these glitches are not ideal for end users, there is a clear roadmap to resolve them.
There is also a growing collection of workarounds to minimize the inconvenience. For example, JSCommunicator has a hack to detect when somebody is using Iceweasel 24 and simply refuse to make the call. See the option require_relay_candidate in the config.js settings file. This also ensures that it will refuse to make a call if the TURN server is offline. Better to give the user a clear error than a call without any audio or video stream.
require_relay_candidate is enabled on freephonebox.net because it makes life easier for end users. It is not enabled on rtc.debian.org because some DDs may be willing to tolerate this issue when testing on a local LAN.
In this week’s show:
- We discuss whether Google are eating our lunch? Not literally. At least, I hope not…
We also discuss:
- We share some Command Line Lurve that tells you the day of the week (modified from @climagic):
$ date -d "Nov 15" | cut -d' ' -f1
Sat
$ date -d "Nov 15 2015" | cut -d' ' -f1
Sun
$ date -d "Nov 15 2016" | cut -d' ' -f1
Tue
- And we read your feedback. Thanks for sending it in!
We’ll be back next week, so please send your comments and suggestions to: firstname.lastname@example.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: email@example.com and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
Ronnie Tucker: Ubuntu Shopping Lens (Scopes) Declared Legal in the UK and Most Likely in the European Union
The UK authorities have declared that the Ubuntu Shopping Lens are legal and that no laws have been broken, either in Great Britain or in the European Union.
Some of you might remember that Canonical took a lot of flak from the community when the developers decided to integrate the Shopping Lens into the Ubuntu operating system. Two years have passed since then and a lot of things have changed in the meantime.
For one, the Lens are now called Scopes, but that’s beside the point. When the Ubuntu Shopping Lens were first introduced, users didn’t have any control over them, at least not in a clear and easy way. There was no warning that data was sent over the network and there was no button to turn it off.
Currently, very few people even mention the Shopping Lens, and that is a clear sign that users have gotten used to them and that they have learned to use them or shut the functionality off entirely.
Submitted by: Silviu Stahie
Starting in 3, 2, 1...
The sheer number of attendees at free software events in Latin America will always amaze me, but first thing in the morning, wow, the place was already packed, completely full. Impressive!
Hundreds of attendees. Each with their registration.
The day began with the presentation of the event by Jairo Serrano, Dean of the Universidad Tecnologica de Bolivar, Bart and Sergio.
Sergio Meneses.
Fernando followed with the story of his school's migration, which I interviewed him about some time ago. The talk was very entertaining and ended with a small raffle for whoever had paid the most attention during the conference.
Fernando Lanero. With a huge audience.
Explaining his school's migration to Ubuntu. What a slide! :)
After his talk there were 30 places to learn Ubuntu Touch, and lots of people interested. There was also a simultaneous session on how to localize Ubuntu, by Dante Díaz Figueroa.
I chose Dante's session, given my past as a localizer into Asturian, and I took the chance to clear up a personal question, which Dante answered very professionally.
Dante explained how to localize Ubuntu. Attention was at its peak.
After Dante, Naudy showed us the great work of getting a laptop with free software to Venezuelan students through the Canaima distro.
Naudy explained Canaima. Also showing the netbook that is given to students in Venezuela.
The littlest ones were also part of this Ubuconla :D
And the first big theatre play by the schoolchildren, a new project for these events, which consists of bringing together school pupils and giving them a brief introduction to what free software and the Ubuntu spirit are all about.
A pioneering experience.
After the explanation, they have to put together a play, which they perform for all the event's attendees and which, of course, has to be about Ubuntu.
First theatre play
Another of the theatre plays
"Combining free software with teaching", as Cesar Vázquez, the teacher of the group of students, aptly put it; Fernando gave them the introduction.
All the performers, each giving a different take on the Ubuntu spirit.
After the students' innovative play, we all went to eat together at a typical restaurant nearby. The food was great, but the conversation even better :D
A summary of the first day
A little before 2 I went over with Lu to set up the computer for my talk on security.
When I arrived, a talk that had started at 13:00 was under way, and I had missed it :( Having several simultaneous talks is a good thing, really, but it's a pity when you are interested in all of them and can only attend half :P
Sniff, the talk I missed.
My first conference talk went very well, despite my initial nerves. There were lots of questions too :) On my way out, Nel and Víctor made me blush by asking for an autograph and taking photos with me :$
Starting my first conference talk
I was more nervous than I looked. And the audience was very interested. On the subject of security, I couldn't leave out Gufw.
The rest of the afternoon I enjoyed Rodny Silgado's talk on Inkscape
Learning about Inkscape. And Jiliar Silgado's introduction to Python:
Learning Python things
Attendees and more attendees :D
After Ubuconla we went back to our hotels to recover with a cold shower and then all meet at 7 in the centre.
We stopped by for Fernando and Marta, who had fallen fast asleep and took a while to come down. Arriving at the meeting point about 10 minutes late, we only found Dante, and by pure chance: he wasn't there for dinner, but out for a walk.
We waited about half an hour and, seeing that nobody else was coming, the four of us had dinner at a typical restaurant opposite Fernando's hotel. Honestly, the food was not as good as the previous day's, and more expensive on top of that.
At 21:30 we headed to a club, where Bart treated us to beers and rum. The club had great decor, decent rumba-style music and, best of all, as always, the company, especially getting to know Emmanuel Armando Rosales, a super friendly, fun-loving mathematician, along with other colleagues :)
It can't all be Ubuntu :)
Around 2 in the morning we went back to the hotel after a first day of Ubuconla that I thoroughly enjoyed, sharing very pleasant moments with the community that makes Ubuntu what it is: unique :)
A day surrounded by...
... exceptional colleagues
Tux is at the party too. An unforgettable day.
I love the customized laptops. And tomorrow, more ;)
Keep reading more about this trip.
Rafał Cieślak: Multi-OS gaming w/o dual-booting: Excellent graphics performance in a VM with VGA passthrough
Note: This article is a technology/technique outline, not a detailed guide or how-to. It explains what VGA passthrough is, why you might be interested in it, and where to start.
Even with the current abundance of Linux native games (both indies and AAAs), with WINE reliably running almost any not-so-new software, many gamers who use Linux on a daily basis tend to switch to Windows for playing games. Regardless of one’s attitude towards non-free software, it has to be admitted that if you wish to try out some of the newest titles, you have no other choice than running them on a Windows installation. This is why so many gamers dual-boot: having installed two operating systems on the same machine and using Windows for playing games and Linux for virtually anything else, they limit their usage of Microsoft’s OS for gaming only. This popular technique seems handy – you get the luxury of using a Linux, and the gaming performance of Windows.
But dual-booting is annoying, because switching contexts requires a reboot. Need to IM your friend while playing? Save your game, shut down Windows, reboot to Linux, launch the IM client, reboot to Windows, load your game. Switching takes a long time and is inconvenient, which discourages the player from doing it.
What if you could run both operating systems at once? That's nothing new: run a virtual machine on your Linux system, install Windows within it, and voilà! But a virtual machine is no good for gaming; the performance will be terrible. Playing chess might work, but any 3D graphics won't do, because of the lack of hardware acceleration. The VM emulates a simple graphics adapter to display its output in a window of the host OS.
And that is where VGA passthrough comes in and solves this issue.

1. The idea
The key to getting neat graphics in a VM is to grant the virtual machine full access to your graphics card. This means that your host OS will not touch this piece of hardware at all, and the guest OS will be able to use it like any other (emulated) hardware. The guest OS (presumably Windows) will load its own drivers for the graphics adapter and communicate with it natively! Therefore it will have full access to hardware acceleration and any other goodies that gear might provide (e.g. HDMI audio). The idea of passing a VGA adapter to a virtual machine is usually called VGA passthrough.
Sounds crazy? Let me tease you: my setup is capable of smoothly running Watch_Dogs and Tomb Raider (2013) on Ultra settings at 60+ FPS within that virtual machine, using NVIDIA's GTX 770. And I get the luxury of running both OSes at once, so I can switch between them in the blink of an eye, without shutting down either one! This is astonishingly convenient.
Because the dedicated graphics hardware will be reserved for the guest system, the host will need another graphics adapter to display anything. So there comes the first hardware requirement: you need at least two graphics adapters. However, that is not uncommon: many new Intel processors have built-in graphics, and if you are a gamer, chances are you have invested in a dedicated graphics card, so that makes two graphics adapters already. Let the host system use the integrated graphics, and the guest will get the powerful dedicated graphics for games. Because both graphics adapters will work independently and there is no way to compose their video output¹, you will need two separate displays, one for each system. This means either a set of two monitors, or a monitor with two video inputs (so that you can switch between them). You might also experiment with a KVM switch.
Also keep in mind that it is not an easy thing to set up. While some claim they have succeeded on their first try, many others have struggled a lot. Personally, I spent about two weeks tuning things up to get my VGA passthrough running, and if we count hardware searching and preparations, it took me two months. But it was completely worth it! My current setup consists of:
- Intel i7-4790K (4 x 2 x 4.0GHz)
- ASRock Z97 Extreme6
- NVIDIA GTX 770 4GB
- and some 16 gigs of RAM
- also, a monitor with multiple video inputs (I switch video source using buttons on the monitor)
- Ubuntu 14.04
As I have mentioned, this set is capable of running very demanding games at maxed settings with amazing results. How does it work in practice? It feels as if I was running both systems at once. For example, while playing a game under Windows, my Linux has an IM client running. Because I mix the sound from both systems, I can hear the notification when I get a message. So I pause the game, switch monitor video source with a hotkey shortcut, respond to the message, and switch the video back. If only I had two monitors, I would play on one of them, with the host system using the other one – so I wouldn’t even need to touch the monitor to switch the OS, I would just need to rotate my head a little bit ;-)
Getting here was a lot of work, but a lot of fun too! The first step is to meet the…

2. Hardware requirements
Yeah. Not every machine will be able to do this trick. As already mentioned, you need two graphics adapters. However, it is not possible to pass through the graphics integrated into your CPU! This is because passthrough works by separating a PCI device from the host system and attaching it to the guest OS; therefore you can only pass through dedicated graphics hardware. Probably not much of a problem, but an important note.
You also need to ensure that your CPU and mainboard support IOMMU – extensions for I/O virtualisation, which are necessary for passing through a PCI device. Intel calls their IOMMU technology “VT-d”, while AMD refers to theirs as “AMD-Vi”. This is an absolute must, so if you are buying new hardware, make sure both your processor and the chipset support IOMMU²!
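On hardware you already own, a rough way to check is to look at the CPU flags and, after enabling the IOMMU, at the kernel log; the exact messages vary by kernel version and vendor, so treat this as a sketch:

```shell
# CPU virtualization extensions (vmx = Intel VT-x, svm = AMD-V):
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

# After booting with intel_iommu=on (or amd_iommu=on), look for
# IOMMU / DMAR initialisation messages in the kernel log:
dmesg | grep -i -e DMAR -e IOMMU
```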
Also, if you plan to use a CPU integrated graphics adapter for the host system, make sure that the mainboard supports it, and that it has a video output!
You will get the best results with a multi-core CPU. Demanding games require not only powerful graphics hardware, but a decent CPU as well! It is possible to reserve some of the CPU's cores for the VM; this way you can ensure that the guest OS is granted enough computational power. For example, in my setup the host OS uses 2 cores, while the other 6 are at Windows' disposal.
Also, as explained, you need a monitor with several inputs, or a set of two. I am not aware of any way to get this working on a laptop, as most laptops I know of have just one display, and you cannot manually switch between video sources¹.
So the full list of requirements is:
- IOMMU compatible CPU and mainboard
- A dedicated PCI graphics adapter (for passing through)
- Graphics hardware for the host OS (can be integrated in CPU)
- Monitor with multiple video inputs (recommended two monitors)
- (Recommended: multi-core CPU).
Warning: Note that you DO NOT NEED a multi-OS graphics card! Contrary to popular belief, non-Quadro NVIDIA cards will work well, with no hardware modifications of any kind!

3. Methods
There are two popular passthrough techniques: one involves Xen virtualization, and the other uses Qemu and VFIO. Having played around with both, I am personally a fan of the Qemu way: it seems much easier to set up, I get more control over my VM, customizations are easier, and, most importantly, it works with virtually any PCI graphics adapter!
There is a lot of confusion on the Internet concerning what results each method may yield. Some say that the Qemu method can never grant decent performance; they claim that only Xen can perform primary VGA passthrough, while Qemu's secondary VGA passthrough will be very inefficient. However, numerous people (including me) confirm that they get awesome performance with Qemu. On the other hand, it is clear that passthrough with Xen will only work with multi-OS graphics cards. This is not a problem for Radeon users, as probably all new Radeons will do just fine with Xen. However, if your NVIDIA card is not a Quadro, you have no chance with Xen! Unless, that is, you burn several resistors on the board, which can mod your card so that it thinks it is a Quadro… I do not recommend such hardware modifications to anyone; even if you trust the Internet that much, the risk of rendering your precious hardware useless is far too high to make it worth the effort. Qemu, on the other hand, should work well with absolutely any PCI card.
Given these reasons, as well as customization options, I have decided to stick with Qemu. For the rest of this article, I will be describing this particular method.
There is one particularly comprehensive guide on how to set everything up using the Qemu method here – at the time of writing that forum thread has more than 2500 replies, so learning details from it may be hard, but on the other hand every possible scenario is covered somewhere in there :) I can highly recommend that guide, but if you want to learn about the general idea first, stay with me before you jump there!

4. The software
Obviously things won’t work out of the box. There are also necessary preparations on the software side.
First, you will need to patch your kernel a bit and compile it with several options enabled. At the time of writing, the ACS override and VGA arbiter patches need to be applied manually, as they are not (yet?) included in the mainline kernel. You can find details in the guide I linked.
You will also need to configure your kernel a bit. The key is not only to ensure it activates the appropriate IOMMU modules, but also to forbid it from loading any drivers for the card you want to pass through.
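On a typical Debian/Ubuntu host that usually means kernel boot parameters. The PCI IDs below are only an example (find yours with lspci -nn), so take this as a sketch rather than a recipe:

```shell
# /etc/default/grub -- enable the IOMMU and have pci-stub claim the
# passthrough card before the real GPU driver can (example IDs for a
# GTX 770 and its HDMI audio function; substitute your own):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pci-stub.ids=10de:1184,10de:0e0a"

# Then regenerate the boot configuration and reboot:
update-grub
```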
Most likely you will also need to use the git development version of Qemu, as some necessary features are not yet available in stable releases. Also, when playing with qemu, it is worth trying KVM: chances are that hardware virtualization will significantly improve the virtual machine's CPU performance.
You may want to write a few scripts that set up some other details (such as binding the PCI card to the vfio module) before starting qemu to run the virtual machine.
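A minimal sketch of such a binding script, assuming the card sits at PCI address 0000:01:00.0 (adjust for your hardware); variants of this script circulate in the guide's forum thread:

```shell
#!/bin/sh
# Hand one PCI function over to vfio-pci (run as root).
# 0000:01:00.0 is an example address; find yours with `lspci -D`.
modprobe vfio-pci

dev=0000:01:00.0
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)

# Detach the device from whatever driver currently owns it...
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
    echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
fi

# ...and tell vfio-pci to claim devices with this vendor/device ID.
echo "$vendor $device" > /sys/bus/pci/drivers/vfio-pci/new_id
```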
Also, it may be tricky to get the order of installing drivers in the guest OS right. It took me a while to realize that I needed to disable qemu's emulated VGA adapter; otherwise the NVIDIA drivers would not detect the dedicated hardware :-)
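To give an idea of how the pieces fit together, here is a stripped-down qemu invocation of the kind the guide builds up; the PCI addresses, memory size and disk path are examples, and the exact options differ between qemu versions:

```shell
# Sketch of a qemu command passing through a GPU and its HDMI audio
# function; -vga none disables the emulated VGA adapter mentioned above.
qemu-system-x86_64 \
    -enable-kvm -m 8192 -cpu host -smp 6 \
    -device vfio-pci,host=01:00.0,x-vga=on \
    -device vfio-pci,host=01:00.1 \
    -vga none \
    -drive file=/path/to/windows.img,format=raw
```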
The greatest issue I have met is that Windows is very sensitive to hardware changes. Even the slightest change in my virtual machine (different qemu options) would immediately cause Windows to never boot again, and none of the web guides on dealing with these particular boot-time BSoDs helped… Eventually, after about ten rounds of this, I was completely fed up and had to re-install the whole guest OS. However, as long as I do not experiment with qemu settings, there are no such problems at all.

5. Peripherals
How about keyboard/mouse, should you pass them through too? You might, but this is not necessary; I use Synergy for sharing my mouse/keyboard between systems just as if they were two displays of one system. Very convenient. The script that starts qemu for me also launches synergy server on my Linux, the client running in Windows starts automatically on boot.
If you want, you can also set up networking for the guest system – qemu has very good support for interface bridging, so it is not difficult to grant internet access to the guest OS.
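For instance, with a bridge configured on the host (br0 and eth0 here are assumed interface names), a couple of extra qemu options are enough; again a sketch, not a complete recipe:

```shell
# Host side: create a bridge and add the physical NIC to it
# (eth0 and br0 are example interface names):
brctl addbr br0
brctl addif br0 eth0

# Guest side: append these options to the qemu command line to give
# the VM a virtio NIC attached to br0:
#   -netdev bridge,id=net0,br=br0
#   -device virtio-net-pci,netdev=net0
```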
One could also pass through audio devices, but I believe this is not necessary, especially if you do not care about hardware audio acceleration; in that case you can have qemu emulate a sound device and play its audio like any other app in the host OS would. As a result, you can hear both systems on the same speakers/headphones!
Personally, I have even gone so far as to prepare a simple app that talks to my monitor via I²C and tells it when to switch video input; this way I can use a hotkey shortcut instead of navigating its OSD menus. The same hotkey switches my keyboard/mouse between systems, thanks to Synergy's customizability.

6. Conclusions
I have used this configuration for a few weeks now, and I am yet to find a game that would not perform outstandingly in this environment. Graphics performance is just as if I dual-booted, CPU performance is only a tiny bit worse (but still awesome). The ability to keep all my apps running under Linux while I play games, be it a web browser, IM client, teamspeak or whatever else might be useful – is incredibly convenient!
Switching between systems in less than a second is really a game-changer for me (pun intended…)!
If you are excited about this technique, go ahead and read the guide. Be ready for a challenge, and do not give up if things don't work at first – you won't regret it! Good luck!
Want to know more? I will be happy to answer your general questions, but if you need help or want to learn about technical details, the best place to find answers is here.
¹) Unless your motherboard has a video multiplexer, like NVIDIA Optimus… but using it would be difficult, as you would need to manually control the mux. I believe this might be achievable, but it would most certainly require specialized drivers that do not exist right now.
²) It’s not as simple as “all new hardware supports it”, for both CPUs and motherboards. You may find lists of IOMMU-compatible hardware on the Internet, but it is probably best to ask the manufacturer directly: if they do not list it on their website, try dropping them an email. In my experience, all manufacturers are very keen to respond to enquiries about such sophisticated features! ;-)
Filed under: PlanetUbuntu, Ubuntu
I am a firm believer in building strong and empowered communities. We are in an age of a community management renaissance in which we are defining repeatable best practice that can be applied to many different types of communities, whether internal to companies, external to volunteers, or a mix of both. The opportunity here is to grow large, well-managed, passionate communities, no matter what industry or area you work in.
I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.

LinuxCon North America and Europe
Firstly, on Fri 22nd August 2014 (next week) I will be presenting the course at LinuxCon North America in Chicago, Illinois and then on Thurs Oct 16th 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.
Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.
Space is limited, so go and register ASAP:
So what is in the training course?
If you like videos, go and watch this:
If you prefer to read, read on!
My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A and discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone-on kind of session; it is interactive and designed to spark discussion.
The day is mapped out like this:
- 9.00am – Welcome and introductions
- 9.30am – The core mechanics of community
- 10.00am – Planning your community
- 10.30am – Building a strategic plan
- 11.00am – Building collaborative workflow
- 12.00pm – Governance: Part I
- 12.30pm – Lunch
- 1.30pm – Governance: Part II
- 2.00pm – Marketing, advocacy, promotion, and social
- 3.00pm – Measuring your community
- 3.30pm – Tracking, measuring community management
- 4.30pm – Burnout and conflict resolution
- 5.00pm – Finish
I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.
I hope to see you there!
I’ll be there this year!
The talks look amazing, and it all looks really well organized! The schedule has a bunch I want to hit, and I hope they’re recorded to watch later!
If anyone’s heading to PyGotham, let me know, I’ll be there both days, likely floating around the talks.
The Ubuntu Developer Summit has been scheduled for November 12 – 14. UDS is a hotbed of ideas. It is where the Ubuntu community works to find creative solutions to problems, with the intent to produce a better Ubuntu for everyone. Since moving to an online format, UDS has enabled a diverse range of participants from across the globe to take part in the process.
As the planning for the summit continues, your thoughts and ideas could help shape the next UDS. The discussion is taking place here now.
elementary OS Freya Beta has been announced by its developers and it comes with an Ubuntu 14.04 base and lots of new features. As you can imagine, there are quite a few changes and improvements over elementary OS Luna, including the Linux kernel from Ubuntu 14.04, the 3.13 stack. This is just the tip of the iceberg.
elementary OS developers are supporting Facebook, Fastmail, Google+, Microsoft, and Yahoo account integration by default. This is done with the help of Pantheon Online Accounts, a new tool that combines features from Ubuntu Online Accounts and GNOME Online Accounts and brings its own improvements.
This is still a Beta release, which means that users will probably notice bugs with the operating system. The release date remains unknown, but that is not something new. The developers never provide a release date and they usually take their time until they are satisfied with the result.
Submitted by: Silviu Stahie
Shutter, a feature-rich screenshot program that allows users to capture nearly anything on their screen without losing control, is now at version 0.92.
The latest update for Shutter was released in June, but it was almost identical to the current build. Nothing really important has been implemented, with the exception of a few maintenance changes.
Submitted by: Silviu Stahie
Lean. Agile. Svelte. Lithe. Free.
That's how we roll our operating systems in this modern, bountiful era of broadly deployed virtual machines, densely packed with system containers.
Linux, and more generally free software, is a natural fit in this model where massive scale is the norm. And Ubuntu in particular (with its solid Debian base) is perfectly suited to this brave new world.
Introduced in Ubuntu 8.04 LTS (Hardy) -- November 19, 2007, in fact -- JeOS (pronounced "juice") was the first of its kind: an absolutely bare-minimal variant of the Ubuntu Server, tailored to perfection for virtual machines and appliances. Just enough OS.
I was taken aback this week to overhear a technical executive at a Fortune 50 company say:
"What ever happened to that Ubuntu JeOS thing? We keep looking at CoreOS and Atomic, but what we really want is just a bare minimal Ubuntu server."

Somehow, somewhere along the line, an important message got lost. I hope we can correct that now...
JeOS has been here all along, in fact. You've been able to deploy a daily, minimal Ubuntu image, all day, every single day for most of the last decade. Sure, it changed names to Ubuntu Core along the way, but it's still the same sleek little beloved ubuntu-minimal distribution.
"How minimal?", you ask...
63 MB compressed, to be precise.
Did you get that?
That's 63 MB, including a package management system, with one-line, apt-get access to over 30,000 freely available packages across the Ubuntu universe.
That's pretty darn small. Much smaller than, say, 147 MB or 268 MB, to pick two numbers not at all at random.
"How useful could such a small image actually be, in practice?", you might ask...
Ask any Docker user, for starters. Docker's base Ubuntu image has been downloaded over 775,260 times to date. And this image is built directly from the Ubuntu Core amd64 tarball.
Oh, and guess what else? Ubuntu Core is available for more than just the amd64 architecture! It's also available for i386, armhf, arm64, powerpc, and ppc64el. Which is pretty cool, particularly for embedded systems.
So next time you're looking for just enough operating system, just look to the core. Ubuntu Core. There is truly no better starting point ;-)
Bart was swamped and couldn't pick us up at the dock, but it didn't matter, because we were very close to the centre, so we went to change money and stroll at leisure through Cartagena's beautiful old town.
A tropical downpour helped take the edge off the heat, but at the cost of soaking us... What better moment to enjoy a delicious mango juice under cover? :P
And after a short walk, another demonstration of how small the world can be: we bumped into Fernando and Marta in the street!
Together we enjoyed some well-chilled beers and danced a bit of rumba (to be honest, I mostly tried to dance) in a bar on the Plaza de los Coches, killing time until our 7 o'clock meeting with Bart.
Enjoying the Cartagena atmosphere
Waiting in the square at the agreed time, Bart didn't turn up, so we went to Fernando's hotel, a 5-minute walk away, to get in touch with him. Bart was flat out, busy picking up more speakers, and couldn't come to dinner. We also wrote to Sergio to see what to do about the hotel, and he told us the hostel only had one place left and that we would have to be very quick to check our person in.
We couldn't face going somewhere so far away that late, so we looked for a hotel near Fernando's and treated ourselves at a nearby restaurant to some fried fish and juices, all accompanied by live music :))
After dinner, we strolled for a while, already looking forward to the first big day of Ubuconla...
Continue reading more of this trip.
The last day of KDE’s Randa Sprint 2014 is almost over and boy am I exhausted.
The awesome multimedia crew processed some 220 bugs in Phonon, KMix and Amarok. We did a Phonon 4.8 beta release, allowing Linux distributions to smoothly transition to a newer version of GStreamer. We started writing a new PulseAudio-based volume control Plasma widget, as well as a configuration module, to allow feature-richer and more reliable volume control on systems where PulseAudio is available.
In the non-multimedia area I discussed my continuous packaging integration plans with people to work out a suitable workflow. Certain planned improvements to KDE’s CI process make me very confident that in the not too distant future distributions will be able to piggyback onto KDE’s CI and create daily integration builds in their regular build environments.
Many great things await!

‘A Spaceship’ by Rohan Garg
The Juju Charm Store has been in a bit of a spotlight lately, as it's both a wonderful tool and a source of some frustration for new charmers getting involved in the Juju ecosystem. We wanted to take this opportunity to cover some of the finer aspects of the Juju Charm Store for new users and explain the difference between what a recommended charm is vs a charm that lives in a personal namespace.

Why is there a distinction?
Quality. We want all the charms in the Charm Store to be of the highest quality so that users can depend on the charms deploying properly and doing what they say they are going to do.
When the Charm Store first came into existence, it was the wild west. Everyone wanted their charm in the Charm Store and things were being promoted very rapidly into the store. There were minimal requirements, and everything was new and exciting. Now that Juju has grown into its toddler phase and is starting to walk around on its own, we've evolved more regulations on charms. We have defined what makes a high-quality charm, and what expectations a user should have of a high-quality charm. You can read more about this in the Charm Store Policy doc and the Feature Rating doc.
The bar for some of the features and quality descriptors may seem like an extremely high hurdle for your service to clear on the way to becoming a ~charmer recommended service. This is why Personal Namespaces exist: as the charmer team continues to add to and expand the Charm Store with charms that meet and/or exceed these quality guidelines, we encourage everyone to submit their Juju charm for worldwide consumption. You may disagree with FOSS licensing, or perhaps data handling just isn't something you're willing to do with the service that you orchestrate. That's OK! We still want your service to be orchestrate-able with Juju. Just push your charm into a Personal Namespace, and you don't even have to undergo a charm review from the Charmers team unless you really want someone proofing your code and service behavior.

What differences will this have?

Deployment
We've all seen the typical CLI commands for deploying charmer recommended charms.
juju deploy cs:trusty/mysql
For a charm in your personal namespace, the descriptor changes:
juju deploy cs:~lazypower/trusty/logstash

Charm Store Display
Personal namespace charms will display the charm category icon instead of a provided service icon. This is a leftover decision in the Charm Store that is subject to change, but as of this writing it is the current state of the visual representation.

Submission Process
To have your charm listed as a charmer team recommended charm, you have to undergo a rigorous review process where we evaluate the charm, evaluate tests for your charm, and deploy & run tests against the provided service with different configuration patterns, and even introduce some chaos-monkey breakage to see how well the charm stands on its own two feet during less-than-ideal conditions.
This involves pushing to a launchpad branch, opening a bug ticket assigned to ~charmers, and following the cycle, which at present can take a week or longer to complete from first contact, depending on Charmer resources, time, etc.

I don't want to wait; my service is awesome and does what I want it to do. Why am I waiting?
You don't have to! The pattern for pushing a charm into your personal namespace requires zero review, and is ready for you to complete today. The longest you will wait is ~30 minutes for the Charm Store to ingest the metadata about your charm.
bzr push lp:~lazypower/charms/trusty/awesome-o/trunk
That's all that's required for you to publish a charm under your namespace in the Charm Store. To further break that down:
lp:~lazypower : This is your launchpad username
/charms/ : in this case, charms is the project descriptor
/trusty/ : We target all charms against a series
/awesome-o/ : This is the name of your service
/trunk/ : Only the /trunk branch will be ingested. So if you want to do development work in /fixing_lp1234, you can certainly do that. When the work is completed, simply merge back into /trunk! The result will show up in your charm's listing in the Juju Charm Store almost immediately.

Charm Store: Personal Namespace (other)
In the Juju Charm Store as it exists today, there is a dividing bar below the recommended charms for 'other', which warehouses bundles and personal charms and is a placeholder for future data types as they emerge.
As you can see by the image above, there is quite a bit of information packed into the accordion. Let's take a look at the bundle description first:
As illustrated, no review process was done to submit this bundle; it consists of 5 services/units and has 0 deployments in the wild.
Looking at a charm, we have the same basic level of information, and we see that the charm itself is in my personal namespace. trusty|lazypower designates the series/namespace of the charm listing.

Charm Store: Recommended Charms
Recommended charms have undergone a rigorous testing phase by the Juju Charmer team, include tested hooks, and have tested deployment strategies using the Amulet testing framework. You can read more about this in the Charm Store Policy doc and the Feature Rating doc.
They have full service descriptor icons provided by the charm itself, and are deployable via juju deploy cs:series/service
Notice the orange earmark in the upper right corner. This denotes that the charm is a ~charmer recommended service, as it has undergone the review process and been accepted into the charmers' namespace of the Juju Charm Store.

Which is right for me?
When deciding how to get started working with Juju and what level to start at for your charm, I can't stress this enough: get started with your personal namespace. When you feel your charm is ready (and this can take a while during R&D), then submit your charm for official ~charmer review.
The process of getting started with personal namespaces is cheap, easy, and open to everyone. It's still very much the wild west. Your charm will be in the hands of users 10x faster using personal namespaces, you still have the opportunity to have it reviewed by submitting a bug to the Review Queue, and you become the orchestrating master of your charmed service.
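Whichever route you take, the two kinds of charm are easy to tell apart by their store URL: recommended charms live at the top level (cs:trusty/mysql) while personal charms carry a ~user segment (cs:~lazypower/trusty/logstash). As a tiny illustrative sketch of that distinction (this helper is not part of the Juju tooling):

```python
def is_recommended(charm_url):
    """Distinguish a ~charmer recommended store URL (cs:series/name)
    from a personal-namespace one (cs:~user/series/name).
    Illustrative helper only; not part of Juju itself."""
    if not charm_url.startswith("cs:"):
        raise ValueError("not a charm store URL: %s" % charm_url)
    # Personal-namespace URLs begin with a ~username right after "cs:".
    return not charm_url[len("cs:"):].startswith("~")

print(is_recommended("cs:trusty/mysql"))                # True
print(is_recommended("cs:~lazypower/trusty/logstash"))  # False
```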
If you're an Independent Software Vendor and would like to start with your charm in the ~charmers recommended list, feel free to submit a review proposal; note, however, that you are then agreeing to be subject to the Charm Store review policy, your charm must meet all the criteria of a good charm, and the review process can take some time depending on the complexity of your service.

What is the future of charm publishing?
The Juju Ecosystem team has spent many hours discussing the current state of charm publishing and how to make this easier for our users. On the horizon (but with no foreseeable dates to be published) there are some new tools emerging to assist in this process.
juju publish is a command that will get you started right away by creating your personal namespace, and pushing your charm (and/or revisions) to your branch with the appropriate bugs/MP's assigned.
A new Review Queue is being implemented by Marco Ceppi that will aid us in first contact, getting 'hot' review items out the door quickly, and triaging long-running reviews appropriately.

Where do I go for help?
There is a tool called 'mount-image-callback' in cloud-utils that takes care of mounting and unmounting a disk image. It allows you to focus on exactly what you need to do. It supports mounting partitioned or unpartitioned images in any format that qemu can read (thanks to qemu-nbd).
Here's how you can use it interactively:
$ mount-image-callback disk1.img -- chroot _MOUNTPOINT_
% echo "I'm chrooted inside the image here"
% echo "192.168.1.1 www.example.com" >> /etc/hosts
% exit 0
mount-image-callback disk1.img -- \
sh -c 'rm -Rf $MOUNTPOINT/var/cache/apt'
or one of my typical use cases, to add a package to an image.
mount-image-callback disk1.img --system-mounts --resolv-conf -- \
   chroot _MOUNTPOINT_ apt-get install --assume-yes pastebinit
Above, mount-image-callback handles setting up the loopback or qemu-nbd devices required to mount the image and then mounts it at a temporary directory. It then runs the command you provide, unmounts the image, and exits with the return code of the provided command.
If the command you provide has the literal argument '_MOUNTPOINT_' then it will substitute the path to the mount. It also makes that path available in the environment variable MOUNTPOINT. Adding '--system-mounts' and '--resolv-conf' address the common need to mount proc, dev or sys, and to modify and replace /etc/resolv.conf in the filesystem so that networking will work in a chroot.
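The substitution logic itself is simple; purely as an illustration (mount-image-callback is a shell script, and this is not its actual code), the behaviour described above amounts to:

```python
import os

def prepare_command(args, mountpoint):
    """Mimic mount-image-callback's argument handling: replace the literal
    token _MOUNTPOINT_ with the temporary mount path, and expose that path
    via the MOUNTPOINT environment variable. Illustrative sketch only."""
    env = dict(os.environ, MOUNTPOINT=mountpoint)
    cmd = [mountpoint if arg == "_MOUNTPOINT_" else arg for arg in args]
    return cmd, env

# "/tmp/mic.d8f2" is a hypothetical temporary mount directory.
cmd, env = prepare_command(
    ["chroot", "_MOUNTPOINT_", "apt-get", "install", "pastebinit"],
    "/tmp/mic.d8f2")
print(cmd)  # ['chroot', '/tmp/mic.d8f2', 'apt-get', 'install', 'pastebinit']
```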
mount-image-callback supports mounting either an unpartitioned image (ie, dd if=/dev/sda1 of=my.img) or the first partition of a partitioned image (dd if=/dev/sda of=my.img). Two improvements I'd like to make are to allow the user to tell it which partition to mount (rather than expecting the first) and also to do so automatically by finding an /etc/fstab and mounting other relevant mounts as well.
Why not libguestfs?
libguestfs is a great tool for doing this. It operates essentially by launching a qemu (or kvm) guest, attaching disk images to the guest, and then letting the guest's linux kernel and qemu do the heavy lifting. Doing this provides security benefits, as mounting untrusted filesystems could cause a kernel crash. However, it also has performance costs and limitations, and doesn't provide the "direct" access you'd get by just mounting a filesystem.
Much of my work is done inside a cloud instance, and done by automation. As a result, the security benefits of using a layer of virtualization to access disk images are less important. Also, I'm likely operating on an official Ubuntu cloud image or other vendor provided image where trust is assumed.
In short, mounting an image and changing files or chrooting is acceptable in many cases and offers a more "direct" path to doing so.
The hour-long crossing flew by, and after a welcome snack the hotel assigned us a bungalow.
My first time staying in a bungalow :))
The Islas del Rosario archipelago is volcanic in origin and only one island has a beach, and that beach is artificial. The rest of the islands are coral coves with very calm waters. We chose Isla del Pirata, a small island at the eastern end of the archipelago.
Half of the island belongs to the hotel, with its small central buildings and several bungalows scattered around; the rest of the island is private houses.
And what can I tell you about the day? After soaking for more hours than I can remember (there is no better way to beat the heat), we recovered from all that swimming on some sun loungers before lunch.
After a short rest, we went back at it with a kayak. Even with our poor fitness it hardly took any effort to do a couple of laps of the island, because it is tiny. We enjoyed ourselves like old sea dogs, paddling however we could, and well... synchronisation was the least of our worries hahaha
This is paradise
The best came at sunset. We went swimming again with diving masks and wow, there were literally hundreds of thousands of tiny fish by the shore. The feeling of swimming through that school of fish is indescribable. If we made no sudden movements they barely moved aside, and you almost felt like one of them :P
The romantic dinner in the hotel building, together with the rest of the guests (three other couples), of soup and chicken with rice, capped off an unforgettable day.
During the night a tropical storm battered the island with rain and very strong wind.
Dawn breaking... Storm? Where? :P
At sunrise, with only one day to go before Ubuconla, we woke to not a breath of breeze and an overcast sky (welcome, given the Caribbean mugginess), and all the light objects strewn across the ground by the night storm's wind.
What delicious breakfasts
And what to do with the morning? Exactly! Back into the water :P We swam for a good while, until we decided to take up an offer to go snorkelling at an island 5 minutes away. And phew... what a treat! We swam over rugged corals and thousands of fish in a thousand colours.
After this extraordinary mini adventure, we 'toasted' ourselves a little in the sun and cooled off in the water now and then, and so on until lunchtime :P
from time import gmtime, strftime
while strftime("%H:%M", gmtime()) != '13:00':
    print('Ñam Ñam Ñam')
Some day this non-infinite loop had to end :)
And after lunch, it was time to check out and wait an hour for the boat to leave for Cartagena.
Continue reading more of this trip.