Feed aggregator

Dimitri John Ledkov: Swapfiles by default in Ubuntu

Planet Ubuntu - Fri, 12/16/2016 - 04:30
4MB RAM card

By default, in Ubuntu, we usually create a swap partition.

Back in the day of 4MB RAM cards this made total sense, as the ratio of RAM to disk space was still very low. Things have changed since: servers, desktops and embedded systems have all migrated to newer generations of both RAM and persistent storage. On the high-performance side we see machines with faster storage in the form of NVMe and SSD drives; reserving space for swap on such storage can be seen as expensive and wasteful, and the same is true for recent laptops and desktops. Mobile phones have substantial amounts of RAM these days, at times coupled with eMMC storage - lower-performance flash with a limited number of write cycles, which should not be overused for volatile swap data. And there are also unicorns in the form of high-performance, high-memory (shared-memory) computing systems with little or no disk space.

Today, carving out a partition and reserving twice the RAM size for swap makes little sense. On a common, general-purpose machine this swap will not be used at all most of the time; and if the swap space is in use but turns out to be the wrong size, resizing it in place after the fact is painful.

Starting with the 17.04 (Zesty Zapus) release, swapfiles will be used by default instead of swap partitions for non-LVM-based installations.

Secondly, swapfiles are sized very differently: no more than 5% of free disk space or 2 GiB, whichever is lower.

For preseeding, there are two toggles that control this behavior:
  • d-i partman-swapfile/percentage string 5
  • d-i partman-swapfile/size string 2048
Setting either of those to zero will result in a system without any swap at all. One can also tweak the relative limit (in integer percentage points) and the absolute limit (in integer MiB).
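For example, a preseed fragment that disables swap entirely might look like this (a sketch; per the above, zeroing either toggle should be enough - both are zeroed here for clarity):

d-i partman-swapfile/percentage string 0
d-i partman-swapfile/size string 0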
On LVM-based installations, swap logical volumes are used instead, since unfortunately LVM snapshots do not exclude swapfile changes. However, I would like to move partman-auto to respect the above proposed 5%-or-2GiB limits as well.

P.S. The 4MB RAM card picture is by Bub's (Photo) [GFDL or CC-BY-SA-3.0], via Wikimedia Commons

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2016

Planet Ubuntu - Fri, 12/16/2016 - 02:43

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 150 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change this month and in fact we haven’t had any new sponsor since September. We still need a couple of supplementary sponsors to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file 36. We don’t seem to be catching up on this small backlog. The reasons are not clear, but I noticed that a few packages take a lot of time due to the number of issues found with fuzzers. We also handle many issues that the security team ends up classifying as not worth an update, because we add the package to dla-needed.txt before the security team has done its review and nobody checks afterwards.

Thanks to our sponsors

New sponsors are in bold.


Martin Pitt: The alphabet and pitti end here: Last day at Canonical

Planet Ubuntu - Tue, 12/13/2016 - 04:38

I’ve had the pleasure of working on Ubuntu for 12½ years now, and during that time used up an entire Latin alphabet of release names! (Well, A and C are still free, but we used H and W twice, so on average… ☺) This has for sure been the most exciting time in my life, with tons of good memories! A very few highlights:

  • Getting some spam mail from a South African multi-millionaire about a GREAT OPPORTUNITY
  • Joining #warthogs (my first IRC experience) and collecting my first bounties for “derooting” Debian (i.e. dropping privileges from root daemons and suid binaries)
  • Getting invited to Oxford to meet a bunch of people for whom I had absolutely zero proof of existence, and tossing myself into debt to buy a laptop for the occasion
  • Once there, looking into my fellows’ stern and serious faces and being amazed by their professionalism:
  • The excitement and hype around going public with Warty Warthogs Beta
  • Meeting lots of good folks at many UDSes, with great ideas and lots of enthusiasm, and sometimes “Bags of Death”. Group photo from Ubuntu Down Under:
  • Organizing UDSes without Launchpad or other electronic help:
  • Playing “Wish you were Here” with Bill, Tony, Jono, and the other All Stars
  • Seeing bug #1 getting closed, and watching the transformation of Microsoft from being TEH EVIL of the FOSS world to our business partner
  • Getting to know lots of great places around the world. My favourite: luring a few colleagues for a “short walk through San Francisco” but ruining their feet with a 9 hour hike throughout the city, Golden Gate Park and dipping toes into the Pacific.
  • Seeing Ubuntu grow from that crazy idea into one of the main pillars of the free software world
  • ITZ GTK BUG!
  • Getting really excited when Milbank and the Canonical office appeared in the Harry Potter movie
  • Moving between and getting to know many different teams from the inside (security, desktop, OEM, QA, CI, Foundations, Release, SRU, Tech Board, …) to appreciate and understand the value of different perspectives
  • Breaking burning wood boards, making great and silly videos, and team games in the forest (that was La Mola) at various All Hands

But all good things must come to an end - after tossing and turning this idea for a long time, I will leave Canonical at the end of the year. One major reason for leaving is that after such a long time I am simply in need of a “reboot”: I’ve piled up so many little and large things that I can hardly spend one day developing something new without hopelessly falling behind in responding to pings about fixing low-level stuff, debugging weird things, handholding infrastructure, explaining how things (should) work, doing urgent archive/SRU/maintenance tasks, and whatnot (“it’s related to boot, it probably has systemd in the name, let’s hand it to pitti”). I’ve repeatedly tried to rid myself of some of those, or at least find someone else to share the load with, but it’s too sticky :-/ So I spent the last few weeks tying up some loose ends and handing over some of my main responsibilities.

Today is my last day at work, which I’m spending mostly on unsubscribing from package bugs, leaving Launchpad teams, and catching up with emails and bugs, i.e. “cleaning up my office desk”. From tomorrow on I’ll enjoy some longer end-of-year holidays, before starting my new job in January.

I’ve been offered a job working on Cockpit, both on the product itself and on its ties into the Linux plumbing stack (storaged/udisks, systemd, and the like). So from next year on I’ll change my Hat to become Red instead of orange. I’m curious to see for myself what the other side of the fence looks like!

This won’t be a personal good-bye. I will continue to see a lot of you Ubuntu folks at FOSDEMs, DebConfs, Plumbers, or on IRC - but certainly much less often, and that’s the part that I regret most. Many of you have become close friends, and Canonical feels much more like a family than a company. So, thanks to all of you for being on that journey with me, and of course a special and big Thank You to Mark Shuttleworth for coming up with this great Ubuntu vision and making all of this possible!

The Fridge: Ubuntu Weekly Newsletter Issue 491

Planet Ubuntu - Mon, 12/12/2016 - 19:31

Welcome to the Ubuntu Weekly Newsletter. This is issue #491 for the week December 5 – 11, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.


Kees Cook: security things in Linux v4.9

Planet Ubuntu - Mon, 12/12/2016 - 12:05

Previously: v4.8.

Here are a bunch of security things I’m excited about in the newly released Linux v4.9:

Latent Entropy GCC plugin

Building on her earlier work to bring GCC plugin support to the Linux kernel, Emese Revfy ported PaX’s Latent Entropy GCC plugin to upstream. This plugin is significantly more complex than the others that have already been ported, and performs extensive instrumentation of functions marked with __latent_entropy. These functions have their branches and loops adjusted to mix random values (selected at build time) into a global entropy gathering variable. Since the branch and loop ordering is very specific to boot conditions, CPU quirks, memory layout, etc, this provides some additional uncertainty to the kernel’s entropy pool. Since the entropy actually gathered is hard to measure, no entropy is “credited”, but rather used to mix the existing pool further. Probably the best place to enable this plugin is on small devices without other strong sources of entropy.

vmapped kernel stack and thread_info relocation on x86

Normally, kernel stacks are mapped together in memory. This meant that attackers could use forms of stack exhaustion (or stack buffer overflows) to reach past the end of a stack and start writing over another process’s stack. This is bad, and one way to stop it is to provide guard pages between stacks, which is provided by vmalloced memory. Andy Lutomirski did a bunch of work to move to vmapped kernel stack via CONFIG_VMAP_STACK on x86_64. Now when writing past the end of the stack, the kernel will immediately fault instead of just continuing to blindly write.

Related to this, the kernel was storing thread_info (which contained sensitive values like addr_limit) at the bottom of the kernel stack, which was an easy target for attackers to hit. Between a combination of explicitly moving targets out of thread_info, removing needless fields, and entirely moving thread_info off the stack, Andy Lutomirski and Linus Torvalds created CONFIG_THREAD_INFO_IN_TASK for x86.

CONFIG_DEBUG_RODATA mandatory on arm64

As recently done for x86, Mark Rutland made CONFIG_DEBUG_RODATA mandatory on arm64. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so there’s no reason to make the protection optional.
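Pulling together the options named above, a kernel configuration enabling these features might include a fragment like the following (a sketch; CONFIG_GCC_PLUGIN_LATENT_ENTROPY is my assumption for the latent entropy plugin’s option name, and CONFIG_VMAP_STACK is x86_64-only in v4.9):

# Hypothetical .config fragment for the v4.9 features discussed above.
CONFIG_GCC_PLUGINS=y
CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y  # assumed name for the latent entropy plugin
CONFIG_VMAP_STACK=y                 # vmapped kernel stacks with guard pages
CONFIG_THREAD_INFO_IN_TASK=y        # thread_info moved off the kernel stack
CONFIG_DEBUG_RODATA=y               # kernel memory protections (mandatory on arm64)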

random_page() cleanup

Cleaning up the code around the userspace ASLR implementations makes them easier to reason about. This has been happening for things like the recent consolidation on arch_mmap_rnd() for ET_DYN and during the addition of the entropy sysctl. Both uncovered some awkward uses of get_random_int() (or similar) in and around arch_mmap_rnd() (which is used for mmap (and therefore shared library) and PIE ASLR), as well as in randomize_stack_top() (which is used for stack ASLR). Jason Cooper cleaned things up further by doing away with randomize_range() entirely and replacing it with the saner random_page(), making the per-architecture arch_randomize_brk() (responsible for brk ASLR) much easier to understand.

That’s it for now! Let me know if there are other fun things to call attention to in v4.9.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Dustin Kirkland: Ubiquiti Networks UniFi Controller in an Ubuntu LXD Machine Container

Planet Ubuntu - Mon, 12/12/2016 - 06:42


I've been one of DD-WRT's biggest fans, for more than 10 years.  I've always flashed my router with custom firmware, fine-tuned my wired and wireless networks, and locked down a VPN back home.  I've genuinely always loved tinkering with network gear.

A couple of weeks ago, I decided to re-deploy my home network.  I've been hearing about Ubiquiti Networks from my colleagues at Canonical, where we use Ubiquiti gear for our many and varied company events.  Moreover, it seems a number of us have taken to running the same kits in our home offices.

So I ordered a Ubiquiti UniFi Security Gateway (USG) and a pair of Dual Radio PRO Wireless Access Points, and I couldn't be more pleased with the end result!  Screaming fast wireless access, beautiful command line and web interfaces, and a fantastic product.

There's something quite unique about the UniFi Controller -- the server that "controls" your router, gateway, and access points.  Rather than being built into the USG itself, you run the server somewhere else.

Sure you can buy their hardware appliance (which I'm sure is nice).  But you can just as easily run it on an Ubuntu machine yourself.  That machine could be a physical machine on your network, a virtual machine locally or in the cloud, or it could be an LXD machine container.

I opted for the latter.  I'm happily running the UniFi Controller in a LXD machine container, and it's easy for you to setup, too.

I'm running Ubuntu 16.04 LTS 64-bit on an Intel NUC somewhere in my house.  It happens to be running Ubuntu Desktop, as it's attached to one of the TVs in my house as a media player.  In its spare time, it's a server I use for LXD, Docker, and other development purposes.

I've configured the network on the machine to "bridge" LXD to my USG router, which happens to be running DHCP and DNS.  I'm going to move that to a MAAS server, but that's a post for another day.

Here's /etc/network/interfaces on that machine:

kirkland@masterbr:~⟫ cat /etc/network/interfaces
# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0

So eth0 is bridged to br0.  ifconfig looks like this:

kirkland@masterbr:~⟫ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr ec:a8:6b:fb:a1:f2
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1111309 errors:0 dropped:8294 overruns:0 frame:0
          TX packets:539270 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:850773437 (850.7 MB)  TX bytes:85706158 (85.7 MB)
          Interrupt:20 Memory:f7c00000-f7c20000

kirkland@masterbr:~⟫ ifconfig br0
br0       Link encap:Ethernet  HWaddr ec:a8:6b:fb:a1:f2
          inet addr:10.0.0.8  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::eea8:6bff:fefb:a1f2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:435576 errors:0 dropped:0 overruns:0 frame:0
          TX packets:182097 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:325950072 (325.9 MB)  TX bytes:35439980 (35.4 MB)

And I've configured LXD so that instances using the default profile draw their IP address from br0, rather than from the default, internally NATed dnsmasq-managed lxdbr0.

kirkland@masterbr:/etc⟫ lxc profile show default
name: default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic

Now, let's launch a LXD container running Ubuntu 16.04 LTS.

kirkland@masterbr:~⟫ lxc launch ubuntu:xenial unifi-controller
Creating unifi-controller
Starting unifi-controller
kirkland@masterbr:~⟫ lxc list
+------------------+---------+-------------------+------+------------+-----------+
|       NAME       |  STATE  |       IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
+------------------+---------+-------------------+------+------------+-----------+
| unifi-controller | RUNNING | 10.0.0.183 (eth0) |      | PERSISTENT | 0         |
+------------------+---------+-------------------+------+------------+-----------+

It's important to note that this container drew an IP address on my 10.0.0.0/24 LAN.  It will need this to detect, federate, and manage the Ubiquiti hardware.

Now, let's exec into it, and import our SSH keys, so that we can SSH into it later.

kirkland@masterbr:~⟫ lxc exec unifi-controller bash
root@unifi-controller:~# ssh-import-id kirkland
2016-12-09 21:56:36,558 INFO Authorized key ['4096', 'd3:dd:e4:72:25:18:f3:ea:93:10:1a:5b:9f:bc:ef:5e', 'kirkland@x220', '(RSA)']
2016-12-09 21:56:36,568 INFO Authorized key ['2048', '69:57:f9:b6:11:73:48:ae:11:10:b5:18:26:7c:15:9d', 'kirkland@mac', '(RSA)']
2016-12-09 21:56:36,569 INFO [2] SSH keys [Authorized]
root@unifi-controller:~# exit
exit
kirkland@masterbr:~⟫ ssh root@10.0.0.183
The authenticity of host '10.0.0.183 (10.0.0.183)' can't be established.
ECDSA key fingerprint is SHA256:we0zAxifd0dcnAE2tVE53NFbQCop61f+MmHGsyGj0Xg.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.183' (ECDSA) to the list of known hosts.
root@unifi-controller:~#
Now, let's add the Unifi repository and install the deb and all its dependencies.  It's a big pile of Java and MongoDB, which I'm happy to keep nicely "contained" in this LXD instance!

root@unifi-controller:~# echo "deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti" | sudo tee -a /etc/apt/sources.list
deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti
root@unifi-controller:~# apt-key adv --keyserver keyserver.ubuntu.com --recv C0A52C50
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.hhgdd0ssJQ --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv C0A52C50
gpg: requesting key C0A52C50 from hkp server keyserver.ubuntu.com
gpg: key C0A52C50: public key "UniFi Developers " imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
root@unifi-controller:~# apt update >/dev/null 2>&1
root@unifi-controller:~# apt install unifi
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following package was automatically installed and is no longer required:
os-prober
Use 'apt-get autoremove' to remove it.
The following extra packages will be installed:
binutils ca-certificates-java default-jre-headless fontconfig-config
fonts-dejavu-core java-common jsvc libasyncns0 libavahi-client3
libavahi-common-data libavahi-common3 libboost-filesystem1.54.0
libboost-program-options1.54.0 libboost-system1.54.0 libboost-thread1.54.0
libcommons-daemon-java libcups2 libflac8 libfontconfig1 libgoogle-perftools4
libjpeg-turbo8 libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libogg0
libpcrecpp0 libpcsclite1 libpulse0 libsctp1 libsnappy1 libsndfile1
libtcmalloc-minimal4 libunwind8 libv8-3.14.5 libvorbis0a libvorbisenc2
lksctp-tools mongodb-clients mongodb-server openjdk-7-jre-headless tzdata
tzdata-java
Suggested packages:
binutils-doc default-jre equivs java-virtual-machine cups-common
liblcms2-utils pcscd pulseaudio icedtea-7-jre-jamvm libnss-mdns
sun-java6-fonts fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho
ttf-wqy-microhei ttf-wqy-zenhei ttf-indic-fonts-core ttf-telugu-fonts
ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts
The following NEW packages will be installed:
binutils ca-certificates-java default-jre-headless fontconfig-config
fonts-dejavu-core java-common jsvc libasyncns0 libavahi-client3
libavahi-common-data libavahi-common3 libboost-filesystem1.54.0
libboost-program-options1.54.0 libboost-system1.54.0 libboost-thread1.54.0
libcommons-daemon-java libcups2 libflac8 libfontconfig1 libgoogle-perftools4
libjpeg-turbo8 libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libogg0
libpcrecpp0 libpcsclite1 libpulse0 libsctp1 libsnappy1 libsndfile1
libtcmalloc-minimal4 libunwind8 libv8-3.14.5 libvorbis0a libvorbisenc2
lksctp-tools mongodb-clients mongodb-server openjdk-7-jre-headless
tzdata-java unifi
The following packages will be upgraded:
tzdata
1 upgraded, 44 newly installed, 0 to remove and 10 not upgraded.
Need to get 133 MB of archives.
After this operation, 287 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
...
done.

Finally, we point a web browser at this server, http://10.0.0.183:8443/ in my case, and run through the UniFi setup there.

Enjoy!

:-Dustin

Eric Hammond: How Much Does It Cost To Run A Serverless API on AWS?

Planet Ubuntu - Mon, 12/12/2016 - 03:00

Serving 2.1 million API requests for $11

Folks tend to be curious about how much real projects cost to run on AWS, so here’s a real example with breakdowns by AWS service and feature.

This article walks through the AWS invoice for charges accrued in November 2016 by the TimerCheck.io API service which runs in the us-east-1 (Northern Virginia) region and uses the following AWS services:

  • API Gateway
  • AWS Lambda
  • DynamoDB
  • Route 53
  • CloudFront
  • SNS (Simple Notification Service)
  • CloudWatch Logs
  • CloudWatch Metrics
  • CloudTrail
  • S3
  • Network data transfer
  • CloudWatch Alarms

During this month, the TimerCheck.io service processed over 2 million API requests. Every request ran an AWS Lambda function that read from and/or wrote to a DynamoDB table.

This AWS account is older than 12 months, so any first year free tier specials are no longer applicable.

Total Cost Overview

At the very top of the AWS invoice, we can see that the total AWS charges for the month of November add up to $11.12. This is the total bill for processing the 2.1 million API requests and all of the infrastructure necessary to support them.

Service Overview

The next part of the invoice lists the top level services and charges for each. You can see that two thirds of the month’s cost was in API Gateway at $7.47, with a few other services coming together to make up the other third.

API Gateway

Current API Gateway pricing is $3.50 per million requests, plus data transfer. As you can see from the breakdown below, the requests are the bulk of the expense at $7.41 (about 2.1 million requests × $3.50 per million). The responses from TimerCheck.io probably average in the hundreds of bytes, so there’s only $0.06 in data transfer cost.

You currently get a million requests at no charge for the first 12 months, which was not applicable to this invoice, but does end up making API Gateway free for the development of many small projects.

CloudTrail

I don’t remember enabling CloudTrail on this account, but at some point I must have done the right thing, as this is something that should be active for every AWS account. There were almost 400,000 events recorded by CloudTrail, but since the first trail is free, there is no charge listed here.

Note that there are some charges associated with the storage of the CloudTrail event logs in S3. See below.

CloudWatch

The CloudWatch costs for this service come from logs being sent to CloudWatch Logs and the storage of those logs. These logs are being generated by AWS Lambda function execution and by API Gateway execution, so you can consider them as additional costs of running those services. You can control some of the logs generated by your AWS Lambda function, so a portion of these costs are under your control.

There are also charges for CloudWatch Alarms, but for some reason, those are listed under EC2 (below) instead of here under CloudWatch.

Data Transfer

Data transfer costs can be complex as they depend on where the data is coming from and going to. Fortunately for TimerCheck.io, there is very little network traffic and most of it falls into the free tiers. What little we are being charged for here amounts to a measly $0.04 for 4 GB of data transferred between availability zones. I presume this is related to AWS services talking to each other (e.g., AWS Lambda to DynamoDB) because there are no EC2 instances in this stack.

Note that this is not the entirety of the data transfer charges, as some other services break out their own network costs.

DynamoDB

DynamoDB pricing includes a permanent free tier of up to 25 write capacity units and 25 read capacity units. TimerCheck.io has a single DynamoDB table that is set to a capacity of 25 writes and 25 reads, so there are no charges for capacity.

The TimerCheck.io DynamoDB database size falls well under the 25 GB free tier, so that has no charge either.

Elastic Compute Cloud

The TimerCheck.io service does not use EC2 and yet there is a section in the invoice for EC2. For some reason this section lists the CloudWatch Alarm charges.

Each CloudWatch Alarm costs $0.10 per month and this account has eight for a total of $0.80/month. But, for some reason, I was only billed $0.73. *shrug*

This AWS account has four AWS billing alarms that will email me whenever the cumulative charges for the month pass $10, $20, $30, and $40.

There is one CloudWatch alarm that emails me if the AWS Lambda function invocations are being throttled (more than 100 concurrent functions being executed).

There are two CloudWatch alarms that email me if the consumed read and write capacity units are trending so high that I should look at increasing the capacity settings of the DynamoDB table. We are nowhere near that at current usage volume.

Yes, that leaves one CloudWatch alarm, which was a duplicate. I have since removed it.

AWS Lambda

Since most of the development of the TimerCheck.io API service focuses on writing the 60 lines of code for the AWS Lambda function, this is where I was expecting the bulk of the charges to be. However, the AWS Lambda costs only amount to $0.22 for the month.

There were 2.1 million AWS Lambda function invocations, one per external consumer API request, same as the API Gateway. The first million AWS Lambda function calls are free. The rest are charged at $0.20 per million.

The permanent free tier also includes 400,000 GB-seconds of compute time per month. At an average of 0.15 GB-seconds per function call, we stayed within the free tier at a total of 320,000 GB-seconds.
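A quick back-of-the-envelope check of those numbers (a sketch in Python, using only the figures quoted above; the $0.00001667 per GB-second rate is the standard published Lambda price beyond the free tier, not a number taken from this invoice):

# Back-of-the-envelope Lambda cost check using the figures quoted above.
requests = 2_100_000
free_requests = 1_000_000
price_per_million = 0.20  # USD per million requests beyond the free tier
request_cost = (requests - free_requests) / 1_000_000 * price_per_million

gb_seconds = requests * 0.15         # ~0.15 GB-seconds per invocation
free_gb_seconds = 400_000            # permanent monthly free tier
overage = max(gb_seconds - free_gb_seconds, 0)
compute_cost = overage * 0.00001667  # USD per GB-second beyond the free tier

print(f"requests: ${request_cost:.2f}")  # ~$0.22, matching the invoice
print(f"compute:  ${compute_cost:.2f}")  # $0.00, within the free tier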

I have the AWS Lambda function configuration cranked up to the max of 1536 MB memory so that it will run as fast as possible. Since the charges are rounded up in units of 100 ms, we could probably save GB-seconds by scaling down the memory once we exceed the free tier. Most of the time is probably spent in DynamoDB calls anyway, so this should not affect API performance much.

Route 53

Route 53 charges $0.50 per hosted zone (domain). I have two domains hosted in Route 53, the expected timercheck.io plus the extra timercheck.com. The timercheck.com domain was supposed to redirect to timercheck.io, but I apparently haven’t gotten around to tossing in that feature yet. These two hosted zones account for $1 in charges.

There were 1.1 million DNS queries to timercheck.io and www.timercheck.io, but since those resolve to aliases for the API Gateway, there is no charge.

The other $0.09 comes from the 226,000 DNS queries to random timercheck.io and timercheck.com hostnames. These would include status.timercheck.io, which is a page displaying the uptime of TimerCheck.io as reported by StatusCake.

Simple Notification Service

During the month of November, there was one post to an SNS topic and one email delivery from an SNS topic. These were both for the CloudWatch alert notifying me that the charges on the account had exceeded $10 for the month. There were no charges for this.

Simple Storage Service

The S3 costs in this account are entirely for storing the CloudTrail events. There were 222 GET requests ($0) and 13,000 requests of other types ($0.07). There was no charge for the 0.064 GB-Mo of actual data stored. Has Amazon started rounding fractional pennies down instead of up in some services?

External Costs

The domains timercheck.io and timercheck.com are registered through other registrars. Those cost about $33 and $9 per year, respectively.

The SSL/TLS certificate for https support costs around $10-15 per year, though this should drop to zero once CloudFront distributions created with API Gateway support certificates with ACM (AWS Certificate Manager) #awswishlist

Not directly obvious from the above is the fact that I have spent no time or money maintaining the TimerCheck.io API service post-launch. It’s been running for 1.5 years and I haven’t had to upgrade software, apply security patches, replace failing hardware, recover from disasters, or scale up and down with demand. By using AWS services like API Gateway, AWS Lambda, and DynamoDB, Amazon takes care of everything.

Notes

Your Mileage May Vary.

For entertainment use only.

This is just one example from one month for one service architected one way. Your service running on AWS will not cost the same.

Though 2 million TimerCheck.io API requests in November cost about $11, this does not mean that an additional million would cost another $5.50. Some services would cost significantly more and some would cost about the same, probably averaging out to significantly more.

If you are reading this after November 2016, then the prices for these AWS services have certainly changed and you should not use any of the above numbers in making decisions about running on AWS.

Conclusion

Amazon, please lower the cost of the API Gateway; or provide a simpler, cheaper service that can trigger AWS Lambda functions with https endpoints. Thank you!

Original article and comments: https://alestic.com/2016/12/aws-invoice-example/

Paul Tagliamonte: DNSync MAC Addresses

Planet Ubuntu - Sun, 12/11/2016 - 20:30

I’ve been hacking on and off on a project for my LAN called DNSync. It takes a dnsmasq leases file and syncs it to Amazon Route 53.

I’ve added a new feature which creates A records for each MAC address on the LAN.

Since DNSync won’t touch CNAME records, I use CNAME records (manually) to point to the auto-synced A records for services on my LAN (such as my projector).

Since it’s easy for two machines to have the same name, I’ve decided to add A records for each MAC as well as for their client name. They take the form of something like ab-cd-ef-ab-cd-ef.by-mac.paultag.house., which is harder to accidentally collide with.
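To illustrate the scheme (a hypothetical zone-file view - the names and addresses are made up, only the .by-mac.paultag.house pattern comes from above):

; Auto-synced A record plus a manual CNAME pointing at it (made-up values).
ab-cd-ef-ab-cd-ef.by-mac.paultag.house.  300  IN  A      10.0.0.42
projector.paultag.house.                 300  IN  CNAME  ab-cd-ef-ab-cd-ef.by-mac.paultag.house.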

Elizabeth K. Joseph: UbuCon EU 2016

Planet Ubuntu - Sun, 12/11/2016 - 17:03

Last month I had the opportunity to travel to Essen, Germany to attend UbuCon EU 2016. Europe has had UbuCons before, but the goal of this one was to make it a truly international event, bringing in speakers like me from all corners of the Ubuntu community to share our experiences with the European Ubuntu community. Getting to catch up with a bunch of my Ubuntu colleagues who I knew would be there and visiting Germany as the holiday season began were also quite compelling reasons for me to attend.

The event formally kicked off Saturday morning with a welcome and introduction by Sujeevan Vijayakumaran, who reported that 170 people had registered for the event and shared other statistics about the number of countries attendees came from. He also introduced a member of the UBports team, Marius Gripsgård, who announced the USB docking station for Ubuntu Touch devices they’re developing; more information is in this article on their website: The StationDock.

Following these introductions and announcements, we were joined by Canonical CEO Jane Silber who provided a tour of the Ubuntu ecosystem today. She highlighted the variety of industries where Ubuntu was key, progress with Ubuntu on desktops/laptops, tablets, phones and venturing into the smart Internet of Things (IoT) space. Her focus was around the amount of innovation we’re seeing in the Ubuntu community and from Canonical, and talked about specifics regarding security, updates, the success in the cloud and where Ubuntu Core fits into the future of computing.

I also loved that she talked about the Ubuntu community. The strength of local meetups and events, the free support community that spans a variety of resources, ongoing work by the various Ubuntu flavors. She also spoke to the passion of Ubuntu contributors, citing comics and artwork that community members have made, including the stunning series of release animal artwork by Sylvia Ritter from right there in Germany, visit them here: Ubuntu Animals. I was also super thrilled that she mentioned the Ubuntu Weekly Newsletter as a valuable resource for keeping up with the community, a very small group of folks works very hard on it and that kind of validation is key to sustaining motivation.

The next talk I attended was by Fernando Lanero Barbero on Linux is education, Linux is science. Ubuntu to free educational environments. Fernando works at a school district in Spain where he has deployed Ubuntu across hundreds of computers, reaching over 1200 students in the three years he’s been doing the work. The talk outlined the strengths of the approach, explaining that there were cost savings for his school and that Ubuntu and open source software are more in line with the school’s values. One of the key takeaways from his experience was one that I know a lot about from our own Linux-in-schools experiences here in the US at Partimus: focus on the people, not the technologies. We’re technologists who love Linux and want to promote it, but without engagement, understanding and buy-in from teachers, deployments won’t be successful. A lot of time needs to be spent assessing their needs and doing roll-outs slowly and methodically, so that the change doesn’t happen too abruptly and leave them in the lurch. He also stressed the importance of consistency across deployments: don’t get super creative across machines; use the same flavor for everything, even the same icon set. Not everyone is as comfortable with variation as we are, and you want to make the transition as easy as possible across all the systems.

Laura Fautley (Czajkowski) spoke at the next talk I went to, on Supporting Inclusion & Involvement in a Remote Distributed Team. The Ubuntu community itself is distributed across the globe, so drawing on her experience there, and later at several jobs where she’s had to help manage communities, she had a great list of recommendations for building out such a team. She talked about being sensitive to time zones, and acknowledged that decisions are sometimes made in social situations, so you need to somehow document and share those decisions with the broader community. She was also eager to highlight how you need to acknowledge and promote the achievements of your team, both within the team and to the broader organization and project, to make sure everyone feels valued and everyone knows the great work you’re doing. Finally, it was interesting to hear some thoughts about remote member on-boarding, stressing the need to have a process so that new contributors and teammates can quickly get up to speed and feel included from the beginning.

I went to a few other talks throughout the two day event, but one of the big reasons for me attending was to meet up with some of my long-time friends in the Ubuntu community and finally meet some other folks face to face. We’ve had a number of new contributors join us since we stopped doing Ubuntu Developer Summits and today UbuCons are the only Ubuntu-specific events where we have an opportunity to meet up.


Laura Fautley, Elizabeth K. Joseph, Alan Pope, Michael Hall

Of course I was also there to give a pair of talks. I first spoke on Contributing to Ubuntu on Desktops (slides) which is a complete refresh of a talk I gave a couple of times back in 2014. The point of that talk was to pull people back from the hype-driven focus on phones and clouds for a bit and highlight some of the older projects that still need contributions. I also spoke on Building a career with Ubuntu and FOSS (slides) which was definitely the more popular talk. I’ve given a similar talk for a couple UbuCons in the past, but this one had the benefit of being given while I’m between jobs. This most recent job search as I sought out a new role working directly with open source again gave a new dimension to the talk, and also made for an amusing intro, “I don’t have a job at this very moment …but without a doubt I will soon!” And in fact, I do have something lined up now.


Thanks to Tiago Carrondo for taking this picture during my talk! (source)

The venue for the conference was a kind of artists’ space, which made it a bit quirky, but I think it worked out well. We had a couple of social gatherings there at the venue, and buffet lunches were included in our tickets, which meant we didn’t need to go far or wait on food elsewhere.

I didn’t have a whole lot of time for sight-seeing this trip because I had a lot going on stateside (like having just bought a house!), but I did get to enjoy the beautiful Christmas Market in Essen a few nights while I was there.

For those of you not familiar with German Christmas Markets (I wasn’t), they close roads downtown and pop up streets of wooden shacks that sell everything from Christmas ornaments and cookies to hot drinks, beers and various hot foods. The first night I was in town we met up with several fellow conference-goers and got some fries with mayonnaise, grilled mushrooms with Béarnaise sauce, my first taste of German Glühwein (mulled wine) and hot chocolate. The next night, a quick walk through the market landed us at a steakhouse where we had a late dinner and a couple of beers.

The final night we didn’t stay out late, but did get some much anticipated Spanish churros, which inexplicably had sugar rather than the cinnamon I’m used to, as well as a couple more servings of Glühwein, this time in commemorative Christmas mugs shaped like boots!


Clockwise from top left: José Antonio Rey, Philip Ballew, Michael Hall, John and Laura Fautley, Elizabeth K. Joseph

The next morning I was up bright and early to catch a 6:45AM train that started me on my three-train journey back to Amsterdam, to fly back to Philadelphia.

It was a great little conference and a lot of fun. Huge thanks to Sujeevan for being so incredibly welcoming to all of us, and thanks to all the volunteers who worked for months to make the event happen. Also thanks to Ubuntu community members who donate to the community fund since I would have otherwise had to self-fund to attend.

More photos from the event (and the Christmas Market!) here: https://www.flickr.com/photos/pleia2/albums/72157676958738915

Colin Watson: The sad tale of CVE-2015-1336

Planet Ubuntu - Sun, 12/11/2016 - 16:42

Today I released man-db 2.7.6 (announcement, NEWS, git log), and uploaded it to Debian unstable. The major change in this release was a set of fixes for two security vulnerabilities, one of which affected all man-db installations since 2.3.12 (or 2.3.10-66 in Debian), and the other of which was specific to Debian and its derivatives.

It’s probably obvious from the dates here that this has not been my finest hour in terms of responding to security issues in a timely fashion, and I apologise for that. Some of this is just the usual life reasons, which I shan’t bore you by reciting, but some of it has been that fixing this properly in man-db was genuinely rather complicated and delicate. Since I’ve previously advocated man-db over some of its competitors on the basis of a better security posture, I think it behooves me to write up a longer description.

I took over maintaining man-db over fifteen years ago in slightly unexpected circumstances (I got annoyed with its bug list and made a couple of non-maintainer uploads, and then the previous maintainer died, so I ended up taking over both in Debian and upstream). I was a fairly new developer at the time, and there weren’t a lot of people I could ask questions of, but I did my best to recover as much of the history as I could and learn from it. One thing that became clear very quickly, both from my own inspection and from the bug list, was that most of the code had been written in a rather more innocent time. It was absolutely riddled with dangerous uses of the shell, poor temporary file handling, buffer overruns, and various common-or-garden deficiencies of that kind. I spent several years reworking large swathes of the codebase to be more robust against those kinds of bugs by design, and for example libpipeline came out of that effort.

The most subtle and risky set of problems came from the fact that the man and mandb programs were installed set-user-id to the man user. Part of this was so that man could maintain preformatted “cat pages”, and part of it was so that users could run mandb if the system databases were out of date (this is now much less useful since most package managers, including dpkg, support some kind of trigger mechanism that can run mandb whenever new system-level manual pages are installed). One of the first things I did was to make this optional, and this has been a disabled-by-default debconf option in Debian for a long time now. But it’s still a supported option and is enabled by default upstream, and when running setuid man and mandb need to take care to drop privileges when dealing with user-controlled data and to write files with the appropriate ownership and permissions.

My predecessor had problems related to this such as Debian #26002, and one of the ways they dealt with them was to make /var/cache/man/ set-group-id root, in order that files written to that directory would have consistent group ownership. This always struck me as rather strange and I meant to do something about it at some point, but until the first vulnerability report above I regarded it as mainly a curiosity, since nothing in there was group-writeable anyway. As a result, with the more immediate aim of making the system behave consistently and dealing with bug reports, various bits of code had accreted that assumed that /var/cache/man/ would be man:root 2755, and not all of it was immediately obvious.

This interacted with the second vulnerability report in two ways. Firstly, at some level it caused it because I was dealing with the day-to-day problems rather than thinking at a higher level: a series of bugs led me down the path of whacking problems over the head with a recursive chown of /var/cache/man/ from cron, rather than working out why things got that way in the first place. Secondly, once I’d done that, I couldn’t remove the chown without a much more extensive excursion into all the code that dealt with cache files, for fear of reintroducing those bugs. So although the fix for the second vulnerability is very simple in itself, I couldn’t get there without dealing with the first vulnerability.

In some ways, of course, cat pages are a bit of an anachronism. Most modern systems can format pages quickly enough that it’s not much of an issue. However, I’m loath to drop the feature entirely: I’m generally wary of assuming that just because I have a fast system that everyone does. So, instead, I did what I should have done years ago: make man and mandb set-group-id man as well as set-user-id man, at which point we can simply make all the cache files and directories be owned by man:man and drop the setgid bit on cache directories. This should be simpler and less prone to difficult-to-understand problems.
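In concrete terms, the new scheme amounts to something like the following (an illustrative sketch only - hypothetical paths, and the real changes live in man-db’s build and packaging rules):

# Binaries set-user-id and set-group-id man; cache owned by man:man,
# with the setgid bit dropped from the cache directory.
chown man:man /usr/bin/man /usr/bin/mandb
chmod 6755 /usr/bin/man /usr/bin/mandb
chown -R man:man /var/cache/man
chmod 0755 /var/cache/man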

I expect that my next substantial upstream release will switch to --disable-setuid by default to reduce exposure, though, and distributions can start thinking about whether they want to follow that (Fedora already does, for example). If this becomes widely disabled without complaints then that would be good evidence that it’s reasonable to drop the feature entirely. I’m not in a rush, but if you do need cat pages then now is a good time to write to me and tell me why.

This is the fiddliest set of vulnerabilities I’ve dealt with in man-db for quite some time, so I hope that if there are more then I can get back to my previous quick response time.

Svetlana Belkin: What Programs Do I Use: Manuskript

Planet Ubuntu - Sun, 12/11/2016 - 16:02

As I’ve said before, I try to write fiction. I recently realized that what I was planning to write back in 2009 (and gave up on in 2013) is more of a Dungeons and Dragons (D&D) world/realm than an original world (now I’ve decided to turn it back into one). But I never really had a way to organize my thoughts and fragments except with individual word processor or .txt files within folders, where I either outline or just write something down.

Recently I came by a program called Manuskript, which is an Open Source clone of Scrivener.  I took the time to review it by video because I found that it would be hard to do it by text:

I have a playlist where I talk about the app along with me creating a D&D campaign with the program.

The main problem with this program is that development looks to have stalled and may already be dead, because “QT Webkit is deprecated [and] may have built it on dying tech” (Darrell Breeden, in the comments on the story on OMG! Ubuntu). Hopefully it isn’t, because none of the other suggested programs are suited to what I’m trying to do with what I’m working on.

Ross Gammon: Manual Tests of Ubuntu Studio Packages

Planet Ubuntu - Sun, 12/11/2016 - 12:51

We have been caught out a few times in the lead up to some of the recent releases of Ubuntu Studio, where we discovered very late that there were problems with a particular package. If you are an experienced Ubuntu Studio user, or you would like to begin helping out in the Ubuntu Studio Developers Team, why not start testing packages for the next release (Zesty 17.04)?

Step 1 – Install the Ubuntu Studio Development Release

It is not recommended to install the development release on a computer where you cannot afford to lose important data. In order of preference, install it on:

  1. A spare computer with lots of audio/video hardware plugged in.
  2. A spare computer.
  3. Your main desktop/laptop computer with a spare hard disk plugged in.
  4. A Virtual Machine on your main desktop/laptop (not really suitable for audio/video applications).

Instructions for installing the Ubuntu Studio Development Release can be found here.

Step 2 – Choose a package to test

The list of Test Cases for Ubuntu Studio Zesty 17.04 can be found on the QA Package Tracker.

Step 3 – Check package versions

It is a good idea to note down the version number of the package in the Ubuntu development release (you will need it when reporting any bugs you find), and also in Debian (and also upstream if you are keen). Let us in the Ubuntu Studio Development Team know if our package is way out of date so that we can look into what is blocking the newer version.

To find the version in Ubuntu use the search form at the bottom of this page. For Debian, use the search form at the bottom of this page. Make sure you search in the right distribution (Ubuntu – Zesty at the moment, Debian – unstable).
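If you prefer the command line, the rmadison tool (from the devscripts package) can query both archives; for example, using ardour as a sample package:

rmadison -u ubuntu ardour    # versions in Ubuntu; look for zesty
rmadison -u debian ardour    # versions in Debian; look for unstable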

Step 4 – Run the test

Click on the package you want to test in the QA package tracker (see screenshot above), and the test case should appear.

Follow the steps of the test case. It is as simple as that. If you are an experienced user of that package, feel free to test further functions. The more bugs we find early in the release cycle, the more chance they will be fixed before the release.

Step 5 – Record the results & report bugs

For this step you will need to have a Launchpad login. Log into the package tracker. You can see the button on the above screen-shot. Record your results (hopefully a “pass”) in the bottom of the tracker. The results will be stored, so feel free to come back and test the same package later and add another result. If you spot a minor bug, then see if it has already been reported in Launchpad, and if not then report it. Add the bug number to the applicable column in your test result. If you cannot complete the test case due to a bug, please mark the test as failed (and add the bug number to the report). Feel free to add as many comments to the test result as you like. In particular, we are interested in your test environment (e.g. laptop/desktop/Virtual Machine), and the version of the package when you tested it.

Step 6 – Improve the Test Cases

If you have got this far and finished a test, then well done and thank you! You deserve a break. But why stop there? Test a different package. We also need help maintaining the Test Cases. If you spot a mistake in a Test Case, or note a possible improvement, then report a bug against the manual-tests project in Launchpad. If you think we are missing a Test Case for an Ubuntu Studio package, then please also report a bug (after checking that there isn’t already one).

You could also help out further by actually correcting, or creating the Test Case yourself. There are excellent documents on how to do this on the QA wiki here:

Contributing Manual Test Cases


Kubuntu General News: Kubuntu and Linux Mint doing Plasma 5.8 testing

Planet Ubuntu - Sun, 12/11/2016 - 08:02

Since Linux Mint 18 KDE uses the Kubuntu Backports PPA, we both thought it would be very productive to ask both our user bases to help test our work bringing Plasma 5.8 to both Xenial and Yakkety. This update will also bring updated Frameworks 5.28 and Applications 16.04.3, so your favorite applications like Dolphin, Konsole and Kate will be updated with fixes and new features.

You should have some command line knowledge before testing.
Having a quick read about how to use ppa-purge is also a big plus (there is an example at the end of this post).

How to test Plasma 5.8
Plasma 5.8.4 is currently in the Kubuntu Backports Landing PPA.

If you’re currently on Kubuntu 16.04.1 or 16.10 you can add this PPA with Konsole:
sudo add-apt-repository ppa:kubuntu-ppa/backports-landing

Then update:
sudo apt update

Once that is complete:
sudo apt dist-upgrade

Once everything is done you can reboot.
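If something goes wrong and you want to roll back to the stock packages, ppa-purge can downgrade everything that came from the PPA (assuming the ppa-purge package is installed):

sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/backports-landing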

Aurélien Gâteau: Cat Avatar Generator, the Android app!

Planet Ubuntu - Sat, 12/10/2016 - 11:02

David Revoy, the author of the amazing Pepper & Carrot webcomic, recently published an online cat avatar generator. My kids loved it, so last Sunday I took this as a pretext to do a bit of Android development by building an Android app for it.

It's a bit spartan for now since it was built in one day, but it works. You can get it on Google Play.

If you want to improve it, you can find the source code on GitHub.

Synchronizing cats is complicated

One of the things I wanted to improve was to modify the generator so that entering a name in the app would produce the same cat as David’s online generator. In this early version I did not copy the way the seed for the random generator is computed, because I knew it would not be enough: PHP and Java do not share the same random number generator.

This week I decided to try to extract the code used in PHP to generate random numbers so that I could re-implement it in my app. I created a PHP sample page which seeded the random generator and printed 3 numbers, then I downloaded the PHP 7.1.0 source code, extracted the random code and turned it into a standalone C program doing the same thing my PHP sample page did. After a bit of tweaking I managed to get it to compile and run, but the numbers did not match what my PHP sample produced. After much head scratching and random (sic) experiments, I decided I needed more info, so I recompiled PHP 7.1.0, planning to add some log output to understand what was going wrong. Once this was done, I ran my PHP sample using my recompiled PHP 7.1.0 instead of the PHP I had installed via apt... Surprise! The numbers matched my C program!
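For reference, the sample page was essentially this (a minimal reconstruction, not the exact code I used):

<?php
// Seed the legacy generator, then print three numbers. On PHP <= 7.0 this
// exercises the system rand(); on 7.1, rand() is an alias of mt_rand(),
// so the sequence differs for the same seed.
srand(12345);
echo rand(), "\n";
echo rand(), "\n";
echo rand(), "\n";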

Since the PHP installed on my machine was 7.0.8, I looked at the changelog for 7.1.0, and found this:

rand() and srand() have now been made aliases to mt_rand() and mt_srand(), respectively. This means that the output for the following functions has changed: rand(), shuffle(), str_shuffle(), and array_rand().

The rationale behind this change is that the output of rand() was weak and system-dependent, so it did not produce consistent values when run on different systems... still, that’s annoying.

At this point, I am not sure I want to spend more time on trying to synchronize avatars.

Other possible future features

One feature I would love to add is the ability to set contact pictures on my phone based on the generator. My kids, on the other hand, would like to pick avatar parts themselves. We’ll see what I (or you?) decide to add next.

Jonathan Riddell: KDE neon User LTS Edition Out Now

Planet Ubuntu - Fri, 12/09/2016 - 10:04

KDE Plasma 5.8 is designated an LTS edition, with bugfixes and new releases being made for 18 months (rather than the normal four months).  This will please the category of users who don’t want new features on their desktop but do want it to keep working and bugs to be removed.  Because neon aims to serve Plasma and its users in every way, we have now created the KDE neon User LTS Edition.

This comes with Plasma 5.8 LTS, updated for new bugfix releases (e.g. 5.8.5 is out at the end of this month), and will not change to Plasma 5.9 when it becomes available.  A common criticism of LTS editions is that they just mean users get old versions with known bugs.  KDE neon User LTS Edition comes with the latest KDE Applications, and it comes with the latest KDE Frameworks release and Qt 5.7, so all the KDE software we ship is the latest stable version.  Along with the other KDE neon editions we’ll also ship the HWE updates for Linux and Mesa when they become available.

For those interested in archive details, it’s:

deb http://archive.neon.kde.org/user/lts xenial main

Switching from User Edition to User LTS Edition archive is unsupported but will likely work.

KDE Neon is so stable I completely forgot I was using it.

A recent Reddit post gave some pleasing feedback about KDE neon, allow me the indulgence of picking some pleasing quotes from it:

I feel like the KDE neon team has done such a great job with an out-of-the-box experience with this distro that it feels insanely polished.

Jep, I’m even using KDE neon at work. I’ve been able to simply focus on my tasks, and not worry about troubleshooting the OS.

KDE neon cured my distro hopping as well.

KDE neon is the bee’s knees.

Anyone else feel this last one should become an official marketing slogan?


David Wonderly: Hello world!

Planet Ubuntu - Fri, 12/09/2016 - 09:29

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

Ubuntu Podcast from the UK LoCo: S09E41 – Pine In The Neck - Ubuntu Podcast

Planet Ubuntu - Thu, 12/08/2016 - 08:00

It’s Season Nine Episode Forty-One of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joe Ressington are connected and speaking to your brain.

We are four once more, thanks to some help from our mate Joe!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

Dustin Kirkland: A Touch of Class at Sir Ludovic, Bucharest, Romania

Planet Ubuntu - Thu, 12/08/2016 - 07:33
A few weeks ago, I traveled to Bucharest, Romania for a busy week of work, planning the Ubuntu 17.04 (Zesty) cycle.

I did have a Saturday and Sunday to myself, which I spent mostly walking around the beautiful old city.  After visiting the Romanian Athenaeum, I quite randomly stumbled into one truly unique experience: I passed the shop window of "Sir Ludovic Master Suit Maker", which somehow caught my eye.

I travel quite a bit on business, and you'll typically find me wearing a casual sports coat, a button up shirt, nice jeans, cowboy boots, and sometimes cuff links. But occasionally, I feel a little under-dressed, especially in New York City, where a dashing suit still rules the room.

Frankly, everything I know about style and fashion I learned from Derek Zoolander. Just kidding. Mostly.

Anyway, I owned two suits. One that I bought in 2004, for that post-college streak of weddings, and a seersucker suit (which is dandy in New Orleans and Austin, but a bit irreverent for serious client meetings on Wall Street).

So I stepped into Sir Ludovic, merely as a curiosity, and walked out with the most rewarding experience of my week in Romania. Augustin Ladar, the master tailor and proprietor of the shop, greeted me at the door. We then spent the better part of 3 hours, selecting every detail, from the fabrics, to the buttons, to the stylistic differences in the cut and the fit.

Better yet, I absorbed a wealth of knowledge on style and fashion: when to wear blue and when to wear grey, why some people wear pin stripes and others wear checks, authoritative versus friendly style, European versus American versus Asian cuts, what the heck herringbone is, how to tell if the other guy is also wearing hand tailored attire, and so on...

Augustin measured me for two custom tailored suits and two bespoke shirts, on a Saturday. I picked them up 6 days later on a Friday afternoon (paying a rush service fee).

Wow. Simply, wow. Splendid Italian wool fabric, superb buttons, eye-catching color shifting inner linings, and an impeccably precise fit.

I'm headed to New York for my third trip since, and I've never felt more comfortable and confident in these graceful, classy suits. A belated thanks to Augustin. Fabulous work!

Cheers,
Dustin

Alessio Treglia: The new professionals of the interconnected world

Planet Ubuntu - Thu, 12/08/2016 - 07:02

There is an empty chair at the conference table of business professionals, an unassigned place that increasingly calls for the presence of a new type of integration manager. The demands for ever-increasing specialization, imposed by the modern world, are bringing out with great emphasis the need for an interdisciplinary professional who understands the demands of specialists and who is able to coordinate and link actions and decisions. This need, often still ignored, is a direct result of the growing complexity of the modern world and of the fast communications inside the network.

“Complexity” is undoubtedly the most suitable paradigm to characterize the historical and social model of today’s world, in which the interactions and connections between the various areas now form an inextricable network of relations. Since the ’60s and ’70s a large group of scholars – including the chemist Ilya Prigogine and the physicist Murray Gell-Mann – began to study what would become a true Science of Complexity.

Yet this is not an entirely new concept: the term means “composed of several parts connected to each other and dependent on each other”, exactly like reality, nature, society, and the environment around us. A “complex” mode of thought integrates and considers all contexts, interconnections, and interrelationships between the different realities as part of its vision.

What is professionalism? And who are professionals? What can define a professional? <…>

<Read More…[by Fabio Marzocca]>
