Feed aggregator

Mythbuntu: Mythbuntu 16.04 Released

Planet Ubuntu - Wed, 08/10/2016 - 17:26
CORRECTION 2016.04.23 - It was previously stated that 16.04 is a point release to 14.04. This was due to a silly copy&paste issue from our previous release statement for 14.04. The Mythbuntu 16.04 release is a flavor of Ubuntu 16.04. We're sorry for any confusion this has caused.
Mythbuntu 16.04 has been released. This is our third LTS release and will be supported until shortly after the 18.04 release. The Mythbuntu team would like to thank our ISO testers for helping find critical bugs before release. You guys rock! With this release, we are providing torrents only. It is very important to note that this release is only compatible with MythTV 0.28 systems. The MythTV component of previous Mythbuntu releases can be upgraded to a compatible MythTV version by using the Mythbuntu Repos. For a more detailed explanation, see here. You can get the Mythbuntu ISO from our downloads page.

Highlights

Underlying system
  • Underlying Ubuntu updates are found here
MythTV

We appreciate all comments and would love to hear what you think. Please make comments to our mailing list, on the forums (with a tag indicating that this is from 16.04 or xenial), or in #ubuntu-mythtv on Freenode. As previously, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or, if that is not possible, directly open a ticket on Launchpad (http://bugs.launchpad.net/mythbuntu/16.04/).
Upgrade Notes

If you have enabled the mysql tweaks in the Mythbuntu Control Center, these will need to be disabled prior to upgrading. Once upgraded, these can be re-enabled.

Known issues

Julian Andres Klode: Porting APT to CMake

Planet Ubuntu - Wed, 08/10/2016 - 08:52

Ever since its creation back in the dark ages, APT shipped with its own build system consisting of autoconf and a bunch of makefiles. In 2009, I felt like replacing that with something more standard, and because nobody really liked autotools, decided to go with CMake. Well, the bazaar branch was never really merged back in 2009.

Fast forward 7 years to 2016. A few months ago, we noticed that our build system had trouble with correct dependencies in parallel building. So, in search for a way out, I picked up my CMake branch from 2009 last Thursday and spent the whole weekend working on it, and today I am happy to announce that I merged it into master:

123 files changed, 1674 insertions(+), 3205 deletions(-)

More than 1500 fewer lines of build system code. Quite impressive, eh? This also includes about 200 fewer lines in debian/, as that switched from prehistoric debhelper stuff to modern dh (compat level 9, almost ready for 10).

The annoying Tale of Targets vs Files

Talking about CMake: I don’t really love it. As you might know, CMake differentiates between targets and files. Targets can in some cases depend on files (generated by a command in the same directory), but overall files are not really targets. You also cannot have a target with the same name as a file you are generating in a custom command; you have to rename your target (make is OK with the generated stuff, but ninja complains about cycles because your custom target and your custom command have the same name).

Byproducts for the (time) win

One interesting thing about CMake and Ninja are byproducts. In our tree, we are building C++ files. We also have .pot templates depending on them, and .mo files depending on the templates (we have multiple domains, and merge the per-domain .pot with the all-domain .po file during the build to get a per-domain .mo). Now, if we just let them depend naively, changing a C++ file causes the .pot file to be regenerated, which in turn causes us to build .mo files for every freaking language in the package. Even if nothing changed.

Byproducts solve this problem. Instead of just building the .pot file, we also create a stamp file (AKA the witness) and write the .pot file (without a header) into a temporary name and only copy it to its final name if the content changed. The .pot file is declared as a byproduct of the command.

The command doing the .pot->.mo step still depends on the .pot file (the byproduct), but as that only changes now if strings change, the .mo files only get rebuilt if I change a translatable string. We still need to ensure that the .pot file is actually built before we try to use it – the solution here is to specify a custom target depending on the witness and then have the target containing the .mo build commands depend on that target.

Now if you use make, you might know this trick already. In make, the byproducts remain undeclared, though, while in CMake we can now actually express them, and they are used by the Ninja generator and the Ninja build tool if you choose that over make (try it out, it’s fast).

Further Work

Some command names are hardcoded, I should find_program() them. Also cross-building the package does not yet work successfully, but it only requires a tiny amount of patches in debhelper and/or cmake.

I also tried building the package on a Fedora docker image (with dpkg installed, it’s available in the Fedora sources). While I could eventually get the programs built and most of the integration test suite to pass, there are some minor issues to fix, mostly in the documentation building and GTest department: Fedora ships its docbook stylesheets in a different location, and ships GTest as a pre-compiled library rather than a source tree.

I have not yet tested building on exotic platforms like macOS, or even a BSD. Please do and report back. In Debian, CMake is not up-to-date enough on the non-Linux platforms to build APT due to test suite failures; I hope those can be fixed/disabled soon (it appears to be a timing issue AFAICT).

I hope that we eventually get some non-Debian backends for APT. I’d love that.


Filed under: Debian, Uncategorized

Zygmunt Krynicki: Creating your first snappy interface

Planet Ubuntu - Wed, 08/10/2016 - 03:04

Today is a day I've been waiting for a long time. We now have enough knowledge to create our first real interface from scratch. To really understand this content you need to be familiar with parts [1], [2], [3] and [4].

We will go all the way from branching snapd to running a program that uses our new interface. We will focus on the ancillary tasks this time; the actual interface will be rather basic. Still, this knowledge will be invaluable next time, when we try to do something more complicated.

Adding the new "hello" interface
Let's get started. It all begins with snapd. If you didn't already, fork snapd and clone your fork locally. You may find this small guide that I wrote earlier useful. It goes through all those steps in detail. At the end of the exercise you should be able to build your fork of snapd (make sure it is really your fork, not the upstream version!)

Let's look around. Each time a new interface is added, the following files are modified:
  • The file interfaces/builtin/foo{,_test}.go contains the actual interface
  • The file interfaces/builtin/all{,_test}.go contains a tiny change that registers the new interface
In addition, if the interface slot should implicitly show up on the core snap we need to change the file snap/implicit{,_test}.go. We will get back to this.
So let's create our new interface now. As mentioned in part [3] this entails implementing a new type with a few methods. Let's do that now:
Using your favorite code editor create a new file and save it as interfaces/builtin/hello.go. Paste the following code inside:

// -*- Mode: Go; indent-tabs-mode: t -*-

/*
* Copyright (C) 2016 Canonical Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 3 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*/

package builtin

import (
    "fmt"

    "github.com/snapcore/snapd/interfaces"
)

// HelloInterface is the hello interface for a tutorial.
type HelloInterface struct{}

// String returns the same value as Name().
func (iface *HelloInterface) Name() string {
    return "hello"
}

// SanitizeSlot checks and possibly modifies a slot.
func (iface *HelloInterface) SanitizeSlot(slot *interfaces.Slot) error {
    if iface.Name() != slot.Interface {
        panic(fmt.Sprintf("slot is not of interface %q", iface))
    }
    // NOTE: currently we don't check anything on the slot side.
    return nil
}

// SanitizePlug checks and possibly modifies a plug.
func (iface *HelloInterface) SanitizePlug(plug *interfaces.Plug) error {
    if iface.Name() != plug.Interface {
        panic(fmt.Sprintf("plug is not of interface %q", iface))
    }
    // NOTE: currently we don't check anything on the plug side.
    return nil
}

// ConnectedSlotSnippet returns security snippet specific to a given connection between the hello slot and some plug.
func (iface *HelloInterface) ConnectedSlotSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// PermanentSlotSnippet returns security snippet permanently granted to hello slots.
func (iface *HelloInterface) PermanentSlotSnippet(slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// ConnectedPlugSnippet returns security snippet specific to a given connection between the hello plug and some slot.
func (iface *HelloInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// PermanentPlugSnippet returns the configuration snippet required to use a hello interface.
func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securitySystem interfaces.SecuritySystem) ([]byte, error) {
    switch securitySystem {
    case interfaces.SecurityAppArmor:
        return nil, nil
    case interfaces.SecuritySecComp:
        return nil, nil
    case interfaces.SecurityDBus:
        return nil, nil
    case interfaces.SecurityUDev:
        return nil, nil
    case interfaces.SecurityMount:
        return nil, nil
    default:
        return nil, interfaces.ErrUnknownSecurity
    }
}

// AutoConnect returns true if plugs and slots should be implicitly
// auto-connected when an unambiguous connection candidate is available.
//
// This interface does not auto-connect.
func (iface *HelloInterface) AutoConnect() bool {
    return false
}

TIP: Any time you are making code changes use go fmt to re-format all of the code in the current working directory to the go formatting standards. Static analysis checkers in the snappy tree enforce this so your code won't be able to land without first being formatted correctly.
This code is perfectly fine, if a little verbose (we'll get it shorter eventually, I promise). Now let's make one more change to ensure the interface is known to snapd. All we need to do is to add it to the allInterfaces list in the same golang package. I've decided to just use an init() function so that all of the changes are in one file and cause fewer conflicts for other developers creating their interfaces. You can see my patch below.
diff --git a/interfaces/builtin/all_test.go b/interfaces/builtin/all_test.go
index 46ca587..86c8fad 100644
--- a/interfaces/builtin/all_test.go
+++ b/interfaces/builtin/all_test.go
@@ -62,4 +62,5 @@ func (s *AllSuite) TestInterfaces(c *C) {
c.Check(all, DeepContains, builtin.NewCupsControlInterface())
c.Check(all, DeepContains, builtin.NewOpticalDriveInterface())
c.Check(all, DeepContains, builtin.NewCameraInterface())
+ c.Check(all, Contains, &builtin.HelloInterface{})
}
diff --git a/interfaces/builtin/hello.go b/interfaces/builtin/hello.go
index d791fc5..616985e 100644
--- a/interfaces/builtin/hello.go
+++ b/interfaces/builtin/hello.go
@@ -130,3 +130,7 @@ func (iface *HelloInterface) PermanentPlugSnippet(plug *interfaces.Plug, securit
func (iface *HelloInterface) AutoConnect() bool {
return false
}
+
+func init() {
+ allInterfaces = append(allInterfaces, &HelloInterface{})
+}


Now switch to the snap/ directory, edit the file implicit.go and add "hello" to implicitClassicSlots. You can see the whole change below. I also updated the test that checks the number of implicitly added slots. This change will make snapd create a hello slot on the core snap automatically. As you can see by looking at the file, there are many interfaces that take advantage of this little trick.
diff --git a/snap/implicit.go b/snap/implicit.go
index 3df6810..098b312 100644
--- a/snap/implicit.go
+++ b/snap/implicit.go
@@ -60,6 +60,7 @@ var implicitClassicSlots = []string{
"pulseaudio",
"unity7",
"x11",
+ "hello",
}

// AddImplicitSlots adds implicitly defined slots to a given snap.
diff --git a/snap/implicit_test.go b/snap/implicit_test.go
index e9c4b07..364a6ef 100644
--- a/snap/implicit_test.go
+++ b/snap/implicit_test.go
@@ -56,7 +56,7 @@ func (s *InfoSnapYamlTestSuite) TestAddImplicitSlotsOnClassic(c *C) {
c.Assert(info.Slots["unity7"].Interface, Equals, "unity7")
c.Assert(info.Slots["unity7"].Name, Equals, "unity7")
c.Assert(info.Slots["unity7"].Snap, Equals, info)
- c.Assert(info.Slots, HasLen, 29)
+ c.Assert(info.Slots, HasLen, 30)
}

func (s *InfoSnapYamlTestSuite) TestImplicitSlotsAreRealInterfaces(c *C) {


Now let's see what we have. You should have three patches:
  1. The first one adding the dummy hello interface
  2. The second one registering it with allInterfaces
  3. The third one adding it to implicit slots on the core snap, on classic

This is how it looked for me. You can also see the particular commit message style I was using. All the commits are prefixed with the directory where the changes were made, followed by a colon and the summary of the change. Normally I would also add longer descriptions, but here this is not required as the changes are trivial.



Seeing our interface for the first time
Let's run our changed code with devtools. Switch to the devtools directory and run:
./refresh-bits snapd setup run-snapd restore
This roughly means:
  1. Build snapd from source (based on a correctly set $GOPATH)
  2. Prepare for running locally built snapd
  3. Run locally built snapd
  4. Restore regular version of snapd

The script will prompt you for your password. This is required to perform the system-wide operations. It will also block and wait until you interrupt it by pressing control-c. Please don't do this yet though, we want to check if our changes really made it through.



To check this we can simply list the available interfaces with
sudo snap interfaces | grep hello
Why sudo? Because of a peculiarity of how refresh-bits works; normally it is not required. Did it work? It did for me:


TIP: If it didn't work for you and you didn't get the hello interface then the most likely cause of the issue is that you were editing your own fork but refresh-bits still built the vanilla upstream version that is checked out somewhere else.

Go to $GOPATH/src/github.com/snapcore/snapd and ensure that this is indeed the fork you were expecting. If not, just remove this directory, move your fork (which you may have cloned elsewhere) here, and try again.

Great. Now we are in business. Let's recap what we did so far:
  • We added a whole new interface by dropping boilerplate code into interfaces/builtin/hello.go
  • We registered the interface in the list of allInterfaces
  • We made snapd inject an implicit (internally defined) slot on the core snap when running on classic
  • We used refresh-bits to run our locally built version and confirmed it really works

Granting permissions through interfaces
With the scaffolding in place we can now work on using our new interface to actually grant additional permissions. When designing real interfaces you have to think about what kind of action should grant extra permissions. Recall from part [3] that you can freely associate extra security snippets, which encapsulate permissions, with both plugs and slots; these snippets can be either permanent or connection-based.
A permanent snippet is always there, as long as a plug or a slot exists. Typically this is used on snaps other than the core snap (e.g. a service provider that is packaged as a snap) to allow the service to run. Examples of such services could include network-manager, modem-manager, docker, lxd, x11 and anything like that. The granted permissions allow the service to operate as well as to create a communication channel (typically a socket or a DBus bus name) that clients can connect to.
A connection-based snippet is only applied when an interface connection is made. This permission is given to the client and typically involves basic IPC system calls as well as a permission to open a given socket (e.g. the X11 socket) or use specific DBus methods and objects.
A special class of connection snippets exists for things that don't involve a client application talking to a server application but rather a program using given kernel services. The prime example of this use case is the network interface, which grants access to several network system calls. This interface grants nothing to the slot, just to connected plugs.
Today we will explore that last case. Our hello interface will give us access to a system call that is usually forbidden: reboot.
The graceful-reboot snap
For the purpose of this exercise I wrote a tiny C program that uses the reboot() function. The whole program is reproduced below, along with appropriate build support for snapcraft.
graceful-reboot.c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/reboot.h>
#include <linux/reboot.h>
#include <errno.h>

int main() {
    sync();
    if (reboot(LINUX_REBOOT_CMD_RESTART) != 0) {
        switch (errno) {
        case EPERM:
            printf("Insufficient permissions to reboot the system\n");
            break;
        default:
            perror("reboot()");
            break;
        }
        return EXIT_FAILURE;
    }
    printf("Reboot requested\n");
    return EXIT_SUCCESS;
}

Makefile
TIP: Makefiles rely on the difference between tabs and spaces. When copy-pasting this sample you need to ensure that tabs are preserved in the clean and install rules.
CFLAGS += -Wall

.PHONY: all
all: graceful-reboot

.PHONY: clean
clean:
	rm -f graceful-reboot

graceful-reboot: graceful-reboot.c

.PHONY: install
install: graceful-reboot
	install -d $(DESTDIR)/usr/bin
	install -m 0755 graceful-reboot $(DESTDIR)/usr/bin/graceful-reboot

snapcraft.yaml
name: graceful-reboot
version: 1
summary: Reboots the system gracefully
description: |
  This snap contains a graceful-reboot application that requests the system
  to reboot by talking to the init daemon. The application uses a custom
  "hello" interface that is developed as a part of a tutorial.
confinement: strict
apps:
  graceful-reboot:
    command: graceful-reboot
    plugs: [hello]
parts:
  main:
    plugin: make
    source: .

We now have everything required. Let's use snapcraft to build this code and get our snap ready. I'd like to point out that we use confinement: strict. This tells snapcraft and snapd that we really, really don't want to run this snap in devmode where all sandboxing is off. Doing so would defeat the purpose of adding the new interface.
Let's install the snap and try it out:
$ snapcraft

$ sudo snap install ./graceful-reboot_1_amd64.snap

Ready? Let's run the command now (don't worry, it won't reboot)
$ graceful-reboot
Bad system call

Aha! That's what we are here to fix. As we know from part [4], seccomp, which is a part of the sandbox system, blocks all system calls that are not explicitly allowed. We can check the system log to see what the actual error message was and figure out which system call needs to be added.
Looking through journalctl -n 100 (last 100 messages) I found this:
sie 10 09:47:02 x200t audit[13864]: SECCOMP auid=1000 uid=1000 gid=1000 ses=2 pid=13864 comm="graceful-reboot" exe="/snap/graceful-reboot/x1/usr/bin/graceful-reboot" sig=31 arch=c000003e syscall=169 compat=0 ip=0x7f7ef30dcfd6 code=0x0
Decoding this rather cryptic message is actually pretty easy. The key fact we are after is the system call number. Here it is 169. Which symbolic system call is that? We can use the scmp_sys_resolver program to find out.
$ scmp_sys_resolver 169
reboot

Bingo. While this case was pretty obvious, library functions don't always map to system call names directly. There are often cases where system calls evolve to gain additional arguments, flags and whatnot, while the name of the library function stays the same.
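If you prefer to resolve the number programmatically instead of shelling out to scmp_sys_resolver, here is a minimal sketch in Go. It assumes the libseccomp-golang bindings (github.com/seccomp/libseccomp-golang) and the libseccomp development headers are installed; it is just an illustration and not part of snapd.

// syscallname.go - resolve a syscall number (from the "syscall=" field of a
// SECCOMP audit line) to its name for the native architecture.
package main

import (
    "fmt"
    "os"
    "strconv"

    seccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
    if len(os.Args) != 2 {
        fmt.Fprintln(os.Stderr, "usage: syscallname <number>")
        os.Exit(1)
    }
    nr, err := strconv.Atoi(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, "not a number:", os.Args[1])
        os.Exit(1)
    }
    // GetName resolves the number for the native architecture,
    // much like scmp_sys_resolver does.
    name, err := seccomp.ScmpSyscall(nr).GetName()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println(name) // prints "reboot" for 169 on amd64
}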
Adjusting the hello^Hreboot interface
At this stage we can iterate on our interface. We have a test case (the snap we just wrote), we have the means to change the code (editing the hello.go file) and to make it live in the system (the refresh-bits command).
Let's patch our interface to be a little bit more meaningful now. We will rename it from hello to reboot. This will affect the file name, the type name and the string returned from Name(). We will also grant a connected plug permission, through the seccomp security backend, to use the reboot system call. You can try to make the necessary changes yourself but I provide the essential part of the patch below. The key thing to keep in mind is that the ConnectedPlugSnippet method must return, when asked about SecuritySecComp, the name of the system call to allow, like this: []byte(`reboot`).
diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
return nil, nil
case interfaces.SecuritySecComp:
- return nil, nil
+ return []byte(`reboot`), nil
case interfaces.SecurityDBus:
We can now return to the terminal that had refresh-bits running, interrupt the running command with control-c and simply start the whole command again. This will build the updated copy of snapd.
Sure enough, if we now ask snap about known interfaces we will see the reboot interface.
Let's now update the graceful-reboot snap to talk about the reboot interface and re-install it (note that you don't have to change the version string at all, just re-install the rebuilt snap). If we ask snapd about interfaces now we will see that the core snap exposes the reboot slot and the graceful-reboot snap has a reboot plug but they are disconnected.



Let's connect them now.
sudo snap connect graceful-reboot:reboot ubuntu-core:reboot
Let's ensure that it worked by running the interfaces command again:
$ sudo snap interfaces | grep reboot
:reboot              graceful-reboot

Success!
Now before we give it a try, let's have a look at the generated seccomp profile. The profile is derived from the base seccomp template that is a part of the snapd source code and from the set of plugs, slots and connections made to or from this snap. Each application of each snap gets a profile; we can see that for ourselves by looking at
/var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot
$ grep reboot /var/lib/snapd/seccomp/profiles/snap.graceful-reboot.graceful-reboot
reboot

There are a few gotchas that we should point out though:
  • Security profiles are derived from interfaces but are only changed when a new connection is made (and that connection affects a particular snap), when the snap is initially installed or every time it is updated.
  • In practice we will either disconnect / reconnect the hello interface or reinstall the snap (whichever is more convenient)
  • Snapd remembers connections that were made explicitly and will re-establish them across snap updates. If you rename an interface while working on it, snapd may print a message (to system log, not to the console) about being unable to reconnect the "hello" interface because that interface no longer exists in snapd. To make snapd forget all those connections simply remove and reinstall the affected snap.
  • You can experiment by editing seccomp profiles directly. Just edit the file mentioned above and add additional system calls. Once you are happy with the result you can adjust snapd source code to match.
  • You can also do that with apparmor profiles but you have to re-load the profile into the kernel each time using the command apparmor_parser -r /path/to/the/profile

Okay, so did it work? Let's try the graceful-reboot command again.
$ graceful-reboot
Insufficient permissions to reboot the system

The process didn't get killed straight away but the reboot call didn't work yet either. Let's look at the system log and see if there are any hints.
sie 10 10:25:55 x200t kernel: audit: type=1400 audit(1470817555.936:47): apparmor="DENIED" operation="capable" profile="snap.graceful-reboot.graceful-reboot" pid=14867 comm="graceful-reboot" capability=22  capname="sys_boot"
This time we could use the system call but apparmor stepped in and said that we need the sys_boot capability. Let's modify the interface to allow that then. The precise apparmor syntax for this is "capability sys_boot,". You absolutely have to include the trailing comma. Forgetting it was my most common mistake when hacking on apparmor profiles.
Let's patch the interface and re-run refresh-bits and re-install the snap. You can see the patch I applied below
diff --git a/interfaces/builtin/reboot.go b/interfaces/builtin/reboot.go
index 91962e1..ba7a9e3 100644
--- a/interfaces/builtin/reboot.go
+++ b/interfaces/builtin/reboot.go
@@ -91,7 +91,7 @@ func (iface *RebootInterface) PermanentSlotSnippet(slot *interfaces.Slot, securi
func (iface *RebootInterface) ConnectedPlugSnippet(plug *interfaces.Plug, slot *interfaces.Slot, securitySystem interfaces.SecuritySystem) ([]byte, error) {
switch securitySystem {
case interfaces.SecurityAppArmor:
- return nil, nil
+ return []byte(`capability sys_boot,`), nil
case interfaces.SecuritySecComp:
return []byte(`reboot`), nil
case interfaces.SecurityDBus:

TIP: I'm using sudo with a full path because of a bug in sudo where /snap/bin is not kept on the path.
Warning: the next command will reboot your system straight away!
$ sudo /snap/bin/graceful-reboot

Final touches
Since interfaces like this are very common, snapd has some support for writing less repetitive code. An exactly identical interface can be written with just this short snippet:
// NewRebootInterface returns a new "reboot" interface.
func NewRebootInterface() interfaces.Interface {
    return &commonInterface{
        name:                  "reboot",
        connectedPlugAppArmor: `capability sys_boot,`,
        connectedPlugSecComp:  `reboot`,
        reservedForOS:         true,
    }
}

Everywhere we used to say &RebootInterface{} we can now say NewRebootInterface(). The abbreviation helps in code reviews and in conveying the meaning and intent of the interface.
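Registration itself does not change. A minimal sketch, assuming the init() approach shown earlier for the hello interface is still how builtin interfaces are added to allInterfaces:

// in interfaces/builtin/reboot.go (sketch, mirroring the earlier hello patch)
func init() {
    allInterfaces = append(allInterfaces, NewRebootInterface())
}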
You can find the source code of this interface in my github repository. The code of the graceful reboot snap is here. Feel free to comment below, or on Google+ or ask me questions directly.

David Tomaschik: HSC Part 3: DEF CON

Planet Ubuntu - Wed, 08/10/2016 - 00:00

This is the 3rd, and final, post in my Hacker Summer Camp 2016 series. Part 1 covered my class at Black Hat, and Part 2 the 2016 BSidesLV Pros versus Joes CTF. Now it’s time to talk about the capstone of the week: DEF CON.

DEF CON is the world’s largest (but not oldest) Hacker conference. This year was the biggest yet, with Dark Tangent stating that they produced 22,000 lanyards – and ran out of lanyards. That’s a lot of attendees. It covered both the Paris and Bally’s conference areas, and that still didn’t feel like enough.

DEF CON is also what I measure my year by. You can have your New Year’s, I measure mine from August to August (though apparently next year it’ll be the end of July…). Probably the single biggest regret in my life is that I didn’t find a way to go to DEF CON before DEF CON 20. The people and experiences there are memorable and well worth it.

Crowds

I don’t talk about it a whole lot, but I actually have pretty bad social anxiety. I do a terrible job of talking with people I don’t know, introducing myself to people, etc. At most events, I’m what you would call a “wallflower”. That doesn’t combine very well with 22,000 people. That especially doesn’t combine well with the chokepoints in a number of places, especially the packet capture village. It was hard to get through the week, but I told myself I wasn’t going to let my social anxiety ruin my con, and I think I did a good job of that.

Capture the Packet

So, since DEF CON 21, I’ve always played in Capture the Packet. At DC22, I even managed a 2nd place overall finish (just one spot away from the coveted black Uber badge). This year, I went back to play and discovered major changes:

  • Rounds are now two hours instead of 1.
  • There is now a qualifying round, semifinals, and finals.
  • They’re really pulling out the obscure protocols.
  • Significantly lower submit attempt limits (like 1-2 in most questions).

On the other hand, they had serious gameplay issues that really makes me regret spending so much of my con this year on Capture the Packet. These include:

  • Every round started late. The finals were supposed to start at 10:00 on Sunday, they started at 12:30. That was two and a half hours of sitting there waiting.
  • There were many answers where the answer in the database contained typos.
  • There were many answers where the question contained typos that made it difficult or impossible to find the traffic (wrong IP, wrong MAC, etc.)
  • There were many questions that were very poorly phrased. It was nearly impossible to parse some of the questions. One question asked for the “last three hex” of a value, but it wasn’t clear: last 3 bytes in hex format, last 3 hex characters, etc.

Combining the problems with questions and the lowered submission limits, it meant that several times we were locked out of questions just because it wasn’t clear what format the answer should be or how much data they wanted. The organizers clearly need to:

  • Increase the limits. (I’m not asking for unlimited tries, but on text answers, give us at least 3-5.)
  • Build some sort of fuzzy matching (case insensitive, automatically strip whitespace leading/trailing, etc.)
  • Write questions more clearly.

I’m actually amazed that Aries Security is able to sell CTP as a commercial offering for training to government and companies. It’s a wonderful concept and they try hard, but I’m so disappointed in the outcome. I spent ~12 hours sitting in the CTP area, but only 6 of those were actually playing. The other 6 were waiting for games to start, and then the games were disheartening when they didn’t work correctly. I’ll probably play again next year, but I really hope they’ve put some polish on the game by then.

Parties

As usual, DEF CON had a variety of parties to choose from. Most importantly, I got my hit of Dual Core in at the Friday night EDM night, and spent a little bit of time at the Queercon pool party. (Though it was too hot and humid to spend much time by the pool unless you were in the pool, and I’m not someone anyone wants in the pool…)

Just keeping track of all of the parties has become a major task, but the DCP guys have you covered there. I’d love to see some more parties that are a little more “chill”: less loud music, more just hanging out and having a drink with friends. (Or maybe I was just at all the wrong ones this year.)

Next Year

I can’t wait for next year – DEF CON 25 promises to be big, and we’re moving over to Caesars (2 years is all we got out of Bally’s/Paris). I’m trying to come up with ideas of how I can make my own personal DEF CON 25 bigger and better, without ripping off ideas like the AND!XOR badge, but I want to do something cool. Suggestions to @Matir or find my email if you know me. :) Hopefully I’ll see all of you, my hacker friends, out in Las Vegas for another fun Hacker Summer Camp.

David Tomaschik: HSC Part 2: Pros versus Joes CTF

Planet Ubuntu - Wed, 08/10/2016 - 00:00

Continuing my Hacker Summer Camp Series, I’m going to talk about one of my Hacker Summer Camp traditions. That’s right, it’s the Pros versus Joes CTF at BSidesLV. I’ve written about my experiences and even a player’s guide before, but this was my first year as a Pro, captaining a blue team (The SYNdicate).

It’s important to me to start by congratulating all of the Joes – this is an intense two days, and your pushing through it is a feat in and of itself. In past years, we had players burn out early, but I’m proud to say that nearly all of the Joes (from every team) worked hard until the final scorched earth. Every one of the players on my team was outstanding and worked their ass off for this CTF, and it paid off, as The SYNdicate was declared the victors of the 2016 BSides LV Pros versus Joes.

What worked well

Our team put in incredible amounts of effort into preparation. We built hardening scripts, discussed strategy, and planned our “first hour”. Keep in mind that PvJ simulates you being brought in to harden a network under active attack, so the first hour is absolutely critical. If you are well and thoroughly pwned in that time, getting the red cell out is going to be hard. There’s a lot of ways to persist, and finding them all is time consuming (especially since neither I nor my lieutenant does much IR).

We really jelled as a team and worked very, very well together on the 2nd day. We hardened faster than I thought was possible and got our network very locked down. In that day, we only lost 1000 points via beacons (10 minutes on one Windows XP host). Our network was reportedly very secure, but I don’t know how thoroughly the other teams were checking versus the “low hanging fruit” approach.

What didn’t work well

The first day, we did not coordinate well. We had machines that hadn’t been touched for hardening even after 4 hours. I failed when setting up the firewall and blocked ICMP for a while, causing all of our services to score as down. I’ve said it before and I’ll say it again: coordination and organization are the most important aspects of working as a team in this environment.

The Controversy

There was an issue with scoring during the competition where tickets were being counted incorrectly. For example, my team had ticket points deducted even when we had 0 open tickets: the normal behavior being that only when you had a ticket open would you lose points. This resulted in massive ticket deductions showing up on the scoreboard, which Dichotomy was only able to correct after gameplay had ended. This was a very controversial issue because it resulted in the team that was leading on the scoreboard dropping to last place and pushed my team to the top. The final scoring (announced on Twitter) was in accordance with the written rules as opposed to the scoreboard, but it still was confusing for every team involved.

Conclusion

Overall, this was a good game, and I’m very proud of my lieutenant, my joes, and all of the other teams for playing so well. I’m also very appreciative of the hard work from Dichotomy, Gold Cell, and Grey Cell in doing all of the things necessary to make this game possible. This game is the closest thing to a live fire security exercise I’ve ever seen at a conference, and I think we all have something to learn from that environment.

Stephen Kelly: Opt-in header-only libraries with CMake

Planet Ubuntu - Tue, 08/09/2016 - 13:00

Using a C++ library, particularly a 3rd party one, can be a complicated affair. Library binaries compiled on Windows/OSX/Linux cannot simply be copied over to another platform and used there. Linking works differently, compilers bundle different code into binaries on each platform, etc.

This is not an insurmountable problem. Libraries like Qt distribute dynamically compiled binaries for major platforms and other libraries have comparable solutions.

There is a category of libraries which considers the portable binaries issue to be a terminal one. Boost is a widespread source of many ‘header only’ libraries, which don’t require a user to link to a particular platform-compatible library binary. There are also many other examples of such ‘header only’ libraries.

Recently there was a blog post describing an example library which can be built as a shared library, or as a static library, or used directly as a ‘header only’ library which doesn’t require the user to link against anything to use the library. The claim is that it is useful for libraries to provide users the option of using a library as a ‘header only’ library and adding preprocessor magic to make that possible.

However, there is yet a fourth option, and that is for the consumer to compile the source files of the library themselves. This has the
advantage that the .cpp file is not #included into every compilation unit, but still avoids the platform-specific library binary.

I decided to write a CMake buildsystem which would achieve all of that for a library. I don’t have an opinion on whether it is a good idea in general for libraries to do things like this, but if people want to do it, it should be as easy as possible.

Additionally, of course, the CMake GenerateExportHeader module should be used, but I didn’t want to change the source from Vittorio so much.

The CMake code below compiles the library in several ways and installs it to a prefix which is suitable for packaging:

cmake_minimum_required(VERSION 3.3)
project(example_lib)

# define the library
set(library_srcs
  example_lib/library/module0/module0.cpp
  example_lib/library/module1/module1.cpp
)

add_library(library_static STATIC ${library_srcs})
add_library(library_shared SHARED ${library_srcs})

add_library(library_iface INTERFACE)
target_compile_definitions(library_iface INTERFACE
  LIBRARY_HEADER_ONLY
)

set(installed_srcs
  include/example_lib/library/module0/module0.cpp
  include/example_lib/library/module1/module1.cpp
)
add_library(library_srcs INTERFACE)
target_sources(library_srcs INTERFACE
  $<INSTALL_INTERFACE:${installed_srcs}>
)

# install and export the library
install(DIRECTORY example_lib/library
  DESTINATION include/example_lib
)
install(FILES
    example_lib/library.hpp
    example_lib/api.hpp
  DESTINATION include/example_lib
)
install(TARGETS
    library_static
    library_shared
    library_iface
    library_srcs
  EXPORT library_targets
  RUNTIME DESTINATION bin
  ARCHIVE DESTINATION lib
  LIBRARY DESTINATION lib
  INCLUDES DESTINATION include
)
install(EXPORT library_targets
  NAMESPACE example_lib::
  DESTINATION lib/cmake/example_lib
)
install(FILES example_lib-config.cmake
  DESTINATION lib/cmake/example_lib
)

This blog post is not a CMake introduction, so to see what all of those commands are about start with the cmake-buildsystem and cmake-packages documentation.

There are 4 add_library calls. The first two build the library as a static library and then as a shared library.

The next two are INTERFACE libraries, a concept I introduced in CMake 3.0 when it looked like Boost might use CMake. The INTERFACE target can be used to specify header-only libraries because they specify usage requirements for consumers to use, such as include directories and compile definitions.

The library_iface library functions as described in the blog post from Vittorio, in that users of that library will be built with LIBRARY_HEADER_ONLY and will therefore #include the .cpp files.

The library_srcs library causes the consumer to compile the .cpp files separately.

A consumer of a library like this would then look like:

cmake_minimum_required(VERSION 3.3)
project(example_user)

find_package(example_lib REQUIRED)

add_executable(myexe
  src/src0.cpp
  src/src1.cpp
  src/main.cpp
)

## uncomment only one of these!
# target_link_libraries(myexe
#   example_lib::library_static)
# target_link_libraries(myexe
#   example_lib::library_shared)
# target_link_libraries(myexe
#   example_lib::library_iface)
target_link_libraries(myexe
  example_lib::library_srcs)

So, it is up to the consumer how they consume the library, and they determine that by using target_link_libraries to specify which one they depend on.


Simos Xenitellis: How to install LXD/LXC containers on Ubuntu on cloudscale.ch

Planet Ubuntu - Tue, 08/09/2016 - 10:26

In previous posts, we saw how to configure LXD/LXC containers on a VPS on DigitalOcean and Scaleway. There are many more VPS companies.

cloudscale.ch is one more company that provides Virtual Private Servers (VPS). They are based in Switzerland.

In this post we are going to see how to create a VPS on cloudscale.ch and configure to use LXD/LXC containers.

We now use the term LXD/LXC containers (instead of LXC containers as in previous articles) in order to show that LXD is a management service for LXC containers; LXD works on top of LXC, somewhat similar to GNU/Linux, where GNU software runs on top of the Linux kernel.

Set up the VPS

We are creating a VPS called myubuntuserver, using the Flex-2 Compute Flavor. This is the most affordable, at 2GB RAM with 1 vCPU core. It costs 1 CHF per day, which is about 0.92€ (or US$1).

The default capacity is 10GB, which is included in the 1 CHF per day. If you want more capacity, there is extra charging.

We are installing Ubuntu 16.04 and accept the rest of the default settings. Currently, there is only one server location, at Rümlang, near Zurich (the largest city in Switzerland).

Here is the summary of the freshly launched VPS server. The IP address is shown as well.

Connect and update the VPS

In order to connect, we need to SSH to that IP address using the fixed username ubuntu. There is an option to use either password authentication or public-key authentication. Let’s connect.

myusername@mycomputer:~$ ssh ubuntu@5.102.145.245
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@myubuntuserver:~$

Let’s update the package list,

ubuntu@myubuntuserver:~$ sudo apt update
Hit:1 http://ch.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://ch.archive.ubuntu.com/ubuntu xenial-updates InRelease [95.7 kB]
...
Get:31 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [1176 B]
Fetched 10.5 MB in 2s (4707 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
67 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@myubuntuserver:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
ubuntu@myubuntuserver:~$

In this case, we updated 67 packages, among which was lxd. It was important to perform the upgrade of packages.

Configure LXD/LXC

Let’s see how much free disk space is there,

ubuntu@myubuntuserver:~$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1       9.7G  1.2G  8.6G  12% /
ubuntu@myubuntuserver:~$

There is 8.6GB of free space, let’s allocate 5GB of that for the ZFS pool. First, we need to install the package zfsutils-linux. Then, initialize lxd.

ubuntu@myubuntuserver:~$ sudo apt install zfsutils-linux
Reading package lists... Done
...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@myubuntuserver:~$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: myzfspool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 5
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
...accept the network autoconfiguration settings that you will be asked...
LXD has been successfully configured.
ubuntu@myubuntuserver:~$

That’s it! We are good to go and configure our first LXD/LXC container.

Testing a container as a Web server

Let’s test LXD/LXC by creating a container, installing nginx and accessing from remote.

ubuntu@myubuntuserver:~$ lxc launch ubuntu:x web
Creating web
Retrieving image: 100%
Starting web
ubuntu@myubuntuserver:~$

We launched a container called web.

Let’s connect to the container, update the package list and upgrade any available packages.

ubuntu@myubuntuserver:~$ lxc exec web -- /bin/bash
root@web:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
...
9 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@web:~# apt upgrade
Reading package lists... Done
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
root@web:~#

Still inside the container, we install nginx.

root@web:~# apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@web:~#

Let’s make a small change in the default index.html,

root@web:/var/www/html# diff -u /var/www/html/index.nginx-debian.html.ORIGINAL /var/www/html/index.nginx-debian.html
--- /var/www/html/index.nginx-debian.html.ORIGINAL 2016-08-09 17:08:16.450844570 +0000
+++ /var/www/html/index.nginx-debian.html 2016-08-09 17:08:45.543247231 +0000
@@ -1,7 +1,7 @@
 <!DOCTYPE html>
 <html>
 <head>
-<title>Welcome to nginx!</title>
+<title>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</title>
 <style>
     body {
         width: 35em;
@@ -11,7 +11,7 @@
 </style>
 </head>
 <body>
-<h1>Welcome to nginx!</h1>
+<h1>Welcome to nginx on an LXD/LXC container on Ubuntu at cloudscale.ch!</h1>
 <p>If you see this page, the nginx web server is successfully installed and
 working. Further configuration is required.</p>
root@web:/var/www/html#

Finally, let’s add a quick and dirty iptables rule to make the container accessible from the Internet.

root@web:/var/www/html# exit
ubuntu@myubuntuserver:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.5.242.156 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
ubuntu@myubuntuserver:~$ ifconfig ens3
ens3      Link encap:Ethernet  HWaddr fa:16:3e:ad:dc:2c
          inet addr:5.102.145.245  Bcast:5.102.145.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fead:dc2c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:102934 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35613 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:291995591 (291.9 MB)  TX bytes:3265570 (3.2 MB)
ubuntu@myubuntuserver:~$

Therefore, the iptables command that will allow access to the container is,

ubuntu@myubuntuserver:~$ sudo iptables -t nat -I PREROUTING -i ens3 -p TCP -d 5.102.145.245/32 --dport 80 -j DNAT --to-destination 10.5.242.156:80
ubuntu@myubuntuserver:~$

Here is the result when we visit the new Web server from our computer,

Benchmarks

We are benchmarking the CPU, the memory and the disk. Note that our VPS has a single vCPU.

CPU

We are benchmarking the CPU using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=cpu run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 10000

Test execution summary:
    total time:                          10.9448s
    total number of events:              10000
    total time taken by event execution: 10.9429
    per-request statistics:
         min:                                  0.96ms
         avg:                                  1.09ms
         max:                                  2.79ms
         approx.  95 percentile:               1.27ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   10.9429/0.00

ubuntu@myubuntuserver:~$

The total time for the CPU benchmark with one thread was 10.94s. With two threads, it was 10.23s. With four threads, it was 10.07s.

Memory

We are benchmarking the memory using sysbench with the following parameters.

ubuntu@myubuntuserver:~$ sysbench --num-threads=1 --test=memory run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing memory operations speed test
Memory block size: 1K

Memory transfer size: 102400M

Memory operations type: write
Memory scope type: global
Threads started!
Done.

Operations performed: 104857600 (1768217.45 ops/sec)

102400.00 MB transferred (1726.77 MB/sec)

Test execution summary:
    total time:                          59.3013s
    total number of events:              104857600
    total time taken by event execution: 47.2179
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.00ms
         max:                                  0.80ms
         approx.  95 percentile:               0.00ms

Threads fairness:
    events (avg/stddev):           104857600.0000/0.00
    execution time (avg/stddev):   47.2179/0.00

ubuntu@myubuntuserver:~$

The total time for the memory benchmark with one thread was 59.30s. With two threads, it was 62.17s. With four threads, it was 62.57s.

Disk

We are benchmarking the disk using dd with the following parameters.

ubuntu@myubuntuserver:~$ dd if=/dev/zero of=testfile bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1,1 GB, 1,0 GiB) copied, 21,1995 s, 50,6 MB/s
ubuntu@myubuntuserver:~$

It took about 21 seconds to write a 1GB test file in 1024 blocks of 1MB each, with the DSYNC flag. The throughput was 50.6MB/s. Subsequent invocations were around 50MB/s as well.

ZFS pool free space

Here is the free space in the ZFS pool after creating one container, the one with nginx installed and its packages updated:

ubuntu@myubuntuserver:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
myzfspool  4,97G   811M  4,18G         -    11%    15%  1.00x  ONLINE  -
ubuntu@myubuntuserver:~$

Again, after a second (new and empty) container was created:

ubuntu@myubuntuserver:~$ sudo zpool list
NAME        SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
myzfspool  4,97G   822M  4,17G         -    11%    16%  1.00x  ONLINE  -
ubuntu@myubuntuserver:~$

Thanks to copy-on-write with ZFS, new containers do not take up much space. Only files that are added or updated contribute additional space.

Conclusion

We saw how to launch an Ubuntu 16.04 VPS on cloudscale.ch, then configure LXD.

We created a container with nginx, and configured iptables so that the Web server is accessible from the Internet.

Finally, we saw some benchmarks for the vCPU, the memory and the disk.

Dustin Kirkland: Howdy, Windows! A Six-part Series about Ubuntu-on-Windows for Linux.com

Planet Ubuntu - Tue, 08/09/2016 - 06:26

I hope you'll enjoy a shiny new 6-part blog series I recently published at Linux.com.
  1. The first article is a bit of back story, perhaps a behind-the-scenes look at the motivations, timelines, and some of the work performed between Microsoft and Canonical to bring Ubuntu to Windows.
  2. The second article is an updated getting-started guide, with screenshots, showing a Windows 10 user exactly how to enable and run Ubuntu on Windows.
  3. The third article walks through a dozen or so examples of the most essential command line utilities a Windows user, new to Ubuntu (and Bash), should absolutely learn.
  4. The fourth article shows how to write and execute your first script, "Howdy, Windows!", in 6 different dynamic scripting languages (Bash, Python, Perl, Ruby, PHP, and NodeJS).
  5. The fifth article demonstrates how to write, compile, and execute your first program in 7 different compiled programming languages (C, C++, Fortran, Golang).
  6. The sixth and final article conducts some performance benchmarks of the CPU, Memory, Disk, and Network, in both native Ubuntu on a physical machine, and Ubuntu on Windows running on the same system.
I really enjoyed writing these.  Hopefully you'll try some of the examples, and share your experiences using Ubuntu native utilities on a Windows desktop.  You can find the source code of the programming examples in Github and Launchpad:
Cheers,
Dustin

The Fridge: Ubuntu Weekly Newsletter Issue 477

Planet Ubuntu - Mon, 08/08/2016 - 19:42

Welcome to the Ubuntu Weekly Newsletter. This is issue #477 for the week August 1 – 7, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Chris Sirrs
  • Aaron Honeycutt
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA 3.0).


Zygmunt Krynicki: Snap execution environment

Planet Ubuntu - Mon, 08/08/2016 - 08:14

This is the fourth article in the series about snappy interfaces. You can check out articles one, two and three though they are not directly required.

In this installment we will explore the layout and properties of the file system at the time a snap application is executed. From the point of view of the user nothing special is happening. The user either runs an application by clicking on a desktop icon or by running a shell command. Internally, snapd uses a series of steps (which I will not explain today as they are largely an implementation detail) to configure the application process.

The (ch)root filesystem and a bit of magic
Let’s start with the most important fact: the root filesystem is not the filesystem of the host distribution. Using the host filesystem would lead to lots of inconsistencies. Those are rather obvious: different base libraries, potentially different filesystem layout, etc. At snap application runtime the root filesystem is changed to the core snap itself. You can think of this as a kind of chroot. Obviously the chroot itself would be insufficient as snaps are read only filesystems and the core snap is no different.

Certain directories in the core snap are bind mounted (you can think of this as a special type of a symbolic link or a hard link to a directory, though neither is fully accurate) to locations on the host file system. This includes the /home directory, the /run directory and a few others (see Appendix A for the full list). Notably this does not include /usr/lib or /usr/bin. If a snap needs a library or an executable to function, that library or executable has to be present in the snap itself. The only exception to that are very low level libraries like libc that are present in the core snap.

TIP: explore the core snap to see what is there. Having installed at least one snap you can go to /snap/ubuntu-core/current to see the list of files provided there.
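For example (a minimal sketch, assuming at least one snap is installed so the core snap is present):

ls /snap/ubuntu-core/current
ls /snap/ubuntu-core/current/usr/bin | head

Compared to a typical desktop installation, the list of binaries is fairly small.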
With all those mounts and chroots in place, one might wonder what mounts look like for all the other processes in the system. The answer is simple: they look as if nothing special related to snappy were happening.

To understand the answer you have to know a little about Linux namespaces. Namespaces are a way to create a separate view of a given aspect of a Linux system at a per-process level. Unlike full-blown virtual machines (where you run a whole emulated computer with a potentially different operating system kernel and emulated peripheral devices), namespaces are fine grained. Snappy uses just one of the available namespaces, the mount namespace. Now I won’t fool you: while the idea seems simple (“mounts in the namespace are isolated from the mounts outside of the namespace”), the reality is far more complex because of so-called shared subtrees. One interesting consequence is that mounts performed after a snap application is started (e.g. in the /media directory) are visible to the said application (e.g. to VLC) while the reverse is not true. If a malicious snap tries (and manages, despite various defenses put in place) to mount something in, say, /usr, that change will be visible only to the snap application process.

Don’t worry if you don’t fully understand this topic. The main point is that your application sees a different view of the filesystem but that view is consistent across distributions.
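If you want to see the namespace split for yourself, here is a minimal sketch (the <PID> placeholder is hypothetical; substitute the process ID of a running snap application):

readlink /proc/self/ns/mnt
sudo readlink /proc/<PID>/ns/mnt

Different identifiers in the two outputs mean the two processes live in different mount namespaces.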

TIP: you may have seen how the core snap looks on disk if you followed the earlier tip. Now see the real file system at runtime! Install the snapd-hacker-toolbelt snap and run snapd-hacker-toolbelt.busybox sh. This will give you a shell with many of the familiar commands that let you peek and poke at the environment.

Now there are a few more tweaks I should point out but won't go into too much detail:


  • Each process gets a private /tmp directory with a fresh tmpfs mounted there. This is a security measure. One simple consequence is that you cannot expect to share files by dropping them there, and that you cannot create arbitrarily large files there since tmpfs is backed by a fraction of available system memory (see the short sketch after this list).
  • There’s also a private instance of the /dev/pts directory with pseudo terminal emulators. This is another security measure. In practice you will not care about this much; it’s just a part of the Linux plumbing that has to be set up in a given way.
  • The whole host filesystem is mounted at /var/lib/snapd/hostfs. This can be used by interfaces similar to the content sharing interface, for example. This is super interesting and we will devote a whole article to using this later on.
  • There’s special code that exists to support Nvidia proprietary drivers. I will discuss this with a separate installment that may be of interest to game developers.
  • The current working directory may be impossible to preserve across all the chroot, mount and bind mount magic. The easiest way to experience this is to create a directory in /tmp (e.g. /tmp/foo) and try to run any snap command there. Because of the private (and empty) /tmp directory the /tmp/foo directory does not exist for the snap application process. Snap-confine will print an informative error message and refuse to run.
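As a rough illustration of the private /tmp and the hostfs mount mentioned above (a sketch assuming the snapd-hacker-toolbelt snap from the earlier tip is installed):

touch /tmp/marker-from-the-host
snapd-hacker-toolbelt.busybox sh
# inside the busybox shell:
ls /tmp                        # the marker is absent: this is the private tmpfs
ls /var/lib/snapd/hostfs/tmp   # the real host /tmp (if confinement allows reading it)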

This is now much more comfortable. Many of the usual places exist and contain the data applications are familiar with. This doesn’t mean those directories are readable or writable by the application process; they are just present. Confinement and interfaces decide whether something is readable or writable. This brings us to the second big part of snap-confine.
Process confinement
Snap-confine (as of version 1.0.39) supports two sandboxing technologies: seccomp and apparmor.

Seccomp is used to constrain system calls that an application can use. System calls are the interface between the kernel and user space. If you are unfamiliar with the concept then don’t worry. In very rough terms, some of the functions of your programming language are implemented as system calls, and seccomp is the Linux subsystem responsible for mediating access to them.

Right now, when your application runs in devmode, you will not get any advice about the system calls you rely on that are not allowed by the set of interfaces in use. The so-called complain mode of seccomp is being actively developed upstream, so the situation may change by the time you are reading this.

In strict confinement any attempt to use a disallowed system call will instantly kill the offending process. This is accompanied by a rather cryptic message that you can see in the system log:
sie 08 12:36:53 gateway kernel: audit: type=1326 audit(1470652613.076:27): auid=1000 uid=1000 gid=1000 ses=63 pid=66834 comm="links" exe="/snap/links/2/usr/bin/links" sig=31 arch=c000003e syscall=54 compat=0 ip=0x7f8dcb8ffc8a code=0x0

What this tells us is that process ID 66834 was killed with signal 31 (SIGSYS) because it tried to use system call 54 (setsockopt). Note that system call numbers are architecture-specific; the output above was from an amd64 machine.
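If you want to translate a syscall number yourself, one option on an amd64 Ubuntu machine is to grep the kernel header that defines the numbers (a sketch; the path below is the usual Ubuntu/Debian location and may differ elsewhere):

grep -w 54 /usr/include/x86_64-linux-gnu/asm/unistd_64.h
# expected to print something like: #define __NR_setsockopt 54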

Even when a system call is allowed, the particular operation may be intercepted and denied by apparmor. For example, the sandbox is set up so that applications can freely write to $SNAP_USER_DATA (or $SNAP_DATA for services) but cannot, by default, either read or write from the real home directory.
sie 08 12:56:40 gateway kernel: audit: type=1400 audit(1470653800.724:28): apparmor="DENIED" operation="open" profile="snap.snapd-hacker-toolbelt.busybox" name="/home/zyga/.ssh/authorized_keys" pid=67013 comm="busybox" requested_mask="r" denied_mask="r" fsuid=1000 ouid=1000
Here we see that process ID 67013 tried to “open” /home/zyga/.ssh/authorized_keys and that the “r” (read) mask was denied. In devmode that access is allowed but accompanied by an appropriate warning message instead.

TIP: Whenever you run into problems like this you should give the snappy-debug snap a try. It is designed to read and understand messages like that and give you useful advice.
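If you prefer to watch the raw messages yourself, a minimal sketch (requires root; either command follows the kernel log live while you exercise the snap in another terminal):

sudo journalctl -k -f | grep -i apparmor
# or, on systems without journald:
# sudo dmesg -w | grep -i apparmor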
Apparmor has a much wider feature set and can perform checks on Linux capabilities, traditional UNIX IPC like signals and sockets, and DBus messages (including details of the object and method invoked). The vast majority of the current snap confinement is made with apparmor profiles. We will look at all the features in greater detail in the next few articles in this series, where we will actively implement simple new interfaces from scratch.

There's one last thing that snap-confine does: in some cases it creates...

A device control group
In certain cases a device control group is created and the application process is moved there. The control group has only a small set of typical device nodes (e.g. /dev/null) and explicitly defined additional devices. This is done by using udev rules to tag the appropriate devices. This is listed for completeness; you should never need to worry about or even think about this topic. Once the need arises we will explore it and how it can be used to simplify handling of certain situations. Again, by default no device control group is used.

Putting it all together
With all of those changes in place snap-confine executes (using execv) a wrapper script corresponding to the application command entry in the snapcraft.yaml file. This happens each time you run an application.

If you are interested in learning more about snap-confine I encourage you to check out its manual page (snap-confine.5) and source code. If you have any questions please feel free to ask on the snapcraft.io mailing list or in the comments on this blog below.

Next time we will look at creating our first, extremely simple, interface in practice.
Appendix A: List of host directories that are bind-mounted
NOTE: This list will slowly get less and less accurate as more of the mount points become dynamic and controlled by available interfaces.
  • /dev
  • /etc (except for /etc/alternatives)
  • /home
  • /root
  • /proc
  • /snap
  • /sys
  • /var/snap
  • /var/lib/snapd
  • /var/tmp
  • /var/log
  • /run
  • /media
  • /lib/modules
  • /usr/src

Paul Tagliamonte: Using PKCS#11 on GNU/Linux

Planet Ubuntu - Sun, 08/07/2016 - 17:17

PKCS#11 is a standard API to interface with HSMs, Smart Cards, or other types of random hardware backed crypto. On my travel laptop, I use a few Yubikeys in PKCS#11 mode using OpenSC to handle system login. libpam-pkcs11 is a pretty easy-to-use module that will let you log into your system locally using a PKCS#11 token.
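Before involving any browser it is worth a quick sanity check that OpenSC actually sees the token (a sketch assuming the opensc package is installed and the module lives in the usual amd64 path):

pkcs11-tool --module /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so --list-slots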

One of the least documented things, though, was how to use an OpenSC PKCS#11 token in Chrome. First, close all web browsers you have open.

sudo apt-get install libnss3-tools
certutil -U -d sql:$HOME/.pki/nssdb
modutil -add "OpenSC" -libfile /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -dbdir sql:$HOME/.pki/nssdb
modutil -list "OpenSC" -dbdir sql:$HOME/.pki/nssdb
modutil -enable "OpenSC" -dbdir sql:$HOME/.pki/nssdb

Now, we'll have the PKCS#11 module ready for nss to use, so let's double check that the tokens are registered:

certutil -U -d sql:$HOME/.pki/nssdb
certutil -L -h "OpenSC" -d sql:$HOME/.pki/nssdb

If this winds up causing issues, you can remove it using the following command:

modutil -delete "OpenSC" -dbdir sql:$HOME/.pki/nssdb

Martin-Éric Racine: Debian within a Windows partition?

Planet Ubuntu - Sun, 08/07/2016 - 10:18
A few years ago, I remember that Ubuntu had a trick that allowed the distribution to be installed as one large file within a Windows partition. Does the same thing exist to install Debian?

Joe Barker: Configuring msmtp on Ubuntu 16.04

Planet Ubuntu - Sun, 08/07/2016 - 08:56

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in my previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mt ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults

# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

# Setup WP account's settings.
account <MSMTP_ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_USERNAME>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <MSMTP_ACCOUNT_NAME>

Any of the uppercase items (i.e. <PASSWORD>) are things that need replacing specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc
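Before wiring msmtp into PHP you may want to confirm it can send mail on its own. A hedged sketch (replace <MSMTP_ACCOUNT_NAME> and the address with your own values):

printf 'To: you@example.com\nSubject: msmtp test\n\nHello from msmtp.\n' | msmtp -a <MSMTP_ACCOUNT_NAME> you@example.com

Any authentication or TLS problems will show up here, which is much easier to debug than going through Apache and PHP first.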

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large as well as keeping the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional, you may choose to not do this, or you may choose to configure the logs differently.

/var/log/msmtp/*.log {
        rotate 12
        monthly
        compress
        missingok
        notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <MSMTP_ACCOUNT_NAME> -t"
Here I did run into an issue where even though I specified the account name it wasn’t sending emails correctly when I tested it. This is why the line account default : <MSMTP_ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved and run sudo service apache2 restart, then run php -a and execute the following

mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();

Any errors that occur at this point will be displayed in the output, so diagnosing problems after the test should be relatively easy. If all is successful, you should now be able to use PHP's sendmail (which at the very least WordPress uses) to send emails from your Ubuntu server using Gmail (or Google Apps).

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

David Mohammed: budgie-remix 16.10 wallpaper contest

Planet Ubuntu - Sat, 08/06/2016 - 07:59
The budgie-remix 16.10 "Yakkety Yak" wallpaper contest is open. We invite everyone to contribute – even if you are not currently a budgie-remix user. For guidelines, rules & submissions, see the official Flickr group: https://www.flickr.com/groups/budgie-remix-16-10/

Ubuntu 14.04.5 LTS released

The Fridge - Fri, 08/05/2016 - 06:32

The Ubuntu team is pleased to announce the release of Ubuntu 14.04.5 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.

We have expanded our hardware enablement offering since 12.04, and with 14.04.5, this point release contains an updated kernel and X stack for new installations to support new hardware across all our supported architectures, not just x86.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 14.04 LTS.

Kubuntu 14.04.5 LTS, Edubuntu 14.04.5 LTS, Xubuntu 14.04.5 LTS, Mythbuntu 14.04.5 LTS, Ubuntu GNOME 14.04.5 LTS, Lubuntu 14.04.5 LTS, Ubuntu Kylin 14.04.5 LTS, and Ubuntu Studio 14.04.5 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Core, Ubuntu Kylin, Edubuntu, and Kubuntu. All the remaining flavours will be supported for 3 years.

To get Ubuntu 14.04.5

In order to download Ubuntu 14.04.5, visit:

http://releases.ubuntu.com/14.04.5/

Users of Ubuntu 12.04 will be offered an automatic upgrade to 14.04.5 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/TrustyUpgrades

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 14.04.5 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

http://www.ubuntu.com/community/get-involved

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

http://www.ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

http://www.ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Thu Aug 4 23:29:30 UTC 2016 by Adam Conrad, on behalf of the Ubuntu Release Team

Ubuntu App Developer Blog: Introducing React Native Ubuntu

Planet Ubuntu - Fri, 08/05/2016 - 00:21

In the Webapps team at Canonical, we are always looking to make sure that web and near-web technologies are available to developers. We want to make everyone's life easier, enable the use of tools that are familiar to web developers and provide an easy path to using them on the Ubuntu platform.

We have support for web applications and creating and packaging Cordova applications, both of these enable any web framework to be used in creating great application experiences on the Ubuntu platform.

One popular web framework that can be used in these environments is React.js; React.js is a UI framework with a declarative programming model and strong component system, which focuses primarily on the composition of the UI, so you can use what you like elsewhere.

While these environments are great, sometimes you need just that bit more performance, or to be able to work with native UI components directly, but working in a less familiar environment might not be a good use of time. If you are familiar with React.js, it's easy to move into full native development with all your existing knowledge and tools by developing with React Native. React Native is the sister to React.js: you can use the same style and code to create an application that works directly with native components at native levels of performance, but with the ease and rapid development you would expect.

We are happy to announce that along with our HTML5 application support, it is now possible to develop React Native applications on the Ubuntu platform. You can port existing iOS or Android React Native applications, or you can start a new application leveraging your web-dev skills.

You can find the source code for React Native Ubuntu here.

To get started, follow the instructions in README-ubuntu.md and create your first application.

The Ubuntu support includes the ability to generate packages. Managed by the React Native CLI, building a snap is as easy as 'react-native package-ubuntu --snap'. It's also possible to build a click package for Ubuntu devices, meaning React Native Ubuntu apps are store-ready from the start.

Over the next little while there will be blog posts on everything you need to know about developing a React Native application for the Ubuntu platform: creating the app, the development process, packaging and releasing to the store. There will also be some information on how to develop new reusable modules, which can add extra functionality to the runtime and be distributed as Node Package Manager (npm) modules.

Go and experiment, and see what you can create.

Lubuntu Blog: Lubuntu 14.04.5 LTS has been released!

Planet Ubuntu - Thu, 08/04/2016 - 16:53
The latest and last point release of the Lubuntu Trusty Tahr LTS, 14.04.5, is now available for download. This release includes images for amd64 (commonly referred to as 64-bit), i386 (commonly referred to as 32-bit), and PowerPC. Please read the release notes, which covers the latest update as well as the whole history of the […]

Ubuntu GNOME: The Ubuntu GNOME 16.10 Wallpaper contest has started, guys!

Planet Ubuntu - Thu, 08/04/2016 - 10:47

We have created a Flickr group for your submissions: https://www.flickr.com/groups/ubuntu-gnome-16-10

From all submissions, 10 wallpapers will be selected to be included in the 16.10 (Yakkety Yak) release of Ubuntu GNOME.

Rules of the Contest:

It is important to note Ubuntu – and hence Ubuntu GNOME – is shipped to users from every part of the globe. Your images should be considerate of this diversity and refrain from the following:

  • No brand names or trademarks of any kind.
  • No branding assets like “Ubuntu GNOME” or text in order to permit use by derivative distributions.
  • No version numbers as some may prefer to continue to use your theme with an older version of Ubuntu.
  • No illustrations some may consider inappropriate, offensive, hateful, tortious, defamatory, slanderous or libelous.
  • No sexually explicit or provocative images.
  • No images of weapons or violence.
  • No alcohol, tobacco, or drug use imagery.
  • No designs which promotes bigotry, racism, hatred or harm against groups or individuals; or promotes discrimination based on race, gender, religion, nationality, disability, sexual orientation or age.
  • No religious, political, or nationalist imagery.
Constraints:
  • Each user can submit up to 2 wallpapers. The final dimension should be at least 2560×1440 px (and 16:9 proportion if possible). Any smaller size will not be considered.
  • Use PNG format for bitmap files, use JPG format for photos.
  • Submissions must adhere to the Creative Commons ShareAlike 4.0 (see www.flickr.com/creativecommons/). If not specified, it’ll be assumed that the work is released under CC-BY-SA 4.0.
  • Attribution must be declared if the submission is based on another design.

Riccardo Padovani: Auto deploy a Jekyll website with Gitlab CI

Planet Ubuntu - Thu, 08/04/2016 - 10:45

In the last three months, one of the tasks I had at archon.ai was to implement a pipeline to auto-deploy our services. We use an instance of Gitlab to host our code, so after some proofs of concept we chose to use Gitlab CI to test and deploy our code.

Gitlab CI is amazing (as Gitlab is); the Gitlab team is doing great work and implements new features every month. So today I chose to move this blog to Gitlab CI as well.

This blog is based on Jekyll. The source code was already hosted on Gitlab but, until yesterday, it didn’t use Gitlab CI: every time I pushed something, a webhook called a script on my server, the server downloaded the source code, compiled it and then published it.

The downside of that approach is that the same server which runs the website (and other services as well) wasted CPU, storage and time doing the compilation.

I have other servers as well (if you do not, don't worry: Gitlab offers free runners for Gitlab CI if you host your project on Gitlab.com), so I installed a Gitlab runner as explained here and set it to use Docker.

.gitlab-ci.yml

The first thing to do after enabling the runner was to create a .gitlab-ci.yml file to tell the runner how to do its job. The fact that Gitlab uses a file to configure runners is a winning choice: developers can have it versioned in the source, and each branch can have its own rules.

My configuration file is this:

image: ruby:2.3

stages:
  - deploy

cache:
  paths:
    - vendor
  key: "$CI_BUILD_REPO"

before_script:
  - gem install bundler

deploy_site:
  stage: deploy
  only:
    - master
  script:
    - bundle install --path=vendor/
    - bundle exec jekyll build
  artifacts:
    paths:
      - _site/

Quite simple, isn’t it?

In the end I only need to deploy the website; I do not have tests, so I have only one stage, the deploy one.

There are, however, a few interesting things to highlight:

  • I install gems in vendor/ instead of the default directory, so I can cache them and reuse them in other builds, to save time and bandwidth
  • The cache is shared between all the branches in the repo (key: “$CI_BUILD_REPO”). By default it is shared only between multiple builds of the same branch
  • The deploy step is executed only when I push to the master branch
  • The site is built in the _site/ directory, so I need to specify it in the artifacts section

If you want to see how to tune these settings, or learn about others (there are a lot of them; it is a very versatile system which can do anything), take a look at the official guide.

The Gemfile for bundler is very basic:

source 'https://rubygems.org'

gem "github-pages"
gem "pygments.rb"

It is important to add the vendor directory to the exclude section in _config.yml, otherwise Jekyll will publish it as well.
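As a minimal sketch (assuming your _config.yml does not already define an exclude list; if it does, simply add - vendor to it), the entry can be appended from the shell:

cat >> _config.yml <<'EOF'
exclude:
  - vendor
EOF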

Deploy

If you push these files to your Gitlab repo, and if you have done a good job setting up the runner, you will have an artifact in your repo to download.

The next step is to deploy it to the server. There are tons of possible ways to do that; I created a sh script which is invoked by a hook.

Since I already have PHP-fpm installed on the server due to my Nextcloud installation, I use it to invoke the sh script through a php script.

When you create a webhook in your Gitlab project (Settings->Webhooks) you can specify for which kind of events you want the hook (in our case, a new build), and a secret token so you can verify the script has been called by Gitlab.

Unfortunately, the documentation about webhooks is very poor, and there isn't any mention of the build events payload.

Anyway, after a couple of tries, I created this script:

<?php
// Check token
$security_file = parse_ini_file("../token.ini");
$gitlab_token = $_SERVER["HTTP_X_GITLAB_TOKEN"];
if ($gitlab_token !== $security_file["token"]) {
    echo "error 403";
    exit(0);
}

// Get data
$json = file_get_contents('php://input');
$data = json_decode($json, true);

// We want only success build on master
if ($data["ref"] !== "master" || $data["build_stage"] !== "deploy" || $data["build_status"] !== "success") {
    exit(0);
}

// Execute the deploy script:
shell_exec("/usr/share/nginx/html/deploy.sh 2>&1");

Since the repo of this blog is public, I cannot insert the token in the script itself (and I cannot insert it in the script on the server, because it is overwritten at every deploy).

So I created a token.ini file outside the webroot, which is just one line:

token = supersecrettoken

In this way the endpoint can be called only by Gitlab itself. The script then checks some parameters of the build, and if everything is ok it runs the deploy script.
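To sanity-check the endpoint without waiting for a real build, you can simulate Gitlab's call with curl. This is only a sketch: the URL is a placeholder for wherever you host the PHP script, and the JSON body contains just the fields the script inspects (a real build event carries many more):

curl -X POST "https://example.com/hooks/build.php" \
  -H "X-Gitlab-Token: supersecrettoken" \
  -H "Content-Type: application/json" \
  -d '{"ref": "master", "build_stage": "deploy", "build_status": "success"}'

If everything is wired up correctly the deploy script should run; a wrong token should return the "error 403" message instead.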

The deploy script is also very basic, but there are a couple of interesting things:

#!/bin/bash

# See 'Authentication' section here: http://docs.gitlab.com/ce/api/
SECRET_TOKEN=$PERSONAL_TOKEN

# The path where to put the static files
DEST="/usr/share/nginx/html/"

# The path to use as temporary working directory
TMP="/tmp/"

# Where to save the downloaded file
DOWNLOAD_FILE="site.zip";

cd $TMP;
wget --header="PRIVATE-TOKEN: $SECRET_TOKEN" "https://gitlab.com/api/v3/projects/774560/builds/artifacts/master/download?job=deploy_site" -O $DOWNLOAD_FILE;
ls;
unzip $DOWNLOAD_FILE;

# Whatever, do not do this in a real environment without any other check
rm -rf $DEST;
cp -r _site/ $DEST;
rm -rf _site/;
rm $DOWNLOAD_FILE;

First of all, the script has to be executable (chmod +x deploy.sh) and it has to belong to the webserver's user (usually www-data).

The script needs an access token (which you can create here) to access the data. Again, I cannot put it in the script itself, so I inserted it as an environment variable:

sudo vi /etc/environment

in the file you have to add something like:

PERSONAL_TOKEN="supersecrettoken"

and then remember to reload the file:

source /etc/environment

You can check everything is alright by running sudo -u www-data bash -c 'source /etc/environment; echo $PERSONAL_TOKEN' and verifying that the token is printed in the terminal.

Now, the other interesting part of the script is where the artifact comes from. The last available build of a branch is reachable only through the API; they are working on exposing this in the web interface so you can always download the latest version from the web.

The URL of the API is:

https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname

While you can imagine what branchname and jobname are, the projectid is a bit trickier to find.

It is included in the body of the webhook as projectid, but if you do not want to intercept the hook, you can go to the settings of your project, section Triggers, where there are examples of API calls: you can determine the project id from there.

Kudos to the Gitlab team (and the other folks who help in their free time) for their awesome work!

If you have any question or feedback about this blog post, please drop me an email at riccardo@rpadovani.com :-)

Bye for now,
R.

Pages

Subscribe to Ubuntu Arizona LoCo Team aggregator