29 Jun 2016

feedPlanet Ubuntu

Kubuntu: Kubuntu Podcast goes Open and Unplugged

Podcast fans will know that we were struck down with lucky show thirteen. Google Hangouts crashed out twice, and we lost the live stream. We ended up half an hour late, with no Hangouts, and a makeshift YouTube live stream hastily hooked together in record time by the #awesome Ovidiu-florin Bogdan.

The upside of this was that we were rescued again by the amazing Big Blue Button.

We have decided that we are going to move to using Big Blue Button permanently for the Podcast show, which is great news for you in the audience.

Why?

It means that you can join us on the show live. That's right: you too can join us in the Big Blue Button conference server whilst we are making and recording the show. Maybe you just want to listen in live and watch, or perhaps ask questions and make comments in the built-in chat system.

Of course you can take it a step further and join our audio conference bridge to interact, chat, make comments and ask questions, provided you use the "Hand Up" feature to grab our attention first.

So come and join us in Room 1 of the Kubuntu Big Blue Button Conference Server. Password is welcome.

Wednesday 6th July at 19:00 UTC

To get the access details drop by IRC a few minutes before the show starts, at freenode.net #kubuntu-podcast. Or you can join IRC directly from this website via the embedded IRC client on our Podcast page.

29 Jun 2016 9:32pm GMT

Colin King: What's new in stress-ng 0.06.07?

Since my last blog post about stress-ng, I've pushed out several more small releases that incorporate new features and (as ever) a bunch more bug fixes. I've been eyeballing gcov kernel coverage stats to find more regions in the kernel that stress-ng needs to exercise. Also, testing on a range of hardware (arm64, s390x, etc.) and a range of kernels has shaken out some bugs and helped me to improve stress-ng. So what's new?

New stressors:

Improved stressors:

If any new features land in Linux 4.8 I may add stressors for them, but for now I suspect that's about it for the big changes to stress-ng for the Ubuntu Yakkety 16.10 release.
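If you want to kick the tires, a typical smoke test mixes a few stressors with a time limit; for example (the stressor mix here is purely illustrative):

stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 128M --timeout 60s --metrics-brief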

29 Jun 2016 4:46pm GMT

Ubuntu App Developer Blog: Snapcraft 2.12: an ecosystem of parts, qmake and gulp

Snapcraft 2.12 is here and is making its way to your 16.04 machines today.

This release takes Snapcraft to a whole new level. For example, instead of defining your own project parts, you can now use and share them from a common, open repository. This feature was already available in previous versions, but it is now much more visible: the repository is searchable and locally cached.

Without further ado, here is a tour of what's new in this release.

Commands

2.12 introduces 'snapcraft update', 'search' and 'define', which bring more visibility to the Snapcraft parts ecosystem. Parts are pieces of code for your app that can also help you bundle libraries, set up environment variables and take care of other tedious tasks app developers are familiar with.

They are literally parts you aggregate and assemble to create a functional app. The benefit of using a common tool is that these parts can be shared amongst developers. Here is how you can access this repository.

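For example, a quick session with the three commands looks like this (output abridged):

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...
$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'
...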

To get a sense of how these commands are used, have a look at the above example; then you can dive into the details of what we mean by an "ecosystem of parts".

Snap name registration

Another command you will find useful is the new 'register' one. Registering a snap name reserves that name on the store.

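The command itself is a one-liner; for example, with the snap name used below:

$ snapcraft register my-cool-app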

As a vendor or upstream, you can secure snap names when you are the publisher of what most users expect to see under this name.

Of course, this process can be reverted and disputed. Here is what the store workflow looks like when I try to register an already registered name:


On the name registration page of the store, I'm going to try to register 'my-cool-app', which already exists.


I'm informed that the name has already been registered, but I can dispute this or use another name.


I can now start a dispute process to retrieve ownership of the snap name.

Plugins and sources

Two new plugins have been added for parts building: qmake and gulp.

qmake

The qmake plugin has been requested since the advent of the project, and we have seen many custom versions to fill this gap. Here is what the default qmake plugin allows you to do:
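As a quick sketch of the plugin in use (the part name and source layout are hypothetical):

parts:
    solitaire:
        plugin: qmake
        source: .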

gulp

The hugely popular Node.js build tool is now a first-class citizen in Snapcraft. It inherits from the existing nodejs plugin and allows you to:
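As a sketch of the plugin in use (the part name and task list are hypothetical, and the gulp-tasks keyword for selecting which tasks run is an assumption; check snapcraft's plugin help for the authoritative options):

parts:
    webapp:
        plugin: gulp
        source: .
        gulp-tasks: [build]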

Subversion

SVN is still a major version control system, and thanks to Simon Quigley from the Lubuntu project, you can now use svn: URIs in the source field of your parts.
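For instance (the repository URL is a placeholder):

parts:
    client:
        plugin: autotools
        source: svn://example.com/project/trunk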

Highlights

Many other fixes made their way into the release, with two highlights:

The full changelog for this milestone is available here and the list of bugs in sight for 2.13 can be found here. Note that this list will probably change before the next release, but if you have a Snapcraft itch to scratch, it's a good list to pick your first contribution from.

Install Snapcraft

On Ubuntu

Simply open up a terminal with Ctrl+Alt+T and run these commands to install Snapcraft from the Ubuntu archives on Ubuntu 16.04 LTS:

sudo apt update
sudo apt install snapcraft

On other platforms

Get the Snapcraft source code ›

Get snapping!

There is a thriving community of developers who can give you a hand getting started or unblock you when creating your snap. You can participate and get help in multiple ways:

29 Jun 2016 3:20pm GMT

Kubuntu Wire: Plasma 5.6.5 and Frameworks 5.23 available in Kubuntu 16.04 Backports


1. sudo apt-add-repository ppa:kubuntu-ppa/backports
2. sudo apt update
3. sudo apt full-upgrade -y

29 Jun 2016 2:43pm GMT

28 Jun 2016

feedPlanet Ubuntu

Aaron Honeycutt: SELF 2016

This post has been sitting in my drafts since the 19th; I just got a bit lazy about finishing it up and posting it, sorry!

This SELF (SouthEast LinuxFest) was as great as the one before… ok, maybe a little bit better, with all the beer sponsored by Google and my favorite VPS provider, Linode. I mean A LOT of beer!


There was also a ton of Ubuntu devices at the booth, from gaming to convergence, plus a surprise visit from the UbuntuFL LoCo penguin!


I even found a BQ M10 Ubuntu Tablet out in the wild!


We also had awesome booth neighbors: System76 and Linode!


I loved this trip from exploring the city again to making new friends!



28 Jun 2016 11:31pm GMT

Bryan Quigley: When should i386 support for Ubuntu end?

Are you running i386 (32-bit) Ubuntu? We need your help to decide how much longer to build i386 images of Ubuntu Desktop, Server, and all the flavors.

There is a real cost to support i386 and the benefits have fallen as more software goes 64-bit only.

Please fill out the survey here ONLY if you currently run i386 on one of your machines. 64-bit users will NOT be affected by this, even if you run 32-bit applications.
http://goo.gl/forms/UfAHxIitdWEUPl5K2

28 Jun 2016 8:04pm GMT

Zygmunt Krynicki: The /etc/os-release zoo

If you've ever wanted to do something differently depending on /etc/os-release but weren't in the mood to install every common distribution under the sun, look no further: I give you the /etc/os-release zoo project.
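For context, the pattern the zoo lets you test is the usual shell one; a minimal sketch (the distribution IDs handled here are just examples):

#!/bin/sh
# /etc/os-release is a shell-sourceable list of KEY=value pairs.
. /etc/os-release
case "$ID" in
    ubuntu) echo "On Ubuntu $VERSION_ID" ;;
    fedora) echo "On Fedora $VERSION_ID" ;;
    *)      echo "Unhandled distribution: $ID" ;;
esac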

A project like this is never complete so please feel free to contribute additional distribution bits there.

28 Jun 2016 12:16pm GMT

Canonical Design Team: Juju GUI 2.0

Juju is a cloud orchestration tool which enables users to build web environments to run applications. You can just as easily use it on a simple WordPress blog or on a complex big data platform. Juju is a command line tool but also has a graphical user interface (GUI) where users can choose services from a store, assemble them visually in the GUI, build relations and configure them with the service inspector.

Juju GUI allows users to

Over the last year we've been working on a redesign of the Juju GUI. This redesign project focused on improving four key areas, which also acted as our guiding design principles.

1. Improve the functionality of the core features of the GUI

Before/after: Empty state of the canvas

Before/after: Integrated store

Before/after: Apache charm details

2. Reduce cognitive load and pace the user

Before/after: Mediawiki deployment

3. Provide an at-a-glance understanding of environment health

Before/after: Mediawiki deployment with errors

4. Surface functions and facilitate task-driven navigation

Before/after: Inspector home view

Before/after: Inspector errors view

Before/after: Inspector config view

The project has been amazing; we're really happy to see that it's launched, and we are already planning the next updates.



28 Jun 2016 10:39am GMT

Canonical Design Team: Design in the open

As the Juju design team grew, it was important to review our working process and see if we could improve it to create a more agile working environment. The majority of employees at Canonical work distributed around the globe; for instance, the Juju UI engineering team has employees from Tasmania to San Francisco. We also work on a product which is extremely technical, and feedback is crucial to our velocity.

We identified the following aspects of our process which we wanted to improve:

Finding the right tool

I've always been interested in the concept of designing in the open. Benefits of the practice include being more transparent, faster and more efficient. It also gives the design team more presence and visibility across the organisation. Kasia (Juju's project manager) and I went back and forth on which products to use and eventually settled on GitHub (GH).

The Juju design team works in two week iterations and at the beginning of a new iteration we decided to set up a GH repo and trial the new process. We outlined the following rules to help us start:

Reaction

As the iteration went on, feedback started rolling in from the engineering team without us requesting it. A few developers mentioned how cool it was to see how the design process unfolded. We also saw a lot of improvement in the Juju design team: it allowed us to collaborate more easily and it was much easier to keep track of what was happening.

At the end of the trial iteration, during our clinic day, we closed completed issues and uploaded the final assets to the "code" section of the repo, creating a single place for our files.

After the first successful iteration we decided to carry this on as a permanent part of our process. The full range of benefits of moving to GH are:

Conclusion

As a result of this change our designs are more accessible, which allows developers and stakeholders to comment and collaborate with the design team, aiding our agile process. Below is an example thread where you can see how GH is used in the process. It shows how we designed the new contextual service block actions.


28 Jun 2016 9:52am GMT

Ubuntu App Developer Blog: New Ubuntu SDK Beta Version

A few days ago we have released the first Beta of the Ubuntu SDK IDE using the LXD container solution to build and execute applications.


The first reports were positive; however, one big problem was discovered pretty quickly:

Applications would not start on machines using the proprietary Nvidia drivers. The reason for this is that indirect GLX is not allowed by default when using those drivers. The applications need to have access to:

  1. The glx libraries for the currently used driver
  2. The DRI and Nvidia device files

Luckily the snappy team had already tackled a similar problem, so thanks to Michael Vogt (a.k.a. mvo) we had a first idea of how to solve it: reuse the Nvidia binaries and device files from the host by mounting them into the container.

However, it is a bit more complicated in our case, because once the devices and directories are mounted into the containers they stay there permanently. This is a problem because the Nvidia binary directory carries a version number, e.g. /usr/lib/nvidia-315, which changes with the currently loaded module. The container would then either fail to boot after the driver was changed and the old directory on the host was gone, or use the wrong nvidia directory if the old one was still around on the host.
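For illustration, mounting a host directory and a device file into a container is plain LXD configuration; a hand-rolled sketch of the idea (container name, driver version and device file are placeholders, and the real work is automated by usdk-target):

lxc config device add my-sdk-container nvidia-libs disk source=/usr/lib/nvidia-361 path=/usr/lib/nvidia-361
lxc config device add my-sdk-container nvidia0 unix-char path=/dev/nvidia0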

The situation gets worse with Optimus graphics cards, where the user can switch between an integrated and a dedicated graphics chip, which means device files in /dev can come and go between reboots.

Our solution to the problem is to check the integrity of the containers on every start of the Ubuntu SDK IDE and if problems are detected, the user is informed and asked for the root password to run automatic fixes. Those checks and fixes are implemented in the "usdk-target" tool and can be used from the CLI as well.

As a bonus this work will enable direct rendering for other graphics chips as well, however since we do not have access to all possible chips there might be still special cases that we could not catch.

So please report all problems to us on one of those channels:

We have released the new tool into the Tools-development PPA, where the first beta was released too. However, existing containers might not be completely fixed automatically; they are best recreated or fixed manually. To manually fix an existing container, use the maintain mode from the options menu and add the current user to the "video" group.
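Inside that maintain-mode shell, the group change itself is a one-liner (replace ubuntu with your actual username):

adduser ubuntu video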

To get the new version of the IDE please update the installed Ubuntu SDK IDE package:

$ sudo apt-get update && sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-tools

28 Jun 2016 5:53am GMT

The Fridge: Ubuntu Weekly Newsletter Issue 471

Welcome to the Ubuntu Weekly Newsletter. This is issue #471 for the week June 20 - 26, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA).

28 Jun 2016 2:19am GMT

27 Jun 2016

feedPlanet Ubuntu

Sergio Schvezov: The Snapcraft Parts Ecosystem

Today I am going to be discussing parts. This is one of the pillars of snapcraft (together with plugins and the lifecycle).

For those not familiar, this is snapcraft's general purpose landing page, http://snapcraft.io/ but if you are a developer and have already been introduced to this new world of snaps, you probably want to just go and hop on to http://snapcraft.io/create/

If you go over the snapcraft tour you will notice the many uses of parts, and you may start to wonder how to get started, or think that maybe you are duplicating work done by others or, even better, by an upstream. This is where we start to think about the idea of sharing parts, and this is exactly what we are going to go over in this post.

To be able to reproduce what follows, you'd need to have snapcraft 2.12 installed.

An overview to using remote parts

So imagine I am someone wanting to use libcurl. Normally I would write the part definition from scratch and get on with my own business, but surely I might be missing out on the optimal switches used to configure or even build the package. I would also need to research how to use the specific plugin required. So instead, I'll see if someone has already done the work for me; hence I will,

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
PART NAME  DESCRIPTION
curl       A tool and a library (usable from many languages) for client side URL tra...

Great, there's a match, but is this what I want?

$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'

curl:
  configflags:
  - --enable-static
  - --enable-shared
  - --disable-manual
  plugin: autotools
  snap:
  - -bin
  - -lib/*.a
  - -lib/pkgconfig
  - -lib/*.la
  - -include
  - -share
  source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
  source-type: tar

Yup, it's what I want.

An example

There are two ways to use these parts in your snapcraft.yaml, say this is your parts section

parts:
    client:
       plugin: autotools
       source: .

My client part, which uses sources that sit alongside this snapcraft.yaml, will hypothetically fail to build as it depends on the curl library I don't yet have. There are some options here to get this going: one using after in the part definition implicitly, another involving composing, and last but not least, just copy/pasting what snapcraft define curl returned for the part.

Implicitly

The implicit path is really straightforward. It only involves making the part look like:

parts:
    client:
       plugin: autotools
       source: .
       after: [curl]

This will use the cached definition of the part, which may potentially be updated by running snapcraft update.

Composing

What if we like the part, but want to try out a new configure flag or source release? Well we can override pieces of the part; so for the case of wanting to change the source:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        source: http://curl.haxx.se/download/curl-7.45.0.tar.bz2

And we will get to build curl, but using a newer version. The trick is that the part definition here is missing the plugin entry, thereby instructing snapcraft to look for the full part definition in the cache.

Copy/Pasting

This is the path one would take to have full control over the part. It is as simple as copying the part definition we got from running snapcraft define curl into your own. For the sake of completeness, here's how it would look:

parts:
    client:
        plugin: autotools
        source: .
        after: [curl]
    curl:
        configflags:
            - --enable-static
            - --enable-shared
            - --disable-manual
        plugin: autotools
        snap:
            - -bin
            - -lib/*.a
            - -lib/pkgconfig
            - -lib/*.la
            - -include
            - -share
        source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
        source-type: tar

Sharing your part

Now what if you have a part and want to share it with the rest of the world? It is rather simple really, just head over to https://wiki.ubuntu.com/snapcraft/parts and add it.

In the case of curl, I would write a yaml document that looks like:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description:
  A tool and a library (usable from many languages) for
  client side URL transfers, supporting FTP, FTPS, HTTP,
  HTTPS, TELNET, DICT, FILE and LDAP.
project-part: curl

What does this mean? Well, the part itself is not defined on the wiki, just a pointer to it with some metadata; the part is really defined inside a snapcraft.yaml living in the origin we just told it to use.

The full extent of the keywords is explained in the documentation (that is an upstream link to it).

The core idea is that a maintainer decides they want to share a part. Such a maintainer would add a description that provides an idea of what that part (or collection of parts) does. Then, last but not least, the maintainer declares which parts to expose to the world, as maybe not all of them should be. The main part is exposed as project-part and will carry a top-level name; the maintainer can expose more parts from snapcraft.yaml using the general parts keyword. These parts will be namespaced under the project-part.

27 Jun 2016 3:57pm GMT

Alessio Treglia: A – not exactly United – Kingdom


Island of Ventotene - Roman harbour

There once was a Kingdom strongly United, built on the honours of the people of Wessex, of Mercia, Northumbria and East Anglia, who knew how to deal with the invasions of the Vikings from the east and of the Normans from the south, and came to unify the territory under an umbrella of common intents. Today, 48% of them, while keeping solid traditions, still know how to look forward to the future, joining horizons and commercial developments along with the rest of Europe. The remaining 52%, however, look back and cannot see anything in front of them but a desire for isolation, breaking the European dream born on the shores of the island of Ventotene in 1944 with Altiero Spinelli, Ernesto Rossi and Ursula Hirschmann's "Manifesto for a free and united Europe". An incurable fracture was born in the country with the referendum of 23 June, in which just over half of the population asked to terminate its marriage to the great European family, setting the UK back 43 years in history.

<Read More…[by Fabio Marzocca]>


27 Jun 2016 7:54am GMT

Paul Tagliamonte: Hello, Sense!

A while back, I saw a Kickstarter for one of the most well designed and pretty sleep trackers on the market. I fell in love with it, and it has stuck with me since.

A few months ago, I finally got my hands on one and started to track my data. Naturally, I now want to store this new data with the rest of the data I have on myself in my own databases.

I went in search of an API, but I found that the Sense API hasn't been published yet, and is being worked on by the team. Here's hoping it'll land soon!

After some subdomain guessing, I hit on api.hello.is. So, naturally, I went to take a quick look at their Android app and its network traffic and, lo and behold, there was a pretty nicely designed API.

This API is clearly an internal API, and as such, it's something that should not be considered stable. However, I'm OK with a fragile API, so I've published a quick and dirty API wrapper for the Sense API to my GitHub.

I've published it because I've found it useful, but I can't promise the world (since I'm not a member of the Sense team at Hello!), so here are a few ground rules for this wrapper:

This module is currently Python 3 only. If someone really needs Python 2 support, I'm open to minimally invasive patches to the codebase using six to support Python 2.7.

Working with the API:

First, let's go ahead and log in using python -m sense.

$ python -m sense
Sense OAuth Client ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense OAuth Client Secret: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense email: paultag@gmail.com
Sense password: 
Attempting to log into Sense's API
Success!
Attempting to query the Sense API
The humidity is **just right**.
The air quality is **just right**.
The light level is **just right**.
It's **pretty hot** in here.
The noise level is **just right**.
Success!

Now, let's see if we can pull up information on my Sense:

>>> from sense import Sense
>>> sense = Sense()
>>> sense.devices()
{'senses': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '11a1', 'last_updated': 1466991060000, 'state': 'NORMAL', 'wifi_info': {'rssi': 0, 'ssid': 'Pretty Fly for a WiFi (2.4 GhZ)', 'condition': 'GOOD', 'last_updated': 1462927722000}, 'color': 'BLACK'}], 'pills': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '2', 'last_updated': 1466990339000, 'battery_level': 87, 'color': 'BLUE', 'state': 'NORMAL'}]}

Neat! Pretty cool. Look, you can even see my WiFi AP! Let's try some more and pull some trends out.

>>> values = [x.get("value") for x in sense.room_sensors()["humidity"]][:10]
>>> min(values)
45.73904
>>> max(values)
45.985928
>>> 

I plan to keep maintaining it as long as it's needed, so I welcome co-maintainers, and I'd love to see what people build with it! So far, I'm using it to dump my room data into InfluxDB, pulling information on my room into Grafana. Hopefully more to come!
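As an illustration of that pipeline, here is a rough sketch, assuming the influxdb Python client and a local InfluxDB instance (the database and measurement names are made up):

from influxdb import InfluxDBClient
from sense import Sense

# Grab the humidity series from the Sense API wrapper.
sense = Sense()
humidity = [x.get("value") for x in sense.room_sensors()["humidity"]]

# Push the first reading into InfluxDB (ordering of the series is assumed).
client = InfluxDBClient(host="localhost", port=8086, database="home")
client.write_points([{
    "measurement": "room_humidity",
    "fields": {"value": humidity[0]},
}])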

Happy hacking!

27 Jun 2016 1:42am GMT

Dustin Kirkland: HOWTO: Host your own SNAP store!


SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future. And we're already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical. Developers are welcome to publish their software for distribution across hundreds of millions of Ubuntu servers, desktops, and devices.

Several people have asked the inevitable open source software question, "SNAPs are awesome, but how can I stand up my own SNAP store?!?"

The answer is really quite simple... SNAP stores are really just HTTP web servers! Of course, you can get fancy with branding, and authentication, and certificates. But if you just want to host SNAPs and enable downstream users to fetch and install software, well, it's pretty trivial.

In fact, Bret Barker has published an open source (Apache License) SNAP store on GitHub. We're already looking at how to flesh out his proof-of-concept and bring it into snapcore itself.

Here's a little HOWTO install and use it.

First, I launched an instance in AWS. Of course I could have launched an Ubuntu 16.04 LTS instance, but actually, I launched a Fedora 24 instance! In fact, you could run your SNAP store on any OS that currently supports SNAPs, or even just fork the GitHub repo and run it standalone. See snapcraft.io.



Now, let's find and install a snapstore SNAP. (Note that on this AWS instance of Fedora 24, I also had to 'sudo yum install squashfs-tools kernel-modules'.)
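In command form, that is roughly (a sketch; the exact snap name may differ):

$ snap find snapstore
$ sudo snap install snapstore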


At this point, you're running a SNAP store (webserver) on port 5000.


Now, let's reconfigure snapd to talk to our own SNAP store, and search for a SNAP.
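The usual trick at the time was the SNAPPY_FORCE_API_URL environment variable; roughly, and assuming your snapd honors it, with the store from above listening on localhost:5000:

$ sudo mkdir -p /etc/systemd/system/snapd.service.d
$ sudo tee /etc/systemd/system/snapd.service.d/store.conf <<'EOF'
[Service]
Environment=SNAPPY_FORCE_API_URL=http://localhost:5000/
EOF
$ sudo systemctl daemon-reload && sudo systemctl restart snapd
$ snap find hello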


Finally, let's install and inspect that SNAP.
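Roughly (assuming the store serves a snap named hello):

$ sudo snap install hello
$ snap list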


How about that? Easy enough!

Cheers,
Dustin

27 Jun 2016 1:09am GMT

26 Jun 2016

feedPlanet Ubuntu

Simos Xenitellis: Trying out LXD containers on Ubuntu on DigitalOcean

You can have LXD containers on your home computer, and you can also have them on your Virtual Private Server (VPS). If you have any further questions on LXD, see https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

Here we see how to configure LXD on a VPS at DigitalOcean (yeah, referral link). We go cheap and select the 512MB RAM and 20GB disk VPS for $5/month. Containers are quite lightweight, so it's interesting to see how many we can squeeze in. We are going to use ZFS for the storage of the containers, stored in a file rather than a block device. Here is what we are doing today:

  1. Set up LXD on a 512MB RAM/20GB diskspace VPS
  2. Create a container with a web server
  3. Expose the container service to the Internet
  4. Visit the webserver from our browser

Set up LXD on DigitalOcean


When creating the VPS, it is important to change these two options: we need 16.04 (the default is 14.04) so that it has ZFS pre-installed as a kernel module, and we try out the cheapest VPS offering with 512MB RAM.

Once we create the VPS, we connect with

$ ssh root@128.199.41.205    # change with the IP address you get from the DigitalOcean panel
The authenticity of host '128.199.41.205 (128.199.41.205)' can't be established.
ECDSA key fingerprint is SHA256:7I094lF8aeLFQ4WPLr/iIX4bMs91jNiKhlIJw3wuMd4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '128.199.41.205' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

root@ubuntu-512mb-ams3-01:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://ams2.mirrors.digitalocean.com/ubuntu xenial InRelease 
Get:3 http://security.ubuntu.com/ubuntu xenial-security/main Sources [24.9 kB]
...
Fetched 10.2 MB in 4s (2,492 kB/s)
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ubuntu-512mb-ams3-01:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core
 libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties
 shared-mime-info snapd software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,979 kB of archives.
After this operation, 78.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-24-generic
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
Processing triggers for libc-bin (2.23-0ubuntu3) ...

We update the package list and then upgrade any packages that need upgrading.

root@ubuntu-512mb-ams3-01:~# apt policy lxd
lxd:
 Installed: 2.0.2-0ubuntu1~16.04.1
 Candidate: 2.0.2-0ubuntu1~16.04.1
 Version table:
 *** 2.0.2-0ubuntu1~16.04.1 500
 500 http://mirrors.digitalocean.com/ubuntu xenial-updates/main amd64 Packages
 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
 100 /var/lib/dpkg/status
 2.0.0-0ubuntu4 500
 500 http://mirrors.digitalocean.com/ubuntu xenial/main amd64 Packages

The lxd package is already installed, all the better. Nice touch 🙂

root@ubuntu-512mb-ams3-01:~# apt install zfsutils-linux
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
Suggested packages:
 default-mta | mail-transport-agent samba-common-bin nfs-kernel-server
 zfs-initramfs
The following NEW packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
 zfsutils-linux
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 881 kB of archives.
After this operation, 2,820 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
...
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
root@ubuntu-512mb-ams3-01:~# _

We installed zfsutils-linux in order to be able to use ZFS as storage for our containers. In this tutorial we are going to use a file as storage (still a ZFS filesystem) instead of a block device. If you subscribe to the DO beta for block storage volumes, you can get a proper block device for the storage of the containers; it is currently free for beta members, and available only in the NYC1 datacenter.

root@ubuntu-512mb-ams3-01:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  20G  1.1G 18G     6% /
root@ubuntu-512mb-ams3-01:~# _

We have 18GB of free disk space, so let's allocate 15GB for LXD.

root@ubuntu-512mb-ams3-01:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 15
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
we accept the default settings for the bridge configuration
Warning: Stopping lxd.service, but it can still be activated by:
 lxd.socket
LXD has been successfully configured.
root@ubuntu-512mb-ams3-01:~# _

What we did: we selected ZFS as the storage backend, created a new 15GB loop device (a file, not a block device) holding a ZFS pool named lxd-pool, kept LXD off the network, and let the LXD bridge be configured with the default settings.

Let's create a new user and add them to the lxd group,

root@ubuntu-512mb-ams3-01:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: ********
Retype new UNIX password: ********
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: <ENTER>
 Room Number []: <ENTER>
 Work Phone []: <ENTER>
 Home Phone []: <ENTER>
 Other []: <ENTER>
Is the information correct? [Y/n] Y
root@ubuntu-512mb-ams3-01:~# _

The username is ubuntu. Make sure you set a good password, since we do not deal with security best practices in this tutorial. Many people run scripts against these VPSs that try common usernames and passwords. When you create a VPS, it is worth having a look at /var/log/auth.log for those failed attempts to get into your VPS. Here are a few lines from this one:

Jun 26 18:36:15 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:15 digitalocean sshd[16320]: Connection closed by 123.59.134.76 port 49378 [preauth]
Jun 26 18:36:17 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2
Jun 26 18:36:20 digitalocean sshd[16318]: Failed password for root from 121.18.238.29 port 45863 ssh2

We add the ubuntu user to the lxd group in order to be able to run LXD commands as a non-root user.

root@ubuntu-512mb-ams3-01:~# adduser ubuntu lxd
Adding user `ubuntu' to group `lxd' ...
Adding user ubuntu to group lxd
Done.
root@ubuntu-512mb-ams3-01:~# _

We are now good to go. Log in as user ubuntu and run an LXD command to list images.

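For example (the list is long, so we trim it):

ubuntu@ubuntu-512mb-ams3-01:~$ lxc image list ubuntu: | head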

Create a Web server in a container

We launch (init and start) a container named c1.

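In command form, that is roughly the following (a sketch; the IP shown is the one this container gets later in the post):

ubuntu@ubuntu-512mb-ams3-01:~$ lxc launch ubuntu:x c1
Creating c1
Starting c1
ubuntu@ubuntu-512mb-ams3-01:~$ lxc list
+------+---------+-----------------------+------+------------+-----------+
| NAME |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-----------------------+------+------------+-----------+
| c1   | RUNNING | 10.160.152.184 (eth0) |      | PERSISTENT | 0         |
+------+---------+-----------------------+------+------------+-----------+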

The ubuntu:x above is an alias for Ubuntu 16.04 (Xenial), which resides in the ubuntu: repository of images. You can find other distributions in the images: repository.

As soon as the launch action completed, I ran the list action. Then, after a few seconds, I ran it again. Notice that it took a few seconds before the container actually booted and got an IP address.

Let's enter into the container by executing a shell. We update and then upgrade the container.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec c1 -- /bin/bash
root@c1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
...
Fetched 9819 kB in 2s (3645 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties shared-mime-info snapd
 software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6979 kB of archives.
After this operation, 3339 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.1 [8602 B]
...
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...
root@c1:~#

Let's install nginx, our Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx-common nginx-core
Suggested packages:
 libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx nginx-common nginx-core
0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
Need to get 3309 kB of archives.
After this operation, 10.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3 [111 kB]
...
Processing triggers for ufw (0.35-0ubuntu2) ...
root@c1:~#

Is the Web server running? Let's check with the ss command (preinstalled, from the iproute2 package).

root@c1:~# ss -tula 
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port 
udp UNCONN 0 0 *:bootpc *:* 
tcp LISTEN 0 128 *:http *:* 
tcp LISTEN 0 128 *:ssh *:* 
tcp LISTEN 0 128 :::http :::* 
tcp LISTEN 0 128 :::ssh :::*
root@c1:~#

The parameters mean: -t for TCP sockets, -u for UDP sockets, -l for listening sockets, and -a for all sockets (listening and established).

Of course, there is also lsof with the -i parameter (IPv4/IPv6).

root@c1:~# lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
dhclient 240 root 6u IPv4 45606 0t0 UDP *:bootpc 
sshd 306 root 3u IPv4 47073 0t0 TCP *:ssh (LISTEN)
sshd 306 root 4u IPv6 47081 0t0 TCP *:ssh (LISTEN)
nginx 2034 root 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2034 root 7u IPv6 51637 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 7u IPv6 51637 0t0 TCP *:http (LISTEN)
root@c1:~#

From both commands we verify that the Web server is indeed running inside the container, along with an SSH server.

Let's change the default Web page a bit,

root@c1:~# nano /var/www/html/index.nginx-debian.html


Expose the container service to the Internet

Now, if we try to visit the public IP of our VPS at http://128.199.41.205/ we obviously notice that there is no Web server there. We need to expose the container to the world, since the container only has a private IP address.

The following iptables line exposes the container service at port 80. Note that we run this as root on the VPS (root@ubuntu-512mb-ams3-01:~#), NOT inside the container (root@c1:~#).

iptables -t nat -I PREROUTING -i eth0 -p TCP -d 128.199.41.205/32 --dport 80 -j DNAT --to-destination 10.160.152.184:80

Adapt the public IP of your VPS and the private IP of your container (10.x.x.x) accordingly. Since we have a web server, this is port 80.
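If you need to look up the container's private IP, run this on the VPS (not inside the container):

ubuntu@ubuntu-512mb-ams3-01:~$ lxc list c1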

We have not made this firewall rule persistent as it is outside of our scope; see iptables-persistent on how to make it persistent.

Visit our Web server

Here is the URL, http://128.199.41.205/ so let's visit it.


That's it! We created an LXD container with the nginx Web server, then exposed it to the Internet.

26 Jun 2016 11:57pm GMT