28 Oct 2016

feedPlanet Ubuntu

Salih Emin: How to recover deleted photos and files from your smartphone’s SD card

In this tutorial, let's assume that we have accidentally deleted our files from an SD card or a USB thumb drive. We will then try to recover them using the PhotoRec app.

28 Oct 2016 4:06pm GMT

Costales: A new uWriter for Ubuntu Phone

uWriter, an offline text editor for our Ubuntu Phone/Tablet.


In the new release, all your new documents will be stored in: ~/.local/share/uwp.costales/*.html
And it has full OS integration!

A nicer UI
Load/save of local files

Enjoy it from the Ubuntu Store |o/

28 Oct 2016 3:57pm GMT

Alessio Treglia: The logical contradictions of the Universe




Is Erwin Schrödinger's wave function - which did for the atomic and subatomic world something altogether similar to what Newton's mechanics did for the macroscopic world - an objective reality or just subjective knowledge? Physicists, philosophers and epistemologists have debated this matter at length. In 1960, the theoretical physicist Eugene Wigner proposed that the observer's consciousness is the dividing line that triggers the collapse of the wave function[1], a theory that was later taken up and developed in recent years. "The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse" [2].

The English mathematical physicist and philosopher of science Roger Penrose developed the hypothesis called Orch-OR (Orchestrated objective reduction) according to which consciousness originates from processes within neurons, rather than from the connections between neurons (the conventional view). The mechanism is believed to be a quantum physical process called objective reduction which is orchestrated by the molecular structures of the microtubules of brain cells (which constitute the cytoskeleton of the cells themselves). Together with the physician Stuart Hameroff, Penrose has suggested a direct relationship between the quantum vibrations of microtubules and the formation of consciousness.

<Read More…[by Fabio Marzocca]>


28 Oct 2016 12:16pm GMT

Stéphane Graber: Network management with LXD (2.3+)

LXD logo


When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that "lxdbr0" bridge that "lxd init" would have you configure, provide your own or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. "lxd init" will offer to set one up for you and attach it to all new containers by default, but let's do it by hand to see what's going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.nat: "true"
  ipv6.address: fd42:474b:622d:259d::1/64
  ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don't want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address= ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
  ipv4.nat: "true"
  ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won't do you much good if your containers aren't using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container eth0
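Both operations can be reverted with the matching detach commands (a sketch, assuming the "lxc network detach" and "lxc network detach-profile" subcommands from the same release):

stgraber@castiana:~$ lxc network detach testbr0 my-container eth0
stgraber@castiana:~$ lxc network detach-profile testbr0 default eth0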

Now, let's say you have openvswitch installed on that machine and want to convert the bridge to an OVS bridge; just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, "lxc network edit" will let you edit the network configuration interactively in your text editor.

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address
root@yak:~# lxc start c1
root@yak:~# lxc list c1
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
|  c1  | RUNNING | (eth0) |      | PERSISTENT | 0         |

And same goes for IPv6 but with the "ipv6.address" property instead.
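For example (the address here is purely illustrative; it must sit inside the bridge's IPv6 subnet):

root@yak:~# lxc config device set c1 eth0 ipv6.address fd42:474b:622d:259d::1234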

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (such as nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true


LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge ("dns.domain" network property), it also supports three different operating modes ("dns.mode").

The default mode is "managed" and is typically the safest and most convenient, as it provides DNS records for containers but doesn't let them spoof each other's records by sending fake hostnames over DHCP.
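For instance, setting a custom DNS domain on the bridge (the domain itself is illustrative) is just:

root@yak:~# lxc network set testbr0 dns.domain lxd.example.net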

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

A LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environments usually preferring VLANs for that kind of segmentation.

So say, you want a basic "testbr0" network running with IPv4 and IPv6 on host "edfu" and want to spawn containers using it on host "djanet". The easiest way to do that is by using a multicast VXLAN tunnel. This type of tunnels only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a "testbr0" bridge on host "edfu" and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, "edfu" will be the one acting as a router for that network, providing DHCP, DNS, …; the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won't do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote

And then the "client" host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local= tunnel.edfu.remote=
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you'd rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.id 10
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol vxlan

And then on the client host:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.

And that's how easily you can get cross-host networking with a recent LXD!


Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

28 Oct 2016 3:53am GMT

27 Oct 2016

feedPlanet Ubuntu

Ubuntu Podcast from the UK LoCo: S09E35 – Red Nun - Ubuntu Podcast

It's Episode Thirty-Five of Season-Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Paul Tansom are connected and speaking to your brain.

We are four, made whole by a new guest presenter.

In this week's show:

That's all for this week! If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

27 Oct 2016 2:00pm GMT

Ubuntu Insights: Travel-friendly Lemur Ubuntu Laptop Updated to Kaby Lake

This is a guest post by Ryan Sipes, community manager at System76. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com


We would like to introduce you to the newest version of the extremely portable Lemur laptop. Like all System76 laptops the Lemur ships with Ubuntu, and you can choose between 16.04 LTS or the newest 16.10 release.

About System76

System76 is based out of Denver, Colorado and has been making Ubuntu computers for ten years. Creating great machines born to run Linux is our sole purpose. Members of our team are contributors to many different open source projects and we send our work enabling hardware on our computers upstream, to the benefit of everyone running our favorite operating system.

Our products have been praised as the best machines born to run Linux by fans including Chris Fisher of The Linux Action Show and Leo Laporte of This Week in Tech. We pride ourselves on offering fantastic products and providing first-class support to our users. Our support staff are themselves Linux/Ubuntu users and open source contributors, like Emma Marshall, who is a host on the Ubuntu podcast.


About the Lemur

This is our 7th generation release of the Lemur, and it's now 10% faster with the 7th gen Intel processor (Kaby Lake). Loaded with the newest Intel graphics, up to 32GB of DDR4 memory and a USB Type-C port, this Lemur enables more powerful multitasking on the go.

Weighing in at 3.6 lbs, this beauty is light enough to carry from meeting to meeting, or across campus. The Lemur design is thin, built with a handle grip at the back of the laptop, allowing you to easily grasp your Lemur and rush off to your next location.

The Lemur retains its reputation as the perfect option for those who want a high-quality portable Linux laptop at an affordable price (starting at only $699 USD).

You can see the full tech specs and other details about the Lemur here.


About the author
Ryan Sipes is the Community Manager at System76. He is a regular guest on podcasts over at Jupiter Broadcasting, like The Linux Action Show and Linux Unplugged. He helped organize the first Kansas Linux Fest and the Lawrence Linux User Group. Ryan is also a longtime Ubuntu user (since Warty Warthog), and an enthusiastic open source evangelist.

27 Oct 2016 12:11pm GMT

Ubuntu Insights: Snapping Cuberite

This is a guest post by James Tait, Software Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com


I'm a father of two pre-teens, and like many kids their age (and many adults, for that matter) they got caught up in the craze that is Minecraft. In our house we adopted Minetest as a Free alternative to begin with, and had lots of fun and lots of arguments! Somewhere along the way, they decided they'd like to run their own server and share it with their friends. But most of those friends were using Windows and there was no Windows client for Minetest at the time. And so it came to pass that I would trawl the internet looking for Free Minecraft server software, and eventually stumble upon Cuberite (formerly MCServer), "a lightweight, fast and extensible game server for Minecraft".

Cuberite is an actively developed project. At the time of writing, there are 16 open pull requests against the server itself, of which five are from the last week. Support for protocol version 1.10 has recently been added, along with spectator view and a steady stream of bug fixes. It is automatically built by Jenkins on each commit to master, and the resulting artefacts are made available on the website as .tar.gz and .zip files. The server itself runs in-place; that is to say that you just unpack the archive and run the Cuberite binary and the data files are created alongside it, so everything is self-contained. This has the nice side-effect that you can download the server once, copy or symlink a few files into a new directory and run a separate instance of Cuberite on a different port, say for testing.
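A second instance along those lines might look like this (paths purely illustrative; the instance's own data files are then created in the new directory on first run):

$ mkdir ~/cuberite-test
$ cd ~/cuberite-test
$ ln -s ~/cuberite/Server/Cuberite .
$ ln -s ~/cuberite/Server/Plugins .
$ ./Cuberite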

All of this sounds great, and mostly it is. But there are a few wrinkles that just made it a bit of a chore:

  • no packages for the various distributions, so updating meant manually fetching and unpacking a new archive
  • no init scripts for running the server as a service
  • no notification of new releases

Now none of these problems is insurmountable. We can put the work in to build distro packages for each distribution from git HEAD. We can contribute upstart and systemd and sysvinit scripts. We can run a cron job to poll for new releases. But, frankly, it just seems like a lot of work.

In truth I'd done a lot of manual work already to build Cuberite from source, create a couple of independent instances, and write init scripts. I'd become somewhat familiar with the build process, which basically amounted to something like:

$ cd src/cuberite
$ git pull
$ git submodule update --init
$ cd Release
$ make

This builds the release binaries and copies the plugins and base data files into the Server subdirectory, which is what the Jenkins builds then compress and make available as artefacts. I'd then do a bit of extra work: I've been running this in a dedicated lxc container, keeping a production and a test instance running so we could experiment with custom plugins, so I would:

$ cd ../Server
$ sudo cp Cuberite /var/lib/lxc/miners/rootfs/usr/games/Cuberite
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/production
$ sudo cp brewing.txt crafting.txt furnace.txt items.ini monsters.ini /var/lib/lxc/miners/rootfs/etc/cuberite/testing
$ sudo cp -r favicon.png lang Plugins Prefabs webadmin /var/lib/lxc/miners/rootfs/usr/share/games/cuberite

Then in the container, /srv/cuberite/production and /srv/cuberite/testing contain symlinks to everything we just copied, and some runtime data files under /var/lib/cuberite/production and /var/lib/cuberite/testing, and we have init scripts to chdir to each of those directories and run Cuberite.

All this is fine and could no doubt be moulded into packages for the various distros with a bit of effort. But wouldn't it be nice if we could do all of that for all the most popular distros in one fell swoop? Enter snaps and snapcraft. Cuberite is statically linked and already distributed as a run-in-place archive, so it's inherently relocatable, which means it lends itself perfectly to distribution as a snap.

This is the part where I confess to working on the Ubuntu Store and being more than a little curious as to what things looked like coming from the opposite direction. So in the interests of eating my own dogfood, I jumped right in.

Now snapcraft makes getting started pretty easy:

$ mkdir cuberite
$ cd cuberite
$ snapcraft init

And you have a template snapcraft.yaml with comments to instruct you. Most of this is straightforward, but for the version here I just used the current date. With the basic metadata filled in, I moved on to the snapcraft "parts".
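As an aside, since the version is just a date, stamping it can be scripted; a small sketch (using a stand-in snapcraft.yaml so the commands are self-contained):

```shell
# Create a stand-in snapcraft.yaml for the demo, then stamp its
# version field with today's date (the real file would already exist).
printf 'name: cuberite\nversion: 0\n' > snapcraft.yaml
sed -i "s/^version: .*/version: $(date +%Y%m%d)/" snapcraft.yaml
grep '^version:' snapcraft.yaml
```

The same one-liner can then run from a build script before calling snapcraft.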

Parts in snapcraft are the basic building blocks for your package. They might be libraries or apps or glue, and they can come from a variety of sources. The obvious starting point for Cuberite was the git source, and as you may have noticed above, it uses CMake as its build system. The snapcraft part is pretty straightforward:

    cuberite:
        plugin: cmake
        source: https://github.com/cuberite/cuberite.git
        build-packages:
            - gcc
            - g++
        snap:
            - -include
            - -lib

That last section warrants some explanation. When I built Cuberite at first, it included some library files and header files from some of the bundled libraries that are statically linked. Since we're not interested in shipping these files, they just add bloat to the final package, so we specify that they are excluded.

That gives us our distributable Server directory, but it's tucked away under the snapcraft parts hierarchy. So I added a release part to just copy the full contents of that directory and locate them at the root of the snap:

    release:
        after: [cuberite]
        plugin: dump
        source: parts/cuberite/src/Server
        organize:
            "*": "."

Some projects let you specify the output directory with a --prefix flag to a configure script or similar methods, and won't need this little packaging hack, but it seems to be necessary here.

At this stage I thought I was done with the parts and could just define the Cuberite app - the executable that gets run as a daemon. So I went ahead and did the simplest thing that could work:

        command: Cuberite
        daemon: forking
            - network
            - network-bind

But I hit a snag. Although this would work with a traditional package, the snap is mounted read-only, and Cuberite writes its data files to the current directory. So instead I needed to write a wrapper script to switch to a writable directory, copy the base data files there, and then run the server:

#!/bin/bash

for file in brewing.txt crafting.txt favicon.png furnace.txt items.ini \
            monsters.ini README.txt; do
    if [ ! -f "$SNAP_USER_DATA/$file" ]; then
        cp --preserve=mode "$SNAP/$file" "$SNAP_USER_DATA"
    fi
done

for dir in lang Plugins Prefabs webadmin; do
    if [ ! -d "$SNAP_USER_DATA/$dir" ]; then
        cp -r --preserve=mode "$SNAP/$dir" "$SNAP_USER_DATA"
    fi
done

exec "$SNAP"/Cuberite -d

Then add the wrapper as a part:

    wrapper:
        plugin: dump
        source: .
        organize:
            Cuberite.wrapper: bin/Cuberite.wrapper

And update the snapcraft app:

apps:
    cuberite:
        command: bin/Cuberite.wrapper
        daemon: forking
        plugs:
            - network
            - network-bind

And with that we're done! Right? Well, not quite…. While this works in snap's devmode, in strict mode it results in the server being killed. A little digging in the output from snappy-debug.security scanlog showed that seccomp was taking exception to Cuberite using the fchown system call. Applying some Google-fu turned up a bug with a suggested workaround, which was applied to the two places (both in sqlite submodules) that used the offending system call and the snap rebuilt - et voilà! Our Cuberite server now happily runs in strict mode, and can be released in the stable channel.

My build process now looks like this:

$ vim snapcraft.yaml
$ # Update version
$ snapcraft pull cuberite
$ # Patch two fchown calls
$ snapcraft

I can then push it to the edge channel:

$ snapcraft push cuberite_20161023_amd64.snap --release edge
Revision 1 of cuberite created.

And when people have had a chance to test and verify, promote it to stable:

$ snapcraft release cuberite 1 stable

There are a couple of things I'd like to see improved in the process:

With those two wishlist items fixed, I could fully automate the Cuberite builds and have a fresh snap released to the edge channel on each commit to git master! I'd also like to make the wrapper a little more advanced and add another command so that I can easily manage multiple instances of Cuberite. But for now, this works - my boys have never had it so good!

Download the Cuberite Snap

27 Oct 2016 11:17am GMT

Chris Glass: Making LXD fly on Ubuntu!

Since my last article, lots of things have happened in the container world! Instead of using LXC, I now find myself using the next great thing much more, namely LXC's big brother, LXD.

As some people asked me, here's my trick to make containers use my host as an apt proxy, significantly speeding up deployment times for both manual and juju-based workloads.

Setting up a cache on the host

First off, we'll want to set up an apt cache on the host. As is usually the case in the Ubuntu world, it all starts with an apt-get:

sudo apt-get install squid-deb-proxy

This will set up a squid caching proxy on your host, with a specific apt configuration, listening on port 8000.

Since it is tuned for larger machines by default, I find myself wanting to make it use a slightly smaller disk cache; using 2GB instead of the default 40GB is way more reasonable on my laptop.

Simply editing the config file takes care of that:

$EDITOR /etc/squid-deb-proxy/squid-deb-proxy.conf
# Look for the "cache_dir aufs" line and replace it with:
cache_dir aufs /var/cache/squid-deb-proxy 2000 16 256 # 2 GB

(The numbers are the cache size in megabytes, followed by the number of first- and second-level cache subdirectories.)

Of course you'll need to restart the service after that:

sudo service squid-deb-proxy restart

Setting up LXD

Compared to the similar procedure on LXC, setting up LXD is a breeze! LXD comes with configuration profiles, so we can conveniently either create a new profile if we want to use the proxy selectively, or simply add the configuration to the "default" profile, and all our containers will use the proxy, always!

In the default template

Since I never turn the proxy off on my laptop I saw no reason to apply the proxy selectively, and simply added it to the default profile:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set default user.user-data -

Of course the first part of the first command line automates the discovery of your IP address, conveniently, as long as your LXD bridge is called "lxdbr0".

Once set in the default template, all LXD containers you start now have an apt proxy pointing to your host set up!
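For reference, the user-data being set above is just this small cloud-config document (with your bridge address in place of the placeholder):

#cloud-config
apt:
 proxy: http://<LXD_ADDRESS>:8000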

In a new template

Should you not want to alter the default template, you can easily create a new one:

export PROFILE_NAME=proxy
lxc profile create $PROFILE_NAME

Then substitute the newly created profile in the previous command line. It becomes:

export LXD_ADDRESS=$(ifconfig lxdbr0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
echo -e "#cloud-config\napt:\n proxy: http://$LXD_ADDRESS:8000" | lxc profile set $PROFILE_NAME user.user-data -

Launching a new container then needs this configuration profile added, so that the container benefits from the proxy configuration:

lxc launch ubuntu:xenial -p $PROFILE_NAME -p default


If for some reason you don't want to use your host as a proxy anymore, it is quite easy to revert the changes to the template:

lxc profile set <template> user.user-data

That's it!

As you can see it is trivial to set an apt proxy on LXD, and using squid-deb-proxy on the host makes that configuration trivial.

Hope this helps!

Discussion and/or comments welcome on Reddit!

27 Oct 2016 6:00am GMT

Stéphane Graber: LXD 2.0: LXD and OpenStack [11/12]

This is the eleventh blog post in this series about LXD 2.0.

LXD logo


First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn't able to get networking going properly.

I finally gave up on devstack and tried "conjure-up" to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).


This post assumes you've got a working LXD setup providing containers with network access, and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we're running a full OpenStack here, this thing isn't exactly light!

Setting up the container

OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our lives easier, we'll use a privileged container.

We'll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it would attempt to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6 but until then, this can be worked around with:

lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we'll use to get OpenStack going.

lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container:

lxc exec openstack -- lxd init

Answer with the default for all questions, except where the default doesn't suit your environment.

And that's it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we'll be using conjure-up to deploy OpenStack.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you're running this on. You'll see all services getting a container allocated, then getting deployed and finally interconnected.

Conjure-Up deploying OpenStack

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can't just hit it from your web browser.
The easiest way around this is to set up a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where "<IP>" is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the "openstack" container (from "lxc info openstack") and point your web browser to: http://<container ip>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you'll be greeted by the OpenStack dashboard!


You can now head to the "Project" tab on the left and the "Instances" page. To start a new instance using nova-lxd, click on "Launch instance", select what image you want, network, … and your instance will get spawned.

Once it's running, you can assign it a floating IP which will let you reach your instance from within your "openstack" container.


Conclusion

OpenStack is a pretty complex piece of software, and it's also not something you really want to run at home or on a single server. But it's certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scene to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It's also one of the very few cases where multiple levels of container nesting actually make sense!

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

27 Oct 2016 1:10am GMT

26 Oct 2016

feedPlanet Ubuntu

Matthew Helmke: Ubuntu Unleashed 2017

I was the sole editor and contributor of new content for Ubuntu Unleashed 2017 Edition. This book is intended for intermediate to advanced users.

26 Oct 2016 10:51pm GMT

Daniel Pocock: FOSDEM 2017 Real-Time Communications Call for Participation

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2017 takes place 4-5 February 2017 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge are about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Saturday, 4 February 2017. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username

Real-Time Communications dev-room: deadline 23:59 UTC on 17 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real-Time devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 31 October. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes, presentations aimed at developers of free and open source software about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • FOSDEM provides video recording equipment and live streaming; volunteers are needed to assist with this
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 4 February
  • participation in the Real-Time lounge
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses
  • circulating this Call for Participation (text version) to other mailing lists

See the mailing list discussion for more details about volunteering.

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 and 3 February 2017. XMPP Summit web site - please join the mailing list for details.

We are also considering a more general RTC or telephony summit, potentially in collaboration with the XMPP summit. Please join the Free-RTC mailing list and send an email if you would be interested in participating, sponsoring or hosting such an event.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 3 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not very large, so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

  • All projects: Free-RTC Planet (http://planet.freertc.org), contact planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), contact ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), contact planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.


For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

26 Oct 2016 6:39am GMT

25 Oct 2016

feedPlanet Ubuntu

Julian Andres Klode: Introducing DNS66, a host blocker for Android


I'm proud (yes, really) to announce DNS66, my host/ad blocker for Android 5.0 and newer. It's been around since last Thursday on F-Droid, but it never really got a formal announcement.

DNS66 creates a local VPN service on your Android device and diverts all DNS traffic to it, optionally adding new DNS servers you can configure in its UI. It can use hosts files to block whole sets of hosts, or you can simply give it one or more domain names to block (or any mix of hosts files and single hosts). You can also whitelist individual hosts or entire files by adding them to the end of the list. When a host name is looked up, the query goes to the VPN, which inspects the packet and responds with NXDOMAIN (non-existing domain) for hosts that are blocked.
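The blocking decision can be sketched in Python (DNS66 itself is written in Java, and the names below are illustrative, not the app's actual API): blocklisted hosts get NXDOMAIN, and later whitelist entries override earlier blocklist entries.

```python
# Sketch of DNS66-style blocking: hosts from blocklists are answered
# with NXDOMAIN, whitelisted hosts (later in the list) are passed through.
blocked = {"ads.example.com", "tracker.example.net"}   # from hosts files
whitelisted = {"tracker.example.net"}                  # later entries win

def resolve_action(host):
    """Return 'NXDOMAIN' for blocked hosts, 'FORWARD' otherwise."""
    if host in whitelisted:
        return "FORWARD"
    if host in blocked:
        return "NXDOMAIN"
    return "FORWARD"
```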

You can find DNS66 here:

F-Droid is the recommended source to install from. DNS66 is licensed under the GNU GPL 3, or (mostly) any later version.

Implementation Notes

DNS66's core logic is based on another project, dbrodie/AdBuster, which arguably has the cooler name. I translated that from Kotlin to Java, and cleaned up the implementation a bit:

All work is done in a single thread, using poll() to detect when to read or write. Each DNS request is sent via its own UDP socket, and poll() watches all of the UDP sockets, the device socket (for the VPN's tun device), and a pipe (so we can interrupt the poll at any time by closing the pipe).
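The interrupt-pipe trick can be illustrated with Python's selectors module (DNS66 uses poll() from Java; this is only an analogy, not its code): registering a pipe alongside the sockets lets another thread wake the blocked poll at any time by writing to or closing the pipe.

```python
import os
import selectors

# One selector watches every descriptor the event loop cares about:
# in DNS66's case the UDP sockets, the tun device, and an interrupt pipe.
sel = selectors.DefaultSelector()
r_fd, w_fd = os.pipe()
sel.register(r_fd, selectors.EVENT_READ, data="interrupt")

# Writing to (or closing) the write end makes the read end readable,
# which wakes the blocked select/poll call immediately.
os.write(w_fd, b"x")
events = sel.select(timeout=1)
ready = [key.data for key, _ in events]
```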

We literally redirect your DNS servers: all traffic to your configured DNS server's address is routed to the VPN. The VPN only understands DNS traffic, though, so you might have trouble if your DNS server also happens to serve something else. I plan to change that at some point by emulating multiple DNS servers with fake IPs, but this was a first step to get it working with fallback: Android can now transparently fall back to other DNS servers without having to be aware that they are routed via the VPN.

We also need to deal with timing out queries that we never received an answer for: DNS66 stores each query in a LinkedHashMap and overrides the removeEldestEntry() method to remove the eldest entry if it is older than 10 seconds or there are more than 1024 pending queries. This means at most one stale request is timed out per new request, but it eventually cleans up fine.
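The Java technique (a LinkedHashMap whose removeEldestEntry() hook evicts on each insert) can be mimicked in Python with an OrderedDict. The 10-second and 1024-entry limits follow the text; the helper names are mine:

```python
import time
from collections import OrderedDict

MAX_PENDING = 1024
MAX_AGE_SECONDS = 10.0

pending = OrderedDict()  # query id -> insertion timestamp

def add_query(query_id, now=None):
    """Record a pending query, then evict the eldest entry if it is
    stale or the table is over capacity (mirrors removeEldestEntry():
    at most one eviction per insert)."""
    now = time.monotonic() if now is None else now
    pending[query_id] = now
    eldest_id = next(iter(pending))
    eldest_ts = pending[eldest_id]
    if len(pending) > MAX_PENDING or now - eldest_ts > MAX_AGE_SECONDS:
        del pending[eldest_id]
```

Because eviction only happens on insert, a quiet period leaves stale entries sitting in the map, exactly the "eventually cleans up fine" behavior described above.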

Filed under: Android, Uncategorized

25 Oct 2016 4:20pm GMT

24 Oct 2016

feedPlanet Ubuntu

Eric Hammond: AWS Git-backed Static Website

with automatic updates on changes in CodeCommit Git repository

A number of CloudFormation templates have been published that generate AWS infrastructure to support a static website. I'll toss another one into the ring with a feature I haven't seen yet.

In this stack, changes to the CodeCommit Git repository automatically trigger an update to the content served by the static website. This automatic update is performed using CodePipeline and AWS Lambda.

This stack also includes features like HTTPS (with a free certificate), www redirect, email notification of Git updates, complete DNS support, web site access logs, infinite scaling, zero maintenance, and low cost.

Here is an architecture diagram outlining the various AWS services used in this stack. The arrows indicate the major direction of data flow. The heavy arrows indicate the flow of website content.

CloudFormation stack architecture diagram

Sure, this does look a bit complicated for something as simple as a static web site. But remember, this is all set up for you with a simple aws-cli command (or AWS Web Console button push) and there is nothing you need to maintain except the web site content in a Git repository. All of the AWS components are managed, scaled, replicated, protected, monitored, and repaired by Amazon.

The input to the CloudFormation stack includes:

The output of the CloudFormation stack includes:

Though I created this primarily as a proof of concept and demonstration of some nice CloudFormation and AWS service features, this stack is suitable for use in a production environment if its features match your requirements.

Speaking of which, no CloudFormation template meets everybody's needs. For example, this one conveniently provides complete DNS nameservers for your domain. However, that also means that it assumes you only want a static website for your domain name and nothing else. If you need email or other services associated with the domain, you will need to modify the CloudFormation template, or use another approach.

How to run

To fire up an AWS Git-backed Static Website CloudFormation stack, you can click this button and fill out a couple input fields in the AWS console:

Launch CloudFormation stack

I have provided copy+paste aws-cli commands in the GitHub repository. The GitHub repository provides all the source for this stack including the AWS Lambda function that syncs Git repository content to the website S3 bucket:

AWS Git-backed Static Website GitHub repo

If you have aws-cli set up, you might find it easier to use the provided commands than the AWS web console.

When the stack starts up, two email messages will be sent to the address associated with your domain's registration and one will be sent to your AWS account address. Open each email and approve these requests.

The CloudFormation stack will be stuck until the ACM certificates are approved. The CloudFront distributions are created afterwards and can take over 30 minutes to complete.

Once the stack completes, get the nameservers for the Route 53 hosted zone, and set these in your domain's registrar. Get the CodeCommit endpoint URL and use this to clone the Git repository. There are convenient aws-cli commands to perform these functions in the project's GitHub repository linked above.

AWS Services

The stack uses a number of AWS services including:


As far as I can tell, this CloudFormation stack currently costs around $0.51 per month in a new AWS account with nothing else running, a reasonable amount of storage for the web site content, and up to 5 Git users. This minimal cost is due to there being no free tier for Route 53 at the moment.

If you have too many GB of content, too many tens of thousands of requests, etc., you may start to see additional pennies being added to your costs.

If you stop and start the stack, it will cost an additional $1 each time because of the odd CodePipeline pricing structure. See the AWS pricing guides for complete details, and monitor your account spending closely.


Thanks to Mitch Garnaat for pointing me in the right direction for getting the aws-cli into an AWS Lambda function. This was important because "aws s3 sync" is much smarter than the other currently available options for syncing website content with S3.

Thanks to AWS Community Hero Onur Salk for pointing me in the direction of CodePipeline for triggering AWS Lambda functions off of CodeCommit changes.

Thanks to Ryan Brown for already submitting a pull request with lots of nice cleanup of the CloudFormation template, teaching me a few things in the process.

Some other resources you might find useful:

Creating a Static Website Using a Custom Domain - Amazon Web Services

S3 Static Website with CloudFront and Route 53 - AWS Sysadmin

Continuous Delivery with AWS CodePipeline - Onur Salk

Automate CodeCommit and CodePipeline in AWS CloudFormation - Stelligent

Running AWS Lambda Functions in AWS CodePipeline using CloudFormation - Stelligent

You are welcome to use, copy, and fork this repository. I would recommend contacting me before spending time on pull requests, as I have specific limited goals for this stack and don't plan to extend its features much more.

Original article and comments: https://alestic.com/2016/10/aws-git-backed-static-website/

24 Oct 2016 10:00am GMT

Michael Hall: Make your world a better place

For much of the past year I have been working on a game. No, not just a game; I've been working on change. There are 122 million children in the world today who can't read or write[1]. They will grow up to join the 775 million adults who can't. Together that's almost one billion people who are effectively shut off from the information age. How many of them could make the world a better place, given even half a chance?

I've been interested in the intersection of open source and education for underprivileged children for quite some time. I even built a Linux distro toward that end. So when Jono Bacon told me about a new XPRIZE contest to build open source software for teaching literacy skills to children in Africa, of course I was interested. And now, a little more than a year later, I have a game that I firmly believe can deliver that world-changing ambition.

This is where you come in. Don't worry, I'm not going to ask you to help build my contest entry, though it is already open source (GPLv3) and on github. But the contest entries only cover English and Kiswahili, which is going to leave a very large part of the illiterate population out. That's not enough; to change the world, it needs to be available to the world. Additional languages won't be part of the contest entry, but they will be a part of making the world a better place.

I designed Phoenicia from the beginning to support as many languages as possible, with as little additional work as possible. But while it may be capable of handling multiple languages, I sadly am not. So I'm reaching out to the community to help me bring literacy to millions more children than I can reach by myself. Children who speak your language, live in your community, who may be your own neighbors.

You don't need to be a programmer, in fact there shouldn't be any programming work needed at all. What I need are early reader words, for each language. From there I can show you how to build a locale pack, record audio help, and add any new artwork needed to support your localization. I'm especially looking to those of you who speak French, Spanish and Portuguese, as those languages will carry Phoenicia into many countries where childhood illiteracy is still a major problem.

24 Oct 2016 9:00am GMT

23 Oct 2016

feedPlanet Ubuntu

Valorie Zimmerman: Happy 20th birthday, KDE!

KDE turned twenty recently, which seems significant in a world that seems to change so fast. Yet somehow we stay relevant, and excited to continue to build a better future.

Lydia asked recently on the KDE-Community list what we were most proud of.

For the KDE community, I'm proud that we continue to grow and change, while remaining friendly, welcoming, and ever more diverse. Our software shows that. As we change and update, some things get left behind, only to re-appear in fresh new ways. And as people get new jobs, or build new families, sometimes they disappear for a while as well. And yet we keep growing, attracting students, hobbyist programmers, writers, artists, translators, designers and community people, and sometimes we see former contributors re-appear too. See more about that in our 20 Years of KDE Timeline.

I'm proud that we develop whole new projects within the community. Recently Peruse, Atelier, Minuet, WikitoLearn, KDEConnect, Krita, Plasma Mobile and neon have all made the news. We welcome projects from outside as well, such as Gcompris, Kdenlive, and the new KDE Store. And our established projects continue to grow and extend. I've been delighted to hear about Calligra Author, for instance, which is for those who want to write and publish a book or article in pdf or epub. Gcompris has long been available for Windows and Mac, but now you can get it on your Android phone or tablet. Marble is on Android, and I hear that Kstars will be available soon.

I'm proud of how established projects continue to grow and attract new developers. The Plasma team, hand-in-hand with the Visual Design Group, continues to blow testers and users away with power, beauty and simplicity on the desktop. Marble, Kdevelop, Konsole, Kate, KDE-PIM, KDElibs (now Frameworks), KOffice (now Calligra), KDE-Edu, KDE-Games, Digikam, kdevplatform, Okular, Konversation and Yakuake, just to mention a few, continue to grow as projects, stay relevant and often be offered on new platforms. Heck, KDE 1 runs on modern computer systems!

For myself, I'm proud of how the KDE community welcomed in a grandma, a non-coder, and how I'm valued as part of the KDE Student Programs team, and the Community Working Group, and as an author and editor. Season of KDE, Google Summer of Code, and now Google Code-in all work to integrate new people into the community, and give more experienced developers a way to share their knowledge as mentors. I'm proud of how the Amarok handbook we developed on the Userbase wiki has shown the way to other open user documentation. And thanks to the wonderful documentation and translation teams, the help is available to millions of people around the world, in multiple forms.

I'm proud to be part of the e.V., the group supporting the fantastic community that creates the software we offer freely to the world.

23 Oct 2016 5:22am GMT

20 Oct 2016

feedPlanet Ubuntu

Kees Cook: CVE-2016-5195

My prior post showed my research from earlier in the year at the 2016 Linux Security Summit on kernel security flaw lifetimes. Now that CVE-2016-5195 is public, here are updated graphs and statistics. Due to their rarity, the Critical bug average has now jumped from 3.3 years to 5.2 years. There aren't many, but, as I mentioned, they still exist, whether you know about them or not. CVE-2016-5195 was sitting on everyone's machine when I gave my LSS talk, and there are still other flaws on all our Linux machines right now. (And, I should note, this problem is not unique to Linux.) Dealing with knowing that there are always going to be bugs present requires proactive kernel self-protection (to minimize the effects of possible flaws) and vendors dedicated to updating their devices regularly and quickly (to keep the exposure window minimized once a flaw is widely known).

So, here are the graphs updated for the 668 CVEs known today:

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

20 Oct 2016 11:02pm GMT