15 Oct 2018

Planet Ubuntu

Stuart Langridge: Print to Google Drive in a non-Gnome desktop

Jeremy Bicha wrote up a little-known Ubuntu feature: "printing" directly to a PDF in Google Drive. I rather wanted this, but I don't run the Gnome desktop, so I thought I might be out of luck. But no! It works fine on my Ubuntu MATE desktop too. A couple of extra tweaks are required, though. This is unfortunately a bit technical, but it should only need setting up once.

You need the Gnome Control Centre and Gnome Online Accounts installed, if you don't have them already, as well as the Google Cloud Print extension that Jeremy mentions. From a terminal, run sudo apt install gnome-control-center gnome-online-accounts cpdb-backend-gcp.

Next, you need to launch the Control Centre, but it doesn't like you if you're not running the Gnome desktop. So, we lie to it. In that terminal, run XDG_CURRENT_DESKTOP=GNOME gnome-control-center online-accounts. This should correctly start the Control Centre, showing the online accounts. Sign in to your Google account using that window. (I only have Files and Printers selected; you don't need Mail and Calendars and so on to get this printing working.)

Then… it all works. From now on, when you go to print something, the print dialogue will, after a couple of seconds, show a new entry: "Save to Google Drive". Choose that, and your document will "print" to a PDF stored in Google Drive. Easy peasy. Nice one Jeremy for the write-up. It'd be neat if Ubuntu MATE could integrate this a little more tightly.

15 Oct 2018 10:31pm GMT

The Fridge: Ubuntu Weekly Newsletter Issue 549

Welcome to the Ubuntu Weekly Newsletter, Issue 549 for the week of October 7 - 13, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

15 Oct 2018 10:04pm GMT

Michael Zanetti: nymea

It's been quite a while since my last post. Lots of things have changed around here, but even though I am not actively developing for Ubuntu itself any more, it doesn't mean I've left the Ubuntu and FOSS world in general. In fact, I've been pretty busy hacking on some more free software goodness. A few of you have surely heard about it, but for the most part, allow me to introduce you to nymea.

nymea is an IoT platform mainly based on Ubuntu. Well, that's the platform we develop on; we also provide packages for Debian and snaps for all the platforms that support snaps.

It consists of 3 parts: nymea:core, nymea:app and nymea:cloud.
The purpose of this project is to enable easy integration of various things with each other. Being plugin-based, it lets all sorts of things (devices, online services…) work together.

Practically speaking this means two things:

- It allows users to have a completely open source smart home setup in which everything, including the smartness, is processed offline. Turning your living room lights on when it gets dark? nymea will do it, even without an internet connection. It comes with nymea:core, to be installed on a gateway device in your home (a Raspberry Pi, or any other device that can run Ubuntu/Debian or snapd), and nymea:app, available in app stores and also as a desktop app in the snap store.

- It delivers a developer platform for device makers. Looking for a solution that easily lets you make your device smart? Ubuntu Core + nymea:core together will get you sorted in no time, with an app for your "thing" and the ability to react to just about any input it gets.

nymea:cloud is an optional addition to nymea:core and nymea:app, and extends the nymea system with features like remote connection, push notifications, or Alexa integration (not released yet).

So if that got you curious, check out https://wiki.nymea.io (and perhaps https://nymea.io in general), or simply install nymea and nymea-app and get going (on snap systems you need to connect some plugs and interfaces for all the bits and pieces to work; alternatively, we have a PPA ready for use too).

15 Oct 2018 5:01pm GMT

Sean Davis: Xfce Screensaver 0.1.0 Released

I am pleased to announce the release of Xfce Screensaver (xfce4-screensaver) 0.1.0! This is an early release targeted to testers and translators. Bugs and patches welcome!


Xfce Screensaver is a screen saver and locker that aims to have simple, sane, secure defaults and be well integrated with the Xfce desktop.

It is a port of MATE Screensaver, itself a port of GNOME Screensaver. It has been tightly integrated with the Xfce desktop, utilizing Xfce libraries and the Xfconf configuration backend.

Homepage · Bugzilla · Git






Please be aware that this is alpha-quality software. It is not currently recommended for use in production machines. I invite you to test it, report bugs, provide feedback, and submit patches so we can get it ready for the world.

Source tarball (md5, sha1, sha256)

15 Oct 2018 10:51am GMT

14 Oct 2018


Tiago Carrondo: S01E05 – Vamos às compras!

In this episode we talked about Pinebooks, the Librem Key, SolusOS, and much more. An episode packed with relevant information on the topics that have been dominating the news. You know the drill: listen, subscribe, and share!

Attribution and licenses

Image: Photo on Visualhunt.com

The theme music is "Won't see it comin' (Feat Aequality & N'sorte d'autruche)" by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license; the full text can be read here. We are open to licensing it to permit other types of use; contact us for validation and authorization.

14 Oct 2018 11:43pm GMT

Jeremy Bicha: Google Cloud Print in Ubuntu

There is an interesting hidden feature available in Ubuntu 18.04 LTS and newer. To enable this feature, first install cpdb-backend-gcp.

sudo apt install cpdb-backend-gcp

Make sure you are signed in to Google with GNOME Online Accounts. Open the Settings app (gnome-control-center) to the Online Accounts page. If your Google account is near the top, above the Add an account section, then you're all set.

Currently, only LibreOffice is supported. Hopefully, for 19.04, other GTK+ apps will be able to use the feature.

This feature was developed by Nilanjana Lodh and Abhijeet Dubey when they were Google Summer of Code 2017 participants. Their mentors were Till Kamppeter, Aveek Basu, and Felipe Borges.

Till has been trying to get this feature installed by default in Ubuntu since 18.04 LTS, but it looks like it won't make it in until 19.04.

I haven't seen this feature packaged in any other Linux distros yet. That might be because people don't know about this feature, so that's why I'm posting about it today! If you are a distro packager, the 3 packages you need are cpdb-libs, cpdb-backend-gcp, and cpdb-backend-cups. The final package enables easy printing to any IPP printer. (I didn't mention it earlier because I believe Ubuntu 18.04 LTS already supports that feature through a different package.)

Save to Google Drive

In my original blog post, I confused the cpdb feature with a feature that already exists in GTK3 built with GNOME Online Accounts support. This should already work on most distros.

When you print a document, there will be an extra Save to Google Drive option. Saving to Google Drive saves a PDF of your document to your Google Drive account.

This post was edited on October 16 to mention that cpdb only supports LibreOffice now and that Save to Google Drive is a GTK3 feature instead.

October 17: Please see Felipe's comments. It turns out that even Google Cloud Print works fine in distros with recent GTK3. The point of the cpdb feature is to make this work in apps that don't use GTK3. So I guess the big benefit now is that you can use Google Cloud Print or Save to Google Drive from LibreOffice.

14 Oct 2018 2:31pm GMT

Kubuntu General News: Please help test our initial Cosmic 18.10 RC ISOs

The Ubuntu release team has announced the first RC test ISO builds for all 18.10 flavours.

Please help us test these and subsequent RC builds, so that we can have an amazing and well tested release in the coming week.

As noted below, the initial builds will NOT be the final ones.

Over the next few hours, builds will start popping on the Cosmic Final
milestone page[1] on the ISO tracker.  These builds are not final.
We're still waiting on a few more fixes, a few things to migrate, etc.
I've intentionally not updated base-files or the ISO labels to reflect
the release status (so please don't file bugs about those).

What there are, however, are "close enough" for people to be testing in
anger, filing bugs, fixing bugs, iterating image builds, and testing
all over again.  So, please, don't wait until Wednesday night to test,
testing just before release is TOO LATE to get anything fixed.  Get out
there, grab your favourite ISO, beat it up, report bugs, escalate bugs,
get things fixed, respin (if you're a flavour lead with access), and
test, test... And test.  Did I mention testing?  Please[2] test.


... Adam

[1] http://iso.qa.ubuntu.com/qatracker/milestones/397/builds
[2] Please.

Downloads for RC builds can be found by following the link after clicking through to 'Cosmic Final' on the Ubuntu ISO tracker. Please report test case results if you have an Ubuntu SSO account (or are prepared to make one). Feedback can also be given via our normal email lists, IRC, forums, etc.

Upgrade testing from 18.04 in installed systems (VM or otherwise) is also a very useful way to help prepare for the new release. Instructions for upgrade can be found on the Ubuntu help wiki.

Ubuntu ISO tracker: http://iso.qa.ubuntu.com/qatracker/
Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
Kubuntu 18.10 Upgrade instructions: https://help.ubuntu.com/community/CosmicUpgrades/Kubuntu

14 Oct 2018 8:46am GMT

Lubuntu Blog: Help test Lubuntu 18.10 Release Candidates!

Adam Conrad always does a great job in stating that people should test the Release Candidates. Here's what he has said this time: Over the next few hours, builds will start popping on the Cosmic Final milestone page[1] on the ISO tracker. These builds are not final. We're still waiting on a few more fixes, […]

14 Oct 2018 12:04am GMT

13 Oct 2018


Julian Andres Klode: The demise of G+ and return to blogging (w/ mastodon integration)

I'm back to blogging, after shutting down my wordpress.com hosted blog in spring. This time it is fully privacy-aware, self-hosted, and integrated with Mastodon.

Let's talk details: In spring, I shut down my wordpress.com hosted blog, due to concerns about GDPR implications with comment hosting, ads, and the like. I'd like to apologize for using that; back when I started (in 2007), it was the easiest way to get into blogging. Please forgive me for subjecting you to that!

Recently, Google announced the end of Google+. As some of you might know, I posted a lot of medium-long posts there, rather than doing blog posts; especially after I disabled the wordpress site.

With the end of Google+, I want to try something new: I'll host longer pieces on this blog, and post shorter messages on @juliank@mastodon.social. If you follow the Mastodon account, you will see toots for each new blog post as well, linking to the blog post.

Mastodon integration and privacy

Now comes the interesting part: If you reply to the toot, your reply will be shown on the blog itself. This works with a tiny bit of JavaScript that talks to a simple server-side script, which finds toots from me mentioning the blog post and then the replies to them.

This protects your privacy, because mastodon.social does not see which blog post you are looking at, because it is contacted by the server, not by you. Rendering avatars requires loading images from mastodon.social's file server, however - to improve your privacy, all avatars are loaded with referrerpolicy='no-referrer', so assuming your browser is half-way sane, it should not be telling mastodon.social which post you visited either. In fact, the entire domain also sets Referrer-Policy: no-referrer as an http header, so any link you follow will not have a referrer set.
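As a rough illustration of the server-side lookup (not the actual script), replies to a toot can be fetched from Mastodon's public REST API: a status's "context" contains its descendants, i.e. the replies. The status ID below is a hypothetical placeholder:

```shell
# Build the context URL for a toot; GET-ing it returns ancestors and
# descendants (the replies) as JSON. The status ID here is made up.
INSTANCE="https://mastodon.social"
STATUS_ID="100000000000000000"  # hypothetical placeholder
URL="$INSTANCE/api/v1/statuses/$STATUS_ID/context"
echo "$URL"
# The real integration fetches this server-side, so the reader's
# browser never contacts mastodon.social directly, e.g.:
# curl -s "$URL" | jq '.descendants'
```

Because the request happens on the blog's server, only the server's address (not the reader's) ever appears in mastodon.social's logs.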

The integration was originally written by @bjoern@mastodon.social - I have done some moderate improvements to adapt it to my theme, make it more reusable, and replace and extend the caching done in a JSON file with a Redis database.

Source code

This blog is free software; generated by the Hugo snap. All source code for it is available:

(Yes I am aware that hosting the repositories on GitHub is a bit ironic given the whole focus on privacy and self-hosting).

The theme makes use of Hugo pipes to minify and fingerprint JavaScript, and vendorizes all dependencies instead of embedding CDN links, to, again, protect your privacy.

Future work

I think I want to make the theme dark, to be more friendly to the eyes. I also might want to make the mastodon integration a bit more friendly to use. And I want to get rid of jQuery, it's only used for a handful of calls in the Mastodon integration JavaScript.

If you have any other idea for improvements, feel free to join the conversation in the mastodon toot, send me an email, or open an issue at the github projects.

Closing thoughts

I think the end of Google+ will be an interesting time, requiring a lot of people in the open source world to replace one of their main communication channels with a different approach.

Mastodon and Diaspora are both in the race, and I fear the community will split or everyone will have two accounts in the end. I personally think that Mastodon + syndicated blogs provide a good balance: You can quickly write short posts (up to 500 characters), and you can host long articles on your own and link to them.

I hope that one day diaspora* and mastodon federate together. If we end up with one federated network that would be the best outcome.

13 Oct 2018 9:03pm GMT

Jeremy Bicha: Shutter removed from Debian & Ubuntu

This week, the popular screenshot app Shutter was removed from Debian Unstable & Ubuntu 18.10. (It had already been removed from Debian "Buster" 6 months ago and some of its "optional" dependencies had already been removed from Ubuntu 18.04 LTS).

Shutter will need to be ported to gtk3 before it can return to Debian. (Ideally, it would support Wayland desktops too but that's not a blocker for inclusion in Debian.)

See the Debian bug for more discussion.

I am told that flameshot is a nice well-maintained screenshot app.

I believe Snap or Flatpak are great ways to make apps that use obsolete libraries available on modern distros that can no longer keep those libraries around. There isn't a Snap or Flatpak version of Shutter yet, so hopefully someone interested in that will help create one.

13 Oct 2018 6:29pm GMT

12 Oct 2018


Ubuntu Podcast from the UK LoCo: S11E31 – Thirty-One Dates in Thirty-One Days

This week Ubuntu Podcast debuts on Spotify and re-embraces Mastodon. We've been unboxing the GPD Pocket 2 and building a Clockwork Pi. We discuss Plex releasing as a Snap, Microsoft joining the OIN, Minecraft open-sourcing some libraries, Google axing Google+, Etcher (allegedly) not honouring privacy settings, plus we also round up community news and events.

It's Season 11 Episode 31 of the Ubuntu Podcast! Alan Pope and Martin Wimpress are connected and speaking to your brain.

In this week's show:

That's all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there's a topic you'd like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

12 Oct 2018 2:00pm GMT

David Tomaschik: Course Review: Adversarial Attacks and Hunt Teaming

At DerbyCon 8, I had the opportunity to take the "Adversarial Attacks and Hunt Teaming" course presented by Ben Ten and Larry Spohn from TrustedSec. I went into the course hoping to get a refresher on the latest techniques for Windows domains (I do mostly Linux, IoT & Web Apps at work) as well as to get a better understanding of how hunt teaming is done. (As a Red Teamer, I feel understanding the work done by the blue team is critical to better success and reduced detection.) From the course description:

This course is completely hands-on, focusing on the latest attack techniques and building a defense strategy around them. This workshop will cover both red and blue team efforts and provide methods for understanding how to best detect threats in an enterprise. It will give penetration testers the ability to learn the newest techniques, as well as teach blue teamers how to defend against them.

The Good

The course was definitely hands-on, which I really appreciate as someone who learns by "doing" rather than by listening to someone talk. Both instructors were obviously knowledgeable and able to answer questions about how tools and techniques work. It's really valuable to understand why things work instead of just running commands blindly. Having the why lets you pivot your knowledge to other tools when your first choice isn't working for some reason. (AV, endpoint protection, etc.)

Both instructors are strong teachers with an obvious passion for what they do. They presented the material well and mostly at a reasonable pace. They also tag-team well: while one is presenting, the other can help students having issues without delaying the entire class.

The final lab/exam was really good. We were challenged to get Domain Admin on a network we hadn't seen so far, with the top 5 finishers receiving challenge coins. Despite how little I do with Windows, I was happy to be one of the recipients!

TrustedSec Coin

The Bad

The course began quite slowly for my experience level. The first half-day or so involved basic reconnaissance with nmap and an introduction to Metasploit. While I understand that not everyone has experience with these tools, the course description did not make me feel like it would be as basic as was presented.

There was a section on physical attacks that, while extremely interesting, was not really a good fit for the rest of the course material. It was too brief to really learn how to execute these attacks from a Red Team perspective, and physical security is often out of scope for the Blue Team (or handled by a different group). Other than entertainment value, I do not feel like it added anything to the course.

I would have liked a little more "Blue" content. The hunt-teaming section was mostly about configuring Windows Logging and pointing it to an ELK server for aggregation and analysis. Again, this was interesting, but we did not dive into other sources of data (network firewalls, non-Windows systems, etc.) like I hoped we would. It also did not spend any time discussing how to relate different events, only how to log the events you would want to look for.


Overall, I think this is a good course presented by excellent instructors. If you've done an OSCP course or even basic penetration testing, expect some duplication in the first day or so, but there will still be techniques that you might not have seen (or had the chance to try out) before. This was my first time trying the "Kerberoasting" attack, so it was nice to be able to do it hands-on. All in all, a solid course, but I'd generally recommend it to those early in their careers or transitioning to an offensive security role.

12 Oct 2018 7:00am GMT

10 Oct 2018


Simos Xenitellis: How to create a minimal container image for LXC/LXD with distrobuilder

In the previous post,

Using distrobuilder to create container images for LXC and LXD

we saw how to build distrobuilder, then use it to create a LXD container image for Ubuntu. We used one of the existing configuration files for an Ubuntu container image.

In this post, we are going to see how to compose the YAML configuration files that describe what the container image will look like. The aim of this post is to deal with a minimal configuration file to create a container image for Alpine Linux. A future post will deal with a more complete configuration file.

Creating a minimal configuration for a container image

Here is the minimal configuration for an Alpine Linux container image. Note that we have omitted some parts that would make the container more useful (namespaces, etc.). The containers from this container image will still work for our humble purposes.

image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk
Save this as a file with a name such as myalpine.yaml, and then build the container image. It takes a couple of seconds to build. We will come back to the minimal configuration and explain it in detail in the next section.
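For instance, the file can be created straight from the terminal with a heredoc. This sketch uses the sectioned image/source/packages layout that distrobuilder's configuration format expects:

```shell
# Write the minimal distrobuilder configuration to myalpine.yaml
cat > myalpine.yaml <<'EOF'
image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk
EOF

# Quick sanity check that the file was written
grep 'downloader:' myalpine.yaml
```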

$ sudo $HOME/go/bin/distrobuilder build-lxd myalpine.yaml 
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
v3.8.1-27-g42946288bd [http://dl-cdn.alpinelinux.org/alpine/v3.8/main]
v3.8.1-23-ga2d8d72222 [http://dl-cdn.alpinelinux.org/alpine/v3.8/community]
OK: 9539 distinct packages available
Parallel mksquashfs: Using 4 processors
Creating 4.0 filesystem on /home/username/ContainerImages/minimal/rootfs.squashfs, block size 131072.
[==================================================|] 90/90 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
duplicates are removed
Filesystem size 2093.68 Kbytes (2.04 Mbytes)
48.30% of uncompressed filesystem size (4334.32 Kbytes)
Inode table size 3010 bytes (2.94 Kbytes)
17.41% of uncompressed inode table size (17290 bytes)
Directory table size 4404 bytes (4.30 Kbytes)
54.01% of uncompressed directory table size (8154 bytes)
Number of duplicate files found 5
Number of inodes 481
Number of files 64
Number of fragments 5
Number of symbolic links 329
Number of device nodes 1
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 87
Number of ids (unique uids + gids) 2
Number of uids 1
root (0)
Number of gids 2
root (0)
shadow (42)

And here is the container image. The size of the container image is about 2MB.

$ ls -l
total 2108
-rw-r--r-- 1 root root 364 Oct 10 20:30 lxd.tar.xz
-rw-rw-r-- 1 user user 287 Oct 10 20:30 myalpine.yaml
-rw-r--r-- 1 root root 2146304 Oct 10 20:30 rootfs.squashfs

Let's import it into our LXD installation.

$ lxc image import --alias myminimal lxd.tar.xz rootfs.squashfs 
Image imported with fingerprint: ee9208767e745bb980a074006fa462f6878e763539c439e6bfa34c029cfc318b

And now launch a container from this container image.

$ lxc launch myminimal mycontainer
Creating mycontainer
Starting mycontainer

Let's see the container running. It's running, but did not get an IP address. That's part of the cost-cutting in the initial minimal configuration file.

$ lxc list mycontainer
+-------------+---------+------+------+
| NAME        | STATE   | IPV4 | IPV6 |
+-------------+---------+------+------+
| mycontainer | RUNNING |      |      |
+-------------+---------+------+------+

Let's get a shell in the container and start doing things! First, set up the network configuration.

$ lxc exec mycontainer -- sh
~ # pwd
~ # cat /etc/network/interfaces
cat: can't open '/etc/network/interfaces': No such file or directory
~ # echo "auto eth0" > /etc/network/interfaces
~ # echo "iface eth0 inet dhcp" >> /etc/network/interfaces
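For reference, those two echo commands leave /etc/network/interfaces with a classic ifupdown stanza. Recreated here against a scratch file so the end result is visible at a glance:

```shell
# Same content the container session builds up line by line
cat > interfaces.example <<'EOF'
auto eth0
iface eth0 inet dhcp
EOF
cat interfaces.example
```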

Then, get an IP address using DHCP.

~ # ifup eth0
udhcpc: started, v1.28.4
udhcpc: sending discover
udhcpc: sending discover
udhcpc: sending select for
udhcpc: lease of obtained, lease time 3600

We got a lease, but for some reason the network was not configured. Both ifconfig and route showed no configuration. So, we complete the network configuration manually. And it works, we have access to the Internet!

~ # ifconfig eth0 up
~ # route add -net default gw
~ # ping -c 1
PING ( 56 data bytes
64 bytes from seq=0 ttl=120 time=17.451 ms
--- ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 17.451/17.451/17.451 ms
~ # exit

Let's clean up and start studying the configuration file. We force-delete the container, and then delete the container image.

$ lxc delete --force mycontainer
$ lxc image delete myminimal

Understanding the configuration file of a container image

Here, again, is the configuration file for a minimal Alpine container image. It has three sections:

  1. image, with information about the image. We can put anything for the description and distribution name. The release version, though, should exist.
  2. source, which describes where to get the image, ISO, or packages of the distribution. The downloader is a plugin in distrobuilder that knows how to get the appropriate files, as long as it knows the URL and the release version. The url is the URL prefix of the location with the files. keys and keyserver are used to digitally verify the authenticity of the files.
  3. packages, which indicates the plugin that knows how to deal with the specific package manager of the distribution. In general, you can also indicate here which additional packages to install, which to remove and which to update.
image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk

The downloader and url go hand in hand. The URL is the prefix for the repository that the downloader will use to get the necessary files.

The keys are necessary to verify the authenticity of the files. The keyserver is used to download the actual public keys for the IDs specified in keys. You could very well not specify a keyserver, and distrobuilder would request the keys from the root PGP servers. However, those servers are often overloaded and the attempt can easily fail. It has happened to me several times, so I now explicitly use the Ubuntu keyserver.


We have seen how to use a minimal configuration file for an Alpine container image. In future posts, we are going to see how to create more complete configuration files.

Simos Xenitellis

10 Oct 2018 8:12pm GMT

09 Oct 2018


Benjamin Mako Hill: What we lose when we move from social to market exchange

Couchsurfing and Airbnb are websites that connect people with an extra guest room or couch with random strangers on the Internet who are looking for a place to stay. Although Couchsurfing predates Airbnb by about five years, the two sites are designed to help people do the same basic thing and they work in extremely similar ways. They differ, however, in one crucial respect. On Couchsurfing, the exchange of money in return for hosting is explicitly banned. In other words, Couchsurfing only supports the social exchange of hospitality. On Airbnb, users must use money: the website is a market on which people can buy and sell hospitality.

Comparison of yearly sign-ups of trusted hosts on Couchsurfing and Airbnb. Hosts are "trusted" when they have any form of references or verification on Couchsurfing and at least one review on Airbnb.

The figure above compares the number of people with at least some trust or verification on both Couchsurfing and Airbnb based on when each user signed up. The picture, as I have argued elsewhere, reflects a broader pattern that has occurred on the web over the last 15 years. Increasingly, social-based systems of production and exchange, many like Couchsurfing created during the first decade of the Internet boom, are being supplanted and eclipsed by similar market-based players like Airbnb.

In a paper led by Max Klein that was recently published and will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW), which will be held in Jersey City in early November 2018, we sought to provide a window into what this change means and what might be at stake. At the core of our research was a set of interviews we conducted with "dual-users" (i.e., users experienced on both Couchsurfing and Airbnb). Analyses of these interviews pointed to three major differences, which we explored quantitatively using public data from the two sites.

First, we found that users felt that hosting on Airbnb appears to require higher quality services than Couchsurfing. For example, we found that people who at some point only hosted on Couchsurfing often said that they did not host on Airbnb because they felt that their homes weren't of sufficient quality. One participant explained that:

"I always wanted to host on Airbnb but I didn't actually have a bedroom that I felt would be sufficient for guests who are paying for it."

Another interviewee said:

"If I were to be paying for it, I'd expect a nice stay. This is why I never Airbnb-hosted before, because recently I couldn't enable that [kind of hosting]."

We conducted a quantitative analysis of rates of Airbnb and Couchsurfing hosting in different cities in the United States and found that median home prices are positively related to the number of per-capita Airbnb hosts and negatively related to the number of Couchsurfing hosts. Our exploratory models predicted that for each $100,000 increase in a city's median house price, there will be about 43.4 more Airbnb hosts per 100,000 citizens, and 3.8 fewer hosts on Couchsurfing.

A second major theme we identified was that, while Couchsurfing emphasizes people, Airbnb places more emphasis on places. One of our participants explained:

"People who go on Airbnb, they are looking for a specific goal, a specific service, expecting the place is going to be clean […] the water isn't leaking from the sink. I know people who do Couchsurfing even though they could definitely afford to use Airbnb every time they travel, because they want that human experience."

In a follow-up quantitative analysis we conducted of the profile text from hosts on the two websites with a commonly-used system for text analysis called LIWC, we found that, compared to Couchsurfing, a lower proportion of words in Airbnb profiles were classified as being about people while a larger proportion of words were classified as being about places.

Finally, our research suggested that although hosts are the powerful parties in exchange on Couchsurfing, social power shifts from hosts to guests on Airbnb. Reflecting a much broader theme in our interviews, one of our participants expressed this concisely, saying:

"On Airbnb the host is trying to attract the guest, whereas on Couchsurfing, it works the other way round. It's the guest that has to make an effort for the host to accept them."

Previous research on Airbnb has shown that guests tend to give their hosts lower ratings than vice versa. Sociologists have suggested that this asymmetry in ratings will tend to reflect the direction of underlying social power balances.

Average sentiment score of reviews on Airbnb and Couchsurfing, separated by direction (guest-to-host or host-to-guest). Error bars show the 95% confidence interval.

We both replicated this finding from previous work and found that, as suggested in our interviews, the relationship is reversed on Couchsurfing. As shown in the figure above, Airbnb guests will typically give a less positive review to their host than vice versa, while on Couchsurfing guests will typically give a more positive review to their host.

As Internet-based hospitality shifts from social systems to markets, we hope that our paper can point to some of what is changing and some of what is lost. For example, our first result suggests that less wealthy participants may be cut out by market-based platforms. Our second theme suggests a shift toward less human-focused modes of interaction brought on by increased "marketization." We see the third theme as something of a silver lining, in that shifting power toward guests was seen by some of our participants as a positive change in terms of safety and trust. Travelers in unfamiliar places are often vulnerable, and shifting power toward guests can be helpful.

Although our study covers only Couchsurfing and Airbnb, we believe that the shift away from social exchange and toward markets has broad implications across the sharing economy. We end our paper by speculating a little about the generalizability of our results. I recently spoke at much greater length about the underlying dynamics driving the shift we describe in my LibrePlanet keynote address.

More details are available in our paper which we have made available as a preprint on our website. The final version is behind a paywall in the ACM digital library.

This blog post, and paper that it describes, is a collaborative project by Maximilian Klein, Jinhao Zhao, Jiajun Ni, Isaac Johnson, Benjamin Mako Hill, and Haiyi Zhu. Versions of this blog post were posted on several of our personal and institutional websites. Support came from GroupLens Research at the University of Minnesota and the Department of Communication at the University of Washington.

09 Oct 2018 5:02pm GMT

Simos Xenitellis: Using distrobuilder to create container images for LXC and LXD

With LXC and LXD you can run system containers, which are containers that behave like a full operating system (like a Virtual Machine does). There are already official container images for most Linux distributions. When you run lxc launch ubuntu:18.04 mycontainer, you are using the ubuntu: repository of container images to launch a container with Ubuntu 18.04.

In this post, we are going to see

  1. an introduction to the tool distrobuilder that creates container images
  2. how to recreate a container image
  3. how to customize a container image

Introduction to distrobuilder

The following are the command line options of distrobuilder. You can use distrobuilder to create container images for both LXC and LXD.

$ distrobuilder
System container image builder for LXC and LXD

Usage:
  distrobuilder [command]

Available Commands:
  build-dir   Build plain rootfs
  build-lxc   Build LXC image from scratch
  build-lxd   Build LXD image from scratch
  help        Help about any command
  pack-lxc    Create LXC image from existing rootfs
  pack-lxd    Create LXD image from existing rootfs

Flags:
      --cache-dir   Cache directory
      --cleanup     Clean up cache directory (default true)
  -h, --help        help for distrobuilder
  -o, --options     Override options (list of key=value)

Use "distrobuilder [command] --help" for more information about a command.

The build-dir command builds the root filesystem (rootfs) of the distribution and stops there. This option makes sense if we plan to make some custom manual changes to the rootfs. We would then need to use either pack-lxc or pack-lxd to package up the rootfs into a container image.
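That build-dir/pack workflow could be scripted along these lines (a sketch; the /etc/motd tweak, the ./rootfs path, and the script name are illustrative examples, not prescribed by distrobuilder):

```shell
# Sketch of the build-dir workflow described above; the motd customization
# and paths are hypothetical examples.
cat > build-custom-image.sh <<'EOF'
#!/bin/sh
set -e
DB="$HOME/go/bin/distrobuilder"
sudo "$DB" build-dir ubuntu.yaml ./rootfs                 # 1. build the plain rootfs
echo "Welcome to my image" | sudo tee ./rootfs/etc/motd   # 2. make manual changes
sudo "$DB" pack-lxd ubuntu.yaml ./rootfs                  # 3. package it as a LXD image
EOF
chmod +x build-custom-image.sh
```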

The build-lxc and build-lxd commands create container images from scratch for LXC or LXD, respectively. Both require a YAML configuration file, which is all they need to produce a container image.


Installation

Currently, there are no binary packages of distrobuilder, so you will need to compile it from source. To do so, first install the Go programming language and some other dependencies. Here are the commands to do this.

sudo apt update
sudo apt install -y golang-go debootstrap rsync gpg squashfs-tools

Second, download the source code of the distrobuilder repository. The source will be placed in $HOME/go/src/github.com/lxc/distrobuilder/. Here is the command to do this.

go get -d -v github.com/lxc/distrobuilder

Third, enter the directory with the source code of distrobuilder and run make to compile the source code. This will generate the executable program distrobuilder, and it will be located at $HOME/go/bin/distrobuilder. Here are the commands to do this.

cd $HOME/go/src/github.com/lxc/distrobuilder
make

Creating a container image

To create a container image, first create a directory where you will be placing the container images, and enter that directory.

mkdir -p $HOME/ContainerImages/ubuntu/
cd $HOME/ContainerImages/ubuntu/

Then, copy one of the example yaml configuration files for container images into this directory. In this example, we are creating an Ubuntu container image.

cp $HOME/go/src/github.com/lxc/distrobuilder/doc/examples/ubuntu ubuntu.yaml

Finally, run distrobuilder to create the container image. We are using the build-lxd option to create a container image for LXD. We need sudo because preparing the rootfs requires setting the ownership and permissions of files to IDs that a non-root account cannot use. Also note how we invoke distrobuilder (as $HOME/go/bin/distrobuilder): it must be an absolute path, because under sudo the $PATH differs from that of our non-root user account.

sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml

It takes about five minutes to build the Ubuntu container image. Be patient.

If the command is successful, you will get output similar to the following. The lxd.tar.xz file is the description of the container image, and the rootfs.squashfs file is its root filesystem (rootfs). Together, these two files make up the container image.

multipass@dazzling-termite:~/ContainerImages/ubuntu$ ls -l
total 121032
-rw-r--r-- 1 root      root            560 Oct  3 13:28 lxd.tar.xz
-rw-r--r-- 1 root      root      123928576 Oct  3 13:28 rootfs.squashfs
-rw-rw-r-- 1 multipass multipass      3317 Oct  3 13:19 ubuntu.yaml

Adding the container image to LXD

To add the container image to a LXD installation, use the lxc image import command as follows.

multipass@dazzling-termite:~/ContainerImages/ubuntu$ lxc image import lxd.tar.xz rootfs.squashfs --alias mycontainerimage
Image imported with fingerprint: ae81c04327b5b115383a4f90b969c97f5ef417e02d4210d40cbb17a038729a27

Let's see the container image in LXD. The ubuntu.yaml had a setting to create an Ubuntu 17.10 (artful) image. The size is 118MB.

$ lxc image list mycontainerimage
+------------------+--------------+--------+---------------+--------+----------+------------------------------+
|      ALIAS       | FINGERPRINT  | PUBLIC |  DESCRIPTION  |  ARCH  |   SIZE   |         UPLOAD DATE          |
+------------------+--------------+--------+---------------+--------+----------+------------------------------+
| mycontainerimage | ae81c04327b5 | no     | Ubuntu artful | x86_64 | 118.19MB | Oct 3, 2018 at 12:09pm (UTC) |
+------------------+--------------+--------+---------------+--------+----------+------------------------------+

Launching a container from the container image

To launch a container from the freshly created container image, use lxc launch as follows. Note that you do not specify a repository of container images (like ubuntu: or images:) because the image is located locally.

$ lxc launch mycontainerimage c1
Creating c1
Starting c1

How to customize a container image

The ubuntu.yaml configuration file contains all the details that are required to create an Ubuntu container image. We can edit the file and make changes to the generated container image.

Changing the distribution release

The file that is currently included in the distrobuilder repository has the following section:

distribution: ubuntu
release: artful
description: Ubuntu {{ image.release }}
architecture: amd64

We can change the release to either bionic (for Ubuntu 18.04) or cosmic (for Ubuntu 18.10), save the file, and build the container image again.
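The edit can also be done from the command line with sed. A sketch; we create a minimal stand-in ubuntu.yaml here (the real file has many more keys) so the commands can be tried anywhere:

```shell
# Minimal stand-in for the relevant section of ubuntu.yaml.
printf 'distribution: ubuntu\nrelease: artful\ndescription: Ubuntu {{ image.release }}\narchitecture: amd64\n' > ubuntu.yaml

# Switch the release from artful to bionic, in place.
sed -i 's/^release: artful$/release: bionic/' ubuntu.yaml

grep '^release:' ubuntu.yaml    # prints: release: bionic
```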


Troubleshooting

Error "gpg: no valid OpenPGP data found"

$ sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml
Error: Error while downloading source: Failed to create keyring: gpg: keyring `/tmp/distrobuilder.920564219/secring.gpg' created
gpg: keyring `/tmp/distrobuilder.920564219/pubring.gpg' created
gpg: requesting key C0B21F32 from hkp server pgp.mit.edu
gpgkeys: key 790BC7277767219C42C86F933B4FE6ACC0B21F32 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
gpg: keyserver communications error: keyserver helper general error
gpg: keyserver communications error: unknown pubkey algorithm
gpg: keyserver receive failed: unknown pubkey algorithm

The keyserver pgp.mit.edu is often under load and does not respond. You can edit the YAML configuration file and replace pgp.mit.edu with keyserver.ubuntu.com.

Error "gpg: keyserver timed out"

$ sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml
Error: Error while downloading source: Failed to create keyring: gpg: keyring `/tmp/distrobuilder.854636592/secring.gpg' created
gpg: keyring `/tmp/distrobuilder.854636592/pubring.gpg' created
gpg: requesting key C0B21F32 from hkp server pgp.mit.edu
gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error

The keyserver pgp.mit.edu is often under load and does not respond. You can edit the YAML configuration file and replace pgp.mit.edu with keyserver.ubuntu.com.
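The replacement can be made with a one-liner. A sketch; we use a minimal stand-in file here (the real ubuntu.yaml has many more keys) so it can be tried anywhere:

```shell
# Stand-in for the keyserver line found in the real ubuntu.yaml.
echo 'keyserver: pgp.mit.edu' > ubuntu.yaml

# Swap the overloaded keyserver for keyserver.ubuntu.com, in place.
sed -i 's/pgp\.mit\.edu/keyserver.ubuntu.com/g' ubuntu.yaml

cat ubuntu.yaml    # prints: keyserver: keyserver.ubuntu.com
```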

Simos Xenitellis

09 Oct 2018 12:20pm GMT

Harald Sitter: KDiff3 master as git mergetool? Yes, please!

I like using kdiff3, I also like using git, I also like using bundles for applications. Let's put the three together!

Set up the KDE git flatpak repo and install kdiff3

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak remote-add --if-not-exists kdeapps --from https://distribute.kde.org/kdeapps.flatpakrepo
flatpak install kdeapps org.kde.kdiff3

Write a tiny shim around this so we can use it from git. Put it in /usr/bin/kdiff3 or $HOME/bin/kdiff3 if $PATH is set up to include bins from $HOME.

#!/bin/sh
exec flatpak run org.kde.kdiff3 "$@"

Don't forget to make it executable with chmod +x kdiff3!
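Put together, creating the wrapper looks like this (assuming $HOME/bin is on your $PATH):

```shell
# Create the kdiff3 wrapper script in $HOME/bin and make it executable.
mkdir -p "$HOME/bin"
cat > "$HOME/bin/kdiff3" <<'EOF'
#!/bin/sh
exec flatpak run org.kde.kdiff3 "$@"
EOF
chmod +x "$HOME/bin/kdiff3"
```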

git mergetool should now pick up our kdiff3 wrapper automatically. So all that's left is to have a merge conflict, and off we go with git mergetool.
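If git doesn't detect the wrapper on its own, you can point it there explicitly. Shown here against a scratch config file via --file so it can be tried safely; for real use, replace --file gitconfig.example with --global:

```shell
# Tell git to use kdiff3 as the merge tool, and where our wrapper lives.
# Writing to a scratch file keeps this example from touching ~/.gitconfig.
git config --file gitconfig.example merge.tool kdiff3
git config --file gitconfig.example mergetool.kdiff3.path "$HOME/bin/kdiff3"
git config --file gitconfig.example merge.tool    # prints: kdiff3
```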

09 Oct 2018 11:00am GMT