25 Apr 2014

Planet Ubuntu

Svetlana Belkin: My Dream Job

I tweeted this earlier today:

@openscience: Is there any #OpenSource #Science org that need some help in community management? I'm willing to learn how to help.

- Svetlana Belkin (@Barsookain) April 24, 2014

and I thought I needed to explain in more depth what my dream job is. Hopefully, what I write down is not that far-fetched for a job that exists.

What I want to do is tie together my hobbies (computers, Ubuntu/FOSS, and the sense of community in those projects and what I do for them) with the degree I'm completing: a BS in biology with a focus on molecular and cellular biology. The closest thing that I have in mind is a Community Manager type of job, just like Jono Bacon is looking for. I want to use the other skills that I gained from being involved with the Ubuntu Community, mainly running a WordPress blog, editing MoinMoin wiki pages, and driving projects. The only skills that I lack are coding/scripting and the command line, but I'm willing to learn those.

Even though I manage a team in the Ubuntu Community called Ubuntu Scientists, the team's aim is different from my dream job, since its aim is to build a network of scientists who use Ubuntu/Linux and provide resources to help them. Also, I hate to say this, but I want to be paid for my work so I can make a living.

While money is an issue, it's not the only one. The other major issue is that I don't think I would be happy as a lab tech, or even (if I go for my Masters or PhD) as a researcher, alone. I want to do both: community management and work as a biologist.

If you have a position, please contact me at belkinsa@ubuntu.com or comment below. You may also connect with me on LinkedIn.

25 Apr 2014 12:26am GMT

24 Apr 2014


Ubuntu Podcast from the UK LoCo: S07E04 – The One with All the Haste

We're back with Season Seven, Episode Four of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Simnel cake in Studio L.

Download OGG Download MP3 Play in Popup

In this week's show:

We'll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow our twitter feed http://twitter.com/uupc
Find our Facebook Fan Page
Follow us on Google Plus

24 Apr 2014 7:30pm GMT

Daniel Pocock: Android betrays tethering data

When I upgraded an Android device the other day, I found that tethering completely stopped working. The updated CyanogenMod had inherited a new bug from Android, informing the carrier that I was tethering. The carrier, Vodafone Italy, had decided to make my life miserable by blocking that traffic. I had a closer look and managed to find a workaround.

There is absolutely no difference, from a technical perspective, between data transmitted from a mobile device on-screen application and data transmitted from tethering. Revealing the use of tethering to the carrier is a massive breach of privacy - yet comments in the Google bug tracker suggest it is a feature rather than a bug. This little table helps put that logic in perspective:

Product: the person who carries the handset
User: the mobile network, which wants to discriminate against some types of network traffic to squeeze more money out of the Product
Feature: revealing private information about the way the Product uses the Internet so that the real User can profit

It is also bad news for the environment: many people are being tricked into buying unneeded USB dongle modems that will end up in the rubbish in 1-2 years' time, when their contract expires and the company pushes them to upgrade to the next best thing.

Behind the scenes

What does this really mean in practice? How does Android tell your carrier which data comes from tethering?

Since the device is rooted, and since it is my device and I will do what I want with it, I decided to have a look inside.

The ip command revealed that there are now two network devices, rmnet_usb0 and rmnet_usb1. The basic ip route command reveals that traffic from different source addresses is handled differently and split over the two network devices:

shell@android:/ # ip route
dev tun0  scope link
default via dev rmnet_usb0
via dev rmnet_usb1
via dev rmnet_usb1
dev rmnet_usb0  proto kernel  scope link  src
dev rmnet_usb0  scope link
dev rmnet_usb1  proto kernel  scope link  src
dev rmnet_usb1  scope link
dev tun0  scope link
dev rndis0  proto kernel  scope link  src

I then looked more closely and found that there is also an extra routing table; it can be found with ip rule:

shell@android:/ # ip rule show
0:      from all lookup local
32765:  from lookup 60
32766:  from all lookup main
32767:  from all lookup default

shell@android:/ # ip route show table 60
default via dev rmnet_usb1
dev rmnet_usb1
dev rndis0  scope link

In this routing table, it is obvious that data from the tethering subnet ( is sent over the extra device rmnet_usb1.

Manually cleaning it up

If the phone is rooted, it is possible to very quickly change the routing table to get all the tethering traffic going through the normal rmnet_usb0 interface again.

It is necessary to get rid of that alternative routing table first:

ip rule del pref 32765

and then update the iptables entries that refer to interface names:

iptables -t nat -I natctrl_nat_POSTROUTING -s -o rmnet_usb0 -j MASQUERADE
iptables -I natctrl_FORWARD -i rmnet_usb0 -j RETURN

This immediately resolved the problem for me on the Vodafone network in Italy.


If Google can be bullied into accepting this kind of discriminatory routing in the stock Android builds and it can even propagate into CyanogenMod, then I'm just glad I'm not running one of those Android builds that has been explicitly "enhanced" by a carrier.

It raises big questions about who really is the owner and user of the device and who is receiving the value when a person pays money to "buy" a device.

24 Apr 2014 5:06pm GMT

Canonical Design Team: Latest from the web team — April 2014

Ubuntu 14.04 LTS is out and it's great! The period after release tends to be slightly less hectic than the lead up to it, but that doesn't mean that the web team is not as busy as ever.

In the last few weeks we've worked on:

And we're currently working on:

And, if you'd like to join the web team, we are currently looking for an experienced user experience designer to join us! Send us an email if you'd like to apply.

Delicious treats on release day

Do you have any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

24 Apr 2014 1:57pm GMT

Robie Basak: New in Ubuntu 14.04: Apache 2.4

Ubuntu 14.04 ships with Apache 2.4, which is a significant upgrade over Apache 2.2 as found in 12.04.

Apache 2.4 actually first appeared in 13.10, though of course if you intend to do an LTS to LTS upgrade, you won't notice this until now.

If you have a default configuration, then everything should upgrade automatically.

Of course, server deployments typically do not run on defaults. In this case, there are significant changes of which you should be aware. Expect the apache2 postinst script to fail to restart Apache after the upgrade. You'll need to fix up your own customisations to meet the requirements in Apache 2.4 and then run sudo dpkg --configure -a and sudo apt-get -f install to recover. Be sure to back up your system before you begin.

Instead of upgrading, you may want to consider this as an opportunity to enter the new world of automated deployments. Codify your deployment, and then test and deploy a fresh instance of Apache on 14.04 instead, using virtual machines as needed. This is far less stressful than trying to upgrade an existing production system!

Upstream changes

You will need to update any custom configuration according to latest upstream configuration syntax.

See upstream's document "Upgrading to 2.4 from 2.2" for details of required configuration changes. Authorization and access control directives have changed, and will likely need adjustment. Various defaults have also changed.
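The change most likely to bite is access control. A minimal illustration of the rewrite the upstream guide describes (the directory path here is just an example):

```
# Apache 2.2 style (no longer honoured by default in 2.4):
#     Order allow,deny
#     Allow from all
# Apache 2.4 equivalent:
<Directory /srv/example>
    Require all granted
</Directory>
```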

Significant packaging changes

The default path to served files has changed from /var/www to /var/www/html, mainly for security reasons. See the debian-devel thread "Changing the default document root for HTTP server" for details.
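If an existing vhost relied on the old default, it can keep serving from the previous location by setting DocumentRoot explicitly rather than moving files; an illustrative excerpt:

```
<VirtualHost *:80>
    # keep serving from the pre-2.4 default location
    DocumentRoot /var/www
</VirtualHost>
```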

The packaging has been overhauled quite significantly. /etc/apache2/conf.d/ is now /etc/apache2/conf-available/ and /etc/apache2/conf-enabled/, to match the existing sites-enabled/ and mods-enabled/ mechanisms.

Before you upgrade, I suggest that you first make sure that everything in /etc/apache2/*-enabled is correctly a symlink to the corresponding file in /etc/apache2/*-available. Note that all configurations in sites-enabled and conf-enabled need a .conf suffix now.
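A quick pre-upgrade sanity check along those lines might look like this (a sketch, not part of the packaging; the helper name is mine):

```shell
#!/bin/sh
# List entries in an Apache *-enabled directory that are not symlinks,
# or that lack the .conf suffix required since 2.4.
check_enabled_dir() {
    for f in "$1"/*; do
        [ -e "$f" ] || [ -L "$f" ] || continue
        [ -L "$f" ] || echo "not a symlink: $f"
        case "$f" in
            *.conf) ;;
            *) echo "missing .conf suffix: $f" ;;
        esac
    done
}

check_enabled_dir /etc/apache2/conf-enabled
check_enabled_dir /etc/apache2/sites-enabled
```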

Make use of the a2enmod, a2ensite, a2enconf series tools! These help you easily manage the symlinks from *-available to *-enabled.

See Debian's apache2 packaging NEWS file for full details.

Other Notes

Debian changed the default "It works!" page into a comprehensive page explaining where to go after an initial installation. Initially, I imported this into Ubuntu without noticing the change. Thank you to Andreas Hasenack for pointing out, in bug 1288690, that the page referred to Debian and the Debian bug tracker in a misleading way. I fixed this in Ubuntu by essentially doing a s/Debian/Ubuntu/g and crediting Debian appropriately instead.


I think the Apache 2.4 packaging is a shining example of complex packaging done well. All credit is due to Stefan Fritsch and Arno Töll, the Debian maintainers of the Apache packaging. They have done the bulk of the work involved in this update.

Getting help

As always, see Ubuntu's main page on community support options. askubuntu.com, #ubuntu-server on IRC (Freenode) and the Ubuntu Server mailing list are appropriate venues.

24 Apr 2014 1:13pm GMT

Martin Pitt: Booting Ubuntu with systemd: Test packages available

On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There's a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn't accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there's no Plymouth prettiness). The two main things which were missing were NetworkManager and lightdm, as these don't have an init.d script at all (NM) or it isn't enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I haven't yet gone through /etc/init/*.conf with a fine-toothed comb to check which upstart jobs need to be ported; that's now part of the TODO list.
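For context, porting one of those upstart jobs means writing a systemd unit. A minimal illustrative unit for a display manager might look like this (my sketch only, not the actual packaging in the PPA):

```
# /lib/systemd/system/lightdm.service (illustrative sketch only)
[Unit]
Description=Light Display Manager
After=systemd-user-sessions.service

[Service]
ExecStart=/usr/sbin/lightdm

[Install]
WantedBy=graphical.target
```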

So, if you want to help with that, or just test and tell us what's wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

  sudo add-apt-repository ppa:pitti/systemd
  sudo apt-get update
  sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you'll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.

For the record, if pressing shift doesn't work for you (too fast, VM, or similar), enable the grub menu with

  sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
  sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again.
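The resulting /etc/default/grub might then contain something like this (an illustrative excerpt; your other options will differ):

```
# /etc/default/grub (excerpt)
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"
```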

I'll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now. :-)

Update: As the comments pointed out, this bricked /etc/resolv.conf. I now uploaded a resolvconf package to the PPA which provides the missing unit (counterpart to the /etc/init/resolvconf.conf upstart job) and this now works fine. If you are in that situation, please boot with upstart, and do the following to clean up:

  sudo rm /etc/resolv.conf
  sudo ln -s ../run/resolvconf/resolv.conf /etc/resolv.conf

Then you can boot back to systemd.

Update 2: If you want to help testing, please file bugs with a systemd-boot tag. See the list of known bugs when booting with systemd.

24 Apr 2014 7:59am GMT

23 Apr 2014


Costales: "Folder Color" app: Change the color of your folders in Ubuntu

A simple, easy, fast and useful app! Change the color of your folders in Nautilus in a really easy way, so that you can get a better visual layout!

Folder Color in Ubuntu

How to install? Just enter this command into a Terminal, logout and enjoy it!
sudo add-apt-repository ppa:costales/folder-color ; sudo apt-get update ; sudo apt-get install folder-color -y

More info.

23 Apr 2014 5:29pm GMT

Mark Shuttleworth: U talking to me?

This upstirring undertaking Ubuntu is, as my colleague MPT explains, performance art. Not only must it be art, it must also perform, and that on a deadline. So many thanks and much credit to the teams and individuals who made our most recent release, the Trusty Tahr, into the gem of 14.04 LTS. And after the uproarious ululation and post-release respite, it's time to open the floodgates to umpteen pent-up changes and begin shaping our next show.

The discipline of an LTS constrains our creativity - our users appreciate the results of a focused effort on performance and stability and maintainability, and we appreciate the spring cleaning that comes with a focus on technical debt. But the point of spring cleaning is to make room for fresh ideas and new art, and our next release has to raise the roof in that regard. And what a spectacular time to be unleashing creativity in Ubuntu. We have the foundations of convergence so beautifully demonstrated by our core apps teams - with examples that shine on phone and tablet and PC. And we have equally interesting innovation landed in the foundational LXC 1.0, the fastest, lightest virtual machines on the planet, born and raised on Ubuntu. With an LTS hot off the press, now is the time to refresh the foundations of the next generation of Linux: faster, smaller, better scaled and better maintained. We're in a unique position to bring useful change to the ubiquitary Ubuntu developer, that hardy and precise pioneer of frontiers new and potent.

That future Ubuntu developer wants to deliver app updates instantly to users everywhere; we can make that possible. They want to deploy distributed brilliance instantly on all the clouds and all the hardware. We'll make that possible. They want PAAS and SAAS and an Internet of Things that Don't Bite, let's make that possible. If free software is to fulfil its true promise it needs to be useful for people putting precious parts into production, and we'll stand by our commitment that Ubuntu be the most useful platform for free software developers who carry the responsibilities of Dev and Ops.

It's a good time to shine a light on umbrageous if understandably imminent undulations in the landscape we love - time to bring systemd to the centre of Ubuntu, time to untwist ourselves from Python 2.x and time to walk a little uphill and, thereby, upstream. Time to purge the ugsome and prune the unusable. We've all got our ucky code, and now's a good time to stand united in favour of the useful over the uncolike and the utile over the uncous. It's not a time to become unhinged or ultrafidian, just a time for careful review and consideration of business as usual.

So bring your upstanding best to the table - or the forum - or the mailing list - and let's make something amazing. Something unified and upright, something about which we can be universally proud. And since we're getting that once-every-two-years chance to make fresh starts and dream unconstrained dreams about what the future should look like, we may as well go all out and give it a dreamlike name. Let's get going on the utopic unicorn. Give it stick. See you at vUDS.

23 Apr 2014 5:16pm GMT

Svetlana Belkin: vBlog Teaser

I'm thinking of doing a vBlog about Ubuntu and other things:

23 Apr 2014 1:54pm GMT

Adam Stokes: new juju plugin: juju-sos

juju-sos is my entryway into Go code and the juju internals. This plugin will execute sosreport on all machines known to juju, or on a specific machine of your choice, and copy the reports locally to your machine.

An example of what this plugin does, first, some output of juju status to give you an idea of the machines I have:

┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> juju status
environment: local
machines:
  "0":
    agent-state: started
    dns-name: localhost
    instance-id: localhost
    series: trusty
  "1":
    agent-state: started
    instance-id: poe-local-machine-1
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
  "2":
    agent-state: started
    instance-id: poe-local-machine-2
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
services:
  keystone:
    charm: cs:trusty/keystone-2
    exposed: false
    relations:
      - keystone
      - openstack-dashboard
    units:
      keystone/0:
        agent-state: started
        machine: "2"
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard-0
    exposed: false
    relations:
      - openstack-dashboard
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: started
        machine: "1"
        open-ports:
        - 80/tcp
        - 443/tcp

Basically, we are looking at two machines running various services, in my case OpenStack Horizon and Keystone. Now suppose I have some issues with my juju machines and OpenStack, and I need a quick way to gather a bunch of data from those machines and send it to someone who can help. With my juju-sos plugin, I can gather sosreports from each of the machines I care about with as little typing as possible.

Here is the output from juju sos querying all machines known to juju:

┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> juju sos -d ~/scratch
2014-04-23 05:30:47 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:30:47 INFO juju.state open.go:81 opening state, mongo addresses: [""]; entity ""
2014-04-23 05:30:47 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:53 Querying all machines
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(1)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(2)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 1
2014-04-23 05:30:55 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
2014-04-23 05:30:56 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:31:08 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> ls $HOME/scratch
sosreport-ubuntu-20140423040507.tar.xz  sosreport-ubuntu-20140423052125.tar.xz  sosreport-ubuntu-20140423052545.tar.xz
sosreport-ubuntu-20140423050401.tar.xz  sosreport-ubuntu-20140423052223.tar.xz  sosreport-ubuntu-20140423052600.tar.xz
sosreport-ubuntu-20140423050727.tar.xz  sosreport-ubuntu-20140423052330.tar.xz  sosreport-ubuntu-20140423052610.tar.xz
sosreport-ubuntu-20140423051436.tar.xz  sosreport-ubuntu-20140423052348.tar.xz  sosreport-ubuntu-20140423053052.tar.xz
sosreport-ubuntu-20140423051635.tar.xz  sosreport-ubuntu-20140423052450.tar.xz  sosreport-ubuntu-20140423053101.tar.xz
sosreport-ubuntu-20140423052006.tar.xz  sosreport-ubuntu-20140423052532.tar.xz

Another example of juju sos just capturing a sosreport from one machine:

┌[poe@cloudymeatballs] [/dev/pts/1] 
└[~]> juju sos -d ~/scratch -m 2
2014-04-23 05:41:59 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:42:00 INFO juju.state open.go:81 opening state, mongo addresses: [""]; entity ""
2014-04-23 05:42:00 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:70 Querying one machine(2)
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:42:08 INFO juju.sos main.go:99 Copying archive to "/home/poe/scratch"

Fancy, fancy :)

Of course this is a work in progress and I have a few ideas of what else to add here, some of those being:

As usual, contributions are welcome, and installation instructions are located in the README.

23 Apr 2014 5:52am GMT

Mario Limonciello: IR Receiver extension for Ambilight raspberry pi clone

After working with my ambilight clone for a few days, I discovered the biggest annoyance was that it wouldn't turn off after turning off the TV. I had some ideas on how I could remotely trigger it from the phone or from an external HTPC but I really wanted a self contained solution in case I decided to swap the HTPC for a FireTV or a Chromecast.

This brought me to trying to do it directly via my remote. My HTPC uses a mceusb, so I was tempted to just get another mceusb for the pi. That would have been overkill, though: the pi has tons of unused GPIOs, so it can be done far more simply (and cheaply).

I looked into it and discovered that someone actually already wrote a kernel module that directly controls an IR sensor on a GPIO. The kernel module is based off the existing lirc_serial module, but adapted specifically for the raspberry pi. (See http://aron.ws/projects/lirc_rpi/ for more information)


All that's necessary is a 38 kHz IR sensor. You'll spend under $5 on one from Amazon (plus some shipping), or you can get one from Radio Shack if you want something quick and local. I spent $4.87 on one at my local Radio Shack.

The sensor is really simple: 3 pins. All 3 pins are available in the pi's header. One goes to the 3.3V rail, one to ground, and one to a spare GPIO. There are a few places on the header that you can use for each. Just make sure you match up the pinout to the sensor you get. I chose to use GPIO 22 as it's most convenient for my lego case. The lirc_rpi module defaults to GPIO 18.

Some notes to keep in mind:

  1. While soldering it, be cognizant of which way you want the sensor to face so that it can be accessed from the remote.
  2. Remember that you are connecting to 3.3V and Ground from the Pi header. The ground connection won't be the same as your rail that was used to power the pi if you are powering via USB.
  3. The GPIO pins are not rated for 5V, so be sure to connect to the 3.3V.


LIRC is available directly in the raspbian repositories. Install it like this:

# sudo apt-get install lirc

Manually load the module so that you can test it.

# sudo modprobe lirc_rpi gpio_in_pin=22

Now use mode2 to test that it's working. Once you run the command, press some buttons on your remote. You should see output about space, pulse, and other values. Once you're satisfied, press ctrl-c to exit.

# mode2 -d /dev/lirc0

Now, add the modules that need to be loaded to /etc/modules. If you are using a different GPIO than 18, specify it here again. This will make sure that lirc_rpi loads on boot.


lirc_rpi gpio_in_pin=22

Now modify /etc/lirc/hardware.conf to match this configuration to make it work for the rpi:


# /etc/lirc/hardware.conf
# Arguments which will be used when launching lircd
LIRCD_ARGS="--uinput"

#Don't start lircmd even if there seems to be a good config file
#START_LIRCMD=false

#Don't start irexec, even if a good config file seems to exist.
#START_IREXEC=false

#Try to load appropriate kernel modules
LOAD_MODULES=true

# Run "lircd --driver=help" for a list of supported drivers.
DRIVER="default"
# usually /dev/lirc0 is the correct setting for systems using udev
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"

# Default configuration files for your hardware if any
LIRCD_CONF=""
LIRCMD_CONF=""

Next, we'll record the buttons that you want the pi to trigger the backlight toggle on. I chose to do it on the event of turning the TV on or off. For me I actually have a harmony remote that has separate events for "Power On" and "Power Off" available. So I chose to program KEY_POWER and KEY_POWER2. If you don't have the codes available for both "Power On" and "Power Off" then you can just program "Power Toggle" to KEY_POWER.

# irrecord -d /dev/lirc0 ~/lircd.conf

Once you have the lircd.conf recorded, move it into /etc/lirc to overwrite /etc/lirc/lircd.conf and start lirc

# sudo mv /home/pi/lircd.conf /etc/lirc/lircd.conf
# sudo /etc/init.d/lirc start

With lirc running you can examine that it's properly recognizing your key event using the irw command. Once irw is running, press the button on the remote and make sure your pi recognizes it. Once you're done press ctrl-c to exit.

# irw

Now that you've validated the pi can recognize the command, it's time to tie it to an actual script. Create /home/pi/.lircrc with contents like this:


begin
    button = KEY_POWER
    prog = irexec
    repeat = 0
    config = /home/pi/toggle_backlight.sh off
end

begin
    button = KEY_POWER2
    prog = irexec
    repeat = 0
    config = /home/pi/toggle_backlight.sh on
end

My toggle_backlight.sh looks like this:


#!/bin/sh
ARG="$1"
if [ -n "$ARG" ]; then
    RUNNING=$(pgrep hyperion-v4l2)
    if [ -n "$RUNNING" ]; then
        # already on; nothing to do for "on"
        if [ "$ARG" = "on" ]; then
            exit 0
        fi
        pkill hyperion-v4l2
        hyperion-remote --color black
        exit 0
    fi
    # not running; nothing to start for "off"
    if [ "$ARG" = "off" ]; then
        hyperion-remote --color black
        exit 0
    fi
fi
# spawn hyperion remote before actually clearing channels to prevent extra flickers
hyperion-v4l2 --crop-height 30 --crop-width 10 --size-decimator 8 --frame-decimator 2 --skip-reply --signal-threshold 0.08&
hyperion-remote --clearall

To test, run irexec and then press your remote button. With any luck irexec will launch the toggle script and change your LED status.

# irexec

Lastly, you need to add irexec to your /etc/rc.local to make it boot with the pi. Make sure you put the execution before the exit 0


su pi -c "irexec -d"
su pi -c "/home/pi/toggle_backlight.sh off"

Reboot your pi, and make sure everything works together.

# sudo reboot

23 Apr 2014 5:48am GMT

Charles Profitt: Ubuntu 14.04: Subtle shades of success

I just finished upgrading four computers to Ubuntu 14.04 tonight. My testing machine has been running 14.04 since the early alpha phase, but in the last two days I upgraded my work Lenovo W520, my personal Lenovo T530, and the self-assembled desktop with a Core 2 Duo and Nvidia 8800 GTS that I handed down to my son.

Confidence In Ubuntu
On Friday of this week I will be involved in delivering training to a group of Boy Scout leaders at a Wood Badge course. I will use my primary laptop, the T530, to give a presentation and produce the Gilwell Gazette. I completed a great deal of prep work on Ubuntu 13.10, and if I did not have complete confidence in Ubuntu 14.04 I would have waited until after the weekend to upgrade. I needed to be confident that multi-monitor functionality would work and that documents produced in an earlier version of LibreOffice would not suddenly change their page layouts. In short, I was depending on Ubuntu being dependable and solid more than I usually do.

Subtle Changes Add Flexibility and Polish
Ubuntu added some very small tweaks that truly add to the overall user experience. The borderless windows, new lock screen, and smaller minimum size of launcher icons all add up to slight, but pleasant changes.

Here is a screen shot of the 14.04 desktop on the Lenovo T530.

14.04 desktop

23 Apr 2014 3:10am GMT

Dustin Kirkland: Docker in Ubuntu, Ubuntu in Docker

This article is cross-posted on Docker's blog as well.

There is a design pattern, occasionally found in nature, in which some of the most elegant and impressive solutions seem, in retrospect, entirely intuitive.

For me, Docker is just that sort of game changing, hyper-innovative technology, that, at its core, somehow seems straightforward, beautiful, and obvious.

Linux containers, repositories of popular base images, snapshots using modern copy-on-write filesystem features. Brilliant, yet so simple. Docker.io for the win!

I clearly recall, nine long months ago, being intrigued by a fervor of HackerNews excitement pulsing around a nascent Docker technology. I followed a set of instructions on a very well designed and tastefully manicured web page in order to launch my first Docker container. Something like: start with Ubuntu 13.04, downgrade the kernel, reboot, add an out-of-band package repository, install an oddly named package, import some images, perhaps debug or ignore some errors, and then launch. In a few moments, I could clearly see the beginnings of a brave new world of lightning fast, cleanly managed, incrementally saved, highly dense, operating system containers.

Ubuntu inside of Ubuntu, Inception style. So. Much. Potential.

Fast forward to today -- April 18, 2014 -- and the combination of Docker and Ubuntu 14.04 LTS has raised the bar, introducing a new echelon of usability and convenience, and coupled with the trust and track record of enterprise grade Long Term Support from Canonical and the Ubuntu community.

Big thanks, by the way, to Paul Tagliamonte, upstream Debian packager of Docker.io, as well as all of the early testers and users of Docker during the Ubuntu development cycle.

Docker is now officially in Ubuntu. That makes Ubuntu 14.04 LTS the first enterprise grade Linux distribution to ship with Docker natively packaged, continuously tested, and instantly installable. Millions of Ubuntu servers are now never more than three commands away from launching or managing Linux container sandboxes, thanks to Docker.

sudo apt-get install docker.io
sudo docker.io pull ubuntu
sudo docker.io run -i -t ubuntu /bin/bash

And after that last command, Ubuntu is now running within Docker, inside of a Linux container.




User friendly.

Just the way we've been doing things in Ubuntu for nearly a decade. Thanks to our friends at Docker.io!


23 Apr 2014 1:12am GMT

22 Apr 2014


Daniel Pocock: Automatically creating repackaged upstream tarballs for Debian

One of the less exciting points in the day of a Debian Developer is the moment they realize they have to create a repackaged upstream source tarball.

This is often a process that they have to repeat on each new upstream release too.

Wouldn't it be useful to:

  • Scan all the existing repackaged upstream source tarballs and diff them against the real tarballs to catalog the things that have to be removed and spot patterns?
  • Operate a system that automatically produces repackaged upstream source tarballs for all tags in the upstream source repository or all new tarballs in the upstream download directory? Then the DD can take any of them and package them when they want to, with less manual effort.
  • Apply any insights from this process to detect non-free content in the rest of the Debian archive and when somebody is early in the process of evaluating a new upstream project?
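The first idea could start with something as simple as diffing the two unpacked trees. A rough sketch (the helper name is mine, not an existing tool):

```shell
#!/bin/sh
# Print files present in the pristine upstream tarball but absent from
# the repackaged one, i.e. candidates for the "what was removed" catalog.
list_removed_files() {
    orig_dir=$(mktemp -d)
    repack_dir=$(mktemp -d)
    tar -xf "$1" -C "$orig_dir"
    tar -xf "$2" -C "$repack_dir"
    diff -rq "$orig_dir" "$repack_dir" | sed -n "s|^Only in $orig_dir|removed:|p"
    rm -rf "$orig_dir" "$repack_dir"
}
```

Usage would be `list_removed_files upstream.tar.gz upstream+dfsg.tar.gz`, run over the whole archive to spot patterns.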

Google Summer of Code is back

One of the Google Summer of Code projects this year involves recursively building Java projects from their source. Some parts of the project, such as repackaged upstream tarballs, can be generalized for things other than Java. Web projects including minified JavaScript are a common example.

Andrew Schurman, based near Vancouver, is the student selected for this project. Over the next couple of weeks, I'll be starting to discuss the ideas in more depth with him. I keep on stumbling on situations where repackaged upstream tarballs are necessary and so I'm hoping that this is one area the community will be keen to collaborate on.

22 Apr 2014 8:34pm GMT

Jonathan Riddell: Favourite Twitter Post

KDE Project:

There's only 1 tool to deal with an unsupported Windows XP...

22 Apr 2014 12:11pm GMT

Michael Rooney: Easily sending postcards to your Kickstarter backers with Lob

Recently my friend Joël Franusic was stressing out about sending postcards to his Kickstarter backers and asked me to help him out. He pointed me to the excellent service Lob.com, which is a very developer-friendly API around printing and mailing. We quickly had some code up and running that could take a CSV export of Kickstarter campaign backers, verify addresses, and trigger the sending of customizable, actual physical postcards to the backers.


We wanted to share the project such that it could help out other Kickstarter campaigns, so we put it on Github: https://github.com/mrooney/kickstarter-lob.

Below I explain how to install this script and use it to send Lob postcards to your Kickstarter backers. The section after that explains how the script works in detail.

Using the script to mail postcards to your backers

First, you'll need to sign up for a free account at Lob.com, then grab your "Test API Key" from the "Account Details" section of your account page. At this point you can use your sandbox API key to test away free of charge and view images of any resulting postcards. Once you are happy with everything, you can plug in credit card details and start using your "Live" API key. Second, you'll need an export from Kickstarter for the backers you wish to send postcards to.

Now you'll want to grab the kickstarter-lob code and get it set.

These instructions assume that you're using a POSIX compatible operating system like Mac OS X or Linux. If you're using Mac OS X, open the "Terminal" program and type the commands below into it to get started:

git clone https://github.com/mrooney/kickstarter-lob.git
cd kickstarter-lob
sudo easy_install pip # (if you don't have pip installed already)
pip install -r requirements.txt
cp config_example.json config.json
open config.json

At this point, you should have a text editor open with the configuration information. Plug in the correct details, making sure to maintain quotes around the values. Besides the API key, you'll need to provide a few other details: the from-address, message, front image, and name for the postcard (the remaining keys in config.json).

Now you are ready to give it a whirl. Run it like so, making sure to include the filename of your Kickstarter export:

$ python kslob.py ~/Downloads/your-kickstarter-backer-report.csv
Fetching list of any postcards already sent...
Verifying addresses of backers...
warning: address verification failed for jsmith@example.com, cannot send to this backer.
Already sent postcards to 0 of 161 backers
Send to 160 unsent backers now? [y/N]: y
Postcard sent to Jeff Bezos! (psc_45df20c2ade155a9)
Postcard sent to Tim Cook! (psc_dcbf89cd1e46c488)
Successfully sent to 160 backers with 0 failures

The script will verify all addresses, and importantly, only send to addresses not already sent to. The script queries Lob to keep track of who you've already sent a postcard to; this important feature allows you to download new Kickstarter exports as people fill in or update their addresses. After downloading a new export from Kickstarter, just run the script against the new export, and the script will only send postcards to the new addresses.
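That dedupe logic can be sketched independently of Lob: normalize each address into a key, then skip any backer whose key is already recorded as sent. The field names and sample addresses below are illustrative, not the script's exact ones:

```python
def addr_key(addr):
    # Normalize an address dict into a case-insensitive identifier.
    fields = ("name", "address_line1", "address_city", "address_zip")
    return "|".join(addr.get(f, "") for f in fields).upper()


def partition_unsent(backers, already_sent_keys):
    """Split backers into (to_send, already_sent) based on recorded keys."""
    to_send, already_sent = [], []
    for addr in backers:
        (already_sent if addr_key(addr) in already_sent_keys else to_send).append(addr)
    return to_send, already_sent


sent = {addr_key({"name": "Ada", "address_line1": "1 Main St",
                  "address_city": "Springfield", "address_zip": "12345"})}
backers = [
    {"name": "Ada", "address_line1": "1 Main St",
     "address_city": "Springfield", "address_zip": "12345"},
    {"name": "Grace", "address_line1": "2 Oak Ave",
     "address_city": "Portland", "address_zip": "97201"},
]
to_send, already_sent = partition_unsent(backers, sent)
print(len(to_send), len(already_sent))  # 1 1
```

Because the set of sent keys is rebuilt from Lob's own records on every run, re-running against a newer export is safe: previously mailed backers partition into already_sent and only new addresses remain in to_send.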

Before anything actually happens, you'll notice that you're informed of how many addresses have not yet received postcards and prompted to send them or not, so you can feel assured it is sending only as many postcards as you expect.

If you were to run it again immediately, you'd see something like this:

$ python kslob.py ~/Downloads/your-kickstarter-backer-report.csv
 Fetching list of any postcards already sent...
 Verifying addresses of backers...
 warning: address verification failed for jsmith@example.com, cannot send to this backer.
 Already sent postcards to 160 of 161 backers
 SUCCESS: All backers with verified addresses have been processed, you're done!

After previewing your sandbox postcards on Lob's website, you can plug in your live API key in the config.json file and send real postcards at reasonable rates.

How the script works

This section explains how the script actually works. If all you wanted to do is send postcards to your Kickstarter backers, then you can stop reading now. Otherwise, read on!

Before you get started, take a quick look at the "kslob.py" file on GitHub: https://github.com/mrooney/kickstarter-lob/blob/master/kslob.py

We start by importing four Python libraries: "csv", "json", "lob", and "sys". Of those four, "lob" is the only one that isn't part of Python's standard library; it's installed by the "pip install -r requirements.txt" command suggested above. You can also install "lob-python" directly using pip or easy_install.

#!/usr/bin/env python
import csv
import json
import lob
import sys

Next we define one class, "ParseKickstarterAddresses", and two functions, "addr_identifier" and "kickstarter_dict_to_lob_dict".

"ParseKickstarterAddresses" is the code that reads in the backer report from Kickstarter and turns it into an array of Python dictionaries.

class ParseKickstarterAddresses:
   def __init__(self, filename):
       self.items = []
       with open(filename, 'r') as csvfile:
           reader = csv.DictReader(csvfile)
           for row in reader:
               self.items.append(row)

The "addr_identifier" function takes an address and turns it into a unique identifier, allowing us to avoid sending duplicate postcards to backers.

def addr_identifier(addr):
   return u"{name}|{address_line1}|{address_line2}|{address_city}|{address_state}|{address_zip}|{address_country}".format(**addr).upper()
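To illustrate with made-up addresses: two exports of the same backer that differ only in letter case collapse to a single identifier, which is what makes the set-based dedupe work. (The definition is repeated here so the snippet runs standalone.)

```python
# addr_identifier repeated from above so this snippet runs standalone.
def addr_identifier(addr):
    return u"{name}|{address_line1}|{address_line2}|{address_city}|{address_state}|{address_zip}|{address_country}".format(**addr).upper()


# Two records for the same (invented) backer, differing only in case.
a = {"name": "Jane Smith", "address_line1": "1 Main St", "address_line2": "",
     "address_city": "Springfield", "address_state": "IL",
     "address_zip": "62701", "address_country": "US"}
b = dict(a, name="JANE SMITH", address_city="springfield")
print(addr_identifier(a) == addr_identifier(b))  # True
```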

The "kickstarter_dict_to_lob_dict" function takes a Python dictionary and turns it into a dictionary we can give to Lob as an argument.

def kickstarter_dict_to_lob_dict(dictionary):
   ks_to_lob = {'Shipping Name': 'name',
                'Shipping Address 1': 'address_line1',
                'Shipping Address 2': 'address_line2',
                'Shipping City': 'address_city',
                'Shipping State': 'address_state',
                'Shipping Postal Code': 'address_zip',
                'Shipping Country': 'address_country'}
   address_dict = {}
   for key in ks_to_lob.keys():
       address_dict[ks_to_lob[key]] = dictionary[key]
   return address_dict
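For example, given a made-up row from a Kickstarter export, the function produces a dictionary keyed the way Lob expects. (The definition is repeated here so the snippet runs standalone.)

```python
# kickstarter_dict_to_lob_dict repeated from above so this snippet runs standalone.
def kickstarter_dict_to_lob_dict(dictionary):
    ks_to_lob = {'Shipping Name': 'name',
                 'Shipping Address 1': 'address_line1',
                 'Shipping Address 2': 'address_line2',
                 'Shipping City': 'address_city',
                 'Shipping State': 'address_state',
                 'Shipping Postal Code': 'address_zip',
                 'Shipping Country': 'address_country'}
    address_dict = {}
    for key in ks_to_lob.keys():
        address_dict[ks_to_lob[key]] = dictionary[key]
    return address_dict


# An invented row, shaped like one line of the Kickstarter backer export.
row = {'Shipping Name': 'Jane Smith', 'Shipping Address 1': '1 Main St',
       'Shipping Address 2': '', 'Shipping City': 'Springfield',
       'Shipping State': 'IL', 'Shipping Postal Code': '62701',
       'Shipping Country': 'US'}
print(kickstarter_dict_to_lob_dict(row)['address_zip'])  # 62701
```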

The "main" function is where the majority of the logic for our script resides. Let's cover that in more detail.

We start by reading in the name of the Kickstarter backer export file, loading our configuration file ("config.json"), and then configuring Lob with the API key from the configuration file:

def main():
   filename = sys.argv[1]
   config = json.load(open("config.json"))
   lob.api_key = config['api_key']

Next we query Lob for the list of postcards that have already been sent. You'll notice that the "processed_addrs" variable is a Python "set"; if you haven't used one before, a set is sort of like an array that doesn't allow duplicates. We only fetch 100 results from Lob at a time, and use a "while" loop to make sure that we get all of the results.

print("Fetching list of any postcards already sent...")
processed_addrs = set()
postcards = []
postcards_result = lob.Postcard.list(count=100)
while len(postcards_result):
    postcards.extend(postcards_result)
    postcards_result = lob.Postcard.list(count=100, offset=len(postcards))

Once we fetch all of the postcards, we print out how many were found:

print("...found {} previously sent postcards.".format(len(postcards)))

Then we iterate through all of our results and add them to the "processed_addrs" set. Note the use of the "addr_identifier" function, which turns each address dictionary into a string that uniquely identifies that address.

for processed in postcards:
    identifier = addr_identifier(processed.to.to_dict())
    processed_addrs.add(identifier)

Next we set up a bunch of variables that will be used later on: configuration for the postcards Lob will send, the addresses from the Kickstarter backer export file, and lists to keep track of who we've already sent postcards to and who we still need to send postcards to.

postcard_from_address = config['postcard_from_address']
postcard_message = config['postcard_message']
postcard_front = config['postcard_front']
postcard_name = config['postcard_name']
addresses = ParseKickstarterAddresses(filename)
to_send = []
already_sent = []

At this point, we're ready to start validating addresses. The code below loops over every line in the Kickstarter backers export file and uses Lob to see if the address is valid.

print("Verifying addresses of backers...")
for line in addresses.items:
    to_person = line['Shipping Name']
    to_address = kickstarter_dict_to_lob_dict(line)
    try:
        to_name = to_address['name']
        to_address = lob.AddressVerify.verify(**to_address).to_dict()['address']
        to_address['name'] = to_name
    except lob.exceptions.LobError:
        msg = 'warning: address verification failed for {}, cannot send to this backer.'
        print(msg.format(line['Email']))
        continue

If the address is indeed valid, we check to see if we've already sent a postcard to it. If so, the address is added to the list of addresses we've "already_sent" postcards to. Otherwise, it's added to the list of addresses we still need "to_send" postcards to.

if addr_identifier(to_address) in processed_addrs:
    already_sent.append(to_address)
else:
    to_send.append(to_address)

Next we print out the number of backers we've already sent postcards to and check to see if we need to send postcards to anybody, exiting if we don't need to send postcards to anybody.

nbackers = len(addresses.items)
print("Already sent postcards to {} of {} backers".format(len(already_sent), nbackers))
if not to_send:
    print("SUCCESS: All backers with verified addresses have been processed, you're done!")
    return

Finally, if we do need to send one or more postcards, we tell the user how many postcards will be mailed and then ask them to confirm that those postcards should be mailed:

query = "Send to {} unsent backers now? [y/N]: ".format(len(to_send))
if raw_input(query).lower() == "y":
    successes = failures = 0

If the user enters "Y" or "y", then we start sending postcards. Each call to Lob is wrapped in a "try/except" block: calls that raise a "LobError" exception are counted as failures. Other exceptions are not handled and will cause the script to exit with that exception.

for to_address in to_send:
    try:
        rv = lob.Postcard.create(to=to_address, name=postcard_name, from_address=postcard_from_address, front=postcard_front, message=postcard_message)
        print("Postcard sent to {}! ({})".format(to_address['name'], rv.id))
        successes += 1
    except lob.exceptions.LobError:
        msg = 'Error: Failed to send postcard to Lob.com'
        print("{} for {}".format(msg, to_address['name']))
        failures += 1

Lastly, we print a message indicating how many postcards were sent and how many failures we had.

    print("Successfully sent to {} backers with {} failures".format(successes, failures))

(If the user presses a key other than "Y" or "y", this is the message they'll see instead:)

print("Okay, not sending to unsent backers.")

And there you have it: a short script that uses Lob to send postcards to your Kickstarter backers, sends only one postcard per address, and gracefully handles errors from Lob.

I hope that you've found this useful! Please let us know of any issues you encounter on Github, or send pull requests adding exciting new features. Most importantly, enjoy easily bringing smiles to your backers!

22 Apr 2014 12:00pm GMT