21 Apr 2015


Manuel A. Fernandez Montecelo: About the Debian GNU/Linux port for OpenRISC or1k

In my previous post I mentioned my involvement with the OpenRISC or1k port. It was the technical activity on which I spent the most time during 2014 (Debian and otherwise, day job aside).

I thought that it would be nice to talk a bit about the port for people who don't know about it, and give an update for those who do know and care. So this post explains a bit how it came to be, details about its development, and finally the current status. It is going to be written as a rather personal account, for that matter, since I did not get involved enough in the OpenRISC community at large to learn much about its internal workings and aspects that I was not directly involved with.

There is not much information about all of this elsewhere, only bits and pieces scattered here and there, and especially not much public information at all about the development of the Debian port. There is an OpenRISC entry in the Debian wiki, but it does not contain much information yet. Hopefully, this piece will help a bit to preserve history and give future porters some insight.

First Things First

I imagine that most people reading this post will be familiar with the terminology, but just in case: to create a new Debian port means to get a Debian system (the GNU/Linux variant, in this case) to run on the OpenRISC or1k computer architecture.

Setting aside all differences between hardware and software, and as described on their site:

"The aim of the OpenRISC project is to create free and open source computing platforms"

It is therefore a good match for the purposes of Debian, and of the Free Software world in general.

The processor has not been produced in silicon, or at least is not available to the masses. People with the necessary know-how can download the hardware description (Verilog) and synthesise it on an FPGA, or otherwise use simulators. It is not a piece of hardware that people can purchase yet, and there are no plans to mass-produce it in the near future either.

The two people (including me) involved in this Debian port did not have the hardware, so we created the port entirely by cross-compiling from other architectures, and then compiling inside Qemu. In a sense, we were creating a Debian port for hardware that "does not [physically] exist". The software that we built was tested by people who had the hardware available on FPGAs, though, so it was at least usable. I understand that people working on the arm64 port had to work similarly in the initial phases, working in the dark without access to real hardware to compile or test.

The Spark

The first time that I heard about the initiative to create the port was in late February of 2014, in a post which appeared on Linux Weekly News (sent by Paul Wise) and on Slashdot. The original post announcing it was actually from late January, by Christian Svensson (blueCmd):

"Some people know that I've been working on porting Glibc and doing some toolchain work. My evil master plan was to make a Debian port, and today I'm a happy hacker indeed!

Below is a link to a screencast of me installing Debian for OpenRISC, installing python2.7 via apt-get (which you shouldn't do in or1ksim, it takes ages! (but it works!)) and running a small Python script. http://asciinema.org/a/7362"

So, now, what can a Debian Hacker do when reading this? (Even if one's Hackery Level is not that high, as is my case.) And well, How Hard Can It Be? I mean, Really?

Well, in my own defence, I knew that the answer to the last two questions would be a resounding "Very". But for some reason the idea grabbed me and I couldn't help but think that it would be a Really Exciting Project, and that somehow I would like to get involved. So I wrote to Christian offering my help after considering it for a few days, around mid March, and he welcomed me aboard.

The Ball Was Already Rolling

Christian had already been in contact with the people behind DebianBootstrap, and he had already created the repository http://openrisc.debian.net/ with many packages of the base system and beyond (read: packages name_version_or1k.deb available to download and install). Even today the packages are not signed with proper keys, though, so use your judgement if you want to try them.

After a few weeks, I got up to speed with the status of the project and got my system working with the necessary tools. This basically meant sbuild/schroot to compile new packages, the base system that Christian had already got working installed in a chroot (probably with the help of debootstrap), and qemu-system-or1k to emulate the system.

Only a few of the packages were different from the versions in Debian, like gcc, binutils or glibc -- their or1k support had not been upstreamed yet. sbuild ran through qemu-system-or1k, so new packages could be compiled "natively" (running inside Qemu) rather than cross-compiled, pulling _or1k.deb packages for dependencies from the repository that he had prepared, and _all.deb packages from snapshots.debian.org.
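For readers unfamiliar with this kind of setup, here is roughly what the tooling looks like from inside an or1k system emulated with qemu-system-or1k; the suite, paths and package name are illustrative, not our exact recipe.

# Create a build chroot from the unofficial or1k repository
# (its packages are unsigned, hence --no-check-gpg):
sudo debootstrap --no-check-gpg sid /srv/chroot/or1k-sid \
    http://openrisc.debian.net/debian

# Tell schroot about the chroot so that sbuild can use it:
sudo tee /etc/schroot/chroot.d/or1k-sid <<'EOF'
[or1k-sid]
type=directory
directory=/srv/chroot/or1k-sid
groups=sbuild
EOF

# Build a package "natively" under emulation:
sbuild --dist=sid foo_1.0-1.dsc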

I started by trying to get the packages that I [co-]maintain in Debian compiled for this architecture, creating the corresponding _or1k.deb files. For most of them, though, I needed many dependencies compiled before I could even compile my own packages.

The GNU autotools / autoreconf Problem

From very early on, many of the packages failed to build with messages such as:

Invalid configuration 'or1k-linux-gnu': machine 'or1k' not recognized
configure: error: /bin/bash ../config.sub or1k-linux-gnu failed

This happens because software packages based on GNU autotools, which use configure scripts, ship copies of the files config.sub and config.guess in their root directory, and need recent versions of those files to be able to detect the architecture and generate the code accordingly.

This is counter-intuitive, taking into account that GNU autotools were designed to help with portability. Yet, in the case of creating new Debian ports, it meant that unless upstream shipped very recent versions of config.{guess,sub}, the package would not compile straight away on the new architectures -- even though invoking gcc directly would have worked without problems in most cases for native compilation.

Of course this did not only affect or1k, and the autoreconf effort was already underway as a way to update these files automatically when building Debian packages, pushed by the people porting Debian to the new architectures added in 2013/2014 (mips64el, arm64, ppc64el), who had encountered the same roadblock. The problem affected around a thousand source packages in unstable. A Royal Pain. Also, all of their reverse dependencies (packages that depended on those to be built) could not be compiled straight away.

The bugs were not Release Critical, though (none of these architectures were officially accepted at the time), so for people not concerned with the new ports there was no big incentive to get them fixed. This problem, which conceptually is easy to solve, prevented the new ports from even attempting to compile vast portions of the archive straight away (cleanly, without modifications to the package or to the host system) for weeks or months.

The GNU autotools / autoreconf Solution

We tackled this problem mainly in two ways.

First, and more useful for Debian in general, was to do as other porters were doing and submit bug reports and patches asking Debian packages to use autoreconf, and to NMU packages (upload changes to the archive without the official maintainers' intervention). A few NMUs were made for packages which had had bug reports with patches available for a while, were in the critical path to get many other packages compiled, and were orphaned or had almost no maintainer activity.
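The requested change was usually tiny. For a package using the short dh(1) style of debian/rules, the common pattern looked something like the sketch below (a generic illustration, not taken from any particular upload):

# debian/control: add dh-autoreconf to the build dependencies
Build-Depends: debhelper (>= 9), dh-autoreconf

# debian/rules: have dh run autoreconf before configure, which also
# refreshes config.guess and config.sub (the dh line must be indented
# with a tab)
%:
	dh $@ --with autoreconf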

The people working on the other new ports, and notably Ubuntu people who helped with some of those ports and wanted to support them, had submitted a large number of requests since late 2013, so there was no shortage of NMUs to be made. Some porters, not being Debian Developers, could not easily get their changes applied; so I also helped the porters of the other architectures a bit, especially later on before the freeze of Jessie, to get as many packages as possible compiled on those architectures.

The second way was to create dpkg-buildpackage hooks that unconditionally updated config.{guess,sub} before attempting to build the package on the local build system (see the sketch below). This local and temporary solution allowed us in the or1k port to get many _or1k.deb packages into the experimental repository, which in turn allowed many more packages to compile. After a few weeks, I set up several sbuilds on a multi-core machine, continuously attempting to build packages that had not been built before and whose dependencies were available. Every now and then (typically several times per day at peak times) I pushed the resulting _or1k.deb files to the repository, so more packages would have the necessary dependencies ready for a build attempt.
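Conceptually, such a hook does no more than overwrite the shipped files with the up-to-date copies from the autotools-dev package before the build starts. A minimal sketch of the payload (the exact hook mechanism we wired it into is not shown):

#!/bin/sh
# Refresh every config.guess/config.sub in the unpacked source tree
# with the current copies shipped by autotools-dev.
set -e
find . -name config.guess -exec cp /usr/share/misc/config.guess {} \;
find . -name config.sub   -exec cp /usr/share/misc/config.sub   {} \;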

Christian was doing something similar, and by April, at peak times, between the two of us we were compiling more than a hundred packages on some days -- a huge number of packages needed no change other than up-to-date config.{guess,sub} files. At some point in late April, Christian set up wanna-build on a few hosts to do this more elegantly and smartly than my method, and more effectively as well.

Ugly Hacks, Bugs and Shortcomings in the Toolchain and Qemu

Some packages are extremely important because many other packages need them to build (like cmake, Qt or GTK+), and they are themselves very complex and have dependency loops. These had deeper problems than the autoreconf issue and needed some seriously dirty hacking to get them built.

To get as many packages compiled as possible, we sometimes built these important packages with some functionality disabled, disabling some binary packages (e.g. Java bindings) or especially disabling documentation (using DEB_BUILD_OPTIONS=nodoc when possible, and more aggressively when needed by removing chunks of debian/rules). I tried to use the more aggressive methods on as few packages as possible, though -- about a dozen in total. We also used DEB_BUILD_OPTIONS=nocheck to speed up compilation and avoid build failures -- many packages' tests failed because qemu-system-or1k did not support multi-threading, which we could do nothing about at the time, but otherwise the packages mostly passed their tests fine.
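Both options are passed through the environment, so no change to the package itself is needed when its debian/rules honours them; an illustrative invocation:

# Skip documentation and test suites for this build only:
DEB_BUILD_OPTIONS="nodoc nocheck" dpkg-buildpackage -us -uc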

Due to bugs and shortcomings in Qemu and the toolchain -- like the compiler lacking atomics, missing functionality in glibc, Qemu entering endless loops, or programs segfaulting (especially gettext, which is used by many packages and so made them fail to build) -- we had to resort to some very creative ways, or time-consuming dull work, to edit debian/rules or to create wrappers around the real programs that avoided or forced certain options (like gcc -O0, since -O2 produced buggy binaries too often; see the sketch below).
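A wrapper of that kind can be a one-line shell script placed earlier in $PATH than the real compiler. This sketch (the renamed compiler path is hypothetical) relies on the fact that the last -O flag on a GCC command line wins:

#!/bin/sh
# Force -O0 regardless of what the build system asked for, since -O2
# miscompiled too often on or1k at the time.
exec /usr/bin/gcc.real "$@" -O0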

To avoid having a mix of cleanly compiled and hacked packages in the same repository, Christian set up a two-tiered repository system -- a clean one and a dirty one. Into the dirty one we dumped all of the packages that we got built, no matter how. The packages in the clean one could use packages from the dirty one to build, but were themselves compiled without any hackery. Of course this was not a completely airtight solution, since they could still contain code injected at build time from the "dirty" repository (e.g. by static linking), and perhaps other quirks. We hoped to get rid of these problems later by rebuilding all packages against clean builds of all their dependencies.

In addition, Christian also spent a significant amount of time working within the OpenRISC community, debugging problems, testing, and recompiling special versions of the toolchain that we could use to advance in our compilation of the whole archive. There were other people in the OpenRISC community implementing the necessary bits in the toolchain, but I don't know the details.

Good Progress

Olof Kindgren wrote the OpenRISC health report April 2014 (actually posted in May), explaining the status at the time of projects in the broad OpenRISC community, and talking about the software side, Debian port included. Sadly, I think that there have been no more "health reports" since then. Shortly thereafter there was also a new Slashdot post entitled OpenRISC Gains Atomic Operations and Multicore Support.

On the Debian port side, from time to time new versions of packages entered unstable and we started to use them. Some of them had nice fixes, like the autoreconf updates, so they no longer required local modifications. In other cases, the new versions failed to build where the old ones had worked (e.g. because the newer versions added support for, and dependencies on, new versions of gnutls, systemd or other packages not yet available for or1k), and we had to repeat or create more nasty hacks to get the packages built again.

But in general, progress was very good. There were about 10k architecture-dependent packages in Debian at the time, and we got about half of them compiled by the beginning of May, counting clean and dirty. If I recall correctly, there were around the same number of arch=all packages (which can be installed on any architecture once built on one of them). Counting both, systems using or1k had about 15k packages available, or 75% of the whole Debian archive (at least "main"; we excluded "contrib" and "non-free"). Not bad.

By the middle to end of May, we had about 6k arch-dependent packages compiled and 4k to go. The count of packages eventually peaked at ~6.6k (in June/July, I think). Many had been built with hacks and not rebuilt cleanly yet, but everything was fine until the number of built packages plateaued.

Plateauing

There were multiple reasons for that. One of them was that, after we had fixed the autoreconf issue locally in some packages, new versions were uploaded to Debian without fixing that problem (in many cases there was no bug report or patch yet, so it was understandable; in other cases the requests were ignored). The wanna-build for the clean repository set up by Christian rightly considered the package out-of-date and prepared to build the more recent version, which failed. Then, other packages entering unstable and build-depending on newer versions of those could not be built ("BD-Uninstallable") until we built the newer versions of the dependencies in the dirty repository with local hacks. Consequently, the count of cleanly built packages went back and forth, when not backwards.

More challenging was the fact that, when creating a new port, language compilers written in that same language need to be bootstrapped for the architecture first. Sometimes it is not the compiler itself, but the compile-time or run-time support for the language's modules that has not been ported yet. Obviously, as with other dependencies, large numbers of packages written in those languages are bound to remain uncompiled for a long time. As Colin Watson explained when porting Haskell's GHC to arm64 and ppc64el, untangling some of the chicken-and-egg problems of language compilers for new ports is extremely challenging.

Perl and Python are pretty much prerequisites of the base Debian system, and Christian got them working early on. But, for example, in May 247 packages depended on r-base-dev (GNU R) for building, and 736 on ghc, and we did not have those dependencies compiled. Counting just those two, 1k of the remaining 4k to 5k source packages to be compiled for the new architecture would have to wait for a long time. Then there was Java, Mono, etc...

Even more worrying were the pending issues with the toolchain, like atomics in glibc, or make check failing for some packages in the clean repository built with wanna-build. Christian continued to work on the toolchain and liaise with the rest of the OpenRISC community; I continued to request changes to the Debian archive through a few more requests to use autoreconf, and by pushing a few more NMUs. Though many requests were attended to, I soon got negative replies/reactions and backed off a bit. In the meantime, the porters of the other new architectures were mostly submitting requests to support their ports and not NMUing much either.

Upstreaming

Things continued more or less in the same state until the end of the summer.

The new ports needed as many packages built as possible before the evaluation of which official ports to accept (in early September, I think, with the final decision around the time of the freeze). Porters of the other new architectures (and maintainers, and other helpful Debian Developers) were by then more active in pushing for changes, especially the remaining autoreconf issues, many of which benefited or1k. As I said before, I also kept pushing NMUs now and then, especially during the summer, for packages which were of no immediate benefit to our port but helped the others (e.g. ppc64el needed updates to libtool's ltmain.sh, which or1k did not, in addition to config.{guess,sub}).

In parallel, in the or1k camp, there were patches that needed to be sent upstream -- for example the one for Python's NumPy, which I submitted in May to the Debian package and upstream, and which was uploaded to Debian in September with a new upstream release. Similar paths were followed between May and September for packages such as jemalloc, ocaml, gstreamer0.10, libgc, mesa, X.org's cf module and cmake (patch created by Christian).

In April, Christian had reached the amazing milestone of tracking down all of the contributors to the GNU binutils port and getting them to assign copyright to the Free Software Foundation (FSF); all of the work was refreshed and upstreamed. In July or August, he started to gather information about the contributors to the GCC port, which had been started more than a decade ago.

After that, nothing much happened (from the outside) until the end of the year, when Christian sent a message to the OpenRISC community about the status of upstreaming GCC. Only one person remained who had not assigned copyright to the FSF, but it was a blocker. In addition, one or more maintainers were needed to liaise with upstream, review the patches, fix the remaining failures in the test suite and keep the port in good shape. A few months after that, from what I could gather, the status remains the same.

Current Status, and The Future?

In terms of the Debian port, there have not been huge visible changes since the end of the summer, and not only because of the Jessie freeze.

It seems that for this effort to keep going forward and be sustainable, sorting out the issues with GCC and glibc is essential. That means having the toolchain completely pushed upstream and in good shape, and in particular completing the copyright assignment. Debian will not accept private forks of those essential packages without a very good reason, even in unofficially supported ports; and from the point of view of porters, working on the remaining not-yet-built packages while there are continuing problems in the toolchain is very frustrating and time-consuming.

Other than that, there is already a significant amount of software available that could run on an or1k system, so I think that overall the project has achieved a significant amount of success. Granted, KDE and LibreOffice are not available yet, and neither are the tools depending on Haskell or Java. But a lot of software is available (including things high in the stack, like XFCE), and in many aspects it should provide a much more functional system than the one available in Linux (or other free software) systems in the late 1990s. If the blocking issues are sorted out in the near future, the effort needed to get a very functional port, on a par with the other unofficial Debian ports, should not be that big.

In my opinion, looking at the big picture, that is not bad at all for an architecture whose hardware implementation is not easy to come by, and whose port was created almost solely with simulators. That it has been possible to get this far with such meagre resources is an amazing feat of Free Software, and of Debian in particular.

As for the future, time will tell, as usual. I will try to keep you posted if there are any significant changes.

21 Apr 2015 12:16am GMT

20 Apr 2015


Mark Brown: Flashing an AT91SAM9G20-EK from bare metal

Since I just had cause to do this, and it was harder than it needed to be due to bitrot in the public documentation I could find, I thought I'd write up how to get a modern bootloader onto older Atmel boards. These instructions are written for the AT91SAM9G20-EK, though they should also apply to other Atmel boards of a similar generation.

These instructions are for booting from NAND, since that's the default for the board. For this, J34 should be fitted to enable the chip select and J33 disconnected to disable the dataflash. If something broken has been programmed into flash, then booting while holding down BP4 should cause the second stage bootloader to trash itself and ensure the ROM bootloader puts itself into recovery mode; alternatively, removing both J33 and J34 during power-on will also ensure no second stage bootloader is found.

There is a ROM bootloader, but it just loads a small region from the boot media and jumps into it, which isn't enough for u-boot, so there is a second stage bootloader called AT91Bootstrap. Download sources for current versions from GitHub. If it (or a more sensibly written equivalent) is not yet merged upstream, you'll need to apply this patch to get it to build with a modern compiler, or you could use an old toolchain (which you'll need in the next step anyway):

diff --git a/board/at91sam9g20ek/board.mk b/board/at91sam9g20ek/board.mk
index 45f59b1822a6..b8251ca2fbad 100644
--- a/board/at91sam9g20ek/board.mk
+++ b/board/at91sam9g20ek/board.mk
@@ -1,7 +1,7 @@
 CPPFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft
 
 ASFLAGS += \
        -DCONFIG_AT91SAM9G20EK \
-       -mcpu=arm926ej-s
+       -mcpu=arm926ej-s -mfloat-abi=soft

Once that's done you can build with:

make at91sam9g20eknf_uboot_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

producing binaries/at91sam9g20ek-nandflashboot-uboot-${VERSION}.bin. This configuration will look for u-boot at 0x40000 in the flash, so we need a u-boot binary. Unfortunately, modern compilers seem to produce binaries that fail with no output. This is normally a sign that the ABI needs specifying more clearly, as above, but I got fed up trying to spot what was missing, so I used an old CodeSourcery 2013.05 release instead; hopefully future versions of u-boot will be able to build for this target with modern toolchains. Grab a recent u-boot release (I used 2015.01) and build with:

cd ${UBOOT}
make at91sam9g20ek_nandflash_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-

to get u-boot.bin.

These can then be flashed using the Atmel flashing tool SAM-BA. Start it and connect to the target. (There is a Linux version, though it appears to rely on old versions of Tcl/Tk, so if you have trouble starting it, the easiest thing is to use the sacrificial Windows laptop you've obtained in order to run the "entertaining" flashing tools companies sometimes provide without risking a real system -- or, in my case, the shiny new laptop I hadn't yet installed Linux on.) Then:

  1. Connect SAM-BA to the device following the dialog on start.
  2. Make sure you've selected "NandFlash" in the memory type tabs in the center of the window.
  3. Run the "Enable NandFlash" script.
  4. Run the "Erase All" script.
  5. Run the "Send Boot File" script and provide the at91bootstrap binary.
  6. Set "Send File Name" to be the u-boot binary you built earlier and "Address" to be 0x40000.
  7. Click "Send File"
  8. Press the reset button

which should result in AT91Bootstrap output followed by u-boot output on the serial console. A similar process works for the AT91SAM9263; there the jumper you need is J19. (Sadly u-boot does not flash pictures of cute animals or forested shorelines on the screen as the default "Basic LCD Project 1.4" firmware does; I'm not sure this "full operating system" thing is really delivering improved functionality.)

20 Apr 2015 4:03pm GMT

Jonathan Wiltshire: Jessie Countdown: 5

Five contributors have become uploading Debian Developers so far in 2015 (source: https://nm.debian.org/public/people).



20 Apr 2015 2:49pm GMT

Daniel Pocock: WebRTC video from mini-DebConf Lyon, France

Thanks to the Debian France community for putting on another very successful mini-DebConf in Lyon recently.

It was a great opportunity to meet more people for the first time and share ideas.

On the first day, I gave a talk about the current status of WebRTC and the Debian WebRTC portal. You can now watch the video of the talk online.

Unfortunately, we had some wifi problems that delayed the demonstration, but we did eventually see it work successfully towards the end of the talk.

20 Apr 2015 12:15pm GMT

Bálint Réczey: Hot upgrading Erlang applications using tools in Debian

Erlang lets you write applications supporting zero downtime by switching a live system over to a different application version, converting the application's state on the fly to the new representation. Debian packages, however, can have only one version installed on a system, which prevents using Erlang's hot-upgrade feature easily.

Engineers at Yakaz (Jean-Sébastien Pédron) came up with a nice solution by creating separate directories for each application release and creating .deb packages for managing the transitions. I had to solve the same problem recently and found that the erlsvc Perl application they created needed a few patches to be usable with the latest Erlang and other packages; with those changes it worked perfectly. Yakaz was not interested in accepting the patches and developing it further, but let me continue the maintenance. Please find the updated erlsvc application under my GitHub account, and feel free to submit patches if you find something to fix in it.

I have also packaged erlsvc as an official Debian package, and it is waiting in the NEW queue to be accepted. When it enters unstable, you will need very little effort to make your applications hot-upgradeable on Debian!

20 Apr 2015 9:58am GMT

Patrick Schoenfeld: Resources about writing puppet types and providers

When doing a lot of devops stuff with Puppet, you might get to a point where the existing types are not enough. That point is usually reached when a task at hand becomes extraordinarily complex when you try to achieve it with the Puppet DSL. One example of such a case could be if you need to interact with a system binary a lot. In this case, writing your own puppet type might be handy.

Now where to start, if you want to write your own type?

Overview: modeling and providing types

First thing that you should know about puppet types (if you do not already): a puppet resource type consists of a type and one or more providers.

The type is a model of the resource and describes which properties (e.g. the uid of a user resource) and parameters (like the managehome parameter) a resource has. It's a good idea to start with a rough idea of which properties you'll manage with your resource and what values they will accept, since the type also does the job of validation.

What actually needs to be done on the target system is the provider's job. There can be different providers for different implementations (e.g. a native Ruby implementation or an implementation using a certain utility), different operating systems and other conditions.

A combination of a type and a matching provider is what forms a (custom) resource type.

Resources

Next I'll show you some resources about puppet provider development, that I found useful:

Official documentation:

Types and providers are actually quite well documented in the official documentation, although it might not go too much into the details:


Blog posts:
A hands-on tutorial in multiple parts, with good explanations, can be found in the blog posts by Gary Larizza:

Books:
Probably the most complete information, including explanations of the puppet resource model and its resource abstraction layer (RAL), can be found in the book Puppet Types and Providers by Dan Bode and Nan Liu.

The puppet source:
Last but not least, it's always worth a peek at how others did it. The puppet source contains all providers of the official puppet release, as well as the base libraries for puppet types and providers with their API documentation: https://github.com/puppetlabs/puppet/

20 Apr 2015 9:29am GMT

Russ Allbery: Review: The Girls from Alcyone

Review: The Girls from Alcyone, by Cary Caffrey

Series: Girls from Alcyone #1
Publisher: Tealy
Copyright: 2011
ISBN: 1-105-33727-8
Format: Kindle
Pages: 315

Sigrid is a very special genetic match born to not particularly special parents, deeply in debt in the slums of Earth. That's how she finds herself being purchased by a mercenary corporation at the age of nine, destined for a secret training program involving everything from physical conditioning to computer implants, designed to make her a weapon. Sigrid, her friend Suko, and the rest of their class are a special project of the leader of the Kimura corporation, one that's controversial even among the corporate board, and when the other mercenary companies unite against Kimura's plans, they become wanted contraband.

This sounds like it could be a tense SF thriller, but I'll make my confession at the start of the review: I had great difficulty taking this book seriously. Initially, it had me wondering what horrible alterations and mind control Kimura was going to impose on the girls, but it very quickly turned into, well, boarding school drama, with little of the menace I was expecting. Not that bullying, or the adults who ignore it to see how the girls will handle it themselves, are light-hearted material, but it was very predictable. As was the teenage crush that grows into something deeper, the revenge on the nastiest bully that the protagonist manages to not be responsible for, and the conflict between unexpectedly competent girls and an invasion of hostile mercenaries.

I'm not particularly well-read or informed about the genre, so I'm not the best person to make this comparison, but the main thing The Girls from Alcyone reminded me of was anime or manga. The mix of boarding-school interpersonal relationships, crushes and passionate love, and hypercompetent female action heroes who wear high heels and have constant narrative attention on their beauty had that feel to it. Add in the lesbian romance and the mechs (of sorts) that show up near the end of the story, and it's hard to shake the feeling that one is reading SF yuri as imagined by a North American author.

The other reason why I had a hard time taking this seriously is that it's over-the-top action sequences (it's the Empire Strikes Back rescue scene!) mixed with rather superficial characterization, with one amusing twist: female characters almost always end up being on the side of the angels. Lady Kimura, when she appears, turns into exactly the sort of mentor figure that one would expect given the rest of the story (and the immediate deference she got felt like it was lifted from anime). The villains, meanwhile, are hissable and motivated by greed or control. While there's a board showdown, there's no subtle political maneuvering, just a variety of more or less effective temper tantrums.

I found The Girls from Alcyone amusing, and even fun to read in places, but that was mostly from analyzing how closely it matched anime and laughing at how reliably it delivered characteristic tropes. It thoroughly embraces its action-hero story full of beautiful, deadly women, but it felt more like a novelization of a B-grade sci-fi TV show than serious drama. It's just not well-written or deep enough for me to enjoy it as a novel. None of the characters were particularly engaging, partly because they were so predictable. And the deeper we got into the politics behind the plot, the less believable I found any of it.

I picked this up, along with several other SFF lesbian romances, because sometimes it's nice to read a story with SFF trappings, a positive ending, and a lack of traditional gender roles. The Girls from Alcyone does have most of those things (the gender roles are tweaked but still involve a lot of men looking at beautiful women). But unless you really love anime-style high-tech mercenary boarding-school yuri, want to read it in book form, and don't mind a lot of cliches, I can't recommend it.

Followed by The Machines of Bellatrix.

Rating: 3 out of 10

20 Apr 2015 4:28am GMT

19 Apr 2015


Richard Hartmann: Release Critical Bug report for Week 16

The UDD bugs interface currently knows about the following release critical bugs:

How do we compare to the Squeeze and Wheezy release cycles?

Week Squeeze Wheezy Jessie
43 284 (213+71) 468 (332+136) 319 (240+79)
44 261 (201+60) 408 (265+143) 274 (224+50)
45 261 (205+56) 425 (291+134) 295 (229+66)
46 271 (200+71) 401 (258+143) 427 (313+114)
47 283 (209+74) 366 (221+145) 342 (260+82)
48 256 (177+79) 378 (230+148) 274 (189+85)
49 256 (180+76) 360 (216+155) 226 (147+79)
50 204 (148+56) 339 (195+144) ???
51 178 (124+54) 323 (190+133) 189 (134+55)
52 115 (78+37) 289 (190+99) 147 (112+35)
1 93 (60+33) 287 (171+116) 140 (104+36)
2 82 (46+36) 271 (162+109) 157 (124+33)
3 25 (15+10) 249 (165+84) 172 (128+44)
4 14 (8+6) 244 (176+68) 187 (132+55)
5 2 (0+2) 224 (132+92) 175 (124+51)
6 release! 212 (129+83) 161 (109+52)
7 release+1 194 (128+66) 147 (106+41)
8 release+2 206 (144+62) 147 (96+51)
9 release+3 174 (105+69) 152 (101+51)
10 release+4 120 (72+48) 112 (82+30)
11 release+5 115 (74+41) 97 (68+29)
12 release+6 93 (47+46)
13 release+7 50 (24+26)
14 release+8 51 (32+19)
15 release+9 39 (32+7)
16 release+10 20 (12+8)
17 release+11 24 (19+5)
18 release+12 2 (2+0)

Graphical overview of bug stats thanks to azhag:

19 Apr 2015 9:35pm GMT

Wouter Verhelst: Youn Sun Nah 5tet: Light For The People

About a decade ago, I played in the (now defunct) "Jozef Pauly ensemble", a flute choir connected to the musical academy where I was taught to play the flute. At the time, this ensemble had the habit of going on summer trips every year; sometimes these trips were large international concert tours (like our 2001 trip to Australia), but that wasn't always the case; there have also been smaller trips, like the 2002 one to the French Ardennes.

While there, we went on a day trip to the city of Reims. As a city close to the front in the First World War, it has a museum dedicated to that subject, which I remember going to. But the fondest memory of that day was going to a park where a podium was set up, with a few stacks of fold-up chairs standing nearby. I took one and listened to the music.

That was the day when I realized that I kind of like jazz. I had come into contact with jazz before, but it had always been something used as a kind of musical wallpaper; something you put on, but don't consciously listen to. Watching this woman sing, however, was a different kind of experience altogether. I'm still very fond of her rendition of "Besame Mucho".

After having listened to the concert for about two hours, they called it quits, but did tell us that there was a record which you could buy. Of course, after having enjoyed the afternoon so much, I couldn't imagine not buying it, so that happened.

Fast forward several years: in the move from my apartment above my then-office to my current apartment (just around the corner), the record got put into the wrong box, and when I unpacked things again it got lost; permanently, I thought. Since I also hadn't digitized it yet at the time, I hadn't listened to it in quite a while.

But that time came to an end today. The record which I thought I'd lost wasn't; it was just in a weird place, and while cleaning yesterday I found it sitting among a bunch of old stuff that I was going to throw out. Putting on the record today made me realize again how good it really is, and I thought I might want to see if she was still active, and if she might perhaps have made another album.

It was great to find out that not only had she made six more albums since the one I bought, she had also become a lot better known in the jazz world (which I must admit I don't really follow all that well), and won a number of awards.

At the time, Youn Sun Nah was just a (fairly) recent graduate from a particular Jazz school in Paris. Today, she appears to be so much more...

19 Apr 2015 9:25am GMT

Laura Arjona: Six months selfhosting: my userop experiences

Note: In this post I mention some problems and ask questions (of myself, like "thinking aloud"). The goal is not to get answers to those questions (I suppose that I will find them sooner or later on the internet, in manuals and so on), but to show the kind of problems and questions that arise in my selfhosting adventures, which I suppose are common to other people trying to administer a home server with some web services.

Am I an userop? Well, I'm something in between a (GNU/Linux) user and a sysadmin: I have studied computer technical engineering, but most of my experience has been in helpdesk, providing support for Windows users. I have been running Debian on some LAMP boxes at work (without GUI) since 2008 or so, and on my desktops (with GUI) since 2010. I don't code or package, but I don't mind trying to read code and understand it (or not). I know a bit of C, a bit of Python, a bit of PHP, and enough Perl to open a Perl file and close it after two minutes, understanding that it's great, but too much for me :) I translate software, so I'm not scared of cloning a repository, editing files, committing or submitting a patch. I'm not scared of compiling a program (except if it's an Android app: I try to avoid setting up the development environment just to try some translation that I made… but I did build my Puma before the binary was available for download or in F-Droid).

In conclusion, I feel more like a "GNU/Linux power user" than a "sysadmin". Sometimes just a "user" or even a "newbie" (for example, I don't know the Unix/Linux directory tree very well… where are the wallpapers stored? Does it depend on the desktop that I use?).

Anyway. I won't stop my free software + free networks digital life just because I don't know many things. I bought a small server for home last September, and I wanted to try to selfhost some services, for me and for my family. I want to be a "home sysadmin" or something like that, so I joined the "userops" mailing list :)

Here are my experiences of selfhosting/being an userop so far.

Mail

I didn't even try to set up my own mail server, because many people say it's a pain (although nice articles have been published about how to do it, for example this series in Ars Technica), it needs a static IP, which would be 14€/month extra from my ISP, and Gandi, the place where I rented my domain name, provides mail; they use Debian and Roundcube, and sponsor Debian too, so I decided to trust them.

So this is my strategy now, to try to keep mail under my control:

MediaGoblin

I've set up two MediaGoblin instances (yes, two!). I managed to do it on Debian 7 stable (I think NodeJS' npm was not needed then), but soon after I upgraded to Jessie, so now it's even better.

I installed Nginx and PostgreSQL via apt, to use them for both instances (and probably some more software later).

One instance is public; it uses a dedicated Debian user and a PostgreSQL database, and it runs at http://media.larjona.net.
I have requested an SSL cert from Gandi, but I still haven't deployed it (lazy LArjona!!).

The other instance is private, for family photos. I didn't know very well how much of my existing setup I could reuse, nor how to keep both instances isolated in case of downtime or attack… I know more or less the concept of a "chroot", but I don't know how to deploy one on my machine. So I decided to use another Debian user, another PostgreSQL database, deploy MediaGoblin in a different folder, and create another virtual server in my Nginx to serve it. I managed to set up that virtual server to HTTP-authenticate, serve content on a different port, and use a self-signed SSL certificate (it's only for family, so that does not matter). I created another (unprivileged) Debian user with a password for the nginx authentication, and gave my family the URL in the form https://mediaprivate.larjona.net:PortNumber plus the user and password (mediaprivate is a string, and PortNumber is a number). I think they don't use the instance much, but at least I upload photos there from time to time and email the link instead of emailing the photos themselves (they don't use GPG either…).
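The supporting pieces of such a setup are quick to create. A sketch with illustrative file names (an htpasswd file is the usual nginx mechanism for the password prompt; the exact wiring here may differ from mine):

# Self-signed certificate for the family-only virtual server:
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout /etc/nginx/mediaprivate.key -out /etc/nginx/mediaprivate.crt

# Password file for nginx basic auth (htpasswd comes in apache2-utils):
htpasswd -c /etc/nginx/mediaprivate.htpasswd mediaprivate

# The nginx server block then uses, roughly:
#   listen <PortNumber> ssl;
#   ssl_certificate     /etc/nginx/mediaprivate.crt;
#   ssl_certificate_key /etc/nginx/mediaprivate.key;
#   auth_basic           "private";
#   auth_basic_user_file /etc/nginx/mediaprivate.htpasswd;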

Upgrades

I upgraded MediaGoblin from 0.7.1 to 0.8.0 successfully, and sent a report about how I did it to the mailing list. First I upgraded the public instance; once I had figured out the process, I upgraded the second instance to test my instructions, and then I sent the report with the instructions to the mailing list.

Static site and LimeSurvey: the power of free software (with instructions)

I wanted to act as a mirror of floss2013.libresoft.es and surveys.libresoft.es, since they had suffered downtime and I had participated in that project (not in the sysadmin part, but in the research and content creation).

The static site floss2013.libresoft.es offered a zip with the whole website tree (the website was licensed under the AGPL), and I had access to the git repo holding the development copy of the website. So I just cloned the repository, set up another nginx virtual server on my machine, and tuned my DNS zone on the Gandi website to serve floss2013.larjona.net from home. A 10-minute setup YAY! #inGitWeTrust #FreeSoftwareFTW :)

For surveys.larjona.net I had to install a LimeSurvey instance. I knew how to do it because we use LimeSurvey at work, but at home I had Nginx instead of Apache, and PostgreSQL instead of MySQL. And no PHP… I searched for how to install PHP with Nginx (I can use apt-get, nice!) and how to install LimeSurvey with Nginx and PostgreSQL (I had documentation about that, so I followed it, and it worked).
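For reference, the PHP side of such a setup on Jessie amounts to a couple of packages plus one nginx location block; a sketch (package names as in Debian 8, socket path as shipped by php5-fpm):

# PHP FastCGI daemon plus the PostgreSQL driver LimeSurvey needs:
sudo apt-get install php5-fpm php5-pgsql

# nginx then forwards .php requests to the php5-fpm socket, roughly:
#   location ~ \.php$ {
#       include fastcgi_params;
#       fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
#       fastcgi_pass unix:/var/run/php5-fpm.sock;
#   }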

To make the data available (one survey and its results, so people can log in as a visitor to query it and get statistics), I downloaded the LimeSurvey export dataset that we were providing on the static website, followed the replication instructions (hey, I wrote them!), and they worked #oleole! (And here, dear researchers, it is demonstrated that free software and free culture really empower your research and help spread your results.)

Etherpad: not so easy, it seems!

I'm trying to install Etherpad-Lite, but I'm suffering a bit. I think I did everything right according to some guides, but I get "Bad Gateway" and these kinds of errors when trying to browse with Lynx on the host:

[error] 3615#0: *24 upstream timed out (110: Connection timed out) 
while reading response header from upstream, 
client: 127.0.0.1, 
server: pad.larjona.net, 
request: "GET / HTTP/1.0", 
upstream: "http://127.0.0.1:9001/", 
host: "pad.larjona.net"

2015/04/17 20:52:56 [error] 3615#0: *24 connect() failed 
(111: Connection refused) while connecting to upstream, 
client: 127.0.0.1, 
server: pad.larjona.net, 
request: "GET / HTTP/1.0", 
upstream: "http://[::1]:9001/", 
host: "pad.larjona.net"

I'm not sure if I need to open some port in iptables or my router, or change my nginx configuration because the guides assume you're serving only one website on port 80 (and I have several of them now…), or what… I've spent three chunks of time (maybe ~2h each?) on this, on different days, and couldn't figure it out, so I decided to round-robin my TODO list.
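For what it's worth, those particular errors point at the upstream rather than at iptables or the router: "connection refused" on 127.0.0.1:9001 and [::1]:9001 usually means that nothing is listening there at all. A couple of checks worth running (a guess from the logs above, not a confirmed diagnosis):

# Is the Etherpad process actually up and listening on port 9001?
sudo ss -tlnp | grep 9001
sudo netstat -tlnp | grep 9001

# If Etherpad listens on IPv4 only while nginx resolves "localhost"
# to ::1, pointing proxy_pass explicitly at http://127.0.0.1:9001 in
# the virtual server avoids the second error.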

Userops thoughts

Debian brings peace of mind (for me)

On one hand, maintaining a Debian box is quite easy, and the more software that is packaged, the less time I spend installing or upgrading. I like being on stable; I'm on Jessie now (I migrated when it was frozen), and I'll stay on stable as much as I can.

I like that I can use the software that I installed via apt-get for several services (nginx, PostgreSQL…). As for the software that is not packaged (MediaGoblin, LimeSurvey, Etherpad, maybe others later), I wonder how dependencies and updates are handled. And maybe (probably) I have installed some components several times, once for each service (this sounds like a Windows box #grr).

For example, MediaGoblin uses PyPump. PyPump 0.5 is packaged in Debian Jessie, but MediaGoblin uses PyPump 0.7+. What if PyPump 0.7+ gets into, let's say, jessie-backports? Can I benefit from that?

I know that MediaGoblin's upgrade instructions include upgrading the dependencies, but what about a security patch in one dependency? Should I upgrade the pip modules periodically? How do I know whether an upgrade is recommended because it patches a vulnerability, or whether it just brings new features (and maybe breaks my setup)?

This kind of thing is the "peace of mind" that Debian packaging brings me: when a piece of software is packaged, I know I may need to care about proper setup and configuration, but afterwards it's kind-of-easy to maintain (since the Debian maintainers take care of the rest). I don't mind cloning a repo and compiling; what I mind about is what comes later, and coexistence with other programs/services. I trust the MediaGoblin community and I'm an active member (not a developer, but I hang out on IRC, follow the mailing list, etc.), but for example I don't know anything about the Etherpad project. And I don't feel like joining the community of each piece of software that I use (I'm already an active member of Debian, MediaGoblin, F-Droid and Pump.io, a translator of LimeSurvey and many other small apps that I use, and in the future I will use more services, like OwnCloud, XMPP…); that is becoming unsustainable :s

Free software is more than software

I follow the userops mailing list, and it's becoming very technical. I mostly understand the problems (which are similar to the problems that I face: how to isolate different services, how to configure them easily, how to make them installable by an average user…), but I don't understand most of the solutions proposed. I think we probably need technical solutions, but in the meanwhile some issues can be addressed not with software, but by other means: good documentation, community support, translations, beta-testers…

This is my conclusion so far. When a project is well documented, I think I can find my way to selfhost it, no matter whether the software is packaged (or "contained") or not. MediaGoblin and LimeSurvey are well documented, and their user support channels are very responsive.

I find lots of instructions that assume you will use a whole machine for their service (and nothing else), and lots of documentation for the LAMP stack, but not for Nginx + PostgreSQL, with Node instead of PHP… So, for each "particularity" of my setup, I search the internet and try to pick good sources to help me do what I want to do.

I'm kind of privileged

Some elements, not software related, to take into account as prerequisites for succeeding at selfhosting services:

These aspects are not present in a lot of people. If I look around at the "computer users" that I know (mostly Windows+Android, some GNU/Linux users, some Mac OS X users, some iOS users), I find that they search for things like "X does not work", or they cannot write a proper search query in English. Or they trust some random person writing a recipe on their blog, without first trying to understand what the recipe does. Other people just say "I'm not a professional sysadmin, I'll just do what «everybody» does (aka use Google services or whatever). What if I try and I don't succeed?". Things like that.

We may need some technical solutions (and hackers are thinking about that, and working on it). But I feel that we need, even more, a huge group of beta-testers, dogfooding people, adventurers who try the half-cooked solutions and report successful and unsuccessful experiences, to guide the research and make software technologies advance. I'm not sure if I am an userop, but I feel part of that "vanguard force"; I want to be part of the future of free software and free networks.

Comments?

You can comment about this post on this pump.io thread.


Filed under: My experiences and opinion Tagged: Communities, Contributing to libre software, Debian, Developer motivations, English, free networks, Free Software, Freedom, innovation, MediaGoblin, Moving into free software, Project Management, selfhosting, sysadmin

19 Apr 2015 12:06am GMT

18 Apr 2015


Gregor Herrmann: RC bugs 2015/11-16

only one week left until the jessie release. yay!

in the last weeks I didn't find many RC bugs that I could fix; still, here's the short list; nice feature: I mostly helped others or could build on work done by others.

18 Apr 2015 10:52pm GMT

Neil McGovern: Taking office

Yesterday, my first term as the Debian Project Leader started. There have been a large number of emails congratulating me, and thanks to everyone who sent those. I'd also like to thank Mehdi Dogguy and Gergely Nagy for running, and of course Lucas Nussbaum for his service over the past two years.

Lucas also did a great handover, and so (I hope!) I'm aware of most of the issues that are ongoing. As stated previously, I'll keep my daily log of activities in /srv/leader/news/ on master.debian.org.

18 Apr 2015 1:36pm GMT

Russ Allbery: Review: The Long Way to a Small, Angry Planet

Review: The Long Way to a Small, Angry Planet, by Becky Chambers

Publisher: CreateSpace
Copyright: 2014
ISBN: 1-5004-5330-7
Format: Kindle
Pages: 503

The Wayfarer is a tunneling ship: one of the small, unremarked construction ships that help build the wormhole network used for interstellar transport. It's a working ship with a crew of eight (although most people would count seven and not count the AI). They don't all like each other - particularly not the algaeist, who is remarkably unlikeable - but they're used to each other. It's not a bad life, although a more professional attention to paperwork and procedure might help them land higher-paying jobs.

That's where Rosemary Harper comes in. At the start of the book, she's joining the ship as their clerk: nervous, hopeful, uncertain, and not very experienced. But this is a way to get entirely away from her old life and, unbeknownst to the ship she's joining, her real name, identity, and anyone who would know her.

Given that introduction, I was expecting this book to be primarily about Rosemary. What is she fleeing? Why did she change her identity? How will that past come to haunt her and the crew that she joined? But that's just the first place that Chambers surprised me. This isn't that book at all. It's something much quieter, more human, more expansive, and more joyful.

For one, Chambers doesn't stick with Rosemary as a viewpoint character, either narratively or with the focus of the plot. The book may open with Rosemary and the captain, Ashby, as focal points, but that focus expands to include every member of the crew of the Wayfarer. We see each through others' eyes first, and then usually through their own, either in dialogue or directly. This is a true ensemble cast. Normally, for me, that's a drawback: large viewpoint casts tend to be either jarring or too sprawling, mixing people I want to read about with people I don't particularly care about. But Chambers avoids that almost entirely. I was occasionally a touch disappointed when the narrative focus shifted, but then I found myself engrossed in the backstory, hopes, and dreams of the next crew member, and the complex ways they interweave. Rosemary isn't the center of this story, but only because there's no single center.

It's very hard to capture in a review what makes this book so special. The closest that I can come is that I like these people. They're individual, quirky, human (even the aliens; this is from more the Star Trek tradition of alien worldbuilding), complicated, and interesting, and it's very easy to care about them. Even characters I never expected to like.

The Long Way to a Small, Angry Planet does have a plot, but it's not a fast-moving or completely coherent one. The ship tends to wander, even when the mission that gives rise to the title turns up. And there are a lot of coincidences here, which may bother you if you're reading for plot. At multiple points, the ship ends up in exactly the right place to trigger some revelation about the backstory of one of the crew members, even if the coincidence strains credulity. Similar to the algae-driven fuel system, some things one just has to shrug about and move past.

On other fronts, though, I found The Long Way to be refreshingly willing to take a hard look at SF assumptions. This is not the typical space opera: humans are a relatively minor species in this galaxy, one that made rather a mess of their planet and are now refugees. They are treated with sympathy or pity; they're not somehow more flexible, adaptable, or interesting than the rest of the galaxy. More fascinating to me, humans are mostly pacifists, a cultural reaction to the dire path through history that brought them to their current exile. This is set against a backdrop of a vibrant variety of alien species, several of whom are present aboard the Wayfarer. The history and background of the other species are not, sadly, as well fleshed out as the humans', but each comes with at least a few twists that add interest to the story.

But the true magic of this book, the thing that it has in overwhelming abundance, is heart. Not everyone in this book is a good person, but most of them are trying. I've rarely read a book full of so much empathy and willingness to reach out to others with open hands. And, even better, they're all nice in different ways. They bring their own unique personalities and approaches to their relationships, particularly the complex web of relationships that connects the crew. When bad things happen, and, despite the overall light tone, a few very bad things happen, the crew rallies like friends, or like chosen family. I have to say it again: I like these people. Usually, that's not a good sign for a book, since wholly likeable people don't generate enough drama. But this is one of the better-executed "protagonist versus nature" plots I've read. It successfully casts the difficulties of making a living at a hard and lonely and political job as the "nature" that provides the conflict.

This is a rather unusual book. It's probably best classified as space opera, but it doesn't fit the normal pattern of space opera and it doesn't have enough drama. It's not a book about changing the universe; at the end of the book, the universe is in pretty much the same shape as we found it. It's not even about the character introduced in the first pages, or really that much about her dilemma. And it's certainly not a book about winning a cunning victory against your enemies.

What it is, rather, is a book about friendships, about chosen families and how they form, about being on someone else's side, about banding together while still being yourself. It's about people making a living in a hard universe, together. It's full of heart, and I loved it.

I'm unsurprised that The Long Way to a Small, Angry Planet had to be self-published via a Kickstarter campaign to find its audience. I'm also unsurprised that, once it got out there, it proved very popular and has now been picked up by a regular publisher. It's that sort of book. I believe it's currently out of print, at least in the US, as its new publisher spins up that process, but it should be back in print by late 2015. When that happens, I recommend it to your attention. It was the most emotionally satisfying book I've read so far this year.

Rating: 9 out of 10

18 Apr 2015 6:24am GMT

Steve Kemp: skx-www upgraded to jessie

Today I upgraded my main web-host to the Jessie release of Debian GNU/Linux.

I performed the upgrade by changing wheezy to jessie in the sources.list file, then running:

apt-get update
apt-get dist-upgrade

For some reason this didn't upgrade my kernel, which remained the 3.2.x version. That failed to boot, due to some udev/systemd issues (lots of "waiting for job: udev /dev/vda", etc.). To fix this I logged into my KVM host, chrooted into the disk image (which I mounted via the use of kpartx), and installed the 3.16.x kernel, before rebooting into that; the rough sequence is sketched below.
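For anyone needing the same rescue, the kpartx dance looks roughly like this (volume and kernel package names are illustrative):

# On the KVM host: map the guest image's partitions, chroot in,
# and install a Jessie kernel.
sudo kpartx -av /dev/vg0/skx-www          # creates /dev/mapper/vg0-skx--www1 etc.
sudo mount /dev/mapper/vg0-skx--www1 /mnt
sudo mount --bind /dev  /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo chroot /mnt apt-get install linux-image-3.16.0-4-amd64
sudo umount /mnt/proc /mnt/dev /mnt
sudo kpartx -dv /dev/vg0/skx-www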

All my websites seemed to be OK, but I made some changes regardless. (This was mostly for "neatness": using Debian packages instead of gems, and installing the attic package rather than keeping the source install I'd made in /opt/attic.)

The only surprise was the significant upgrade of the Net::DNS Perl module. Nothing that a few minutes' work didn't fix.

Now that I've upgraded, the SSL issue I had with redirections is no longer present. So it was a worthwhile thing to do.

18 Apr 2015 12:00am GMT

17 Apr 2015


EvolvisForge blog: Tricks for using Googlemail at work

For those who similarly suffer from having to use Googlemail at work: if anyone else has more tricks like these, please do share.

Deactivate the spamfilter

The site admins can do that. Otherwise, work-relevant eMails, for example from your own OTRS system, will end up in Spam (where you don't see them, as their IMAP sucks) and be deleted without asking 30 days later. (AIUI, that's the only way to get eMails actually deleted from Google…)

Do not use their SMTP service

Use your own outgoing MTA instead. This brings back the, well, not quite a feature, more a should-have-been-granted-but-Google-doesn't-do-it-anyway behaviour that, when you write to a mailing list, you also get your own messages in your own INBOX.
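One way to do that is a small local MTA relaying through a server you control, with the mail client's SMTP settings pointed at localhost; for example with postfix (the relay hostname is a placeholder):

# Install postfix and relay all outgoing mail through your own server:
sudo apt-get install postfix
sudo postconf -e 'relayhost = [mail.example.org]:587'
sudo service postfix restart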

Calendars…

I have no solution for this. I stopped using the Googlemail calendars because they didn't think it a problem that, when I accept an invitation in Kontact (KDE PIM as packaged in Debian sid), the organiser of the calendar item in the sender's calendar (for which I do not have write permission) changes to me (so the actual meeting organiser cannot change anything afterwards), and/or calendar items get doubled. I now run a local uw-imapd (forward-ported to sid by means of a binNMU) for sent-mail folders etc., and a local iCalendar directory for calendars.

17 Apr 2015 1:54pm GMT

16 Apr 2015


Daniel Pocock: Debian Jessie release, 100 year ANZAC anniversary

The date scheduled for the jessie release, 25 April 2015, is also ANZAC Day and the 100th anniversary of the Gallipoli landings. ANZAC Day is a public holiday in Australia, New Zealand and a few other places, with ceremonies remembering the sacrifices made by the armed services in all the wars.

Gallipoli itself was a great tragedy; Australian forces were not victorious. Nonetheless, it is probably the most well-remembered battle of all the wars. There is even a movie, Gallipoli, starring Mel Gibson.

It is also the 97th anniversary of the liberation of Villers-Bretonneux in France. The previous day had seen the world's first tank vs tank battle between three British tanks and three German tanks. The Germans won and captured the town. At that stage, Britain didn't have the advantage of nuclear weapons, so they sent in Australians, and the town was recovered for the French. The town has a rue de Melbourne and rue Victoria and is also the site of the Australian National Memorial for the Western Front.

It's great to see that projects like Debian are able to span political and geographic boundaries and allow everybody to collaborate for the greater good. ANZAC Day might be an interesting opportunity to reflect on the fact that the world hasn't always enjoyed such community.

16 Apr 2015 5:48pm GMT