17 Dec 2014


Gregor Herrmann: GDAC 2014/17

my list of IRC channels (& the list of people I'm following on micro-blogging platforms) has a heavy debian bias. a thing I noticed today is that I had read (or at least: seen) messages in 6 languages (English, German, Castilian, Catalan, French, Italian). - thanks guys for the free language courses :) (& the opportunity to at least catch a glimpse into other cultures)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

17 Dec 2014 10:30pm GMT

Keith Packard: MST-monitors

Multi-Stream Transport 4k Monitors and X

I'm sure you've seen a 4k monitor on a friend's desk running Mac OS X or Windows and are all ready to go get one so that you can use it under Linux.

Once you've managed to acquire one, I'm afraid you'll discover that when you plug it in, you're limited to 30Hz refresh rates at the full size, unless you're running a kernel that is version 3.17 or later. And then...

Good Grief! What Is My Computer Doing!

Ok, so now you're running version 3.17 and when X starts up, it's like you're using a gigantic version of Google Cardboard. Two copies of a very tall, but very narrow, screen greet you.

Welcome to MST island.

In order to drive these giant new panels at full speed, there isn't enough bandwidth in the display hardware to individually paint each pixel once during each frame. So, like all good hardware engineers, they invented a clever hack.

This clever hack paints the screen in parallel. I'm assuming that they've got two bits of display hardware, each one hooked up to half of the monitor. Now, each paints only half of the pixels, avoiding costly redesign of expensive silicon, at least that's my surmise.

In the olden days, if you did this, you'd end up running two monitor cables to your computer, and potentially even having two video cards. Today, thanks to the magic of Display Port Multi-Stream Transport, we don't need all of that; instead, MST allows us to pack multiple cables-worth of data into a single cable.

I doubt the inventors of MST intended it to be used to split a single LCD panel into multiple "monitors", but hardware engineers are clever folk and are more than capable of abusing standards like this when it serves to save a buck.

Turning Two Back Into One

We've got lots of APIs that expose monitor information in the system, and across which we might be able to wave our magic abstraction wand to fix this:

  1. The KMS API. This is the kernel interface which is used by all graphics stuff, including user-space applications and the frame buffer console. Solve the problem here and it works everywhere automatically.

  2. The libdrm API. This is just the KMS ioctls wrapped in a simple C library. Fixing things here wouldn't make fbcons work, but would at least get all of the window systems working.

  3. Every 2D X driver. (Yeah, we're trying to replace all of these with the one true X driver). Fixing the problem here would mean that all X desktops would work. However, that's a lot of code to hack, so we'll skip this.

  4. The X server RandR code. More plausible than fixing every driver, this also makes X desktops work.

  5. The RandR library. If not in the X server itself, how about over in user space in the RandR protocol library? Well, the problem here is that we've now got two of them (Xlib and xcb), and the xcb one is auto-generated from the protocol descriptions. Not plausible.

  6. The Xinerama code in the X server. Xinerama is how we did multi-monitor stuff before RandR existed. These days, RandR provides Xinerama emulation, but we've been telling people to switch to RandR directly.

  7. Some new API. Awesome. Ok, so if we haven't fixed this in any existing API we control (kernel/libdrm/X.org), then we effectively dump the problem into the laps of the desktop and application developers. Given how long it's taken them to adopt current RandR stuff, providing yet another complication in their lives won't make them very happy.

All Our APIs Suck

Dave Airlie merged MST support into the kernel for version 3.17 in the simplest possible fashion -- pushing the problem out to user space. I was initially vaguely tempted to go poke at it and try to fix things there, but he eventually convinced me that it just wasn't feasible.

It turns out that all of our fancy new modesetting APIs describe the hardware in more detail than any application actually cares about. In particular, we expose a huge array of hardware objects: CRTCs, encoders, connectors/outputs, modes, properties and the like.

Each of these objects exposes intimate details about the underlying hardware -- which of them can work together, and which cannot; what kinds of limits are there on data rates and formats; and pixel-level timing details about blanking periods and refresh rates.

To make things work, some piece of code needs to actually hook things up, and explain to the user why the configuration they want just isn't possible.

The sticking point we reached was that when an MST monitor gets plugged in, it needs two CRTCs to drive it. If one of those is already in use by some other output, there's just no way you can steal it for MST mode.

Another problem -- we expose EDID data and actual video mode timings. Our MST monitor has two EDID blocks, one for each half. They happen to describe how they're related, and how you should configure them, but if we want to hide that from the application, we'll have to pull those EDID blocks apart and construct a new one. The same goes for video modes; we'll have to construct ones for MST mode.

Every single one of our APIs exposes enough of this information to be dangerous.

Every one, except Xinerama. All it talks about is a list of rectangles, each of which represents a logical view into the desktop. Did I mention we've been encouraging people to stop using this? And that some of them listened to us? Foolishly?

Dave's Tiling Property

Dave hacked up the X server to parse the EDID strings and communicate the layout information to clients through an output property. Then he hacked up the gnome code to parse that property and build a RandR configuration that would work.

Then, he changed the RandR Xinerama code to also parse the TILE properties and fix up the data seen by applications accordingly.

This works well enough to get a desktop running correctly, assuming that desktop uses Xinerama to fetch this data. Alas, gtk has been "fixed" to use RandR if you have RandR version 1.3 or later. No biscuit for us today.

Adding RandR Monitors

RandR doesn't have enough data types yet, so I decided that what we wanted to do was create another one; maybe that would solve this problem.

Ok, so what clients mostly want to know is which bits of the screen are going to be stuck together and should be treated as a single unit. With current RandR, that's some of the information included in a CRTC. You pull the pixel size out of the associated mode, physical size out of the associated outputs and the position from the CRTC itself.

Most of that information is available through Xinerama too; it's just missing physical sizes and any kind of labeling to help the user understand which monitor you're talking about.

The other problem with Xinerama is that it cannot be configured by clients; the existing RandR implementation constructs the Xinerama data directly from the RandR CRTC settings. Dave's Tiling property changes edit that data to reflect the union of associated monitors as a single Xinerama rectangle.

Allowing the Xinerama data to be configured by clients would fix our 4k MST monitor problem as well as solve the longstanding video wall, WiDi and VNC troubles. All of those want to create logical monitor areas within the screen under client control.

What I've done is create a new RandR datatype, the "Monitor", which defines a rectangular region of the screen treated as a single unit. Each monitor has the following data:

 • a name (an ATOM), plus 'primary' and 'automatic' flags
 • its position and size in pixels
 • its physical size in millimeters
 • a list of associated outputs (which may be empty)

There are three requests to define, delete and list monitors. And that's it.

Now, we want the list of monitors to completely describe the environment, and yet we don't want existing tools to break completely. So, we need some way to automatically construct monitors from the existing RandR state while still letting the user override portions of it as needed to explain virtual or tiled outputs.

So, what I did was to let the client specify a list of outputs for each monitor. All of the CRTCs which aren't associated with an output in any client-defined monitor are then added to the list of monitors reported back to clients. That means that clients need only define monitors for things they understand, and they can leave the other bits alone and the server will do something sensible.

The second tricky bit is that if you specify an empty rectangle at 0,0 for the pixel geometry, then the server will automatically compute the geometry using the list of outputs provided. That means that if any of those outputs get disabled or reconfigured, the Monitor associated with them will appear to change as well.
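
As a concrete sketch of how a client could use this, assuming the patched xrandr from the repositories listed below and hypothetical output names DP-1-1 and DP-1-2 for the two MST halves, gluing the two halves into a single logical monitor might look like this:

xrandr --setmonitor 4kpanel auto DP-1-1,DP-1-2
xrandr --listmonitors

Passing "auto" for the geometry corresponds to the empty-rectangle case just described: the server computes the bounding box of the listed outputs and keeps tracking it as they are reconfigured.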

Current Status

Gtk+ has been switched to use RandR for RandR versions 1.3 or later. Locally, I hacked libXrandr to override the RandR version through an environment variable, set that to 1.2 and Gtk+ happily reverts back to Xinerama and things work fine. I suspect the plan here will be to have it use the new Monitors when present as those provide the same info that it was pulling out of RandR's CRTCs.

KDE appears to still use Xinerama data for this, so it "just works".

Where's the code

As usual, all of the code for this is in a collection of git repositories in my home directory on fd.o:

git://people.freedesktop.org/~keithp/randrproto master
git://people.freedesktop.org/~keithp/libXrandr master
git://people.freedesktop.org/~keithp/xrandr master
git://people.freedesktop.org/~keithp/xserver randr-monitors
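
To try this out, the protocol headers, library, command-line tool and server all need to come from those branches; a minimal sketch (build and install steps omitted):

git clone git://people.freedesktop.org/~keithp/randrproto
git clone git://people.freedesktop.org/~keithp/libXrandr
git clone git://people.freedesktop.org/~keithp/xrandr
git clone -b randr-monitors git://people.freedesktop.org/~keithp/xserver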

RandR protocol changes

Here are the new sections added to randrproto.txt:

                  ❧❧❧❧❧❧❧❧❧❧❧

1.5. Introduction to version 1.5 of the extension

Version 1.5 adds monitors

 • A 'Monitor' is a rectangular subset of the screen which represents
   a coherent collection of pixels presented to the user.

 • Each Monitor is associated with a list of outputs (which may be
   empty).

 • When clients define monitors, the associated outputs are removed from
   existing Monitors. If removing the output causes the list for that
   monitor to become empty, that monitor will be deleted.

 • For active CRTCs that have no output associated with any
   client-defined Monitor, one server-defined monitor will
   automatically be defined for the first Output associated with them.

 • When defining a monitor, setting the geometry to all zeros will
   cause that monitor to dynamically track the bounding box of the
   active outputs associated with it.

This new object separates the physical configuration of the hardware
from the logical subsets of the screen that applications should
consider as single viewable areas.

1.5.1. Relationship between Monitors and Xinerama

Xinerama's information now comes from the Monitors instead of directly
from the CRTCs. The Monitor marked as Primary will be listed first.

                  ❧❧❧❧❧❧❧❧❧❧❧

5.6. Protocol Types added in version 1.5 of the extension

MONITORINFO { name: ATOM
          primary: BOOL
          automatic: BOOL
          x: INT16
          y: INT16
          width: CARD16
          height: CARD16
          width-in-millimeters: CARD32
          height-in-millimeters: CARD32
          outputs: LISTofOUTPUT }

                  ❧❧❧❧❧❧❧❧❧❧❧

7.5. Extension Requests added in version 1.5 of the extension.

┌───
    RRGetMonitors
    window : WINDOW
     ▶
    timestamp: TIMESTAMP
    monitors: LISTofMONITORINFO
└───
    Errors: Window

    Returns the list of Monitors for the screen containing
    'window'.

    'timestamp' indicates the server time when the list of
    monitors last changed.

┌───
    RRSetMonitor
    window : WINDOW
    info: MONITORINFO
└───
    Errors: Window, Output, Atom, Value

    Create a new monitor. Any existing Monitor of the same name is deleted.

    'name' must be a valid atom or an Atom error results.

    'name' must not match the name of any Output on the screen, or
    a Value error results.

    If 'info.outputs' is non-empty, and if x, y, width, height are all
    zero, then the Monitor geometry will be dynamically defined to
    be the bounding box of the geometry of the active CRTCs
    associated with them.

    If 'name' matches an existing Monitor on the screen, the
    existing one will be deleted as if RRDeleteMonitor were called.

    Each output in 'info.outputs' is removed from all pre-existing
    Monitors. If removing the output causes the list of outputs for
    that Monitor to become empty, then that Monitor will be deleted
    as if RRDeleteMonitor were called.

    Only one monitor per screen may be primary. If 'info.primary'
    is true, then the primary value will be set to false on all
    other monitors on the screen.

    RRSetMonitor generates a ConfigureNotify event on the root
    window of the screen.

┌───
    RRDeleteMonitor
    window : WINDOW
    name: ATOM
└───
    Errors: Window, Atom, Value

    Deletes the named Monitor.

    'name' must be a valid atom or an Atom error results.

    'name' must match the name of a Monitor on the screen, or a
    Value error results.

    RRDeleteMonitor generates a ConfigureNotify event on the root
    window of the screen.

                  ❧❧❧❧❧❧❧❧❧❧❧

17 Dec 2014 9:36am GMT

16 Dec 2014


Gregor Herrmann: GDAC 2014/16

today I met with a young friend (attending the final year of technical high school) for coffee. he's exploring free software since one or two years, & he's running debian jessie on his laptop since some time. it's really amazing to see how exciting this travel into the free software cosmos is for him; & it's good to see that linux & debian are not only appealing to greybeards like me :)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

16 Dec 2014 10:46pm GMT

Raphael Geissert: Editing Debian online with sources.debian.net

How cool would it be to fix that one bug you just found without having to download a source package? and without leaving your browser?

Inspired by github's online code editing, during Debconf 14 I worked on integrating an online editor on debsources (the software behind sources.debian.net). Long story short: it is available today, for users of chromium (or anything supporting chrome extensions).

After installing the editor for sources.debian.net extension, go straight to sources.debian.net and enjoy!

Go from simple debsources:


To debsources on steroids:


All in all, it brings:


Clone it or fork it:

git clone https://github.com/rgeissert/ace-sourced.n.git


For example, head to apt's source code, find a typo and correct it online: open apt.cc, click on edit, make the changes, click on email patch. Yes! It can generate a mail template for sending the patch to the BTS: just add a nice message and your patch is ready to be sent.

Didn't find any typo to fix? How sad; head to codesearch and search Debian for a spelling mistake, click on any result, edit, correct, email! You will have contributed to Debian in less than 5 minutes without leaving your browser.

The editor was meant to be integrated into debsources itself, without the need of a browser extension. This is expected to be done when the requirements imposed by debsources maintainers are sorted out.

Kudos to Harlan Lieberman, who helped debug some performance issues in the early implementations of the integration and who worked on the packaging of the Ace editor.

16 Dec 2014 8:00am GMT

15 Dec 2014


Gustavo Noronha Silva: Web Engines Hackfest 2014

For the 6th year in a row, Igalia has organized a hackfest focused on web engines. The 5 years before this one were actually focused on the GTK+ port of WebKit, but the number of web engines that matter to us as Free Software developers and consultancies has grown, and so has the scope of the hackfest.

It was a very productive and exciting event. It has already been covered by Manuel Rego, Philippe Normand, Sebastian Dröge and Andy Wingo! I am sure more blog posts will pop up. We had Martin Robinson telling us about the new Servo engine that Mozilla has been developing as a proof of concept for both Rust as a language for building big, complex products and for doing layout in parallel. Andy gave us a very good summary of where JS engines are in terms of performance and features. We had talks about CSS grid layouts, TyGL - a GL-powered implementation of the 2D painting backend in WebKit, the new Wayland port, announced by Zan Dobersek, and a lot more.

With help from my colleague ChangSeok OH, I presented a description of how a team at Collabora led by Marco Barisione made the combination of WebKitGTK+ and GNOME's web browser a pretty good experience for the Raspberry Pi. It took a not so small amount of both pragmatic limitations and hacks to get to a multi-tab browser that can play youtube videos and be quite responsive, but we were very happy with how well WebKitGTK+ worked as a base for that.

One of my main goals for the hackfest was to help drive features that were lingering in the bug tracker for WebKitGTK+. I picked up a patch that had gone through a number of iterations and rewrites: the HTML5 notifications support, and with help from Carlos Garcia, managed to finish it and land it on the last day of the hackfest! It provides new signals that can be used to authorize notifications, show and close them.

To make notifications work in the best case scenario, the only thing that the API user needs to do is handle the permission request, since we provide a default implementation for the show and close signals that uses libnotify if it is available when building WebKitGTK+. Originally our intention was to use GNotification for the default implementation of those signals in WebKitGTK+, but it turned out to be a pain to use for our purposes.

GNotification is tied to GApplication. This allows for some interesting features, like notifications being persistent and able to reactivate the application, but those make no sense in our current use case, although that may change once service workers become a thing. It can also be a bit problematic given we are a library and thus have no GApplication of our own. That was easily overcome by using the default GApplication of the process for notifications, though.

The show stopper for us using GNotification was the way GNOME Shell currently deals with notifications sent using this mechanism. It will look for a .desktop file named after the application ID used to initialize the GApplication instance, and reject the notification if it cannot find one. Besides making this a pain to test (our test browser would need a .desktop file to be installed), it would not work for our main API user! The application ID used for all Web instances is org.gnome.Epiphany at the moment, and that is not the same as any of the desktop files used either by the main browser or by the web apps created with it.

For the future we will probably move Epiphany towards this new era, and all users of the WebKitGTK+ API as well, but the strictness of GNOME Shell would hurt the usefulness of our default implementation right now, so we decided to stick to libnotify for the time being.

Other than that, I managed to review a bunch of patches during the hackfest, and took part in many interesting discussions regarding the next steps for GNOME Web and the GTK+ and Wayland ports of WebKit, such as the potential introduction of a threaded compositor, which is pretty exciting. We also tried to have Bastien Nocera as a guest participant for one of our sessions, but it turns out that requires more than a notebook on top of a bench hooked up to a TV to work well. We could think of something next time ;D.

I'd like to thank Igalia for organizing and sponsoring the event, Collabora for sponsoring and sending ChangSeok and myself over to Spain from far away Brazil and South Korea, and Adobe for also sponsoring the event! Hope to see you all next year!

Web Engines Hackfest 2014 sponsors: Adobe, Collabora and Igalia

15 Dec 2014 11:20pm GMT

Gregor Herrmann: GDAC 2014/15

nothing exciting today in my debian life. just yet another nice example of collaboration around an RC bug where the bug submitter, the maintainer & me investigated via the BTS, & the maintainer also got support on IRC from others. - now we just need someone to come up with an actual fix for the problem :)


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

15 Dec 2014 9:58pm GMT

Holger Levsen: 20121214-not-everybody-is-equal

We ain't equal in Debian neither and wishful thinking won't help.

"White people think calling them white is racist." - "White people think calling them racist is racist."

(Thanks to and via 2damnfeisty and blackgirlsparadise!)

Posted here (in this white male community...) as food for thought. What else is invisible for whom? Or hardly visible or distorted or whatever shade of (in)visible... - and how can we know about things we cannot (yet) see...

15 Dec 2014 7:25pm GMT

Thomas Goirand: Supporting 3 init systems in OpenStack packages

tl;dr: Providing support for all 3 init systems (sysv-rc, Upstart and systemd) isn't hard, and generating the init scripts / Upstart jobs / systemd units using a template system is a lot easier than I previously thought.

As always, when writing this kind of blog post, I do expect that others will not like what I did. But that's the point: give me your opinion in a constructive way (please be polite even if you don't like what you see… I have had to read harsh comments too many times), and I'll implement your ideas if I find them nice.

History of the implementation: how we came to the idea

I had no plan to do this. I don't believe what I wrote can be generalized to all of the Debian archive. It's just that I started doing things, and it made sense when I did it. Let me explain how it happened.

Since it's clear that many users, and especially the most advanced ones, have an opinion about which init system they prefer, and because I also support Ubuntu (at least Trusty), I thought it was a good idea to support all the "main" init systems: sysv-rc, Upstart and systemd. For the sake of being exact in this blog, I have counted: OpenStack in Debian currently contains 64 init scripts to run daemons in total. That's quite a lot, and way too many to just write them all by hand. Though that's what I had been doing for the last years… until the end of this last summer!

So, doing it all by hand, I first started implementing Upstart. Its support was only there when building in Ubuntu (which isn't the correct thing to do; this is now fixed, read further…). Then we thought about adding support for systemd. Gustavo Panizzo, one of the contributors to the OpenStack packages, started implementing it in Keystone (the auth server for OpenStack) for the Juno release, which came out this October. He did that last summer, early enough that we didn't expect anyone to be using the Juno branch of Keystone yet. After some experiments, we had it kind of working. What he did was invoke "/etc/init.d/keystone start-systemd", which was still using start-stop-daemon. Yes, that's not perfect, and it's better to use systemd foreground process handling, but at least we had a single place to write the startup logic, where we check /etc/default for the logging configuration, configure the log file, and so on.

Then, around October, I took a step back to look at the whole picture of the sysv-rc scripts, and saw the mess, with all the tiny differences between them. It became clear that I had to do something to make sure they were all the same, with support for the same things (like which log system to use, where to store the PID file, creating /var/lib/<project>, /var/run/<project> and so on…).

Last, in this month of December, I was able to fix the remaining issues for systemd support, thanks to the awesome contribution of Mikael Cluseau on the Alioth OpenStack packaging list. Now, the systemd unit file still invokes the init script, but it's not using start-stop-daemon anymore, no PID file is involved, and daemons run as systemd foreground processes. Finally, daemon service files are also activated on installation (they were not previously).

Implementation

So I took the simplistic approach of always using the same template for the sysv-rc switch/case and the start and stop functions, appending it at the end of all debian/*.init.in scripts. I started to try to reduce the number of variables, and I was surprised by the result: only a very small part of each init script needs to change from daemon to daemon. For example, for nova-api, here's the init script (LSB header stripped out):

DESC="OpenStack Compute API"
PROJECT_NAME=nova
NAME=${PROJECT_NAME}-api

That is it: only 3 lines, defining only the name of the daemon, the name of the project it belongs to (e.g.: nova, cinder, etc.), and a long description. There are of course much more complicated init scripts (see the one for neutron-server in the Debian archive for example), but the vast majority only needs the above.

Here's the sysv-rc init script template that I currently use:

#!/bin/sh
# The content after this line comes from openstack-pkg-tools
# and has been automatically added to a .init.in script, which
# contains only the descriptive part for the daemon. Everything
# else is standardized as a single unique script.

# Author: Thomas Goirand <zigo@debian.org>

# PATH should only include /usr/* if it runs after the mountnfs.sh script
PATH=/sbin:/usr/sbin:/bin:/usr/bin

if [ -z "${DAEMON}" ] ; then
        DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
        SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
        SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
        SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
        STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
        CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
if [ -z "${NO_OPENSTACK_CONFIG_FILE_DAEMON_ARG}" ] ; then
        DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"
fi

# Exit if the package is not installed
[ -x $DAEMON ] || exit 0

# If ran as root, create /var/lock/X, /var/run/X, /var/lib/X and /var/log/X as needed
if [ "x$USER" = "xroot" ] ; then
        for i in lock run log lib ; do
                mkdir -p /var/$i/${PROJECT_NAME}
                chown ${SYSTEM_USER} /var/$i/${PROJECT_NAME}
        done
fi

# This defines init_is_upstart which we use later on (+ more...)
. /lib/lsb/init-functions

# Manage log options: logfile and/or syslog, depending on user's choosing
[ -r /etc/default/openstack ] && . /etc/default/openstack
[ -r /etc/default/$NAME ] && . /etc/default/$NAME
[ "x$USE_SYSLOG" = "xyes" ] && DAEMON_ARGS="$DAEMON_ARGS --use-syslog"
[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=$LOGFILE"

do_start() {
        start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
                        --test > /dev/null || return 1
        start-stop-daemon --start --quiet --background ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} --chdir /var/lib/${PROJECT_NAME} --startas $DAEMON \
                        -- $DAEMON_ARGS || return 2
}

do_stop() {
        start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE
        RETVAL=$?
        rm -f $PIDFILE
        return "$RETVAL"
}

do_systemd_start() {
        exec $DAEMON $DAEMON_ARGS
}

case "$1" in
start)
        init_is_upstart > /dev/null 2>&1 && exit 1
        log_daemon_msg "Starting $DESC" "$NAME"
        do_start
        case $? in
                0|1) log_end_msg 0 ;;
                2) log_end_msg 1 ;;
        esac
;;
stop)
        init_is_upstart > /dev/null 2>&1 && exit 0
        log_daemon_msg "Stopping $DESC" "$NAME"
        do_stop
        case $? in
                0|1) log_end_msg 0 ;;
                2) log_end_msg 1 ;;
        esac
;;
status)
        status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
;;
systemd-start)
        do_systemd_start
;;  
restart|force-reload)
        init_is_upstart > /dev/null 2>&1 && exit 1
        log_daemon_msg "Restarting $DESC" "$NAME"
        do_stop
        case $? in
        0|1)
                do_start
                case $? in
                        0) log_end_msg 0 ;;
                        1) log_end_msg 1 ;; # Old process is still running
                        *) log_end_msg 1 ;; # Failed to start
                esac
        ;;
        *) log_end_msg 1 ;; # Failed to stop
        esac
;;
*)
        echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload|systemd-start}" >&2
        exit 3
;;
esac

exit 0

Nothing particularly fancy here… You'll notice that it's really OpenStack centric (see the LOGFILE and CONFIG_FILE things…). You may have also noticed the call to "init_is_upstart" which is needed for Upstart support. I'm not sure if it's at the correct place in the init script. Should I put that on top of the script? Was I right with the exit values for it? Please send me your comments…

Then I thought about generalizing all of this. Because not only the sysv-rc scripts needed to be squared up, but also the Upstart jobs. The approach here was to source the debian/*.init.in script, and then generate the Upstart job accordingly, using the above 3 variables (or more as needed). Here, the fun is that, instead of calculating everything at runtime as the sysv-rc script does, for Upstart jobs many things are calculated at build time. For each debian/*.init.in script that debian/rules finds, pkgos-gen-upstart-job is called. Here's pkgos-gen-upstart-job:

#!/bin/sh

INIT_TEMPLATE=${1}
UPSTART_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.upstart/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

## Find out what should go in After=
#SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`
#
#if [ -n "${SHOULD_START}" ] ; then
#       AFTER="After="
#       for i in ${SHOULD_START} ; do
#               AFTER="${AFTER}${i}.service "
#       done
#fi

if [ -z "${DAEMON}" ] ; then
        DAEMON=/usr/bin/${NAME}
fi
PIDFILE=/var/run/${PROJECT_NAME}/${NAME}.pid
if [ -z "${SCRIPTNAME}" ] ; then
        SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
        SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
        SYSTEM_GROUP=${PROJECT_NAME}
fi
if [ "${SYSTEM_USER}" != "root" ] ; then
        STARTDAEMON_CHUID="--chuid ${SYSTEM_USER}:${SYSTEM_GROUP}"
fi
if [ -z "${CONFIG_FILE}" ] ; then
        CONFIG_FILE=/etc/${PROJECT_NAME}/${PROJECT_NAME}.conf
fi
LOGFILE=/var/log/${PROJECT_NAME}/${NAME}.log
DAEMON_ARGS="${DAEMON_ARGS} --config-file=${CONFIG_FILE}"

echo "description \"${DESC}\"
author \"Thomas Goirand <zigo@debian.org>\"

start on runlevel [2345]
stop on runlevel [!2345]

chdir /var/run

pre-start script
        for i in lock run log lib ; do
                mkdir -p /var/\$i/${PROJECT_NAME}
                chown ${SYSTEM_USER} /var/\$i/${PROJECT_NAME}
        done
end script

script
        [ -x \"${DAEMON}\" ] || exit 0
        DAEMON_ARGS=\"${DAEMON_ARGS}\"
        [ -r /etc/default/openstack ] && . /etc/default/openstack
        [ -r /etc/default/\$UPSTART_JOB ] && . /etc/default/\$UPSTART_JOB
        [ \"x\$USE_SYSLOG\" = \"xyes\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --use-syslog\"
        [ \"x\$USE_LOGFILE\" != \"xno\" ] && DAEMON_ARGS=\"\$DAEMON_ARGS --log-file=${LOGFILE}\"

        exec start-stop-daemon --start --chdir /var/lib/${PROJECT_NAME} \\
                ${STARTDAEMON_CHUID} --make-pidfile --pidfile ${PIDFILE} \\
                --exec ${DAEMON} -- --config-file=${CONFIG_FILE} \${DAEMON_ARGS}
end script
" >${UPSTART_FILE}

The only thing which I don't know how to do, is how to implement the Should-Start / Should-Stop in an Upstart job. Can anyone shoot me a mail and tell me the solution?

Then, I wanted to add support for systemd. Here, we cheated, since we just call the sysv-rc script from the systemd unit; however, the systemd-start target uses exec, so the process stays in the foreground. It's also much smaller than the Upstart thing. However, here, I could implement the "After" thing, corresponding to the Should-Start:

#!/bin/sh

INIT_TEMPLATE=${1}
SERVICE_FILE=`echo ${INIT_TEMPLATE} | sed 's/.init.in/.service/'`

# Get the variables defined in the init template
. ${INIT_TEMPLATE}

if [ -z "${SCRIPTNAME}" ] ; then
        SCRIPTNAME=/etc/init.d/${NAME}
fi
if [ -z "${SYSTEM_USER}" ] ; then
        SYSTEM_USER=${PROJECT_NAME}
fi
if [ -z "${SYSTEM_GROUP}" ] ; then
        SYSTEM_GROUP=${PROJECT_NAME}
fi

# Find out what should go in After=
SHOULD_START=`cat ${INIT_TEMPLATE} | grep "# Should-Start:" | sed 's/# Should-Start://'`

if [ -n "${SHOULD_START}" ] ; then
        AFTER="After="
        for i in ${SHOULD_START} ; do
                AFTER="${AFTER}${i}.service "
        done
fi

echo "[Unit]
Description=${DESC}
$AFTER

[Service]
User=${SYSTEM_USER}
Group=${SYSTEM_GROUP}
WorkingDirectory=/var/lib/${PROJECT_NAME}
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStartPre=/bin/chown ${SYSTEM_USER}:${SYSTEM_GROUP} /var/lock/${PROJECT_NAME} /var/log/${PROJECT_NAME} /var/lib/${PROJECT_NAME}
ExecStart=${SCRIPTNAME} systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target
" >${SERVICE_FILE}

As you can see, it's calling "${SCRIPTNAME} systemd-start", that is, the init script with the systemd-start argument, which isn't great. I'd be happy to have comments from systemd users / maintainers on how to fix it to make it better.
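
To make this concrete: running the generator on the 3-line nova-api template shown earlier, e.g. with "pkgos-gen-systemd-unit debian/nova-api.init.in", would produce a debian/nova-api.service roughly like the following (assuming no Should-Start line in the LSB header, so no After= line is generated):

[Unit]
Description=OpenStack Compute API

[Service]
User=nova
Group=nova
WorkingDirectory=/var/lib/nova
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/lock/nova /var/log/nova /var/lib/nova
ExecStartPre=/bin/chown nova:nova /var/lock/nova /var/log/nova /var/lib/nova
ExecStart=/etc/init.d/nova-api systemd-start
Restart=on-failure

[Install]
WantedBy=multi-user.target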

Integrating in debian/rules

To integrate with the Debian package build system, we only had to write this:

override_dh_installinit:
        # Create the init scripts from the template
        for i in `ls -1 debian/*.init.in` ; do \
                MYINIT=`echo $$i | sed s/.init.in//` ; \
                cp $$i $$MYINIT.init ; \
                cat /usr/share/openstack-pkg-tools/init-script-template >>$$MYINIT.init ; \
                pkgos-gen-systemd-unit $$i ; \
        done
        # If there's an upstart.in file, use that one instead of the generated one
        for i in `ls -1 debian/*.upstart.in` ; do \
                MYPKG=`echo $$i | sed s/.upstart.in//` ; \
                cp $$MYPKG.upstart.in $$MYPKG.upstart ; \
        done
        # Generate the upstart job if there's no already existing .upstart.in
        for i in `ls debian/*.init.in` ; do \
                MYINIT=`echo $$i | sed s/.init.in/.upstart.in/` ; \
                if ! [ -e $$MYINIT ] ; then \
                        pkgos-gen-upstart-job $$i ; \
                fi \
        done
        dh_installinit --error-handler=true
        # Generate the systemd unit file
        # Note: because dh_systemd_enable is called by the
        # dh sequencer *before* dh_installinit, we have
        # to process it manually.
        for i in `ls debian/*.init.in` ; do \
                pkgos-gen-systemd-unit $$i ; \
                MYSERVICE=`echo $$i | sed 's/debian\///'` ; \
                MYSERVICE=`echo $$MYSERVICE | sed 's/.init.in/.service/'` ; \
                dh_systemd_enable $$MYSERVICE ; \
        done

As you can see, it's possible to use a debian/*.upstart.in and not use the templating system, for the more complicated cases (I needed it mostly for neutron-server and neutron-plugin-openvswitch-agent).

Conclusion

I do not pretend that what I wrote in openstack-pkg-tools is the ultimate solution. But I'm convinced that it answers our own needs as the OpenStack maintainers in Debian. There's a lot of room for improvement (like implementing the Should-Start in Upstart jobs, or no longer calling the sysv-rc script from the systemd units), but moving to templates and generated scripts was a very good move, as the init scripts are way easier to maintain now, in a much more unified way. Even though I'm not completely satisfied with the systemd and Upstart implementations, I'm sure this is already a huge improvement in the maintainability of the sysv-rc scripts.

Last and again: please send your comments and help improving the above! :)

15 Dec 2014 8:15am GMT

14 Dec 2014


Gregor Herrmann: GDAC 2014/14

I just got a couple of mails from the BTS. like almost every day, several times per day. now it made me realize how much I like the BTS, & how happy I am that it works so well & even gets new features. - thanks to the BTS maintainers for their continuous work!


this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

14 Dec 2014 9:27pm GMT

Mario Lang: Data-binding MusicXML

My long-term free software project (Braille Music Compiler) just produced some offspring! xsdcxx-musicxml is now available on GitHub.

I used CodeSynthesis XSD to generate a rather complete object model for MusicXML 3.0 documents. Some of the classes needed a bit of manual adjustment, to make the client API really nice and tidy.

During the process, I have learnt (as is almost always the case when programming) quite a lot. I have to say, once you get the hang of it, CodeSynthesis XSD is really a very powerful tool. I definitely prefer having these 100k lines of code auto-generated from an XML Schema, instead of having to implement small parts of it by hand.

If you are into MusicXML for any reason, and you like C++, give this library a whirl. At least to me, it is what I was always looking for: Rather type-safe, with a quite self-explanatory API.

For added ease of integration, xsdcxx-musicxml is sub-project friendly. In other words, if your project uses CMake and Git, adding xsdcxx-musicxml as a subproject is as easy as using git submodule add and putting add_subdirectory(xsdcxx-musicxml) into your CMakeLists.txt.
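
A minimal sketch of that setup, run from the top of your project (the repository URL is a placeholder; use the actual GitHub location):

git submodule add <xsdcxx-musicxml-repository-url> xsdcxx-musicxml
echo 'add_subdirectory(xsdcxx-musicxml)' >> CMakeLists.txt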

Finally, if you want to see how this library can be put to use: The MusicXML export functionality of BMC is all in one C++ source file: musicxml.cpp.

14 Dec 2014 8:30pm GMT

Gregor Herrmann: RC bugs 2014/49-50

it's getting harder to find "nice" RC bugs, due to the efforts of various bug hunters & the awesome auto-removal-from-testing feature. - anyway, here's the list of bugs I worked on in the last 2 weeks:

14 Dec 2014 4:01pm GMT

Enrico Zini: html5-sse

HTML5 Server-sent events

I have a Django view that runs a slow script server-side, and streams the script output to Javascript. This is the bit of code that runs the script and turns the output into a stream of events:

import fcntl
import os
import select

def stream_output(proc):
    '''
    Take a subprocess.Popen object and generate its output, line by line,
    annotated with "stdout" or "stderr". At process termination it generates
    one last element: ("result", return_code) with the return code of the
    process.
    '''
    fds = [proc.stdout, proc.stderr]
    bufs = [b"", b""]
    types = ["stdout", "stderr"]
    # Set both pipes as non-blocking
    for fd in fds:
        fcntl.fcntl(fd, fcntl.F_SETFL, os.O_NONBLOCK)
    # Multiplex stdout and stderr with different prefixes
    while len(fds) > 0:
        s = select.select(fds, (), ())
        for fd in s[0]:
            idx = fds.index(fd)
            buf = fd.read()
            if len(buf) == 0:
                # Pop all three parallel lists so their indices stay in sync
                fds.pop(idx)
                evtype = types.pop(idx)
                rest = bufs.pop(idx)
                if len(rest) != 0:
                    yield evtype, rest
            else:
                bufs[idx] += buf
                lines = bufs[idx].split(b"\n")
                bufs[idx] = lines.pop()
                for l in lines:
                    yield types[idx], l
    res = proc.wait()
    yield "result", res

I used to just serialize its output and stream it to JavaScript, then monitor onreadystatechange on the XMLHttpRequest object browser-side, but then it started failing on Chrome, which won't trigger onreadystatechange until something like a kilobyte of data has been received.

I didn't want to stream a kilobyte of padding just to work around this, so it was time to try out Server-sent events. See also this.

This is the Django view that sends the events:

class HookRun(View):
    def get(self, request):
        proc = run_script(request)
        def make_events():
            for evtype, data in utils.stream_output(proc):
                if evtype == "result":
                    yield "event: {}\ndata: {}\n\n".format(evtype, data)
                else:
                    yield "event: {}\ndata: {}\n\n".format(evtype, data.decode("utf-8", "replace"))

        return http.StreamingHttpResponse(make_events(), content_type='text/event-stream')

    @method_decorator(never_cache)
    def dispatch(self, *args, **kwargs):
        return super().dispatch(*args, **kwargs)
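
Before adding the JavaScript side, the stream is easy to eyeball from a terminal; a sketch, assuming the view above is routed at /hookrun/test/ (the real URL is resolved through the session_hookrun name in the template below):

curl -N http://localhost:8000/hookrun/test/

which should print a sequence of blocks like:

event: stdout
data: some line of script output

event: result
data: 0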

And this is the template that renders it:

{% extends "base.html" %}
{% load i18n %}

{% block head_resources %}
{{block.super}}
<style type="text/css">
.out {
    font-family: monospace;
    padding: 0;
    margin: 0;
}
.stdout {}
.stderr { color: red; }
.result {}
.ok { color: green; }
.ko { color: red; }
</style>
{# Polyfill for IE, typical... https://github.com/remy/polyfills/blob/master/EventSource.js #}
<script src="{{ STATIC_URL }}js/EventSource.js"></script>
<script type="text/javascript">
$(function() {
    // Manage spinners and other ajax-related feedback
    $(document).nav();
    $(document).nav("ajax_start");

    var out = $("#output");

    var event_source = new EventSource("{% url 'session_hookrun' name=name %}");
    event_source.addEventListener("open", function(e) {
      //console.log("EventSource open:", arguments);
    });
    event_source.addEventListener("stdout", function(e) {
      out.append($("<p>").attr("class", "out stdout").text(e.data));
    });
    event_source.addEventListener("stderr", function(e) {
      out.append($("<p>").attr("class", "out stderr").text(e.data));
    });
    event_source.addEventListener("result", function(e) {
      if (+e.data == 0)
          out.append($("<p>").attr("class", "result ok").text("{% trans 'Success' %}"));
      else
          out.append($("<p>").attr("class", "result ko").text("{% trans 'Script failed with code' %} " + e.data));
      event_source.close();
      $(document).nav("ajax_end");
    });
    event_source.addEventListener("error", function(e) {
      // There is an annoyance here: e does not contain any kind of error
      // message.
      out.append($("<p>").attr("class", "result ko").text("{% trans 'Error receiving script output from the server' %}"));
      console.error("EventSource error:", arguments);
      event_source.close();
      $(document).nav("ajax_end");
    });
});
</script>
{% endblock %}

{% block content %}

<h1>{% trans "Processing..." %}</h1>

<div id="output">
</div>

{% endblock %}

It's simple enough, it seems reasonably well supported (besides needing a polyfill for IE) and, astonishingly, it even works!

14 Dec 2014 3:32pm GMT

Daniel Leidert: Issues with Server4You vServer running Debian Stable (Wheezy)

I recently acquired a vServer hosted by Server4You and decided to install a Debian Wheezy image. Usually I boot any device in backup mode and first install a fresh Debian copy using debootstrap over the provided image, to have a clean system. In this case I did not, and I came across a few glitches I want to talk about. So hopefully, if you are running the same system image, this saves you some time figuring out why the h*ll some things don't work as expected :)

Cron jobs not running

I installed unattended-upgrades and adjusted all configuration files to enable unattended upgrades. But I never received any mail about an update, although, looking at the system, I saw updates waiting. I checked with

# run-parts --list /etc/cron.daily

and apt was not listed although /etc/cron.daily/apt was there. After spending some time figuring out what was going on, I found the rather simple cause: several scripts were missing the executable bit and thus did not run. So it seems, for whatever reason, the image authors have tampered with file permissions, and of course not by using dpkg-statoverride :( It was easy to fix the file permissions for everything below /etc/cron*, but that still leaves a very bad feeling that there are more files that have been tampered with! I'm not speaking about customizations. Those are easy to find using debsums. I'm speaking about file permissions and ownership.
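
In case someone else runs into the same image: restoring the executable bit is a quick fix once you know which scripts are affected; a sketch for the apt script (repeat for the other scripts you find):

# chmod 755 /etc/cron.daily/apt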

Now there seems to be no easy way to check for changed permissions or ownership. The only solution I found is to get a list of all installed packages on the system, install them into a chroot environment and get all permission and ownership information from this fresh system. Then compare the file permissions/ownership of the installed system with this list. Not fun.
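
A rough sketch of that comparison, assuming enough disk space for a throw-away chroot plus a working sources.list and network access for apt inside it:

# dpkg --get-selections > /root/pkglist
# debootstrap wheezy /srv/reference http://ftp.de.debian.org/debian
# chroot /srv/reference dpkg --set-selections < /root/pkglist
# chroot /srv/reference apt-get -y dselect-upgrade
# find /srv/reference -xdev -printf '%M %u %g /%P\n' | sort > /root/reference.perms
# find / -xdev -printf '%M %u %g /%P\n' | sort > /root/installed.perms
# diff /root/reference.perms /root/installed.perms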

init from testing / upstart on hold

Today I discovered that apt-get wanted to install the init package. Of course I was curious why unattended-upgrades hadn't already done so. Turns out, init is only in testing/unstable and essential there. I purged it, but apt-get keeps bugging me to install this package. I really began to wonder what is going on here, because this is a plain stable system:

  • no sources listed for backports, volatile, multimedia etc.
  • sources listed for testing and unstable
  • only packages from stable/stable-updates installed
  • sets APT::Default-Release "stable";

First I checked with aptitude:

# aptitude why init
Unable to find a reason to install init.

Ok, so why:

# apt-get dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

JFTR: I see a stable system bugging me to install systemd for no obvious reason. The issue might be similar! I'm still investigating. (not reproducible anymore)

Now I tried to debug this:

# apt-get -o  Debug::pkgProblemResolver="true" dist-upgrade -u
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Starting
Starting 2
Investigating (0) upstart [ amd64 ] < 1.6.1-1 | 1.11-5 > ( admin )
Broken upstart:amd64 Conflicts on sysvinit [ amd64 ] < none -> 2.88dsf-41+deb7u1 | 2.88dsf-58 > ( admin )
Conflicts//Breaks against version 2.88dsf-58 for sysvinit but that is not InstVer, ignoring
Considering sysvinit:amd64 5102 as a solution to upstart:amd64 10102
Added sysvinit:amd64 to the remove list
Fixing upstart:amd64 via keep of sysvinit:amd64
Done
Done
The following NEW packages will be installed:
init
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/4674 B of archives.
After this operation, 29.7 kB of additional disk space will be used.
Do you want to continue [Y/n]?

Eh, upstart?

# apt-cache policy upstart
upstart:
Installed: 1.6.1-1
Candidate: 1.6.1-1
Version table:
1.11-5 0
500 http://ftp.de.debian.org/debian/ testing/main amd64 Packages
500 http://ftp.de.debian.org/debian/ sid/main amd64 Packages
*** 1.6.1-1 0
990 http://ftp.de.debian.org/debian/ stable/main amd64 Packages
100 /var/lib/dpkg/status
# dpkg -l upstart
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=============================-===================-===================-===============================================================
hi upstart 1.6.1-1 amd64 event-based init daemon

Ok, at least one package is on hold. This is another questionable customization, but in this case easy to fix. But I still don't understand apt-get's behaviour and the difference from aptitude's. Can someone please enlighten me?
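
For the record, clearing the hold itself is a one-liner:

# echo upstart install | dpkg --set-selections

(or "apt-mark unhold upstart", if your apt-mark supports hold/unhold).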

Customized files

This isn't really an issue, but just for completion: several files have been customized. debsums easily shows which ones:

# debsums -ac
I don't have the original list anymore - please check yourself

14 Dec 2014 1:58pm GMT

Dirk Eddelbuettel: rfoaas 0.0.4.20141212

A new version of rfoaas is now on CRAN. The rfoaas package provides an interface for R to the most excellent FOAAS service -- which provides a modern, scalable and RESTful web service for the frequent need to tell someone to eff off.

The FOAAS backend gets updated in spurts, and yesterday a few pull requests were integrated, including one from yours truly. So with that, it was time for an update to rfoaas. As the version number upstream did not change (bad, bad practice), I appended the date to the version number.

CRANberries also provides a diff to the previous release. Questions, comments etc. should go to the GitHub issue tracker of the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 Dec 2014 12:20am GMT

13 Dec 2014


Holger Levsen: 20141213-on-having-fun-in-debian

On having fun in Debian

(Thanks to cardboard-crack.com for this awesome comic!)

13 Dec 2014 4:11pm GMT