10 Feb 2016

Planet Debian

Guido Günther: Debian Fun in January 2016

Debian LTS

January was the ninth month I contributed to Debian LTS under the Freexian umbrella. In total I spent 13 hours working on:

There was no progress on using the same nss in all suites. This will continue in February, as will the squeeze-lts to wheezy forward porting.

Other Debian stuff

10 Feb 2016 7:10am GMT

09 Feb 2016

Planet Debian

Mark Brown: Maintaining your email

One of the difficulties of being a kernel maintainer for a busy subsystem is that you will often end up getting a lot of mail that requires reading and handling, which in turn requires sending a lot of mail out in reply. Some of that requires thought and careful consideration, but a lot of it is quite routine and (perhaps surprisingly) there is often more of a challenge in doing a good job of handling these routine messages.

For a long time I hand wrote every reply I sent, but the problem with doing that is that sending the same message many times tends to result in the messages getting more and more brief as they become routine and practised. Your words become more optimised, and if you've stopped thinking about the message before you've finished typing it then there's a desire to finish the typing and get on to the next thing. This, I think, is where a lot of the reputation kernel maintainers have for being terse and unhelpful comes from - messages that are very practised for someone sending them all the time aren't always going to be obvious or helpful for someone who's not so intimately familiar with what's going on. The good part is that everyone gets a personalised response and it's easy to insert a comment about the specific situation when you're already replying, but it's not clear that the tradeoff is a good one.

What I've started doing instead for most things is keeping a set of pre-written paragraphs for common cases that I can just insert into a mail and edit as needed. Hopefully it's working well for people; it means the replies are that bit more verbose than they might otherwise be (mainly adding an explanation of why a given thing is being asked for) but can easily be adapted as needed. The one exception is the "Applied, thanks" mails I used to send when I apply a patch (literally just saying that). Those are now automatically generated by the script I use to sync my local git repository with kernel.org and are very much more verbose:

From: Mark Brown <broonie@kernel.org>
To: ${CCS}
Cc: ${LIST}
Subject: ${SUBJECT}
In-Reply-To: ${MSGID}

The patch

   ${TITLE}

has been applied to the ${REPO} tree at

   ${URL} ${BRANCH}

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

(unfortunately this bit seems to be something that it's worth pointing out)

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark

(the script does try to CC relevant lists). As well as giving people more information, this also means that the mails only get sent out when things actually get published to my public repositories, which avoids some confusion that used to happen when people got my replies before I'd pushed, especially when I'd been working with poor connectivity as often happens when travelling. On the downside it's very much an obvious form letter, which some people don't like and which can make people glaze over.
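For illustration, here is a minimal sketch of how such a notification could be filled in and sent from a post-push script. The repository details, the use of formail (from procmail) and sendmail are assumptions made for the example; this is not the actual kernel.org sync script:

#!/bin/sh
# Hypothetical sketch: announce one applied patch by filling in the template
# above and handing the result to sendmail. $1 is the original patch (mbox).
PATCH="$1"
REPO=regulator
BRANCH=for-next
URL="https://git.kernel.org/pub/scm/linux/kernel/git/broonie/${REPO}.git"
LIST=linux-kernel@vger.kernel.org

# Pull the bits we need out of the original mail.
TITLE=$(formail -c -x Subject: < "$PATCH" | sed 's/^ *//;s/\[[^]]*\] *//')
MSGID=$(formail -c -x Message-ID: < "$PATCH" | tr -d ' ')
CCS=$(formail -c -x From: < "$PATCH")

/usr/sbin/sendmail -t <<EOF
From: Mark Brown <broonie@kernel.org>
To: ${CCS}
Cc: ${LIST}
Subject: Applied "${TITLE}" to the ${REPO} tree
In-Reply-To: ${MSGID}

The patch

   ${TITLE}

has been applied to the ${REPO} tree at

   ${URL} ${BRANCH}

[rest of the template as above]

Thanks,
Mark
EOF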

My hope with this is to make things easier on average for patch submitters and easier for me. Feedback on the scripted e-mails appears to be good thus far, and the goal with the pasted-in content is that it should be less obvious that it's happening, so I'd expect less feedback there.

09 Feb 2016 7:47pm GMT

Jose M. Calhariz: A Selection of Talks from FOSDEM 2016

It's that time of the year when I go to FOSDEM (Free and Open Source Software Developers' European Meeting). The keynotes and the main tracks are very good, with good presentations and content.

It's very difficult to choose which talks to see, which talks to watch later on video and which talks to miss. What I leave here is my selection of talks. This selection is representative of my tastes, not of the quality of the presentations. I will give links for material that is available now, and I will update this periodically as new material (video or slides) becomes available.

09 Feb 2016 7:23pm GMT

Mike Gabriel: Systemd based network setup on Debian Edu jessie workstations

This article describes how to use systemd-networkd on Debian Edu 8.x (aka jessie) notebooks.

What do we have to deal with?

At the schools we support we have several notebooks running Debian Edu 8.x (aka jessie) in the field.

For school notebooks (classroom sets) we install the Debian Edu Workstation Profile. Those machines are mostly used over wireless network.

We know that Debian Edu also offers a Roaming Workstation Profile at installation time, but with that profile chosen, user logins create local user accounts and local home directories on the notebooks (package: libpam-mklocaluser). For our customers, we do not want that. People using the school notebooks shall always work on their NFS home directories. School notebooks shall not be usable outside of the school network.

Our woes...

The default network setup on Debian Edu jessie workstations is this:

We have observed various problems with that setup:


09 Feb 2016 6:35pm GMT

Sven Hoexter: examine gpg key properties

Note to myself so I don't have to search for it the next time I have to answer security audit questions.

If you're lucky and you're running Debian you can install pgpdump and use

gpg --export-options export-minimal --export $KEYID | pgpdump

to retrieve human-friendly output. If you're unlucky you have to use

gpg --export-options export-minimal --export $KEYID | gpg --list-packets

and match the CIPHER_ALGO_* and DIGEST_ALGO_* numbers with those in include/cipher.h.
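As a rough guide for reading that output, the numeric algorithm identifiers come from the OpenPGP specification (RFC 4880); a few common values, added here for convenience and not part of the original note:

gpg --export-options export-minimal --export $KEYID | gpg --list-packets | grep algo
# Common values per RFC 4880:
#   public-key algo: 1 = RSA, 16 = Elgamal, 17 = DSA
#   digest algo:     1 = MD5, 2 = SHA-1, 8 = SHA-256, 10 = SHA-512
#   cipher algo:     2 = 3DES, 3 = CAST5, 7/8/9 = AES-128/192/256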

Found the information in this thread.

Update: anarcat suggested taking a look at the tools contained in hopenpgp-tools.

09 Feb 2016 4:03pm GMT

Joachim Breitner: GHC performance is rather stable

Johannes Bechberger, while working on his Bachelor's thesis supervised by my colleague Andreas Zwinkau, has developed a performance benchmark runner and results visualizer called "temci", and used GHC as a guinea pig. You can read his elaborate analysis on his blog.

This is particularly interesting given recent discussions about GHC itself becoming slower and slower, as for example observed by Johannes Waldmann and Anthony Cowley.

Johannes Bechberger's take-away is that, at least for the programs at hand (which were taken from The Computer Language Benchmarks Game), there are hardly any changes worth mentioning, as most of the observed effects are less than a standard deviation and hence insignificant. He tries hard to distill some useful conclusions from the data; the ones he finds are:

If you are interested, please head over to Johannes's post and look at the gory details of the analysis and give him feedback on that. Also, maybe his tool temci is something you want to try out?

Personally, I find it dissatisfying to learn so little from so much work, but as he writes: "It's so easy to lie with statistics.", and I might add "lie to yourself", e.g. by ignoring good advice about standard deviations and significance. I'm sure my tool gipeda (which powers perf.haskell.org) is guilty of that sin.

Maybe a different selection of test programs would yield more insight; the Benchmarks Game programs are too small and hand-optimized, the nofib programs are plain old and the fibon collection has bitrotted. I would love to see a curated collection of real-world programs, bundled with all dependencies and frozen to allow meaningful comparisons, but updated to a new, clearly marked revision on a maybe bi-yearly basis - maybe Haskell-SPEC-2016 if that were not a trademark infringement.

09 Feb 2016 3:17pm GMT

Alessio Treglia: The poetic code


Just as the simple reading of a musical score is enough for an experienced musician to recognize the most velvety harmonic variations of an orchestral piece, so the apparent coldness of a fragment of program code can stimulate emotions of ecstatic contemplation in the developer.

Don't be misled by the simplicity of the laconic definition of code as a sequence of instructions to be given to a computer to solve a specific problem. Generally, a problem has multiple solutions: the simplest and fastest to implement, the most economical from the point of view of machine cycles or memory, the elegant solution and the makeshift one.

However, there is always a "poetic" solution, the one that has a particular and unusual beauty and that is always generated by the inexhaustible forge of human intuition…



09 Feb 2016 11:31am GMT

Ingo Juergensmann: Letsencrypt - when your blog entries don't show up on Planet Debian

Recently there has been much talk on Planet Debian about LetsEncrypt certs. This is great, because using HTTPS everywhere improves security and gives the NSA some more work to decrypt the traffic.

However, when you enable your blog with a LetsEncrypt cert, you might run into the same problem as I did: your new articles won't show up on Planet Debian after changing your feed URI to HTTPS. The reason seems to be quite simple: planet-venus, the software behind Planet Debian, seems to have problems with SNI-enabled websites.

When following the steps outlined in the Debian Wiki, you can check this for yourself:

INFO:planet.runner:Fetching https://blog.windfluechter.net/taxonomy/term/2/feed via 5
ERROR:planet.runner:HttpLib2Error: Server presented certificate that does not match host blog.windfluechter.net: {'subjectAltName': (('DNS', 'abi94oesede.de'), ('DNS', 'www.abi94oesede.de')), 'notBefore': u'Jan 26 18:05:00 2016 GMT', 'caIssuers': (u'http://cert.int-x1.letsencrypt.org/',), 'OCSP': (u'http://ocsp.int-x1.letsencrypt.org/',), 'serialNumber': u'01839A051BF9D2873C0A3BAA9FD0227C54D1', 'notAfter': 'Apr 25 18:05:00 2016 GMT', 'version': 3L, 'subject': ((('commonName', u'abi94oesede.de'),),), 'issuer': ((('countryName', u'US'),), (('organizationName', u"Let's Encrypt"),), (('commonName', u"Let's Encrypt Authority X1"),))} via 5

I've filed bug #813313 for this. So, this might explain why your blog post doesn't appear on Planet Debian. Currently 18 sites seem to be affected by this cert mismatch.
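A quick way to see whether a site relies on SNI (and would therefore confuse a client that does not send it, as planet-venus apparently does not) is to compare openssl s_client output with and without the -servername option; a sketch, using the affected blog as the example host:

# Without SNI: the server may present a default certificate for another vhost
openssl s_client -connect blog.windfluechter.net:443 </dev/null 2>/dev/null | openssl x509 -noout -subject

# With SNI: the server can pick the certificate matching the requested name
openssl s_client -connect blog.windfluechter.net:443 -servername blog.windfluechter.net </dev/null 2>/dev/null | openssl x509 -noout -subject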


09 Feb 2016 11:27am GMT

Michal Čihař: Weekly phpMyAdmin contributions 2016-W05

Last week was really focused on code cleanups. The biggest change was the removal of the PmaAbsoluteUri configuration directive, which has caused quite some pain in the past and is not really needed these days (when browsers support relative paths in the Location HTTP header).

This led to cleanups in other parts as well - support for the dead Mozilla Prism is gone, OpenStreetMap tiles are now loaded over HTTPS (so the map layer works on HTTPS as well), and the ForceSSL configuration directive was removed, as this is something that really needs to be handled at the web server level. To improve test coverage, several tests no longer require runkit: the header() call is wrapped in the Response class and can be overridden for testing.

The list of handled issues is not that impressive this week:


09 Feb 2016 11:00am GMT

Mike Gabriel: Résumé of our Edu Workshop in Kiel (26th - 29th January)

In the last week of January, the project IT-Zukunft Schule (Logo EDV-Systeme GmbH and DAS-NETZWERKTEAM) had visitors from Norway: Klaus Ade Johnstad and Linnea Skogtvedt from LinuxAvdelingen [1] came by to exchange insights, knowledge, technology and stories regarding IT services at schools in Norway and Northern Germany.

This was our schedule...

Tuesday

Wednesday

Thursday


09 Feb 2016 9:50am GMT

Orestis Ioannou: Using debsources API to determine the license of foo.bar

Following up on Matthieu's hack - A one-liner to catch'em all! - and the recent features of Debsources, I got the idea to modify the one-liner a bit in order to retrieve the license of foo.bar.

The script will calculate the SHA256 hash of the file and then query the Debsources API in order to retrieve the license of that particular file.

Save the following in a file called license-of and add it to your $PATH

#!/bin/bash

# Resolve the file, find the binary package shipping it, map that to its
# source package, then query the Debsources copyright API with the file's
# SHA256 hash.
function license-of {
    readlink -f "$1" | xargs dpkg-query --search | awk -F ": " '{print $1}' | xargs apt-cache showsrc | grep-dctrl -s 'Package' -n '' | awk -v sha="$(sha256sum "$1" | awk '{ print $1 }')" -F " " '{print "https://sources.debian.net/copyright/api/sha256/?checksum="sha"&packagename="$1""}' | xargs curl -sS
}

CMD="$1"
license-of "${CMD}"

Then you can try something like:

    license-of /usr/lib/python2.7/dist-packages/pip/exceptions.py

Notes:

09 Feb 2016 8:45am GMT

08 Feb 2016

Planet Debian

Ingo Juergensmann: rpcbind listening on all interfaces

Currently I'm testing GlusterFS as a replicating network filesystem. GlusterFS depends on the rpcbind package. No problem with that, but I usually want the services running on my machines to listen only on those addresses/interfaces that are needed to fulfill the task. This is especially important because rpcbind can be abused by remote attackers for RPC amplification attacks (DDoS). So, the rpcbind man page states:

-h : Specify specific IP addresses to bind to for UDP requests. This option may be specified multiple times and is typically necessary when running on a multi-homed host. If no -h option is specified, rpcbind will bind to INADDR_ANY, which could lead to problems on a multi-homed host due to rpcbind returning a UDP packet from a different IP address than it was sent to. Note that when specifying IP addresses with -h, rpcbind will automatically add 127.0.0.1 and if IPv6 is enabled, ::1 to the list.

Ok, although there is neither a /etc/default/rpcbind.conf nor a /etc/rpcbind.conf nor a sample-rpcbind.conf under /usr/share/doc/rpcbind, a quick web search revealed a sample config file. I'm using this one:

# /etc/init.d/rpcbind
OPTIONS=""

# Cause rpcbind to do a "warm start" utilizing a state file (default)
# OPTIONS="-w "

# Uncomment the following line to restrict rpcbind to localhost only for UDP requests
OPTIONS="${OPTIONS} -h 192.168.1.254"
#127.0.0.1 -h ::1"

# Uncomment the following line to enable libwrap TCP-Wrapper connection logging
OPTIONS="${OPTIONS} -l "

As you can see, I want to bind to 192.168.1.254. After a /etc/init.d/rpcbind restart, verifying with netstat that everything works as desired shows...

tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 2084266 30777/rpcbind
tcp6 0 0 :::111 :::* LISTEN 0 2084272 30777/rpcbind
udp 0 0 0.0.0.0:848 0.0.0.0:* 0 2084265 30777/rpcbind
udp 0 0 192.168.1.254:111 0.0.0.0:* 0 2084264 30777/rpcbind
udp 0 0 127.0.0.1:111 0.0.0.0:* 0 2084260 30777/rpcbind
udp6 0 0 :::848 :::* 0 2084271 30777/rpcbind
udp6 0 0 ::1:111 :::* 0 2084267 30777/rpcbind

Whoooops! Although I've specified that rpcbind should only listen on 192.168.1.254 (and localhost, as described by the man page), rpcbind is still listening on all addresses. A quick check whether the process is using the correct options:

root 30777 0.0 0.0 37228 2360 ? Ss 16:11 0:00 /sbin/rpcbind -h 192.168.1.254 -l

Hmmm, yes, -h 192.168.1.254 is specified. Ok, something is going wrong here...

According to an entry in Ubuntu's Launchpad I'm not the only one who has experienced this problem. That Launchpad entry mentions that upstream seems to have a fix in version 0.2.3, but I experienced the same behaviour in stable as well as in unstable, where the package version is 0.2.3-0.2. Apparently the problem still exists in Debian unstable.

I'm somewhat undecided whether to file a normal bug against rpcbind or whether I should label it as a security bug, because it opens up a service to the public that can be abused for amplification attacks, even though you might have configured rpcbind to listen only on internal addresses.
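As a quick way to confirm the exposure from another machine, the portmapper can be queried with rpcinfo; a sketch (the public address below is just a documentation placeholder, not from the original post):

# Does rpcbind answer on the internal address? (expected)
rpcinfo -p 192.168.1.254

# Does it also answer on the public address? If this returns the RPC
# program list, the service is reachable from outside and can be abused
# for amplification attacks.
rpcinfo -p 203.0.113.1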


08 Feb 2016 10:20pm GMT

Niels Thykier: Performance tuning of lintian, take 3

About 7 months ago, I wrote about how we had improved Lintian's performance. In 2.5.41, we are doing another memory reduction, where we primarily reduce the memory consumption of the data about ELF binaries. Like previously, the memory reductions follow the "less is more" pattern.

My initial test subject was linux-image-4.4.0-trunk-rt-686-pae_4.4-1~exp1_i386.deb. It had a somewhat unique property that the ELF data made up a little over half the cache.

At this point, we had reduced the total memory usage from 18.35MB to 8.92MB (the ELF data going from 10.56MB to 1.98MB)[1]. At this point, I figured that I was happy with the improvement and discarded my test subject.

While impressive, the test subject was unsurprisingly a special case. The improvement in "regular" packages[2] (with ELF binaries) was closer to 8% in total. Not being satisfied with that, I pulled one more trick.

In the grand total, coreutils 8.24-1 amd64 went from 4.09MB to 3.48MB. The ELF data cache went from 3.38MB to 2.84MB. Similarly, libreoffice/4.2.5-1 (including its ~170 binaries) has also seen an 8.5% reduction in total cache size[3] and is now down to 260.48MB (from 284.83MB).

-

[1] If you are wondering why I in 3fd98d9 wrote "The total "cache" memory usage is approaching 1/3 of the original for that package", then you are not alone. I am not sure myself any more, but it seems obviously wrong.

[2] FTR: The sample size of "regular packages" is 2 in this case, one of them being coreutils…

[3] Admittedly, since "take 2" and not since 2.5.40.2 like the rest.



08 Feb 2016 10:19pm GMT

Joachim Breitner: Protecting static content with mod_rewrite

For fourteen years, I have been photographing digitally and putting the pictures on my webpage. Back then, online privacy was not a big deal, but things have changed, and I had to at least mildly protect the innocent. In particular, I wanted to prevent search engines from accessing some of my pictures.

As I did not want my friends and family to have to create an account and remember a password, I set up an OpenID based scheme five years ago. This way, they could use any of their OpenID enabled accounts, e.g. their Google Mail account, to log in, without disclosing any data to me. As my photo album consists of just static files, I created two copies on the server: the real one with everything, and a bunch of symbolic links representing the publicly visible parts. I then used mod_auth_openid to prevent access to the protected files unless the users logged in. I never got around to actually limiting who could log in, so strangers were still able to see all photos, but at least search engine spiders were locked out.

But, very unfortunately, OpenID never really caught on, Google even stopped being a provider, and other promising decentralized authentication schemes like Mozilla Persona are also being phased out. So I needed an alternative.

A very simple scheme would be a single password that my friends and family can get from me and that unlocks the pictures. I could have done that using HTTP Auth, but that is not very user-friendly, and the login does not persist (at least not without the help of the browser). Instead, I wanted something that involves a simple HTTP form. But I also wanted to avoid server-side programming, for performance and security reasons. I love serving static files whenever it is feasible.

Then I found that mod_rewrite, Apache's all-around-tool for URL rewriting and request mangling, supports reading and writing cookies! So I came up with a scheme that implements the whole login logic in the Apache server configuration. I'd like to describe this setup here, in case someone finds it inspiring.

I created a login.html with a simple HTML form:

<form method="GET" action="/bilder/login.html">
 <div style="text-align:center">
  <input name="password" placeholder="Password" />
  <button type="submit">Sign-In</button>
 </div>
</form>

It sends the user to the same page again, putting the password into the query string, hence the method="GET" - mod_rewrite unfortunately cannot read the parameters of a POST request.

The Apache configuration is as follows:

RewriteMap public "dbm:/var/www/joachim-breitner.de/bilder/publicfiles.dbm"
<Directory /var/www/joachim-breitner.de/bilder>
 RewriteEngine On

 # This is a GET request, trying to set a password.
 RewriteCond %{QUERY_STRING} password=correcthorsebatterystaple
 RewriteRule ^login.html /bilder/loggedin.html [L,R,QSD,CO=bilderhp:correcthorsebatterystaple:www.joachim-breitner.de:2000000:/bilder]

 # This is a GET request, trying to set a wrong password.
 RewriteCond %{QUERY_STRING} password=
 RewriteRule ^login.html /bilder/notloggedin.html [L,R,QSD]

 # No point in logging in if there is already the right password
 RewriteCond %{HTTP:Cookie} bilderhp=correcthorsebatterystaple
 RewriteRule ^login.html /bilder/loggedin.html [L,R]

 # If protected file is requested, check for cookie.
 # If no cookie present, redirect pictures to replacement picture
 RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
 RewriteCond ${public:$0|private} private
 RewriteRule ^.*\.(png|jpg)$ /bilder/pleaselogin.png [L]

 RewriteCond %{HTTP:Cookie} !bilderhp=correcthorsebatterystaple
 RewriteCond ${public:$0|private} private
 RewriteRule ^.+$ /bilder/login.html [L,R]
</Directory>

The publicfiles.dbm file is generated from a text file with lines like

login.html.en 1
login.html.de 1
pleaselogin.png 1
thumbs/20030920165701_thumb.jpg 1
thumbs/20080813225123_thumb.jpg 1
...

using

/usr/sbin/httxt2dbm -i publicfiles.txt -o publicfiles.dbm

and whitelists all files that are visible without login. Make sure it contains the login page, otherwise you'll get a redirect loop.
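One way to keep publicfiles.txt up to date is to generate it from the directory of public symlinks; a sketch, assuming the public copy lives in a sibling directory called bilder-public (that path is a guess for the example, not necessarily the actual layout):

cd /var/www/joachim-breitner.de/bilder-public && \
  find . \( -type f -o -type l \) -printf '%P 1\n' | sort > ../bilder/publicfiles.txt
/usr/sbin/httxt2dbm -i ../bilder/publicfiles.txt -o ../bilder/publicfiles.dbm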

The other directives in the configuration above fulfill these tasks:

And that's it! No resource-hogging web frameworks, no security-dubious scripting languages, and a dead-simple way to authenticate.

Oh, and if you believe you know me well enough to be allowed to see all photos: The real password is not correcthorsebatterystaple; just ask me what it is.

08 Feb 2016 4:39pm GMT

Lunar: Reproducible builds: week 41 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

After remarks from Guillem Jover, Lunar updated his patch adding generation of .buildinfo files in dpkg.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: dracut, ent, gdcm, guilt, lazarus, magit, matita, resource-agents, rurple-ng, shadow, shorewall-doc, udiskie.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

For the first time, we've reached more than 20,000 packages with reproducible builds for sid on amd64 with our current test framework.

Vagrant Cascadian has set up another test system for armhf, enabling four more builder jobs to be added to Jenkins. (h01ger)

Package reviews

233 reviews have been removed, 111 added and 86 updated in the previous week.

36 new FTBFS bugs were reported by Chris Lamb and Alastair McKinstry.

New issue: timestamps_in_manpages_generated_by_yat2m. The description for the blacklisted_on_jenkins issue has been improved. Some packages are also now tagged with blacklisted_on_jenkins_armhf_only.

Misc.

Steven Chamberlain gave an update on the status of FreeBSD and variants after the BSD devroom at FOSDEM'16. He also discussed how jails can be used for easier and faster reproducibility tests.

The video for h01ger's talk in the main track of FOSDEM'16 about the reproducible ecosystem is now available.

08 Feb 2016 3:43pm GMT

Orestis Ioannou: Debian - your patches and machine readable copyright files are available on Debsources

TL;DR All Debian license and patches are belong to us. Discover them here and here.

In case you hadn't already stumbled upon sources.debian.net in the past, Debsources is a simple web application that allows publishing an unpacked Debian source mirror on the Web. On the live instance you can browse the contents of Debian source packages with syntax highlighting, search for files matching a SHA-256 hash or a ctag, query its API, highlight lines, and view accurate statistics and graphs. It was initially developed at IRILL by Stefano Zacchiroli and Matthieu Caneill.

During GSOC 2015 I helped introduce two new features.

License Tracker

Since Debsources has all the debian/copyright files, and many of them have adopted the DEP-5 suggestion (machine-readable copyright files), it was interesting to exploit them for end users. You may find the following features interesting:

Have a look at the documentation to discover more!
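For instance, the copyright API used in the license-of script earlier on this page can be queried directly by checksum; a sketch (the choice of file and the packagename value are just an example, not taken from the original post):

# Ask Debsources which license covers a locally installed file, by SHA256.
SHA=$(sha256sum /usr/share/common-licenses/GPL-2 | awk '{print $1}')
curl -sS "https://sources.debian.net/copyright/api/sha256/?checksum=${SHA}&packagename=base-files"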

Patch tracker

The old patch tracker unfortunately died a while ago. Since Debsources stores all the patches, it was only natural for it to exploit them and present them over the web. You can navigate through packages by prefix or by searching them here. Among the use cases:

Read more about the API!

Coming ...

I hope you find these new features useful. Don't hesitate to report any bugs or suggestions you come across.

08 Feb 2016 9:33am GMT