17 Jan 2019

feedPlanet Debian

Kunal Mehta: Eliminating PHP polyfills

The Symfony project has recently created a set of pure-PHP polyfills for both PHP extensions and newer language features. These allow developers to rely on those functions or language additions without increasing the system requirements for end users. For the most part, I think this is a good thing, and valuable to have. We've done similar things inside MediaWiki as well for CDB support, Memcached, and internationalization, just to name a few.

But the downside is that on platforms where it is possible to install the missing PHP extensions or upgrade PHP itself, we're shipping dead code. MediaWiki requires both the ctype and mbstring PHP extensions, and our servers have those, so there's no use in deploying polyfills for them; they'll never be used. In September, Reedy and I replaced the polyfills with "unpolyfills" that simply provide the package, so composer skips installing the polyfill. That removed about 3,700 lines of code from what we're committing, reviewing, and deploying - a big win.
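For illustration, here is roughly what such an "unpolyfill" looks like at the composer level; this is a sketch of the general mechanism, not the exact packages MediaWiki ships:

    "require": {
        "ext-ctype": "*",
        "ext-mbstring": "*"
    },
    "replace": {
        "symfony/polyfill-ctype": "*",
        "symfony/polyfill-mbstring": "*"
    }

The "replace" entries in the root composer.json tell composer that those polyfill packages are already provided, so they are never downloaded or installed, while the "require" entries on ext-ctype and ext-mbstring guarantee the native extensions really are present.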

Last month I came across the same problem in Debian: #911832. The php-symfony-polyfill package was failing tests on the new PHP 7.3 and was up for removal from the next stable release (Buster). On its own, the package isn't too important, but it is a dependency of other important packages. In Debian, the polyfills are even more useless, since instead of depending upon e.g. php-symfony-polyfill-mbstring, a package could simply depend upon the native PHP extension, php-mbstring. In fact, there was already a system designed to implement those kinds of overrides. After looking at the dependencies, I uploaded a fixed version of php-webmozart-assert, filed bugs for two other packages, and provided patches for symfony. I also made a patch to the default overrides in pkg-php-tools, so that any future package that depends upon a symfony polyfill should now automatically depend upon the native PHP extension where appropriate.

Ideally composer would support alternative requirements like ext-mbstring | php-symfony-polyfill-mbstring, but that's been declined by their developers. There's another issue that is somewhat related, but doesn't do much to reduce the installation of polyfills when unnecessary.

17 Jan 2019 7:50am GMT

16 Jan 2019

feedPlanet Debian

Reproducible builds folks: Reproducible Builds: Weekly report #194

Here's what happened in the Reproducible Builds effort between Sunday January 6 and Saturday January 12 2019:

Packages reviewed and fixed, and bugs filed

Website development

There were a number of updates to the reproducible-builds.org project website this week, including:

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this week, including:


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

16 Jan 2019 4:39pm GMT

Iain R. Learmonth: A Solution for Authoritative DNS

I've been thinking about improving my DNS setup. So many services use e-mail verification as a backup authentication measure that DNS is starting to look like a real weak point. An Ars Technica article earlier this year talked about how "[f]ederal authorities and private researchers are alerting companies to a wave of domain hijacking attacks that use relatively novel techniques to compromise targets at an almost unprecedented scale."

The two attacks mentioned in that article, changing the nameserver and changing records, are something that DNSSEC could protect against. The records wouldn't even have to be changed on my chosen nameservers: a BGP hijack could simply divert queries for my domain to another server, which could then reply with whatever it chooses.
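With a validating resolver in the path, that kind of tampering becomes detectable. As a quick illustration (assuming you are behind a resolver that performs DNSSEC validation), you can ask for the DNSSEC records on any signed zone and check the response header:

irl@computer$ dig +dnssec A example.com

If the flags line in the answer includes "ad" (authenticated data), the resolver was able to validate the signatures; a forged answer for a signed zone would fail validation instead.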

After thinking for a while, my requirements come down to:

After some searching I discovered GooDNS, a "good" DNS hosting provider. They have an interesting setup that looks to fit all of my requirements. If you're coming from a more traditional arrangement with either a self-hosted name server or a web panel then this might seem weird, but if you've done a little "infrastructure as code" then maybe it is not so weird.

The initial setup must be completed via the web interface. You'll need a hardware security module (HSM) for providing a time-based one-time password (TOTP), an SSH key and optionally a GPG key as part of the registration. You will need the TOTP to make any changes via the web interface, the SSH key will be used to interact with the git service, and the GPG key will be used for any email correspondence, including recovery in the case that you lose your TOTP HSM or password.

You must validate your domain before it will be served from the GooDNS servers. There are two options for this: one for new domains, and a "zero-downtime" option that is more complex but may be desirable if your domain is already live. For new domains you can simply update your nameservers at the registrar to validate your domain; for existing domains you can add a TXT record to the current DNS setup, which GooDNS will validate, allowing the domain to be configured fully before switching the nameservers. Once the domain is validated, you will not need to use the web interface again unless updating contact, security or billing details.
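For example, the zero-downtime validation could look something like the following; the record name and token here are invented for illustration, the real ones would come from the GooDNS web interface:

_goodns-validation.example.com.  300  IN  TXT  "goodns-verification=<token>"

irl@computer$ dig +short TXT _goodns-validation.example.com
"goodns-verification=<token>"

Once GooDNS sees the record, the zone can be fully configured and tested before the NS records at the registrar are finally switched over.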

All the DNS configuration is managed in a single git repository. There are three branches in the repository: "master", "staging" and "production". These are just the defaults; you can create other branches if you like. The only two that GooDNS will use are "staging" and "production".

GooDNS provides a script that you can install at /usr/local/bin/git-dns (or elsewhere in your path) which provides some simple helper commands for working with the git repository. The script is extremely readable and so it's easy enough to understand and write your own scripts if you find yourself needing something a little different.

When you clone your git repository you'll find one text file on the master branch for each of your configured zones:

irl@computer$ git clone git@goodns.net:irl.git
Cloning into 'irl'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Receiving objects: 100% (3/3), 22.55 KiB | 11.28 MiB/s, done.
Resolving deltas: 100% (1/1), done.
irl@computer$ cd irl
irl@computer$ ls
irl1.net   learmonth.me
irl@computer$ cat irl1.net
@ IN SOA ns1.irl1.net. hostmaster.irl1.net. (
            _SERIAL_
            28800
            7200
            864000
            86400
            )

@           IN      NS        ns1.goodns.net.
@           IN      NS        ns2.goodns.net.
@           IN      NS        ns3.goodns.net.

On the backend, GooDNS uses OpenBSD 6.4 servers with nsd(8), which means the zone files use the same syntax. If you don't know what this means, that's fine: the documentation has loads of examples that should help you configure all the record types you might need. If a record type is not yet supported by nsd(8), you can always specify the record manually and it will work just fine.

One thing you might note here is that the string _SERIAL_ appears instead of a serial number. The git-dns script will replace this with a serial number when you are ready to publish the zone file.
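In practice that substitution is probably nothing more exotic than swapping in a timestamp-based serial at signing time; a rough sketch (not the actual git-dns script) could be:

irl@computer$ sed -i "s/_SERIAL_/$(date +%s)/" irl1.net

Here the Unix timestamp is used as the serial, which has the nice property of always increasing between runs.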

I'll assume that you already have your GPG key and SSH key set up; now let's set up the DNSSEC signing key. For this, we will use one of the four slots of the YubiKey. You could use either 9a or 9e, but here I'll use 9e as 9a already holds my SSH key.

To set up the token, we will need the yubico-piv-tool. Be extremely careful when following these steps, especially if you are using a production device. Try to understand the commands before pasting them into the terminal.

First, make sure the slot is empty. You should get output similar to the following:

irl@computer$ yubico-piv-tool -s 9e -a status 
CHUID:  ...
CCC:    No data available
PIN tries left: 10

Now we will use git-dns to create our key signing key (KSK):

irl@computer$ git dns kskinit --yubikey-neo
Successfully generated a new private key.
Successfully generated a new self signed certificate.
Found YubiKey NEO.
Slots available:
 (1) 9a - Not empty
 (2) 9e - Empty
Which slot to use for DNSSEC signing key? 2
Successfully imported a new certificate.
CHUID:  ...
CCC:    No data available
Slot 9e:    
    Algorithm:  ECCP256
    Subject DN: CN=irl1.net
    Issuer DN:  CN=irl1.net
    Fingerprint:    97dda8a441a401102328ab6ed4483f08bc3b4e4c91abee8a6e144a6bb07a674c
    Not Before: Feb 01 13:10:10 2019 GMT
    Not After:  Feb 01 13:10:10 2021 GMT
PIN tries left: 10
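git dns kskinit is doing the YubiKey work for us here. For the curious, the equivalent manual steps with yubico-piv-tool would look roughly like this; a sketch only, matching the slot and algorithm used above:

irl@computer$ yubico-piv-tool -s 9e -a generate -A ECCP256 -o ksk.pub
irl@computer$ yubico-piv-tool -s 9e -a verify-pin -a selfsign-certificate -S '/CN=irl1.net/' -i ksk.pub -o ksk.crt
irl@computer$ yubico-piv-tool -s 9e -a import-certificate -i ksk.crt

That is: generate an ECCP256 key in slot 9e, wrap its public half in a self-signed certificate, and load the certificate back onto the token.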

We can see the public key for this new KSK:

irl@computer$ git dns pubkeys
irl1.net. DNSKEY 257 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==

Next we will create a zone signing key (ZSK). These are stored in the keys/ folder of your git repository but are not version controlled. You can optionally encrypt them with GnuPG (which would then require the YubiKey to sign zones) but I've not done that here. Operations using slot 9e do not require the PIN, so leaving the YubiKey connected to the computer is pretty much the same as leaving the KSK on the disk. Maybe a future YubiKey will not have this restriction or will add more slots.

irl@computer$ git dns zskinit
Created ./keys/
Successfully generated a new private key.
irl@computer$ git dns pubkeys
irl1.net. DNSKEY 257 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==
irl1.net. DNSKEY 256 3 13 kS7DoH7fxDsuH8o1vkvNkRcMRfTbhLqAZdaT2SRdxjRwZSCThxxpZ3S750anoPHV048FFpDrS8Jof08D2Gqj9w==
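git dns zskinit is fictional, but an equivalent on-disk zone signing key could be produced with BIND's standard tooling, for example (illustrative only, the algorithm matching the ECDSAP256SHA256 keys above):

irl@computer$ dnssec-keygen -K keys/ -a ECDSAP256SHA256 -n ZONE irl1.net

which drops a Kirl1.net.+013+NNNNN key pair (a .key and a .private file) into the keys/ directory.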

Now we can go to our domain registrar and add DS records to the registry for our domain using the public keys. First though, we should actually sign the zone. To create a signed zone:

irl@computer$ git dns signall
Signing irl1.net...
Signing learmonth.me...
[production 51da0f0] Signed all zone files at 2019-02-01 13:28:02
 2 files changed, 6 insertions(+), 0 deletions(-)

You'll notice that all the zones were signed although we only created one set of keys. Setups where you have one shared KSK and individual ZSKs per zone are possible, but they provide questionable additional security. Reducing the number of keys required for DNSSEC helps to keep them all under control.

To make these changes live, all that is needed is to push the production branch. To keep things tidy, and to keep a backup of your sources, you can push the master branch too. git-dns provides a helper function for this:

irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done
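With the signed zones now being served, the registrar still needs the DS record mentioned earlier, derived from the KSK. One way to produce it, using BIND's dnssec-dsfromkey (purely illustrative, any DS-generation tool will do):

irl@computer$ dig +noall +answer DNSKEY irl1.net | dnssec-dsfromkey -2 -f - irl1.net

which prints an "irl1.net. IN DS ..." line ready to paste into the registrar's interface.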

If I now edit a zone file on the master branch and want to try out the zone before making it live, all I need to do is:

irl@computer$ git dns signall --staging
Signing irl1.net...
Signing learmonth.me...
[staging 72ea1fc] Signed all zone files at 2019-02-01 13:30:12
 2 files changed, 8 insertions(+), 0 deletions(-)
irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now use the staging resolver or look up records at irl1.net.staging.goodns.net then I'll see the zone live. The staging resolver is a really cool idea for development and testing. They give you a couple of IPv6 addresses, unique to you, that will serve your staging zone files and act as a resolver for everything else. You just have to plug these into your staging environment and everything is ready to go. In the future they are planning to allow you to have more than one staging environment too.
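A quick way to confirm which copy of a zone you are looking at is to compare the SOA serial between production and staging, for example:

irl@computer$ dig +short SOA irl1.net
irl@computer$ dig +short SOA irl1.net.staging.goodns.net

After the --staging signall above, only the second query would show the new serial.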

All that is left to do is ensure that your zone signatures stay fresh. This is easy to achieve with a cron job:

0 3 * * * /usr/local/bin/git-dns cron --repository=/srv/dns/irl1.net --quiet

I monitor the records independently and disable the mail output from this command but you might want to drop the --quiet if you'd like to get mails from cron on errors/warnings.

On the GooDNS blog they talk about adding an Onion service for the git server in the future, so that they do not have logs that could show the location of your DNSSEC signing keys, giving you even greater protection. They already support performing the git push via Tor, but the addition of the Onion service would make it faster and more reliable.
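Until then, nothing stops you routing the SSH push through Tor yourself; a sketch, assuming a local Tor daemon listening on the default SOCKS port:

irl@computer$ torsocks git dns push

or, more permanently, in ~/.ssh/config:

Host goodns.net
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p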


Unfortunately, GooDNS is entirely fictional and you can't actually manage your DNS in this way, but wouldn't it be nice? This post has drawn inspiration from the following:

16 Jan 2019 4:30pm GMT

15 Jan 2019

feedPlanet Grep

Dries Buytaert: Happy eighteenth birthday, Drupal

Eighteen years ago today, I released Drupal 1.0.0. What started from humble beginnings has grown into one of the largest Open Source communities in the world. Today, Drupal exists because of its people and the collective effort of thousands of community members. Thank you to everyone who has been and continues to contribute to Drupal.

Eighteen years is also the voting age in the US, and the legal drinking age in Europe. I'm not sure which one is better. :) Joking aside, welcome to adulthood, Drupal. May your day be bug free and filled with fresh patches!

15 Jan 2019 8:45pm GMT

Mark Van den Borre: Brother P750W label printer: Debian print server, Debian desktop clients

We have this Brother P750W label printer for https://fosdem.org . It's a model with wireless networking only. We wanted something with decent wired networking for use with multiple Debian desktop clients, but that was ~300€ more expensive. So here's how we went about configuring the bloody thing...

Not wanting to use proprietary drivers, I did some prying; this is what the device told me:
* The thing speaks Apple AirPrint.
* By default, it shows up as an access point with SSID "DIRECT-brPT-P750WXXXX". XXXX is the last 4 digits of the printer's serial number.
* Default wireless password "00000000".
* Default IP address 192.168.118.1.
* It allows only one client to connect at a time.
* Its web interface is totally, utterly broken:
  * Pointing a browser at the default device IP redirects to a page that 404's.
  * Pointing a browser at the URL of the admin page gleaned from CUPS 404's too.
  * An upgrade to the latest firmware doesn't seem to solve any issues.

As a reminder to myself, this is what I did to get it to work:
* Get a Debian stable desktop.
* Add the Debian buster repositories. That is Debian testing at the time of writing.
* Set up apt pinning in /etc/apt/preferences to prefer stable over buster (a sketch follows after this list).
* Upgrade cups-related packages to the version from Debian buster. "apt install cups* -t buster" did the trick.
* Notice this makes the printer get autodiscovered.
* Watch /etc/cups/printers.conf and /etc/cups/ppd/ for what happens when connected to the P750W. Copy the relevant bits.
* Get a Debian headless thingie (rpi, olinuxino, whatever...) running Debian stable.
* Connect it to the printer wifi using wpa_supplicant (example network block after this list).
* Install cups* on it.
* Drop the printers.conf and P750W ppd from the Debian buster desktop into /etc/cups and /etc/cups/ppd respectively. The only change was in printers.conf, from the Avahi autodiscovery URL to the default 192.168.118.1.
* Make sure to share the printer. I don't remember if I had to set that or if it was the default, but the cups web interface on port 631 should help there if needed.
* Add the new shared printer on the Debian buster desktop. Entering the print server IP auto-discovers it. Works flawlessly.
* Try the same on a Debian stable desktop. Fails complaining about AirPrint-related stuff.
* Upgrade cups* to the Debian buster version. Apt pinning blah. Apt-listchanges says something about one of the package upgrades being crucial to get some AirPrint devices to work. Didn't note the exact package, alas, and was too lazy to run it again.
* Install the printer again. Now works flawlessly.
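For reference, a sketch of what the two configuration snippets mentioned in the list could look like; the priorities and the SSID suffix are illustrative, adjust them to your own setup.

In /etc/apt/preferences:

Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release n=buster
Pin-Priority: 200

And on the headless print server, a wpa_supplicant network block along these lines (the SSID suffix is whatever your printer reports):

network={
    ssid="DIRECT-brPT-P750WXXXX"
    psk="00000000"
}

With the pinning in place, "apt install -t buster <package>" pulls individual packages from buster while everything else stays on stable.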

15 Jan 2019 8:54am GMT

11 Jan 2019

feedPlanet Grep

Lionel Dricot: 3 months of disconnection: a final assessment

And the transition to a permanent soft disconnection

Without particularly noticing it, I have arrived at the end of my disconnection (you can find all the posts about it here). A symbolic date that called for an assessment, starting by removing my filter and taking a tour of the now-abhorred social networks.

Not that I really wanted to, but more out of curiosity, to see what effect it would have on me and to check whether I had missed anything. You might think I was impatient but, against all expectations, I had to force myself. In the name of science, for the completeness of the experiment! I don't miss these sites, quite the opposite. I didn't feel I was missing anything important and, even if I had been, I was honestly perfectly fine with it.

My first impression was of arriving late at a party with a rather dull atmosphere. You know, the kind of party you show up to stressed about having missed the best part, only to realise that everyone actually seems bored stiff.

Oh, sure, there were comments on my posts, some of them interesting (I didn't read everything, just glanced quickly at the latest ones). I had plenty of notifications and hundreds of connection requests on LinkedIn (which I accepted).

But, in the end, nothing that made me want to come back. On the contrary, I felt nauseous, like a sugar addict who eats a whole chocolate cake after 3 months of dieting.

What is even more striking is that this half-hour of catching up on social networks obsessed me for several hours. I wanted to go and check things, I kept thinking about what I had seen scroll past, I wondered what I should reply to this or that comment. My mind was once again completely cluttered.

I have to face the facts: I am not capable of using social networks in a healthy way. I am too sensitive to their subliminal messaging, to their addiction tactics.

To be completely honest with myself, I have to admit that, technically, I did not fully respect my disconnection. I relaxed some of the initial rules by "unblocking" Slack, for professional reasons, and Reddit. I also quite often had to disable my filters to access a link sent to me on Twitter, to look up the details of a professional contact on LinkedIn, or even to read a mainstream press article someone had sent me. But that's not a problem. The goal was never to become "pure" but to regain control over my use of the Internet. Each time, my filters were only disabled for the time strictly necessary to load the offending page.

One anecdote illustrates my disconnection well: during a family meal, the conversation turned to the yellow vests ("gilets jaunes"). I had never heard of them. After a few seconds of astonishment at my ignorance, it was explained to me and, that very evening, I read the Wikipedia page on the subject.

Wikipedia has turned out to be an extraordinary disconnection tool. Its home page has a small section on current news and ongoing events. I have concluded that if an event is not on Wikipedia, then it is not really important.

While not being informed frees up mental space and does not seem to have any harmful consequences, it is dramatic to see just how addicted my brain is. In front of a screen, it wants to receive information, whatever it may be. When I procrastinate, I find myself hunting for anything that could bring me news without disabling my blocker.

I think that is also why my visits to Reddit (initially used only to ask questions in certain subreddits) have become more frequent (not invasive yet, but something to keep an eye on). I also check my RSS reader every day (fortunately it is not on my phone), but genuinely useful feeds are rare. Social networks had got me used to taking an interest in anything and everything. With RSS, I have to choose sites that post things I find interesting over the long run and that don't drown them in marketing noise.

Another important effect of these 3 months of disconnection is the beginning of a detachment from my need for immediate recognition. Beyond likes on social networks, I realise that giving free talks or appearing in the media brings me little or nothing at all in return for a lot of effort, travel and fatigue. Amusingly, I have already received quite a few requests to talk in the media about my disconnection (all of which I have so far refused). My ego is still there, but it now wants to be recognised over the long term, which requires a deeper investment rather than mere media appearances. Besides, between us, turning down a media request is even more gratifying for the ego than accepting one.

I have also come to realise that, contrary to what Facebook tries to instil, my blog is not a business. I do not have to reply to messages within 24 hours (something Facebook strongly encourages). I have the right to reply only to emails and not to have to log in to various proprietary messaging services. I have the right to miss opportunities. I am a human being who shares some of his experiences through writing. Everyone is free to read, copy, share, take inspiration, or even contact or support me. But I am free not to be the customer service desk for my own writing.

The conclusion of all this is that, with the 3 months over, I have no desire whatsoever to stop my disconnection. My life today, without Facebook or the news media, seems better to me. Once every two or three days I disable my filter to see whether I have notifications on Mastodon, Twitter or LinkedIn, but I don't even feel like looking at the feeds. I read things that interest me thanks to RSS, I dive with delight into the books that were waiting on my shelf, and I have many enriching conversations by email.

Why would I ever leave my retreat?

Photo by Jay Mantri on Unsplash

I am @ploum, a disconnected speaker and electronic writer, paid on a pay-what-you-want basis via Tipeee, Patreon, Paypal, Liberapay or in millibitcoins at 34pp7LupBF7rkz797ovgBTbqcLevuze7LF. Your support, even symbolic, makes a real difference to me. Thank you!

This text is published under the CC-By BE licence.

11 Jan 2019 11:04pm GMT

08 Nov 2011

feedfosdem - Google Blog Search

papupapu39 (papupapu39)'s status on Tuesday, 08-Nov-11 00:28 ...

papupapu39 · http://identi.ca/url/56409795 #fosdem #freeknowledge #usamabinladen · about a day ago from web.

08 Nov 2011 12:28am GMT

05 Nov 2011

feedfosdem - Google Blog Search

Write and Submit your first Linux kernel Patch | HowLinux.Tk ...

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the world. ...

05 Nov 2011 1:19am GMT

03 Nov 2011

feedfosdem - Google Blog Search

Silicon Valley Linux Users Group – Kernel Walkthrough | Digital Tux

FOSDEM (Free and Open Source Development European Meeting) is a European event centered around Free and Open Source software development. It is aimed at developers and all interested in the Free and Open Source news in the ...

03 Nov 2011 3:45pm GMT

26 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Update your RSS link

If you see this message in your RSS reader, please correct your RSS link to the following URL: http://fosdem.org/rss.xml.

26 Jul 2008 5:55am GMT

25 Jul 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Archive of FOSDEM 2008

These pages have been archived.
For information about the latest FOSDEM edition please check this url: http://fosdem.org

25 Jul 2008 4:43pm GMT

09 Mar 2008

feedFOSDEM - Free and Open Source Software Developers' European Meeting

Slides and videos online

Two weeks after FOSDEM and we are proud to publish most of the slides and videos from this year's edition.

All of the material from the Lightning Talks has been put online. We are still missing some slides and videos from the Main Tracks but we are working hard on getting those completed too.

We would like to thank our mirrors: HEAnet (IE) and Unixheads (US) for hosting our videos, and NamurLUG for quick recording and encoding.

The videos from the Janson room were live-streamed during the event and are also online on the Linux Magazin site.

We are having some synchronisation issues with Belnet (BE) at the moment. We're working to sort these out.

09 Mar 2008 3:12pm GMT