26 Aug 2016

feedPlanet Debian

Matthew Garrett: Priorities in security

I read this tweet a couple of weeks ago:

to me, an inclusive security community would focus as much (or at all) on surveillance of women by abusive partners as it does the state

- kelsey ᕕ( ᐛ )ᕗ (@_K_E_L_S_E_Y) August 2, 2016


and it got me thinking. Security research is often derided as unnecessary stunt hacking, proving insecurity in things that are sufficiently niche or in ways that involve sufficient effort that the realistic probability of any individual being targeted is near zero. Fixing these issues is basically defending you against nation states (who (a) probably don't care, and (b) will probably just find some other way) and, uh, security researchers (who (a) probably don't care, and (b) see (a)).

Unfortunately, this may be insufficient. As basically anyone who's spent any time anywhere near the security industry will testify, many security researchers are not the nicest people. Some of them will end up as abusive partners, and they'll have both the ability and desire to keep track of their partners and ex-partners. As designers and implementers, we owe it to these people to make software as secure as we can rather than assuming that a certain level of adversary is unstoppable. "Can a state-level actor break this" may be something we can legitimately write off. "Can a security expert continue reading their ex-partner's email" shouldn't be.

26 Aug 2016 12:02am GMT

25 Aug 2016

feedPlanet Debian

Alessio Treglia: The Breath of Time

For centuries man hunted, brought animals to pasture, cultivated fields and sailed the seas without any kind of tool to measure time. Back then, time was not measured but only estimated with vague approximation, and its pace was enough to dictate the steps of the day and of a man's life. Then, for many more centuries, hourglasses accompanied civilization with the slow flow of their sand grains. About hourglasses, Ernst Jünger writes in "Das Sanduhrbuch" (1954, no English translation): "This small mountain, formed by all the lost moments that fell upon each other, can be understood as a comforting sign that time disappears but does not fade. It grows in depth".

For the philosophers of ancient Greece, time was just a way to measure how things move in everyday life, and in any case there was a clear distinction between "quantitative" time (Kronos) and "qualitative" time (Kairòs). According to Parmenides, time is mere appearance, because its existence…

<Read More…[by Fabio Marzocca]>

25 Aug 2016 4:33pm GMT

Joerg Jaspert: New gnupg-agent in Debian

In case you just upgraded to the latest gnupg-agent and use gnupg-agent as your ssh-agent, you may find that ssh refuses to work, giving only the short but unhelpful error

sign_and_send_pubkey: signing failed: agent refused operation

This seems to come from the agent now being started by systemd rather than by a script at the start of the X session, so it ends up with either no tty or an unusable one. A simple

gpg-connect-agent updatestartuptty /bye

updates the agent's idea of the startup tty and, voilà, ssh-agent functionality is back.

Note: This assumes you have "enable-ssh-support" in your ~/.gnupg/gpg-agent.conf
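
For reference, a minimal setup looks roughly like this (a sketch rather than the only way to do it; the gpgconf call is how recent GnuPG 2.1 exposes the agent's SSH socket path, and your shell setup may differ):

# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# e.g. in ~/.bashrc, point ssh at the agent's socket
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"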

25 Aug 2016 9:55am GMT

Norbert Preining: Gaming: Deus Ex Go

Between long flights and lazy afternoons relaxing from teaching, I tried out another game on my Android device, Deus Ex Go. It is a turn-based game in the style of the Deus Ex series (long years ago I was a beta tester for LGP's Deus Ex port). You have to pass through a series of levels, each one consisting of a hexagonal grid with an entry and an exit point, and some nasty villains or machines trying to kill you.

Deus-ex-go

Without any explanation you are thrown into the game, and it takes a few iterations until you understand what kind of attacks you are facing; once you have figured that out, it is a more or less simple puzzle of working out how to get to the exit. I played through the roughly 50 levels of the story mode, and I think it was only in the last five that I once or twice had to actually stop and think hard to find a solution.

I found the game quite amusing at the beginning, but it soon became repetitive. Since you can play through the whole story mode in probably one long afternoon, that is not much of a problem. A bigger problem is the game's apparently incredible battery usage: play for a while without checking and you will soon be left with a nearly empty battery.

Graphically well done, with more or less interesting gameplay, it still does not stand up to Monument Valley.

25 Aug 2016 9:34am GMT

Francois Marier: Debugging gnome-session problems on Ubuntu 14.04

After upgrading an Ubuntu 14.04 ("trusty") machine to the latest 16.04 Hardware Enablement packages, I ran into login problems. I could log into my user account and see the GNOME desktop for a split second before getting thrown back into the LightDM login manager.

The solution I found was to install this missing package:

apt install libwayland-egl1-mesa-lts-xenial

Looking for clues in the logs

The first place I looked was the log file for the login manager (/var/log/lightdm/lightdm.log) where I found the following:

DEBUG: Session pid=12743: Running command /usr/sbin/lightdm-session gnome-session --session=gnome
DEBUG: Creating shared data directory /var/lib/lightdm-data/username
DEBUG: Session pid=12743: Logging to .xsession-errors

This told me that the login manager runs the gnome-session command and gets it to create a session of type gnome. That command line is defined in /usr/share/xsessions/gnome.desktop (look for Exec=):

[Desktop Entry]
Name=GNOME
Comment=This session logs you into GNOME
Exec=gnome-session --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME

I couldn't see anything unexpected there, but it did point to another log file (~/.xsession-errors) which contained the following:

Script for ibus started at run_im.
Script for auto started at run_im.
Script for default started at run_im.
init: Le processus gnome-session (GNOME) main (11946) s'est achevé avec l'état 1
init: Déconnecté du bus D-Bus notifié
init: Le processus logrotate main (11831) a été tué par le signal TERM
init: Le processus update-notifier-crash (/var/crash/_usr_bin_unattended-upgrade.0.crash) main (11908) a été tué par le signal TERM

Searching for French error messages isn't as useful as searching for English ones, so I took a look at /var/log/syslog and found this:

gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' exited with code 127
gnome-session[4134]: WARNING: App 'gnome-shell.desktop' respawning too quickly
gnome-session[4134]: CRITICAL: We failed, but the fail whale is dead. Sorry....

It looks like gnome-session is executing gnome-shell and that this last command is terminating prematurely. This would explain why gnome-session exits immediately after login.

Increasing the amount of logging

In order to get more verbose debugging information out of gnome-session, I created a new type of session (GNOME debug) by copying the regular GNOME session:

cp /usr/share/xsessions/gnome.desktop /usr/share/xsessions/gnome-debug.desktop

and then adding --debug to the command line inside gnome-debug.desktop:

[Desktop Entry]
Name=GNOME debug
Comment=This session logs you into GNOME debug
Exec=gnome-session --debug --session=gnome
TryExec=gnome-shell
X-LightDM-DesktopName=GNOME debug

After restarting LightDM (service lightdm restart), I clicked the GNOME logo next to the password field and chose GNOME debug before trying to login again.

This time, I had a lot more information in ~/.xsession-errors:

gnome-session[12878]: DEBUG(+): GsmAutostartApp: starting gnome-shell.desktop: command=/usr/bin/gnome-shell startup-id=10d41f1f5c81914ec61471971137183000000128780000
gnome-session[12878]: DEBUG(+): GsmAutostartApp: started pid:13121
...
/usr/bin/gnome-shell: error while loading shared libraries: libwayland-egl.so.1: cannot open shared object file: No such file or directory
gnome-session[12878]: DEBUG(+): GsmAutostartApp: (pid:13121) done (status:127)
gnome-session[12878]: WARNING: App 'gnome-shell.desktop' exited with code 127

which suggests that gnome-shell won't start because of a missing library.
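
A quick way to confirm this kind of failure (a hypothetical check, not one from my original debugging session) is to ask the dynamic linker directly:

ldd /usr/bin/gnome-shell | grep "not found"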

Finding the missing library

To find the missing library, I used the apt-file command:

apt-file update
apt-file search libwayland-egl.so.1

and found that this file is provided by the regular mesa packages as well as their LTS Enablement counterparts.

Since I installed the LTS Enablement stack, the package I needed to install to fix this was libwayland-egl1-mesa-lts-xenial.

I filed a bug for this on Launchpad.

25 Aug 2016 5:00am GMT

24 Aug 2016

feedPlanet Debian

Don Armstrong: H3ABioNet Hackathon (Workflows)

I'm in Pretoria, South Africa at the H3ABioNet hackathon, which is developing workflows for Illumina chip genotyping, imputation, 16S rRNA sequencing, and population structure/association testing. Currently, I'm working with the imputation stream and we're using Nextflow to deploy an IMPUTE-based imputation workflow, with Docker and NCSA's OpenStack-based cloud (Nebula) underneath.

The OpenStack command line clients (nova and cinder) seem to be pretty usable for automating bringing up a fleet of VMs, and the cloud-init package present in the images makes configuring them pretty simple.
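
As a rough sketch of what that automation looks like (the image, flavor, key, and file names here are illustrative, not our actual setup):

nova boot --image debian-8 --flavor m1.medium --key-name hackathon \
    --user-data cloud-config.yaml --min-count 5 imputation-worker
cinder create --name shared-data 100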

Now if I just knew of a better shared object store supported by Nextflow on OpenStack, besides mounting an NFS share, things would be better still.

You can follow our progress in our git repo: https://github.com/h3abionet/chipimputation

24 Aug 2016 2:40pm GMT

Zlatan Todorić: Take that boredom

While I was bored at Defcon, I took the smallest VPS in the DO offering (512MB RAM, 20GB disk), configured nginx on it, bought the domain zlatan.tech and cp'ed my blog data to blog.zlatan.tech. I thought it would just be a boredom project that I would tear apart in a day or two, but it is still there.

Not only that: the droplet came with Debian 8.5, but I just added unstable and experimental to it and upgraded, just to experiment and see how long it takes me to break it. To make it even more adventurous (and also to force myself not to take it too seriously, at least at this point) I did something that would make Lars scream - I did not enable backups!

While having fun with it I added letsencrypt certificate to it (wow, that was quite easy).

Then I installed and configured Tor, and ended up adding an .onion domain for it! It is: pvgbzphm622hv4bo.onion

My main blog is still going to be zgrimshell.github.io (for now at least), where I push the content generated by Nikola (a static site generator written in python) as git commits. To my other two domains (on my server) I just rsync the content now. Simple and efficient.
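
The whole "deploy" step is basically a single command of this shape (the paths and target host are made-up examples, not my real ones):

rsync -avz --delete output/ user@zlatan.tech:/var/www/blog/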

I must admit I like my blog layout. It is simple, easy to read, efficient and fast; I don't bother with comments, and writing a blog in markdown (inside a terminal, like every well-behaved hacker citizen) while compiling it with Nikola is a breeze (and yes, I did choose Nikola because of Nikola Tesla and python). I must also admit that nginx is a pretty nice webserver; there is no need to explain the beauty of git, and I can't recommend rsync enough.

If anyone is interested in doing the same, I am happy to talk about it, but these tools are really simple (I enjoy simple things, and by simple I mean small tools, no complicated configs and easy execution).

24 Aug 2016 5:45am GMT

23 Aug 2016

feedPlanet Debian

Reproducible builds folks: Reproducible Builds: week 69 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday August 14 and Saturday August 20 2016:

Fasten your seatbelts

Important note: we have now enabled build path variation for unstable, so your package(s) might become unreproducible even though they were previously reported as reproducible. Given a specific build path they probably still are reproducible, but read on for the details in the tests.reproducible-builds.org section below! As said many times: this is still research and we are working to make it reality.

Media coverage

Daniel Stender blogged about python packaging and explained some caveats regarding reproducible builds.

Toolchain developments

Thomas Schmitt uploaded xorriso which now obeys SOURCE_DATE_EPOCH. As stated in its man pages:

ENVIRONMENT
[...]
SOURCE_DATE_EPOCH  belongs to the specs of reproducible-builds.org.  It
is supposed to be either undefined or to contain a decimal number which
tells the seconds since january 1st 1970. If it contains a number, then
it is used as time value to set the  default  of  --modification-date=,
--gpt_disk_guid,  and  --set_all_file_dates.  Startup files and program
options can override the effect of SOURCE_DATE_EPOCH.
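
In practice that means an invocation along these lines gets stable timestamps (an illustrative sketch, not taken from any particular package build):

SOURCE_DATE_EPOCH=$(date -u -d "2016-08-20" +%s) xorriso -as mkisofs -o image.iso ./iso-root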

Packages reviewed and fixed, and bugs filed

The following packages have become reproducible after being fixed:

The following updated packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

The following 2 packages were not changed, but have become reproducible due to changes in their build-dependencies: tagsoup and tclx8.4.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Bug tracker house keeping:

Reviews of unreproducible packages

55 package reviews have been added, 161 have been updated and 136 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been updated:

Weekly QA work

FTBFS bugs have been reported by:

diffoscope development

Chris Lamb, Holger Levsen and Mattia Rizzolo worked on diffoscope this week.

Improvements were made to SquashFS and JSON comparison, the https://try.diffoscope.org/ web service, documentation, packaging, and general code quality.

diffoscope 57, 58, and 59 were uploaded to unstable by Chris Lamb. Versions 57 and 58 were both broken, so Holger set up a job on jenkins.debian.net to test diffoscope on each git commit. He also wrote a CONTRIBUTING document to help prevent this from happening in future.

From these efforts, we were also able to learn that diffoscope is now reproducible even when built across multiple architectures:

< h01ger> | https://tests.reproducible-builds.org/debian/rb-pkg/unstable/amd64/diffoscope.html shows these packages were built on amd64:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on i386:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb
< h01ger> | and on armhf:
< h01ger> |  bd21db708fe91c01ba1c9cb35b9d41a7c9b0db2b 62288 diffoscope_59_all.deb
< h01ger> |  366200bf2841136a4c8f8c30bdc87057d59a4cdd 20146 trydiffoscope_59_all.deb

And those also match the binaries uploaded by Chris in his diffoscope 59 binary upload to ftp.debian.org, yay! Eating our own dogfood and enjoying it!

tests.reproducible-builds.org

Debian related:

The last change will probably have an impact you will notice: your package might become unreproducible in unstable, and this will be shown on tracker.debian.org, while it will still be reproducible in testing.

We've done this because we think reproducible builds are possible with arbitrary build paths. But we don't think those are a realistic goal for stretch, where we still recommend using '.buildinfo' to record the build path and then doing rebuilds using that path.

We are doing this because, besides doing theoretical groundwork, we also have a practical goal: enabling users to independently verify builds. And if they can only do this with a fixed path, so be it. For now :)

To be clear: for Stretch we recommend that reproducible builds are done in the same build path as the "original" build.

Finally, and just for our future references, when we enabled build path variation on Saturday, August 20th 2016, the numbers for unstable were:

suite           all    reproducible   unreproducible  ftbfs        depwait     not for this arch  blacklisted
unstable/amd64  24693  21794 (88.2%)  1753 (7.1%)     972 (3.9%)   65 (0.2%)   95 (0.3%)          10 (0.0%)
unstable/i386   24693  21182 (85.7%)  2349 (9.5%)     972 (3.9%)   76 (0.3%)   103 (0.4%)         10 (0.0%)
unstable/armhf  24693  20889 (84.6%)  2050 (8.3%)     1126 (4.5%)  199 (0.8%)  296 (1.1%)         129 (0.5%)

Misc.

Ximin Luo updated our git setup scripts to make it easier for people to write proper descriptions for our repositories.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

23 Aug 2016 12:56pm GMT

22 Aug 2016

feedPlanet Debian

Sylvain Le Gall: Release of OASIS 0.4.7

I am happy to announce the release of OASIS v0.4.7.

Logo OASIS small

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

Features:

This version contains a lot of changes and is the achievement of a huge amount of work. The addition of OMake as a plugin is huge progress. The overall work has been targeted at making OASIS more library-like. This is still a work in progress, but we made some clear improvements by getting rid of various side effects (like the requirement of using "chdir" to handle "-C", which led to propagating ~ctxt everywhere and to designing OASISFileSystem).

I would like to thank again the contributors to this release: Spiros Eliopoulos, Paul Snively, Jeremie Dimino, Christopher Zimmermann, Christophe Troestler, Max Mouratov, Jacques-Pascal Deplaix, Geoff Shannon, Simon Cruanes, Vladimir Brankov, Gabriel Radanne, Evgenii Lepikhin, Petter Urkedal, Gerd Stolpmann and Anton Bachin.

22 Aug 2016 11:51pm GMT

Luciano Prestes Cavalcanti: AppRecommender - Last GSoC Report

My work on Google Summer of Code was to create a new strategy for AppRecommender. This strategy takes a reference package, or a list of reference packages, analyzes the packages the user has already installed, and makes a recommendation using the reference packages as a base. For example, if the user runs "$ sudo apt install vim", AppRecommender uses "vim" as the reference package and should recommend packages related to both "vim" and the other packages the user has installed. This work is done and has been added to the official AppRecommender repository.
During the GSoC program I made further contributions to the AppRecommender project, helping to improve its recommendations, installation and configuration, and to prepare it for Debian packaging.
The following link contains my commits on AppRecommender:
https://github.com/tassia/AppRecommender/commits/master?author=LucianoPC
During the period intended for students to get to know their project's community, I talked with the Debian community about my project to get feedback and ideas. While talking on the IRC channels, we came up with the idea of using popularity-contest data to improve the recommendations. I talked with my mentors, who approved the idea, and we then increased the project scope to use popularity-contest data to improve AppRecommender's recommendations.
popularity-contest has several privacy policy implications, so we did some research and published a post on Planet Debian that explains why we need the popularity-contest data to improve the recommendations and how we use this data. The post also explains the risks and the measures taken to minimize them.
This added two new activities. One is to create a script to be added to popularity-contest; this script gets the popularity-contest data, which is the users' installed packages, and generates clusters that group these packages by analyzing similar users. The other is to add collaborative data support to AppRecommender, which will download the cluster data and use it to improve the recommendations.
The popularity-contest cluster script is done and was reviewed by my mentor, but has not been integrated into popularity-contest yet. The use of cluster data in AppRecommender has already been implemented, but is not yet in the official repository because it is waiting for the cluster script's acceptance into popularity-contest. This work is not complete, but I will continue working with AppRecommender and the Debian community, and with my mentors' help I will finish it.
The following link contains my commits to the repository with the popularity-contest cluster script, as well as other scripts that I used during my work; the only script that will be sent to popularity-contest is create_popcon_clusters.py:
https://github.com/TCC-AppRecommender/scripts/commits/master?author=LucianoPC
The following link contains my commits to the repository with the AppRecommender collaborative data feature:
https://github.com/tassia/AppRecommender/commits/collaborative_data?author=LucianoPC
Google Drive folder with the patch:
https://drive.google.com/drive/folders/0BzmGBlxBo2G3Q3F5YjBpRl9yWUk?usp=sharing

22 Aug 2016 4:54pm GMT

Lars Wirzenius: Linux 25 jubilee symposium

I gave a talk about the early days of Linux at the jubilee symposium arranged by the University of Helsinki CS department. Below is an outline of what I meant to speak about, but the actual talk didn't follow it exactly. You can compare these to the video once it comes online.

22 Aug 2016 3:03pm GMT

Satyam Zode: Google Summer of Code 2016 : Final Report

Project Title : Improving diffoscope tool and reproducibility of Debian packages

Project details

This project aims to improve the diffoscope tool and to fix Debian packages which are unreproducible in the Reproducible Builds testing framework. diffoscope recursively unpacks archives of many kinds and transforms various binary formats into a more human-readable form in order to compare them. As a part of this project I worked on the argument completion feature and the ignore-.buildinfo feature. This project is part of the Reproducible Builds effort.

Mentor and Co-Mentor

Project Discussion

Project Implementation

Challenges and Work Left

Future work

Acknowledgement

I would like to express my deepest gratitude to Lunar for mentoring me throughout the Google Summer of Code program and for being cool. Lunar's deep knowledge of diffoscope and Python skills helped me a lot throughout the project, and we had some great discussions. I would also like to thank the Debian community and Google for giving me this opportunity. Special thanks to the Reproducible Builds folks for all the guidance!

22 Aug 2016 2:02pm GMT

DebConf team: Proposing speakers for DebConf17 (Posted by DebConf17 team)

As you may already know, the next DebConf will be held at Collège de Maisonneuve in Montreal from August 6 to August 12, 2017. We are already thinking about the conference schedule, and the content team is open to suggestions for invited speakers.

Priority will be given to speakers who are not regular DebConf attendees, who are more likely to bring diverse viewpoints to the conference.

Please keep in mind that some speakers may have very busy schedules and need to be booked far in advance. So, we would like to start inviting speakers in the middle of September 2016.

If you would like to suggest a speaker to invite, please follow the procedure described on the Inviting Speakers page of the DebConf wiki.


DebConf17 team

22 Aug 2016 1:44pm GMT

Vincent Sanders: Down the rabbit hole

My descent began with a user reporting a bug and I fear I am still on my way down.

Like Alice I headed down the hole. https://commons.wikimedia.org/wiki/File:Rabbit_burrow_entrance.jpg

The bug was simple enough: a Windows bitmap file caused NetSurf to crash. Pretty quickly this was tracked down to the libnsbmp library attempting to decode the file. As to why we have a heavily used library for bitmaps: I am afraid they are part of every icon file, and many websites still have favicons in that format.

Some time with a hex editor and the file format specification soon showed that the image in question was malformed and had a bad offset header entry. So I was faced with two issues, firstly that the decoder crashed when presented with badly encoded data and secondly that it failed to deal with incorrect header data.

This is typical of bug reports from real users: the obvious issues have already been encountered by the developers and unit tests written to prevent them, so what remains is harder to produce. After a debugging session with Valgrind and Electric Fence I discovered the crash was actually caused by running off the front of an allocated block due to an incorrect bounds check. Fixing the bounds check was simple enough, as was working around the bad header value, and after adding a unit test for the issue I almost moved on.

Almost...

american fuzzy lop are almost as cute as cats https://commons.wikimedia.org/wiki/File:Rabbit_american_fuzzy_lop_buck_white.jpg

We already used the bitmap test suite of images to check the library's decoding, which was giving us a good 75% or so line coverage (I long ago added coverage testing to our CI system), but I wondered if there was a test set that might increase the coverage and perhaps exercise some more of the bounds-checking code. A bit of searching turned up the american fuzzy lop (AFL) project's synthetic corpora of bmp and ico images.

After checking with the AFL authors that the images were usable in our project, I added them to our test corpus and discovered a whole heap of trouble. After fixing more bounds checks and signedness issues I finally had a library I was pretty sure was solid, with over 85% test coverage.

Then I had the idea of actually running AFL on the library. I had been avoiding this because my previous experimentation with other fuzzing utilities had been utterly frustrating, with a very poor return on the time invested. Following the quick start guide looked straightforward enough, so I thought I would spend a short amount of time and maybe learn a useful tool.

I downloaded the AFL source and built it with a simple make which was an encouraging start. The library was compiled in debug mode with AFL instrumentation simply by changing the compiler and linker environment variables.

$ LD=afl-gcc CC=afl-gcc AFL_HARDEN=1 make VARIANT=debug test
afl-cc 2.32b by <lcamtuf@google.com>
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: src/libnsbmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 751 locations (64-bit, hardened mode, ratio 100%).
AR: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/libnsbmp.a
COMPILE: test/decode_bmp.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 52 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp
afl-cc 2.32b by <lcamtuf@google.com>
COMPILE: test/decode_ico.c
afl-cc 2.32b by <lcamtuf@google.com>
afl-as 2.32b by <lcamtuf@google.com>
[+] Instrumented 65 locations (64-bit, hardened mode, ratio 100%).
LINK: build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_ico
afl-cc 2.32b by <lcamtuf@google.com>
Test bitmap decode
Tests:606 Pass:606 Error:0
Test icon decode
Tests:392 Pass:392 Error:0
TEST: Testing complete


I stuffed the AFL build directory on the end of my PATH, created a directory for the output and ran afl-fuzz

afl-fuzz -i test/bmp -o findings_dir -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null


The result was immediate and not a little worrying: within seconds there were crashes, and lots of them! Over the next couple of hours I watched as the unique crash total climbed into the triple digits.

I was forced to abort the run at this point as, despite clear warnings in the AFL documentation of the demands of the tool, my laptop was clearly not cut out to do this kind of work and had become distressingly hot.

AFL has a visualisation tool so you can see what kind of progress it is making; it produced a graph that showed just how fast it managed to produce crashes and how much the return plateaus after just a few cycles, although it was still finding a new unique crash every ten minutes or so when I aborted the run.

I dove in to analyse the crashes and it immediately became obvious that the main issue was the test tool attempting allocations of absurdly large bitmaps. The browser itself uses a heuristic to determine the maximum image size based on used memory and several other values. I simply applied an upper bound of 48 megabytes per decoded image, which fits easily within the fuzzer's default heap limit of 50 megabytes.

The main source of "hangs" also came from large allocations, so once the test was fixed afl-fuzz was re-run with a timeout parameter set to 100ms. This time, after several minutes, no crashes and only a single hang were found, which came as a great relief - at which point my laptop had a hard shutdown due to a thermal event!

Once the laptop cooled down I spooled up a more appropriate system to perform this kind of work: a 24-way 2.1GHz Xeon system. A Debian Jessie guest VM with 20 processors and 20 gigabytes of memory was created and the build replicated and instrumented.

AFL master node display

To fully utilise this system the next test run would use AFL in parallel mode. In this mode there is a single "master" instance running all the deterministic checks and many "secondary" instances performing random tweaks.
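
Starting such a herd looks roughly like this: one master plus numbered secondaries, all sharing a sync directory, one instance per core (the instance names here are illustrative):

afl-fuzz -i test/bmp -o sync_dir -M master -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null
afl-fuzz -i test/bmp -o sync_dir -S secondary01 -- ./build-x86_64-linux-gnu-x86_64-linux-gnu-debug-lib-static/test_decode_bmp @@ /dev/null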

If I have one tiny annoyance with AFL, it is that breeding and feeding a herd of rabbits by hand is annoying and something I would like to see a convenience utility for.

The warren was left overnight with 19 instances and by morning had generated crashes again. This time though the crashes actually appeared to be real failures.

$ afl-whatsup sync_dir/
Summary stats
=============

Fuzzers alive : 19
Total run time : 5 days, 12 hours
Total execs : 214 million
Cumulative speed : 8317 execs/sec
Pending paths : 0 faves, 542 total
Pending per fuzzer : 0 faves, 28 total (on average)
Crashes found : 554 locally unique


All the crashing test cases are available, and a simple file command immediately showed that they all had one thing in common: the height of the image was -2147483648. This seemingly odd number is actually meaningful to a programmer - it is the largest negative number which can be stored in a 32-bit integer (INT32_MIN). I immediately examined the source code that processes the height in the image header.

if ((width <= 0) || (height == 0))
        return BMP_DATA_ERROR;
if (height < 0) {
        bmp->reversed = true;
        height = -height;
}


The bug is in the negation that makes the height a positive number: for this value it results in height ending up as 0, after the existing check for zero has already passed, and that causes a crash later in execution. A simple fix was applied and a test case added, removing the crash and any possible future failure due to this.

Another AFL run has been started and after a few hours has yet to find a crash or a non-false-positive hang, so it looks like any remaining crashes will be much harder to uncover.

Main lessons learned are:

I will of course be debugging any new crashes that occur, and perhaps turning my sights to all the project's other unit-tested libraries. I will also be investigating generating our own custom test corpus from AFL to replace the demo set, which will hopefully increase our unit test coverage even further.

Overall this has been my first successful use of a fuzzing tool and a very positive experience. I would wholeheartedly recommend using AFL to find errors and perhaps even integrating it as part of a CI system.

22 Aug 2016 12:24pm GMT

Michal Čihař: Continuous integration on multiple platforms

Over the weekend I've played with continuous integration for Gammu to make it run on more platforms. I had to remember many things from the Windows world along the way, and the solution is not yet complete, but the basic build is working; the only problematic part is external dependencies.

First of all we already have Linux builds on Travis CI. These cover compilation with both GCC and Clang compilers, hopefully covering most of the possible problems.

Recently I've added OS X builds on Travis CI, which was pretty much painless and worked out of the box.
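
The relevant part of a .travis.yml for such a matrix is tiny, something like the following (a simplified sketch, not Gammu's actual file):

language: c
os:
  - linux
  - osx
compiler:
  - gcc
  - clang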

The next major platform to support is Windows. Once I discovered AppVeyor I thought it might be the way to go. They have free plans for open-source projects (though with only one parallel build, compared to the four provided by Travis CI).

As our build system is cross-platform and based on CMake, it should work pretty much out of the box, right? Well, almost: tweaking the basics took some time (unfortunately there is no built-in CMake support on AppVeyor, so you have to script it a bit).
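
The scripting boils down to driving CMake yourself from appveyor.yml, roughly like this (a minimal sketch; the generator and options are illustrative, not Gammu's real configuration):

build_script:
  - cmake -G "Visual Studio 14 2015" .
  - cmake --build . --config Release
test_script:
  - ctest -C Release --output-on-failure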

The most painful things on the way:

You can check our current appveyor.yml in case you're going to try something similar. Current build results are on AppVeyor.

As a nice side effect, we now have up to date Windows binaries for Gammu.

Filed under: Debian English Gammu

22 Aug 2016 10:00am GMT

NOKUBI Takatsugu: The 9th typhoon looks like Debian swirl logo

According to my follower's tweet:

@kazken3 台風画像と水平反転したDebianマークが一致.. pic.twitter.com/ymBoRGz9ew

- kuromabo_(:3」∠)_ (@kuromabo) August 22, 2016

The typhoon image and the horizontally flipped Debian logo look the same.

22 Aug 2016 9:01am GMT