25 Oct 2016

feedPlanet Debian

Michal Čihař: New features on Hosted Weblate

Today, a new version has been deployed on Hosted Weblate. It brings many long-requested features and enhancements.

Adding a project to your watched projects got way simpler; you can now do it on the project page using the watch button:

Watch project

Another feature that project admins will like is that they can now change project metadata without contacting me. This works at both the project and the component level:

Project settings

And to add something fancy, there is a new badge showing the status of translations into all languages. This is how it looks for Weblate itself:

Translation status

As you can see, it can get pretty big for projects with many translations, but it gives you a complete picture of the translation status.

You can find all these features in the upcoming Weblate 2.9, which should be released next week. The complete list of changes in Weblate 2.9 is described in our documentation.

Filed under: Debian English phpMyAdmin SUSE Weblate | 0 comments

25 Oct 2016 4:00pm GMT

Jaldhar Vyas: Aaargh gcc 5.x You Suck

I had to write a quick program today which is going to be run many thousands of times a day, so it has to run fast. I decided to do it in c++ instead of the usual perl or javascript because it seemed appropriate and I've been playing around a lot with c++ lately, trying to update my knowledge of its modern features. 200 LOC later I was almost done, so I ran the program through valgrind, a good habit I've been trying to instill. That's when I got a reminder of why I avoid c++.

==37698== HEAP SUMMARY:
==37698==     in use at exit: 72,704 bytes in 1 blocks
==37698==   total heap usage: 5 allocs, 4 frees, 84,655 bytes allocated
==37698== LEAK SUMMARY:
==37698==    definitely lost: 0 bytes in 0 blocks
==37698==    indirectly lost: 0 bytes in 0 blocks
==37698==      possibly lost: 0 bytes in 0 blocks
==37698==    still reachable: 72,704 bytes in 1 blocks
==37698==         suppressed: 0 bytes in 0 blocks

One of the things I've learnt, and have been trying to apply more rigorously, is to avoid manual memory management (news/deletes) as much as possible in favor of modern c++ features such as std::unique_ptr. By my estimation there should only be three places in my code where memory is allocated, and none of them should leak. Where do the other allocations come from? And why is there a missing free (or delete)? Now, the good news is that valgrind is saying the memory is not technically leaking. It is still reachable at exit, but that's ok because the OS will reclaim it. But this program will run a lot and I think it could still lead to problems over time, such as memory fragmentation, so I wanted to understand what was going on. Not to mention the bad aesthetics of it.
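To make that concrete, here is a tiny illustrative sketch (a toy, not the actual program) of the style I was aiming for: every allocation is owned by a smart pointer or a container, so there is no new/delete pair to get wrong.

#include <memory>
#include <string>
#include <vector>

struct Record {
    std::string key;
    std::string value;
};

int main() {
    // Owned by a unique_ptr: released automatically when it goes out of
    // scope, so valgrind should not report it as lost.
    auto rec = std::make_unique<Record>();   // requires -std=c++14
    rec->key = "example";

    // Containers manage their own allocations the same way.
    std::vector<Record> records;
    records.push_back(*rec);

    return 0;
}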

My first assumption (one which has served me well over the years) was that I had screwed up somewhere. Or perhaps it could be some behind-the-scenes compiler magic. It turned out to be the latter -- sort of, as I found out only after two hours of jiggling code in different ways and googling for clues. That's when I found this Stack Overflow question which suggests that it is either a valgrind or a compiler bug. The answer specifically mentions gcc 5.1. I was using Ubuntu LTS which has gcc 5.4, so I have just gone ahead and assumed all 5.x versions of gcc have this problem. Sure enough, compiling the same program on Debian stable, which has gcc 4.9, gave this...

==6045== HEAP SUMMARY:
==6045==     in use at exit: 0 bytes in 0 blocks
==6045==   total heap usage: 3 allocs, 3 frees, 10,967 bytes allocated
==6045== All heap blocks were freed -- no leaks are possible

...Much better. The executable was substantially smaller too. The time was not a total loss, however. I learned that valgrind is pronounced val-grinned (it's from Norse mythology), not val-grind as I had thought. So I have that going for me, which is nice.

25 Oct 2016 6:45am GMT

Russ Allbery: Review: Lord of Emperors

Review: Lord of Emperors, by Guy Gavriel Kay

Series: Sarantine Mosaic #2
Publisher: Eos
Copyright: 2000
Printing: February 2001
ISBN: 0-06-102002-8
Format: Mass market
Pages: 560

Lord of Emperors is the second half of a work that began with Sailing to Sarantium and is best thought of as a single book split for publishing reasons. You want to read the two together and in order.

As is typical for this sort of two-part work, it's difficult to review the second half without spoilers. I'll be more vague about the plot and the characters than normal, and will mark one bit that's arguably a bit of a spoiler (although I don't think it would affect the enjoyment of the book).

At the end of Sailing to Sarantium, we left Crispin in the great city, oddly and surprisingly entangled with some frighteningly powerful people and some more mundane ones (insofar as anyone is mundane in a Guy Gavriel Kay novel, but more on that in a bit). The opening of Lord of Emperors takes a break from the city to introduce a new people, the Bassanids, and a new character, Rustem of Karakek. While Crispin is still the heart of this story, the thread that binds the entirety of the Sarantine Mosaic together, Rustem is the primary protagonist for much of this book. I had somehow forgotten him completely since my first read of this series many years ago. I have no idea how.

I mentioned in my review of the previous book that one of the joys of reading this series is competence porn: watching the work of someone who is extremely good at what they do, and experiencing vicariously some of the passion and satisfaction they have for their work. Kay's handling of Crispin's mosaics is still the highlight of the series for me, but Rustem's medical practice (and Strumosus, and the chariot races) comes close. Rustem is a brilliant doctor by the standards of the time, utterly frustrated with the incompetence of the Sarantine doctors, but also weaving his own culture's belief in omens and portents into his actions. He's more reserved, more laconic than Crispin, but is another character with focused expertise and a deep internal sense of honor, swept unexpectedly into broader affairs and attempting to navigate them by doing the right thing in each moment. Kay fills this book with people like that, and it's compelling reading.

Rustem's entrance into the city accidentally sets off a complex chain of events that draws together all of the major characters of Sailing to Sarantium and adds a few more. The stakes are no less than war and control of major empires, and here Kay departs firmly from recorded history into his own creation. I had mentioned in the previous review that Justinian and Theodora are the clear inspirations for this story; that remains true, and many other characters are easy to map, but don't expect history to go here the way that it did in our world. Kay's version diverges significantly, and dramatically.

But one of the things I love the most about this book is its focus on the individual acts of courage, empathy, and ethics of each of the characters, even when those acts do not change the course of empires. The palace intrigue happens, and is important, but the individual acts of Kay's large cast get just as much epic narrative attention even if they would never appear in a history book. The most globally significant moment of the book is not the most stirring; that happens slightly earlier, in a chariot race destined to be forgotten by history. And the most touching moment of the book is a moment of connection between two people who would never appear in history, over the life of a third, that matters so much to the reader only because of the careful attention to individual lives and personalities Kay has shown over the course of hundreds of pages.

A minor spoiler follows in the next paragraph, although I don't think it affects the reading of the book.

One brilliant part of Kay's fiction is that he doesn't have many villains, and goes to some lengths to humanize the actions of nearly everyone in the book. But sometimes the author's deep dislike of one particular character shows through, and here it's Pertennius (the clear analogue of Procopius). In a way, one could say the entirety of the Sarantine Mosaic is a rebuttal of the Secret History. But I think Kay's contrast between Crispin's art (and Scortius's, and Strumosus's) and Pertennius's history has a deeper thematic goal. I came away from this book feeling like the Sarantine Mosaic as a whole stands in contrast to a traditional history, stands against a reduction of people to dates and wars and buildings and governments. Crispin's greatest work attempts to capture emotion, awe, and an inner life. The endlessly complex human relationships shown in this book running beneath the political events occasionally surface in dramatic upheavals, but in Kay's telling the ones that stay below the surface are just as important. And while much of the other art shown in this book differs from Crispin's in being inherently ephemeral, it shares that quality of being the art of life, of complexity, of people in dynamic, changing, situational understanding of the world, exercising competence in some area that may or may not be remembered.

Kay raises to the level of epic the bits of history that don't get recorded, and, in his grand and self-conscious fantasy epic style, encourages the reader to feel those just as deeply as the ones that will have later historical significance. The measure of people, their true inner selves, is often shown in moments that Pertennius would dismiss and consider unworthy of recording in his history.

End minor spoiler.

I think Lord of Emperors is the best part of the Sarantine Mosaic duology. It keeps the same deeply enjoyable view of people doing things they are extremely good at while correcting some of the structural issues in the previous book. Kay continues to use a large cast, and continues to cut between viewpoint characters to show each event from multiple angles, but he has a better grasp of timing and order here than in Sailing to Sarantium. I never got confused about the timeline, thanks in part to more frequent and more linear scene cuts. And Lord of Emperors passes, with flying colors, the hardest test of a novel with a huge number of viewpoint characters: when Kay cuts to a new viewpoint, my reaction is almost always "yes, I wanted to see what they were thinking!" and almost never "wait, no, go back!".

My other main complaint about Sailing to Sarantium was the treatment of women, specifically the irresistibility of female sexual allure. Kay thankfully tones that down a lot here. His treatment of women is still a bit odd - one notices that five women seem to all touch the lives of the same men, and little room is left for Platonic friendship between the genders - but they're somewhat less persistently sexualized. And the women get a great deal of agency in this book, and a great deal of narrative respect.

That said, Lord of Emperors is also emotionally brutal. It's beautifully done, and entirely appropriate to the story, and Kay does provide a denouement that takes away a bit of the sting. But it's still very hard to read in spots if you become as invested in the characters and in the world as I do. Kay is writing epic that borders on tragedy, and uses his full capabilities as a writer to make the reader feel it. I love it, but it's not a book that I want to read too often.

As with nearly all Kay, the Sarantine Mosaic as a whole is intentional, deliberate epic writing, wearing its technique on its sleeve and making no apologies. There is constant foreshadowing, constant attempts to draw larger conclusions or reveal great principles of human nature, and a very open, repeated stress on the greatness and importance of events while they're being described. This works for me, but it doesn't work for everyone. If it doesn't work for you, the Sarantine Mosaic is unlikely to change your mind. But if you're in the mood for that type of story, I think this is one of Kay's best, and Lord of Emperors is the best half of the book.

Rating: 10 out of 10

25 Oct 2016 4:04am GMT

Gunnar Wolf: On the results of vote "gr_private2"

Given that I started the GR process and called for discussion and votes, I feel it is somehow my duty to also provide a simple wrap-up of this process. Of course, I'll say many things already well known to my fellow Debian people, but non-Debianers read this as well.

So, for further context, if you need it, please read my previous blog post, where I was about to send the call for votes. It summarizes the situation and proposals; you will find we had a nice set of messages on debian-vote@lists.debian.org during September. I have to thank all the involved parties, most especially Ian Jackson, who spent a lot of energy summing up the situation and clarifying the different bits to everyone involved.

So, we held the vote; you may be interested in looking at the detailed vote statistics for the 235 correctly received votes and, most importantly, the results:

Results for gr_private2

First of all, I'll say I'm actually surprised at the results, as I expected Ian's proposal (acknowledge difficulty; I voted this proposal as my top option) to win and mine (repeal previous GR) to come last; it turns out the winning option was Iain's (remain private). But all in all, I am happy with the results: as I said during the discussion, I was quite disappointed with the results of the previous GR on this topic - and, yes, it seems the breaking point was when many people thought the privacy status of posted messages was in jeopardy. We cannot really know how that earlier vote would have gone if we had followed the strategy of leaving the original resolution text instead of replacing it, but I believe it would have passed. In fact, one more surprise of this iteration was that I expected Further Discussion to be ranked higher, somewhere among the three explicit options. I am happy, of course, that we got such overwhelming clarity about what the project as a whole prefers.

And what was gained or lost with this whole exercise? Well, if nothing else, we get to stop lying. For over ten years, we have had an accepted resolution binding us to release the messages sent to debian-private given such-and-such conditions... but we never got around to implementing it. We now know that debian-private will remain private... but we should keep reminding ourselves to use the list as little as possible.

For a project such as Debian, which is often seen as a beacon of doing the right thing no matter what, I feel that being explicit about not lying to ourselves is of great importance. Yes, we have the principle of not hiding our problems, but it has long been argued that the use of this list is not hiding our problems. Private communication can happen whenever you have humans involved, even if administratively we have tried to avoid it.

Any of the three running options could have won, and I'd be happy. My #1 didn't win, but my #2 did. And, I am sure, it's for the best of the project as a whole.

25 Oct 2016 1:46am GMT

24 Oct 2016

feedPlanet Debian

Chris Lamb: Concorde

Today marks the 13th anniversary of the last passenger flight from New York arriving in the UK. Every seat was filled, a feat that had become increasingly rare for a plane that was a technological marvel but a commercial flop…

See also: A Rocket to Nowhere.

24 Oct 2016 6:59pm GMT

Reproducible builds folks: Reproducible Builds: week 78 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday October 16 and Saturday October 22 2016:

Media coverage

Upcoming events


In order to build packages reproducibly, you not only need identical sources but also some external definition of the environment used for a particular build. This definition includes the inputs and the outputs and, in the Debian case, is available in a $package_$architecture_$version.buildinfo file.
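As a rough illustration only (the exact field set was still being settled at the time of writing), such a file is a Debian control-style document along these lines:

Format: 1.0
Source: hello
Binary: hello
Architecture: amd64
Version: 2.10-1
Checksums-Sha256:
 <sha256sum> <size> hello_2.10-1_amd64.deb
Build-Architecture: amd64
Installed-Build-Depends:
 base-files (= 9.6),
 dpkg-dev (= 1.18.10),
 ...
Environment:
 ...

The Installed-Build-Depends and Environment fields pin down the packages and relevant environment variables present at build time, which is what makes an independent rebuild meaningful.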

We anticipate that the next dpkg upload to sid will create .buildinfo files by default. Whilst it's clear that we also need to teach dak to deal with them (#763822), it's not actually clear how to handle .buildinfo files after dak has processed them, nor how to make them available to the world.

To this end, Chris Lamb has started development on a proof-of-concept .buildinfo server to see what issues arise. Source

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

99 package reviews have been added, 3 have been updated and 6 have been removed this week, adding to our knowledge about identified issues.

6 issue types have been added:

Weekly QA work

During reproducibility testing, some FTBFS bugs were detected and reported by:

diffoscope development



Our poll to find a good time for an IRC meeting is still running until Tuesday, October 25th; please reply as soon as possible.

We need a logo! Some ideas and requirements for a Reproducible Builds logo have been documented in the wiki. Contributions very welcome, even if simply by forwarding this information.

This week's edition was written by Chris Lamb & Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

24 Oct 2016 4:10pm GMT

Russ Allbery: Review: The Design of Everyday Things

Review: The Design of Everyday Things, by Don Norman

Publisher: Basic Books
Copyright: 2013
ISBN: 0-465-05065-4
Format: Trade paperback
Pages: 298

There are several editions of this book (the first under a different title, The Psychology of Everyday Things). This review is for the Revised and Expanded Edition, first published in 2013 and quite significantly revised compared to the original. I probably read at least some of the original for a class in human-computer interaction around 1994, but that was long enough ago that I didn't remember any of the details.

I'm not sure how much impact this book has had outside of the computer field, but The Design of Everyday Things is a foundational text of HCI (human-computer interaction) despite the fact that many of its examples and much of its analysis are not specific to computers. Norman's goal is clearly to write a book that's fundamental to the entire field of design; not having studied the field, I don't know if he succeeded, but the impact on computing was certainly immense. This is the sort of book that everyone ends up hearing about, if not necessarily reading, in college. I was looking forward to filling a gap in my general knowledge.

Having now read it cover-to-cover, would I recommend others invest the time? Maybe. But probably not.

There are several things this book does well. One of the most significant is that it builds a lexicon and a set of general principles that provide a way of talking about design issues. Lexicons are not the most compelling reading material (see also Design Patterns), but having a common language is useful. I still remember affordances from college (probably from this book or something else based on it). Norman also adds, and defines, signifiers, constraints, mappings, and feedback, and talks about the human process of building a conceptual model of the objects with which one is interacting.

Even more useful, at least in my opinion, is the discussion of human task-oriented behavior. The seven stages of action is a great systematic way of analyzing how humans perform tasks, where those actions can fail, and how designers can help minimize failure. One thing I particularly like about Norman's presentation here is the emphasis on the feedback cycle after performing a task, or a step in a task. That feedback, and what makes good or poor feedback, is (I think) an underappreciated part of design and something that too often goes missing. I thought Norman was a bit too dismissive of simple beeps as feedback (he thinks they don't carry enough information; while that's not wrong, I think they're far superior to no feedback at all), but the emphasis on this point was much appreciated.

Beyond these dry but useful intellectual frameworks, though, Norman seems to have a larger purpose in The Design of Everyday Things: making a passionate argument for the importance of design and for not tolerating poor design. This is where I think his book goes a bit off the rails.

I can appreciate the boosterism of someone who feels an aspect of creating products is underappreciated and underfunded. But Norman hammers on the unacceptability of bad design to the point of tedium, and seems remarkably intolerant of, and unwilling to confront, the reasons why products may be released with poor designs for their eventual users. Norman clearly wishes that we would all boycott products with poor designs and prize usability above most (all?) other factors in our decisions. Equally clearly, this is not happening, and Norman knows it. He even describes some of the reasons why not, most notably (and most difficultly) the fact that the purchasers of many products are not the eventual users. Stoves are largely sold to builders, not kitchen cooks. Light switches are laid out for the convenience of the electrician; here too, the motive for the builder to spend additional money on better lighting controls is unclear. So much business software is purchased by people who will never use it directly, and may have little or no contact with the people who do. These layers of economic separation result in deep disconnects of incentive structure between product manufacturers and eventual consumers.

Norman acknowledges this, writes about it at some length, and then seems to ignore the point entirely, returning to ranting about the deficiencies of obviously poor design and encouraging people to care more about design. This seems weirdly superficial in this foundational of a book. I came away half-convinced that these disconnects of incentive (and some related problems, such as the unwillingness to invest in proper field research or the elaborate, expensive, and lengthy design process Norman lays out as ideal) are the primary obstacle in the way of better-designed consumer goods. If that's the case, then this is one of the largest, if not the largest, obstacle in the way of doing good design, and I would have expected this foundational of a book to tackle it head-on and provide some guidance for how to fight back against this problem. But Norman largely doesn't.

There is some mention of this in the introduction. Apparently much of the discussion of the practical constraints on product design in the business world was added in this revised edition, and perhaps what I'm seeing is the limitations of attempting to revise an existing text. But that also implies that the original took an even harder line against poor design. Throughout, Norman is remarkably high-handed in his dismissal of bad design, focusing more on condemnation than on an investigation of why bad design might happen and what we, as readers, can learn from that process to avoid repeating it. Norman does provide extensive analysis of the design process and the psychology of human interaction, but still left me with the impression that he believes most design failures stem from laziness and stupidity. The negativity and frustration got a bit tedious by the middle of the book.

There's quite a lot here that someone working in design, particularly interface design, should be at least somewhat familiar with: affordances, signifiers, the importance of feedback, the psychological model of tasks and actions, and the classification of errors, just to name a few. However, I'm not sure this book is the best medium for learning those things. I found it a bit tedious, a bit too arrogant, and weirdly unconcerned with feasible solutions to the challenge of mismatched incentives. I also didn't learn that much from it; while the concepts here are quite important, most of them I'd picked up by osmosis from working in the computing field for twenty years.

In that way, The Design of Everyday Things reminded me a great deal of the Gang of Four's Design Patterns, even though it's a more readable book and less of an exercise in academic classification. The concepts presented are useful and important, but I'm not sure I can recommend the book as a book. It may be better to pick up the same concepts as you go, with the help of Internet searches and shorter essays.

Rating: 6 out of 10

24 Oct 2016 4:17am GMT

Dirk Eddelbuettel: World Marathon Majors: Five Star Finisher!

A little over eight years ago, I wrote a short blog post which somewhat dryly noted that I had completed the five marathons constituting the World Marathon Majors. I had completed Boston, Chicago and New York during 2007, adding London and then Berlin (with a personal best) in 2008. The World Marathon Majors existed then, but I was not aware of a website. The organisation was aiming to raise the profile of the professional and very high-end aspect of the sport. But marathoning is funny as they let somewhat regular folks like you and me into the same race. And I always wondered if someone kept track of regular folks completing the suite...

I have been running a little less the last few years, though I did get around to completing the Illinois Marathon earlier this year (I only tweeted about it and still have not added anything to the running section of my blog). But two weeks ago, I was once again handing out water cups at the Chicago Marathon, sending along two tweets when the elite wheelchair and elite male runners flew by. To the first, the World Marathon Majors account replied, which led me to their website. Which in turn led me to the Five Star Finisher page, and the newer / larger Six Star Finisher page now that Tokyo has been added.

And in short, one can now request one's record to be added (if they check out). So I did. And now I am on the Five Star Finisher page!

I don't think I'll ever surpass that as a runner. The table header and my row look like this:

Table header Dirk Eddelbuettel

If only my fifth / sixth grade physical education teacher could see that---he was one of those early running nuts from the 1970s and made us run towards / around this (by now enlarged) pond and boy did I hate that :) Guess it did have some long-lasting effects. And I casually circled the lake a few years ago, starting much further away from my parents' place. Once you are in the groove for distance...

But leaving that aside, running has been fun and with some luck I may have another one or two marathons or Ragnar Relays left. The only really bad part about this is that I may have to get myself to Tokyo after all (for something that is not an ISM workshop) ...

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 Oct 2016 2:41am GMT

Daniel Silverstone: Gitano - Approaching Release - Deprecated commands

As mentioned previously I am working toward getting Gitano into Stretch. Last time we spoke about lace, on which a colleague and friend of mine (Richard Maw) did a large pile of work. This time I'm going to discuss deprecation approaches and building more capability out of fewer features.

First, a little background -- Gitano is written in Lua, which is a deliberately small language whose authors spend more time thinking about what they can remove from the language spec than about what they could add. I first came to Lua in the 3.2 days, a little before 4.0 came out. (The authors provide a lovely timeline in case you're interested.) With each of the releases of Lua which came after 3.2, I was struck by how the authors looked to take a number of features which the language had, and collapse them into more generic, more powerful, smaller, fewer features.

This approach to design stuck with me over the subsequent decade, and when I began Gitano I tried to have the smallest number of core features/behaviours, from which could grow the power and complexity I desired. Gitano is, at its core, a set of files in a single format (clod) stored in a consistent manner (Git) which mediate access to a resource (Git repositories). Some of those files result in emergent properties such as the concept of the 'owner' of a repository (though that can simply be considered the value of the project.owner property for the repository). Indeed the concept of the owner of a repository is a fiction generated by the ACL system with a very small amount of collusion from the core of Gitano. Yet until recently Gitano had a first class command set-owner which would alter that one configuration value.

[gitano]  set-description ---- Set the repo's short description (Takes a repo)
[gitano]         set-head ---- Set the repo's HEAD symbolic reference (Takes a repo)
[gitano]        set-owner ---- Sets the owner of a repository (Takes a repo)

Those of you with Gitano installations may see the above if you ask it for help. Yet you'll also likely see:

[gitano]           config ---- View and change configuration for a repository (Takes a repo)

The config command gives you access to the repository configuration file (which, yes, you could access over git instead, but the config command can be delegated in a more fine-grained fashion without having to write hooks). Given that the config command has all the functionality of the three specific set-* commands shown above, it was time to remove the specific commands.


If you had automation which used the set-description, set-head, or set-owner commands then you will want to switch to the config command before you migrate your server to the current or any future version of Gitano.

In brief, where you had:

ssh git@gitserver set-FOO repo something

You now need:

ssh git@gitserver config repo set project.FOO something

It looks a little more wordy but it is consistent with the other features that are keyed from the project configuration, such as:

ssh git@gitserver config repo set cgitrc.section Fooble Section Name

And, of course, you can see what configuration is present with:

ssh git@gitserver config repo show

Or look at a specific value with:

ssh git@gitserver config repo show specific.key

As always, you can get more detailed (if somewhat cryptic) help with:

ssh git@gitserver help config
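If you have a pile of existing automation to convert, a thin wrapper can keep the old calling convention while using the new command underneath. A rough sketch (the server name is the same example one used above; adjust to your setup):

#!/bin/sh
# set-compat.sh -- translate old "set-FOO repo value" style calls into
# the new "config repo set project.FOO value" form. Illustrative only.
GITANO_SERVER="git@gitserver"

gitano_set() {
    key="$1"     # e.g. description, head, owner
    repo="$2"
    shift 2
    ssh "${GITANO_SERVER}" config "${repo}" set "project.${key}" "$@"
}

# Where automation previously ran: ssh git@gitserver set-description myrepo "Something witty"
gitano_set description myrepo "Something witty"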

Next time I'll try and touch on the new PGP/GPG integration support.

24 Oct 2016 2:24am GMT

Francois Marier: Tweaking Referrers For Privacy in Firefox

The Referer header has been a part of the web for a long time. Websites rely on it for a few different purposes (e.g. analytics, ads, CSRF protection) but it can be quite problematic from a privacy perspective.

Thankfully, there are now tools in Firefox to help users and developers mitigate some of these problems.


In a nutshell, the browser adds a Referer header to all outgoing HTTP requests, revealing to the server on the other end the URL of the page you were on when you placed the request. For example, it tells the server where you were when you followed a link to that site, or what page you were on when you requested an image or a script. There are, however, a few limitations to this simplified explanation.

First of all, by default, browsers won't send a referrer if you place a request from an HTTPS page to an HTTP page. This would reveal potentially confidential information (such as the URL path and query string which could contain session tokens or other secret identifiers) from a secure page over an insecure HTTP channel. Firefox will however include a Referer header in HTTPS to HTTPS transitions unless network.http.sendSecureXSiteReferrer (removed in Firefox 52) is set to false in about:config.

Secondly, using the new Referrer Policy specification, web developers can override the default behaviour for their pages, including on a per-element basis. This can be used either to increase or to reduce the amount of information present in the referrer.
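For example (an illustrative sketch, not a complete list of the policy values), a page can ask the browser to stop sending referrers entirely, or to send only the origin for a particular link:

<!-- Page-wide default: never send a Referer header from this page -->
<meta name="referrer" content="no-referrer">

<!-- Per-element override: send only the origin for this outbound link -->
<a href="https://example.com/" referrerpolicy="origin">example</a>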

Legitimate Uses

Because the Referer header has been around for so long, a number of techniques rely on it.

Armed with the Referer information, analytics tools can figure out:

Another place where the Referer is useful is as a mitigation against cross-site request forgeries. In that case, a website receiving a form submission can reject that form submission if the request originated from a different website.

It's worth pointing out that this CSRF mitigation might be better implemented via a separate header that could be restricted to particularly dangerous requests (i.e. POST and DELETE requests) and only include the information required for that security check (i.e. the origin).

Problems with the Referrer

Unfortunately, this header also creates significant privacy and security concerns.

The most obvious one is that it leaks part of your browsing history to sites you visit as well as all of the resources they pull in (e.g. ads and third-party scripts). It can be quite complicated to fix these leaks in a cross-browser way.

These leaks can also lead to exposing private personally-identifiable information when such information is part of the query string. One of the most high-profile examples is the accidental leakage of user searches by healthcare.gov.

Solutions for Firefox Users

While web developers can use the new mechanisms exposed through the Referrer Policy, Firefox users can also take steps to limit the amount of information they send to websites, advertisers and trackers.

In addition to enabling Firefox's built-in tracking protection by setting privacy.trackingprotection.enabled to true in about:config, which will prevent all network connections to known trackers, users can control when the Referer header is sent by setting network.http.sendRefererHeader to 0 (never send the header), 1 (send it only when clicking on links and similar elements) or 2 (the default: send it for links as well as embedded resources such as images).

It's also possible to put a limit on the maximum amount of information that the header will contain by setting network.http.referer.trimmingPolicy to 0 (the default: send the full URL), 1 (send the URL without its query string) or 2 (send only the scheme, host and port),

or using the network.http.referer.XOriginTrimmingPolicy option (added in Firefox 52) to only restrict the contents of referrers attached to cross-origin requests.

Site owners can opt to share less information with other sites, but they can't share any more than what the user trimming policies allow.

Another approach is to disable the Referer when doing cross-origin requests (from one site to another). The network.http.referer.XOriginPolicy preference can be set to 0 (the default: always send the referrer), 1 (send it only when the base domains match) or 2 (send it only when the full hostnames match).


If you try to remove all referrers (i.e. network.http.sendRefererHeader = 0), you will most likely run into problems on a number of sites, for example:

The first two have been worked around successfully by setting network.http.referer.spoofSource to true, an advanced setting which always sends the destination URL as the referrer, thereby not leaking anything about the original page.

Unfortunately, the last two are examples of the kind of breakage that can only be fixed through a whitelist (an approach supported by the smart referer add-on) or by temporarily using a different browser profile.

My Recommended Settings

As with my cookie recommendations, I recommend strengthening your referrer settings but not disabling (or spoofing) the referrer entirely.

While spoofing does solve many of the breakage problems mentioned above, it also effectively disables the anti-CSRF protections that some sites may rely on and that have tangible user benefits. A better approach is to limit the amount of information that leaks through cross-origin requests.

If you are willing to live with some amount of breakage, you can simply restrict referrers to the same site by setting:

network.http.referer.XOriginPolicy = 2

or to sites which belong to the same organization (i.e. same ETLD/public suffix) using:

network.http.referer.XOriginPolicy = 1

This prevents leaks to third parties while giving websites all of the information that they can already see in their own server logs.

On the other hand, if you prefer a weaker but more compatible solution, you can trim cross-origin referrers down to just the scheme, hostname and port:

network.http.referer.XOriginTrimmingPolicy = 2

I have not yet found user-visible breakage using this last configuration. Let me know if you find any!
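For convenience, the settings discussed above can also be kept in a user.js file rather than toggled by hand in about:config. A sketch combining them (pick whichever cross-origin variant suits you):

// Block connections to known trackers
user_pref("privacy.trackingprotection.enabled", true);

// Stricter: only send referrers to the same site (some breakage likely)
user_pref("network.http.referer.XOriginPolicy", 2);

// More compatible alternative: keep cross-origin referrers but trim them
// down to scheme, hostname and port
// user_pref("network.http.referer.XOriginTrimmingPolicy", 2);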

24 Oct 2016 12:00am GMT

23 Oct 2016

feedPlanet Debian

Carl Chenet: PyMoneroWallet: the Python library for the Monero wallet

Do you know the Monero cryptocurrency? It's a cryptocurrency, like Bitcoin, focused on security, privacy and untraceability. It's a great project, launched in 2014 and today listed as XMR on all cryptocurrency exchange platforms (like Kraken or Poloniex).

So what's new? In order to work with a Monero wallet from Python applications, I just wrote a Python library for the Monero wallet: PyMoneroWallet.


Using PyMoneroWallet is as easy as:

$ python3
>>> from monerowallet import MoneroWallet
>>> mw = MoneroWallet()
>>> mw.getbalance()
{'unlocked_balance': 2262265030000, 'balance': 2262265030000}
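The amounts are returned in Monero's atomic units (1 XMR = 10^12 atomic units), so a small conversion is handy for display. A minimal sketch, assuming a running wallet RPC backend:

from monerowallet import MoneroWallet

ATOMIC_UNITS_PER_XMR = 10**12  # Monero amounts are expressed in atomic units

mw = MoneroWallet()
balance = mw.getbalance()
print('balance:  {:.12f} XMR'.format(balance['balance'] / ATOMIC_UNITS_PER_XMR))
print('unlocked: {:.12f} XMR'.format(balance['unlocked_balance'] / ATOMIC_UNITS_PER_XMR))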

Lots of features are included; you should have a look at the documentation of the monerowallet module to know them all, but here are a few of them:

And so on. Have a look at the complete documentation for the full list of available functions.

UPDATE: I'm trying to launch a crowdfunding campaign for the PyMoneroWallet project. Feel free to comment in this thread of the official Monero forum to let them know you think that PyMoneroWallet is a great idea 😉

Feel free to contribute to this young project to help spread the use of Monero, by using the PyMoneroWallet project in your Python applications 🙂

23 Oct 2016 10:00pm GMT

Vincent Sanders: Rabbit of Caerbannog

Subsequent to my previous use of American Fuzzy Lop (AFL) on the NetSurf bitmap image library, I applied it to the gif library which, after fixing the test runner, failed to produce any crashes but did result in a better test corpus, improving coverage above 90%.

I then turned my attention to the SVG processing library. This was different to the bitmap libraries in that it required parsing a much lower density text format and performing operations on the resulting tree representation.

The test program for the SVG library needed some improvement but is very basic in operation. It takes the test SVG, parses it using libsvgtiny and then uses the parsed output to write out an imagemagick mvg file.

The libsvg processing uses the NetSurf DOM library, which in turn uses an expat binding to parse the SVG XML text. Processing this with AFL required instrumenting not only the SVG library but also the DOM library. I did not initially understand this, and my first run resulted in a "map coverage" figure indicating an issue. Helpfully the AFL docs do cover this, so it was straightforward to rectify.

Once the test program was written and the environment set up, an AFL run was started and left to run. The next day I was somewhat alarmed to discover the fuzzer had made almost no progress and was running very slowly. I asked for help on the AFL mailing list and got a polite and helpful response; basically, I needed to RTFM.

I must thank the members of the AFL mailing list for being so helpful and tolerating someone who ought to know better asking dumb questions.

After reading the fine manual I understood I needed to ensure all my test cases were as small as possible and, further, that the fuzzer needed a dictionary as a hint to the file format, because the text format was of such low data density compared to binary formats.
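An AFL dictionary is simply a list of named tokens. A rough sketch of the kind of entries an SVG dictionary contains (illustrative, not the exact file used) looks like this:

# SVG/XML tokens in AFL's keyword="token" dictionary syntax
header_xml="<?xml"
tag_svg_open="<svg"
tag_svg_close="</svg>"
tag_g="<g"
tag_path="<path"
tag_rect="<rect"
tag_lineargradient="<linearGradient"
attr_d="d="
attr_fill="fill="
attr_viewbox="viewBox="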

Rabbit of Caerbannog. Death awaits you with pointy teeth

I crafted an SVG dictionary based on the XML one, ensured all the seed SVG files were as small as possible and tried again. The immediate result was thousands of crashes, nothing like being savaged by a rabbit to cause a surprise.

Not being in possession of the appropriate Holy Hand Grenade, I resorted instead to GDB and Electric Fence. Unlike the bitmap library crashes, memory bounds issues simply did not feature here. Instead they mainly centered around actual logic errors when constructing and traversing the data structures.

For example, Daniel Silverstone fixed an interesting bug where the XML parser binding would try to go "above" the root node in the tree if the source closed more tags than it opened, which resulted in wild pointers and NULL references.

I found and squashed several others, including handling SVG documents with no valid root element and division-by-zero errors when things like colour gradients have no points.

I find it interesting that the type and texture of the crashes completely changed between the SVG and binary formats. Perhaps it is just the nature of textual formats that causes this, although it might be due to the techniques used to parse the formats.

Once all the immediately reproducible crashes were dealt with, I performed a longer run. I used my monster system as previously described and ran the fuzzer for a whole week.

Summary stats

Fuzzers alive : 10
Total run time : 68 days, 7 hours
Total execs : 9268 million
Cumulative speed : 15698 execs/sec
Pending paths : 0 faves, 2501 total
Pending per fuzzer : 0 faves, 250 total (on average)
Crashes found : 9 locally unique

After burning almost seventy days of processor time, AFL found me another nine crashes and, possibly more importantly, a test corpus that generates over 90% coverage.

A useful tool that AFL provides is afl-cmin. This reduces the number of test files in a corpus to only those that are required to exercise all the code paths reached by the test set. In this case it reduced the number of files from 8242 to 2612.

afl-cmin -i queue_all/ -o queue_cmin -- test_decode_svg @@ 1.0 /dev/null
corpus minimization tool for afl-fuzz by

[+] OK, 1447 tuples recorded.
[*] Obtaining traces for input files in 'queue_all/'...
Processing file 8242/8242...
[*] Sorting trace sets (this may take a while)...
[+] Found 23812 unique tuples across 8242 files.
[*] Finding best candidates for each tuple...
Processing file 8242/8242...
[*] Sorting candidate list (be patient)...
[*] Processing candidates and writing output files...
Processing tuple 23812/23812...
[+] Narrowed down to 2612 files, saved in 'queue_cmin'.

Additionally, the actual information within the test files can be minimised with the afl-tmin tool. This must be run on each file individually and can take a relatively long time. Fortunately, with GNU parallel one can run many of these jobs simultaneously, which merely required another three days of CPU time. The resulting test corpus weighs in at a svelte 15 megabytes or so, against the 25 megabytes before minimisation.
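Strung together, the two steps look roughly like this (an illustrative sketch, not necessarily the exact invocation used):

mkdir -p queue_tmin
ls queue_cmin | parallel afl-tmin -i queue_cmin/{} -o queue_tmin/{} -- test_decode_svg @@ 1.0 /dev/null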

The result is yet another NetSurf library significantly improved by the use of AFL, both from finding and squashing crashing bugs and from having a greatly improved test corpus that allows future library changes with high confidence that there will not be any regressions.

23 Oct 2016 9:27pm GMT

Jaldhar Vyas: What I Did During My Summer Vacation

That's So Raven

If I could sum up the past year in one word, that word would be distraction. There have been so many strange, confusing or simply unforeseen things going on that I have had trouble focusing like never before.

For instance, on the opposite side of the street from me is one of Jersey City's old reservoirs. It's not used for drinking water anymore and the city eventually plans on merging it into the park on the other side. In the meantime it has become something of a wildlife refuge. Which is nice, except one of the newly settled critters was a bird of prey -- the consensus is possibly some kind of hawk or raven. Starting your morning commute under the eyes of a harbinger of death is very goth, and I even learned to deal with the occasional piece of deconstructed rodent on my doorstep, but nighttime was a big problem. For contrary to popular belief, ravens do not quoth "nevermore" but "KRRAAAA". Very loudly. Just as soon as you have drifted off to sleep. Eventually my sleep-deprived neighbors and I appealed to the NJ division of environmental protection to get it removed, but by the time they were ready to swing into action the bird had left for somewhere more congenial like Transylvania or Newark.

Or here are some more complete wastes of time: I go to the doctor for my annual physical. The insurance company codes it as Adult Onset Diabetes by accident. One day I open the lid of my laptop, there's a "ping" sound, and a piece of the hinge flies off. Apparently that also severed the connection to the screen, and naturally the warranty had just expired, so I had to spend the next month tethered to an external monitor until I could afford to buy a new one. Mix in all the usual social, political, family and work drama and you can see that it has been a very trying time for me.


I have managed to get some Debian work done. On Dovecot, my principal package, I have gotten tremendous support from Apollon Oikonomopolous, whom I belatedly welcome as a member of the Dovecot maintainer team. He has been particularly helpful in fixing our systemd support and cleaning out a lot of the old and invalid bugs. We're in pretty good shape for the freeze. Upstream has released an RC of 2.2.26 and hopefully the final version will be out in the next couple of days so we can include it in Stretch. We can always use more help with the package, so let me know if you're interested.


Most of the action has been going on without me, but I've been lending support and sponsoring whenever I can. We have several new DDs and DMs, but still no one north of the Vindhyas, I'm afraid.

Debian Perl Group

gregoa did a ping of inactive maintainers and I regretfully had to admit to myself that I wasn't going to be of use anytime soon so I resigned. Perl remains my favorite language and I've actually been more involved in the meetings of my local Perlmongers group so hopefully I will be back again one day. And I still maintain the Perl modules I wrote myself.


May have gained a recruit.

*Strictly speaking it should be called Debian-People-Who-Dont-Think-Faults-in-One-Moral-Domain-Such-As-For-Example-Axe-Murdering-Should-Leak-Into-Another-Moral-Domain-Such-As-For-Example-Debian but come on, that's just silly.

23 Oct 2016 5:01am GMT

22 Oct 2016

feedPlanet Debian

Ingo Juergensmann: Automatically update TLSA records on new Letsencrypt Certs

I've been using DNSSEC for quite some time now and it is working quite well. When LetsEncrypt went public beta, I jumped on the train and migrated many services to LE-based TLS. However, there was still one small problem with LE certs:

When there is a new cert, all of the old TLSA resource records are not valid anymore and might cause problems for strict DNSSEC-validating clients. It took a while until my pain was big enough to finally fix it with some scripts.

There are at least two scripts involved:

1) dnssec.sh
This script does all of my DNSSEC handling. You can just do a "dnssec.sh enable-dnssec domain.tld" and everything is configured so that you only need to copy the appropriate keys into the web interface of your DNS registry.

host:~/bin# dnssec.sh
No parameter given.
Usage: dnsec.sh MODE DOMAIN

MODE can be one of the following:
enable-dnssec : perform all steps to enable DNSSEC for your domain
edit-zone     : safely edit your zone after enabling DNSSEC
create-dnskey : create new dnskey only
load-dnskey   : loads new dnskeys and signs the zone with them
show-ds       : shows DS records of zone
zoneadd-ds    : adds DS records to the zone file
show-dnskey   : extract DNSKEY record that needs to uploaded to your registrar
update-tlsa   : update TLSA records with new TLSA hash, needs old and new TLSA hashes as additional parameters

For updating zone files, just do a "dnssec.sh edit-zone domain.tld" to add new records and such, and the script will take care of, e.g., increasing the serial of the zone file. I find this very convenient, so I often use this script for non-DNSSEC-enabled domains as well.

However, you can spot the command line option "update-tlsa". This option needs the old and the new TLSA hashes in addition to the domain.tld parameter. In practice, this option is used from the second script:

2) check_tlsa.sh
This is a quite simple Bash script that parses the domains.txt from the letsencrypt.sh script, looks up the old TLSA hash in the zone files (structured in TLD/domain.tld directories), compares the old hash with the new one (obtained by invoking tlsagen.sh) and, if the hashes differ, calls dnssec.sh with the proper parameters:

set -e
for i in `cat /etc/letsencrypt.sh/domains.txt | awk '{print $1}'` ; do
        domain=`echo $i | awk 'BEGIN {FS="."} ; {print $(NF-1)"."$NF}'`
        #echo -n "Domain: $domain"
        TLD=`echo $i | awk 'BEGIN {FS="."}; {print $NF}'`
        #echo ", TLD: $TLD"
        OLDTLSA=`grep -i "in.*tlsa" /etc/bind/${TLD}/${domain} | grep ${i} | head -n 1 | awk '{print $NF}'`
        if [ -n "${OLDTLSA}" ] ; then
                #echo "--> ${OLDTLSA}"
                # Usage: tlsagen.sh cert.pem host[:port] usage selector mtype
                NEWTLSA=`/path/to/tlsagen.sh $LEPATH/certs/${i}/fullchain.pem ${i} 3 1 1 | awk '{print $NF}'`
                #echo "==> $NEWTLSA"
                if [ "${OLDTLSA}" != "${NEWTLSA}" ] ; then
                        /path/to/dnssec.sh update-tlsa ${domain} ${OLDTLSA} ${NEWTLSA} > /dev/null
                        echo "TLSA RR update for ${i}"

So, quite simple and obviously a quick hack. For sure someone else can write a cleaner and more sophisticated implementation to do the same stuff, but at least it works for me™. Use it at your own risk and do whatever you want with these scripts (they are in the public domain).

You can invoke check_tlsa.sh right after your crontab call for letsencrypt.sh; an example entry is sketched below. In a more sophisticated setup it should be fairly easy to invoke these scripts from letsencrypt.sh post hooks as well.
Please find the files attached to this page (remove the .txt extension after saving, of course).
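For example, a crontab entry along these lines (paths and schedule are only illustrative) would renew certificates once a week and then refresh any changed TLSA records:

# m h dom mon dow   command
30 3 * * 1   /usr/local/bin/letsencrypt.sh -c && /usr/local/bin/check_tlsa.sh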

Attachment Size
check_tlsa.sh.txt 812 bytes
dnssec.sh.txt 3.88 KB

22 Oct 2016 10:29pm GMT

Matthieu Caneill: Debugging 101

While teaching a class on concurrent programming this semester, I realized during the labs that most of the students couldn't properly debug their code. They are at the end of a 2-year curriculum and know many different programming languages and frameworks, but when it comes to tracking down a bug in their own code, they often lack the basics. Instead of debugging for them, I tried to give them general directions that they could apply to the next bugs. I will try here to summarize the very first basic things to know about debugging. Because, remember, writing software is 90% debugging, and 10% introducing new bugs (that is not from me, but I could not find the original quote).

So here is my take at Debugging 101.

Use the right tools

Many good tools exist to assist you in writing correct software, and it would put you behind in terms of productivity not to use them. Editors which catch syntax errors while you write them, for example, will help you a lot. And there are many features out there in editors, compilers, debuggers, which will prevent you from introducing trivial bugs. Your editor should be your friend; explore its features and customization options, and find an efficient workflow with them, that you like and can improve over time. The best way to fix bugs is not to have them in the first place, obviously.

Test early, test often

I've seen students write code for an hour before running make, which would then fail so hard that hundreds of lines of errors and warnings were printed. There are two main reasons why doing this is a bad idea:

I recommend testing your code (compilation and execution) every few lines of code you write. When something breaks, chances are it will come from the last line(s) you wrote. Compiler errors will be shorter, and will point you to the same place in the code. Once you get more confident using a particular language or framework, you can write more lines at once without testing. That's a slow process, but it's ok. If you set up the right keybinding for compiling and executing from within your editor, it shouldn't be painful to test early and often.

Read the logs

Spot the places where your program/compiler/debugger writes text, and read it carefully. It can be your terminal (quite often), a file in your current directory, a file in /var/log/, a web page on a local server, anything. Learn where different software write logs on your system, and integrate reading them in your workflow. Often, it will be your only information about the bug. Often, it will tell you where the bug lies. Sometimes, it will even give you hints on how to fix it.

You may have to filter out a lot of garbage to find relevant information about your bug. Learn to spot keywords like error or warning. In long stacktraces, spot the lines concerning your files, because more often than not your own code is to blame rather than deeper library code. grep the logs for relevant keywords. If you have the option, colorize the output. Use tail -f to follow a file getting updated. There are so many ways to grasp logs, so find what works best for you and never forget to use it!
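For example, something along these lines (adapt the file name and keywords to your own setup):

# follow a log as it grows, keeping only errors and warnings, highlighted
tail -f /var/log/syslog | grep --color -iE "error|warning"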

Print foobar

That one doesn't concern compilation errors (unless it's a Makefile error, in which case that file is your code anyway).

When the program logs and output fail to tell you where an error occurred (oh hi, Segmentation fault!), and before having to dive into a memory debugger or system trace tool, spot the portion of your program that causes the bug and add some print statements there. You can either print("foo") and print("bar"), just to know whether your program reaches a certain place in your code, or print(some_faulty_var) to get more insight into your program's state. It will give you precious information.

stderr >> "foo" >> endl;
my_db.connect(); // is this broken?
stderr >> "bar" >> endl;

In the example above, you can be sure it is the connection to the database my_db that is broken if you get foo and not bar on your standard error.

(That is a hypothetical example. If you know something can break, such as a database connection, then you should always enclose it in a try/catch structure.)

Isolate and reproduce the bug

This point is linked to the previous one. You may or may not have isolated the line(s) causing the bug, but maybe the issue is not always raised. It can depend on many other things: the program or function parameters, the network status, the amount of memory available, the decisions of the OS scheduler, the user rights on the system or on some files, etc. More generally, any assumption you made on any external dependency can appear to be wrong (even if it's right 99% of the time). According to the context, try to isolate the set of conditions that trigger the bug. It can be as simple as "when there is no internet connection", or as complicated as "when the CPU load of some external machine is too high, it's a leap year, and the input contains illegal utf-8 characters" (ok, that one is fucked up; but it surely happens!). But you need to reliably be able to reproduce the bug, in order to be sure later that you indeed fixed it.

Of course, when the bug is triggered on every run, it can be frustrating that your program never works, but it will in general be easier to fix.


Always read the documentation before reaching out for help. Be it man, a book, a website or a wiki, you will find precious information there to assist you in using a language or a specific library. It can be quite intimidating at first, but it's often organized the same way. You're likely to find a search tool, an API reference, a tutorial, and many examples. Compare your code against them. Check the FAQ; maybe your bug and its solution are already referenced there.

You'll rapidly find yourself getting used to the way documentation is organized, and you'll be more and more efficient at finding instantly what you need. Always keep the doc window open!

Google and Stack Overflow are your friends

Let's be honest: many of the bugs you'll encounter have been encountered before. Learn to write efficient queries on search engines, and use the knowledge you can find on questions&answers forums like Stack Overflow. Read the answers and comments. Be wise though, and never blindly copy and paste code from there. It can be as bad as introducing malicious security issues into your code, and you won't learn anything. Oh, and don't copy and paste anyway. You have to be sure you understand every single line, so better write them by hand; it's also better for memorizing the issue.

Take notes

Once you have identified and solved a particular bug, I advise to write about it. No need for shiny interfaces: keep a list of your bugs along with their solutions in one or many text files, organized by language or framework, that you can easily grep.

It can seem slightly cumbersome to do so, but it proved (at least to me) to be very valuable. I can often recall I have encountered some buggy situation in the past, but don't always remember the solution. Instead of losing all the debugging time again, I search in my bug/solution list first, and when it's a hit I'm more than happy I kept it.

Further reading on debugging

Remember this was only Debugging 101, that is, the very first steps on how to debug code on your own, instead of getting frustrated and helplessly staring at your screen without knowing where to begin. When you write more software, you'll get used to more efficient workflows, and you'll discover tools that are there to assist you in writing bug-free code and spotting complex bugs efficiently. Listed below are some of the tools or general ideas used to debug more complex software. They belong more in a software engineering course than in a Debugging 101 blog post. But it's good to know as soon as possible that these exist, and if you read the manuals there's no reason you can't rock with them!

Don't hesitate to comment on this, and provide your debugging 101 tips! I'll be happy to update the article with valuable feedback.

Happy debugging!

22 Oct 2016 10:00pm GMT

Iain R. Learmonth: The Domain Name System

As I posted yesterday, we released PATHspider 1.0.0. What I didn't talk about in that post was an event that occurred only a few hours before the release.

Everything was going fine: proofreading of the documentation was in progress, I did a quick git push with the documentation updates and… CI FAILED!?! Our CI doesn't build the documentation; it only tests the core code. I'm planning to release really soon and something has broken.

Starting to panic.

irl@orbiter# ./tests.sh
Ran 16 tests in 0.984s


This makes no sense. Maybe I forgot to add a dependency and it's been broken for a while? I scrutinise the dependencies list and it all looks fine.

In fairness, probably the first thing I should have done is look at the build log in Jenkins, but I've never had a failure that I couldn't reproduce locally before.

It was at this point that I realised there was something screwy going on. A sigh of relief as I realise that there's not a catastrophic test failure but now it looks like maybe there's a problem with the University research group network, which is arguably worse.

Being focussed on getting the release ready, I didn't realise that the Internet was falling apart. Unknown to me, a massive DDoS attack against Dyn, a major DNS host, was in progress. After a few attempts to debug the problem, I hardcoded a line for github.com into /etc/hosts, still believing it to be a localised issue.

I've just removed this line as the problem seems to have resolved itself for now. There are two main points I've taken away from this:

This afternoon I read a post by [tj] on the 57North Planet, and this is where I learnt what had really happened. He mentions multicast DNS and Namecoin as distributed name system alternatives. I'd like to add some more to that list:

Only the first of these is really a distributed solution.

My idea with ICMP Domain Name Messages is that you send an ICMP message towards a webserver. Somewhere along the path, you'll hit either a surveillance or a censorship middlebox. These middleboxes can provide value by caching any DNS replies that they see, so that an ICMP DNS request message is not forwarded; instead a reply is generated to provide the answer to the query. If the middlebox cannot generate a reply, it can still forward the request to other surveillance and censorship boxes.

I think this would be a great secondary use for the NSA and GCHQ boxen on the Internet; it clearly fits within the scope of "defending national security", as if DNS is down the Internet is kinda dead, plus it'd make it nice and easy to find the boxes with PATHspider.

22 Oct 2016 6:15pm GMT