## 25 Jul 2014

### Planet Debian

#### Richard Hartmann: Release Critical Bug report for Week 30

I have been asked to publish bug stats from time to time. Not exactly sure about the schedule yet, but I will try and stick to Fridays, as in the past; this is for the obvious reason that it makes historical data easier to compare. "Last Friday of each month" may or may not be too much. Time will tell.

The UDD bugs interface currently knows about the following release critical bugs:

• In Total: 1511
• Affecting Jessie: 431. That's the number we need to get down to zero before the release. They can be split in two big categories:
• Affecting Jessie and unstable: 383. Those need someone to find a fix, or to finish the work to upload a fix to unstable:
• 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
• 319 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
• Affecting Jessie only: 48. Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
• 0 bugs are in packages that are unblocked by the release team.
• 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

25 Jul 2014 9:58pm GMT

#### Juliana Louback: Extending an xTuple Business Object

xTuple is in my opinion incredibly well designed; the code is clean and the architecture adherent to a standardized structure. All this makes working with xTuple software quite a breeze.

I wanted to integrate JSCommunicator into the web-based xTuple version. JSCommunicator is a SIP communication tool, so my first step was to create an extension for the SIP account data. Luckily for me, the xTuple development team published an awesome tutorial for writing an xTuple extension.

xTuple cleverly uses model based business objects for the various features available. This makes customizing xTuple very straightforward. I used the tutorial mentioned above for writing my extension, but soon noticed my goals were a little different. A SIP account has 3 data fields, these being the SIP URI, the account password and an optional display name. xTuple currently has a business object in the core code for a User Account and it would make a lot more sense to simply add my 3 fields to this existing business object rather than create another business object. The tutorial very clearly shows how to extend a business object with another business object, but not how to extend a business object with only new fields (not a whole new object).

Now maybe I'm just a whole lot slower than most people, but I had a ridiculously hard time figuring this out. Mind you, this is because I'm slow; the xTuple documentation and code are understandable and as self-explanatory as it gets. I think it just takes a bit to get used to. Either way, I thought this just might be useful to others, so here is how I went about it.

Setup

First you'll have to set up your xTuple development environment and fork the xtuple and xtuple-extensions repositories as shown in this handy tutorial. A footnote I'd like to add: please verify that your version of Vagrant (and anything else you install) is the one listed in the tutorial. I think I spent two entire days or more on a wild goose (bug) chase trying to set up my environment, when the cause of all the errors was that I had somehow installed an older version of Vagrant - 1.5.4 instead of 1.6.3. Please don't make the same mistake I did. Actually, if for some reason you get the following error when you try using node:

<<ERROR 2014-07-10T23:52:46.948Z>> Unrecoverable exception. Cannot call method 'extend' of undefined

at /home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:37:39

at Object.<anonymous> (/home/vagrant/dev/xtuple/lib/backbone-x/source/model.js:1364:3)
...



chances are, you have the wrong version. That's what happened to me. The Vagrant Virtual Development Environment automatically installs and configures everything you need; it's ready to go. So if you find yourself installing, updating, running apt-get and so on, you probably did something wrong.
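A quick sanity check before going further (1.6.3 being the version the tutorial listed at the time of writing):

$ vagrant --version
Vagrant 1.6.3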

Coding

So by now we should have the Vagrant Virtual Development Environment set up and the web app up and running and accessible at localhost:8443. So far so good.

Disclaimer: You will note that much of this is similar - or rather, nearly identical - to xTuple's tutorial but there are some small but important differences and a few observations I think might be useful. Other Disclaimer: I'm describing how I did it, which may or may not be 'up to snuff'. Works for me though.

Schema

First let's make a schema for the table we will create with the new custom fields. Be sure to create the correct directory structure, aka /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>/database/source or in my case /path/to/xtuple-extensions/source/sip_account/database/source, and create the file create_sa_schema.sql; 'sa' is the name of my schema. This file will contain the following lines:

do $$
  /* Only create the schema if it hasn't been created already */
  var res,
      sql = "select schema_name from information_schema.schemata where schema_name = 'sa'";
  res = plv8.execute(sql);
  if (!res.length) {
    sql = "create schema sa; grant all on schema sa to group xtrole;";
    plv8.execute(sql);
  }
$$ language plv8;



Of course, feel free to replace 'sa' with your schema name of choice. All the code described here can be found in my xtuple-extensions fork, on the sip_ext branch.

Table

We'll create a table containing your custom fields and a link to an existing table - the table for the existing business object you want to extend. If you're wondering why make a whole new table for a few extra fields, here's a good explanation, the case in question is adding fields to the Contact business object.

You need to first figure out what table you want to link to. This might not be uber easy. I think the best way to go about it is to look at the ORMs. The xTuple ORMs are a JSON mapping between the SQL tables and the object-oriented world above the database. They're .json files found at /path/to/xtuple/enyo-client/database/orm/models for the core business objects and at /path/to/xtuple/enyo-client/extensions/source/<EXTENSION NAME>/database/orm/models for extension business objects. I'll give two examples. If you look at contact.json you will see that the Contact business object refers to the table "cntct". Look for the "type": "Contact" on the line above, so we know it's the "Contact" business object. In my case, I wanted to extend the UserAccount and UserAccountRelation business objects, so check out user_account.json. The table listed for UserAccount is xt.usrinfo and the table listed for UserAccountRelation is xt.usrlite. A closer look at the sql files for these tables (usrinfo.sql and usrlite.sql) revealed that usrinfo is in fact a view and usrlite is 'A light weight table of user information used to avoid punishingly heavy queries on the public usr view'. I chose to refer to xt.usrlite; the other table names gave me error messages when I tried them.

Now I'll make the file /path/to/xtuple-extensions/source/sip_account/database/source/usrlitesip.sql, to create a table with my custom fields plus the link to the usrlite table. Don't quote me on this, but I'm under the impression that this is the norm for naming the sql file joining tables: the name of the table you are referring to ('usrlite' in this case) plus your extension's name. Content of usrlitesip.sql:

select xt.create_table('usrlitesip', 'sa');

select xt.add_column('usrlitesip','usrlitesip_id', 'serial', 'primary key', 'sa');

comment on table sa.usrlitesip is 'Joins User with SIP account';



Breaking it down, line 1 creates the table named 'usrlitesip' (no duh), line 2 is for the primary key (self-explanatory). You can then add any columns you like, just be sure to add one that references the table you want to link to. I checked usrlite.sql and saw the primary key is usr_username, be sure to use the primary key of the table you are referencing.
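For illustration, here is a sketch of what the remaining add_column calls might look like; the column names are taken from the ORM shown further down (usrlitesip_uri, usrlitesip_name, usrlitesip_password and the usrlitesip_usr_username reference), but the types and the constraint string are my assumptions, so double-check them against the xTuple sources:

select xt.add_column('usrlitesip', 'usrlitesip_usr_username', 'text', 'references xt.usrlite (usr_username)', 'sa');
select xt.add_column('usrlitesip', 'usrlitesip_uri', 'text', '', 'sa');
select xt.add_column('usrlitesip', 'usrlitesip_name', 'text', '', 'sa');
select xt.add_column('usrlitesip', 'usrlitesip_password', 'text', '', 'sa');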

You can check what you made by executing the .sql files like so:

$ cd /path/to/xtuple-extensions/source/sip_account/database/source
$ psql -U admin -d dev -f create_sa_schema.sql
$ psql -U admin -d dev -f usrlitesip.sql

After which you will see the table with the columns you created if you enter:

$ psql -U admin -d dev -c "select * from sa.usrlitesip;"



Now create the file /path/to/xtuple-extensions/source/sip_account/database/source/manifest.js to put the files together and in the right order. It should contain:

{
"name": "sip_account",
"version": "1.4.1",
"comment": "Sip Account extension",
"dependencies": ["crm"],
"databaseScripts": [
"create_sa_schema.sql",
"usrlitesip.sql",
"register.sql"
]
}



I think the "name" has to be the same you named your extension directory as in /path/to/xtuple-extensions/source/<YOUR EXTENSION NAME>. I think the "comment" can be anything you like and you want your "loadOrder" to be high so it's the last thing installed (as it's an add on.) So far we are doing exactly what's instructed in the xTuple tutorial. It's repetitive, but I think you can never have too many examples to compare to. In "databaseScripts" you will list the two .sql files you just created for the schema and the table, plus another file to be made in the same directory named register.sql.

I'm not sure why you have to make the register.sql or even if you indeed have to. If you leave the file empty, there will be a build error, so put a ';' in the register.sql or remove the line "register.sql" from manifest.js as I think for now we are good without it.

Now let's update the database with our new extension:

$ cd /path/to/xtuple
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ psql -U admin -d dev -c "select * from xt.ext;"

That last command should display a table with a list of extensions: the ones already in xtuple like 'crm' and 'billing' and some others, plus your new extension, in this case 'sip_account'. When you run build_app.js you'll probably see a message along the lines of "<Extension name> has no client code, not building client code" and that's fine, because yeah, we haven't worked on the client code yet.

ORM

Here's where things start getting different. ORMs link your object to an SQL table. But we DON'T want to make a new business object, we want to extend an existing business object, so the ORM we will make will be a little different from the one in the xTuple tutorial. Steve Hackbarth kindly explained this new business object/existing business object ORM concept here.

First we'll create the directory /path/to/xtuple-extensions/source/sip_account/database/orm/ext, according to xTuple convention. ORMs for new business objects would be put in /path/to/xtuple-extensions/source/sip_account/database/orm/models.

Now we'll create the .json file /path/to/xtuple-extensions/source/sip_account/database/orm/ext/user_account.json for our ORM. Once again, don't quote me on this, but I think the name of the file should be the name of the business object you are extending, as is done in the tutorial example extending the Contact object. In our case, UserAccount is defined in user_account.json and that's what I named my extension ORM too. Here's what you should place in it:

 1 [
 2   {
 3     "context": "sip_account",
 4     "nameSpace": "XM",
 5     "type": "UserAccount",
 6     "table": "sa.usrlitesip",
 7     "isExtension": true,
 8     "isChild": false,
 9     "comment": "Extended by Sip",
10     "relations": [
11       {
12         "column": "usrlitesip_usr_username",
13         "inverse": "username"
14       }
15     ],
16     "properties": [
17       {
18         "name": "uri",
19         "attr": {
20           "type": "String",
21           "column": "usrlitesip_uri",
22           "isNaturalKey": true
23         }
24       },
25       {
26         "name": "displayName",
27         "attr": {
28           "type": "String",
29           "column": "usrlitesip_name"
30         }
31       },
32       {
33         "name": "sipPassword",
34         "attr": {
35           "type": "String",
36           "column": "usrlitesip_password"
37         }
38       }
39     ],
40     "isSystem": true
41   },
42   {
43     "context": "sip_account",
44     "nameSpace": "XM",
45     "type": "UserAccountRelation",
46     "table": "sa.usrlitesip",
47     "isExtension": true,
48     "isChild": false,
49     "comment": "Extended by Sip",
50     "relations": [
51       {
52         "column": "usrlitesip_usr_username",
53         "inverse": "username"
54       }
55     ],
56     "properties": [
57       {
58         "name": "uri",
59         "attr": {
60           "type": "String",
61           "column": "usrlitesip_uri",
62           "isNaturalKey": true
63         }
64       },
65       {
66         "name": "displayName",
67         "attr": {
68           "type": "String",
69           "column": "usrlitesip_name"
70         }
71       },
72       {
73         "name": "sipPassword",
74         "attr": {
75           "type": "String",
76           "column": "usrlitesip_password"
77         }
78       }
79     ],
80     "isSystem": true
81   }
82 ]

Note the "context" is my extension name, because the context + nameSpace + type combo has to be unique. We already have a UserAccount and UserAccountRelation object in the "XM" namespace in the "xtuple" context in the original user_account.json; now we will also have a UserAccount and UserAccountRelation object in the "XM" namespace in the "sip_account" context.

What else is important? Note that "isExtension" is true on lines 7 and 47, and the "relations" item contains the "column" of the foreign key we referenced. This is something you might want to verify: "column" (lines 12 and 52) is the name of the attribute on your table. When we made a reference to the primary key usr_username from the xt.usrlite table, we named that column usrlitesip_usr_username. But the "inverse" is the attribute name associated with the original sql column in the original ORM. Did I lose you? I had a lot of trouble with this silly thing. In the original ORM that created a new UserAccount business object, the primary key attribute is named "username", as can be seen here. That is what should be used for the "inverse" value. Not the sql column name (usr_username) but the object attribute name (username). I'm emphasizing this because I made that mistake and if I can spare you the pain I will.

If we rebuild our extension everything should come along nicely, but you won't see any changes just yet in the web app because we haven't created the client code.

Client

Create the directory /path/to/xtuple-extensions/source/sip_account/client, which is where we'll keep all the client code.

Extend Workspace View

I want the fields I added to show up on the form to create a new User Account, so I need to extend the view for the User Account workspace. I'll start by creating a directory /path/to/xtuple-extensions/source/sip_account/client/views and in it creating a file named 'workspace.js' containing this code:

XT.extensions.sip_account.initWorkspace = function () {
  var extensions = [
    {kind: "onyx.GroupboxHeader", container: "mainGroup", content: "_sipAccount".loc()},
    {kind: "XV.InputWidget", container: "mainGroup", attr: "uri"},
    {kind: "XV.InputWidget", container: "mainGroup", attr: "displayName"},
    {kind: "XV.InputWidget", container: "mainGroup", type: "password", attr: "sipPassword"}
  ];

  XV.appendExtension("XV.UserAccountWorkspace", extensions);
};

So I'm initializing my workspace and creating an array of items to add (append) to the view XV.UserAccountWorkspace. The first 'item' is this onyx.GroupboxHeader, which is a pretty divider for my new form fields, the kind you find in the web app at Setup > User Accounts, like 'Overview'. I have no idea what other options there are for container other than "mainGroup", so let's stick to that. I'll explain content: "_sipAccount".loc() in a bit. Next I created three input fields of the XV.InputWidget kind. This also confused me a bit, as there are different kinds of input to be used, like dropdowns and checkboxes. The only advice I can give is to snoop around the web app, find an input you like and look up the corresponding workspace.js file to see what was used.

What we just did is (should be) enough for the new fields to show up on the User Account form. But before we see things change, we have to package the client. Create the file /path/to/xtuple-extensions/source/sip_account/client/views/package.js. This file is needed to 'package' groups of files and indicates the order the files should be loaded (for more on that, see this). For now, all the file will contain is:

enyo.depends(
  "workspace.js"
);

You also need to package the 'views' directory containing workspace.js, so create the file /path/to/xtuple-extensions/source/sip_account/client/package.js and in it show that the directory 'views' and its contents must be part of the higher level package:

enyo.depends(
  "views"
);

I like to think of it as a box full of smaller boxes. This will sound terrible, but apparently you also need to create the file /path/to/xtuple-extensions/source/sip_account/client/core.js containing this line (note the namespace matches the extension name used in workspace.js):

XT.extensions.sip_account = {};

I don't know why this needs its own file. As soon as I find out I'll be sure to inform you.

As we've added a file to the client directory, be sure to update /path/to/xtuple-extensions/source/sip_account/client/package.js so it includes the new file:

enyo.depends(
  "core.js",
  "views"
);

Translations

Remember "_sipAccount".loc() in our workspace.js file? xTuple has great internationalization support and it's easy to use. Just create the directory and file /path/to/xtuple-extensions/source/sip_account/client/en/strings.js and in it put key-value pairs for labels and their translation, like this:

(function () {
  "use strict";

  var lang = XT.stringsFor("en_US", {
    "_sipAccount": "Sip Account",
    "_uri": "Sip URI",
    "_displayName": "Display Name",
    "_sipPassword": "Password"
  });

  if (typeof exports !== 'undefined') {
    exports.language = lang;
  }
}());

So far I included all the labels I used in my Sip Account form. If you write the wrong label (key) or forget to include a corresponding key-value pair in strings.js, xTuple will simply name your label "_labelName", underscore and all.

Now build your extension and start up the server:

$ cd /path/to/xtuple
$ ./scripts/build_app.js -d dev -e ../xtuple-extensions/source/sip_account
$ node node-datasource/main.js



If the server is already running, just stop it and restart it to reflect your changes.

Now if you go to Setup > User Accounts and click the "+" button, you should see a nice little addition to the form with a 'Sip Account' divider and three new fields. Nice, eh?

Extend Parameters

Currently you can search your User Accounts list using any of the User Account fields. It would be nice to be able to search with the Sip account fields we added as well. To do that, let's create the directory /path/to/xtuple-extensions/source/sip_account/client/widgets and there create the file parameter.js to extend XV.UserAccountListParameters. Once again, you'll have to look this up. In the xTuple code you'll find the application's parameter.js in /path/to/xtuple/enyo-client/application/source/widgets. Search for the business object you are extending (for example, XV.UserAccount) and look for some combination of the business object name and 'Parameters'. If there's more than one, try different ones. Not a very refined method, but it worked for me. Here's the content of our parameter.js:

XT.extensions.sip_account.initParameterWidget = function () {

var extensions = [
{name: "uri", label: "_uri".loc(), attr: "uri", defaultKind: "XV.InputWidget"},
{name: "displayName", label: "_displayName".loc(), attr: "displayName", defaultKind: "XV.InputWidget"}
];

XV.appendExtension("XV.UserAccountListParameters", extensions);
};



Note that I didn't include a search field for the password attribute, for obvious reasons. Now once again, we package this new code addition by creating a /path/to/xtuple-extensions/source/sip_account/client/widgets/package.js file:

enyo.depends(
"parameter.js"
);



We also have to update /path/to/xtuple-extensions/source/sip_account/client/package.js:

enyo.depends(
"core.js",
"widgets"
"views"
);



Rebuild the extension (and restart the server) and go to Setup > User Accounts. Press the magnifying glass button on the upper left side of the screen and you'll see many options for filtering the User Accounts, among them the SIP Uri and Display Name.

Extend List View

You might want your new fields to show up on the list of User Accounts. I figured out a way to do this that looks strange and kind of incorrect, but it's apparently working.

Create the file /path/to/xtuple-extensions/source/sip_account/client/views/list.js and add the following:

enyo.kind({
name: "XV.UserAccountList",
kind: "XV.List",
label: "_userAccounts".loc(),
collection: "XM.UserAccountRelationCollection",
parameterWidget: "XV.UserAccountListParameters",
query: {orderBy: [
]},
components: [
{kind: "XV.ListItem", components: [
{kind: "FittableColumns", components: [
{kind: "XV.ListColumn", classes: "short", components: [
{kind: "XV.ListAttr", attr: "username", isKey: true}
]},
{kind: "XV.ListColumn", classes: "short", components: [
{kind: "XV.ListAttr", attr: "propername"}
]},
{kind: "XV.ListColumn", classes: "last", components: [
{kind: "XV.ListAttr", attr: "uri"}
]}
]}
]}
]
});

XV.registerModelList("XM.UserAccountRelation", "XV.UserAccountList");



This is actually what's in /path/to/xtuple/enyo-client/application/source/views/list.js - the entire highlighted part. All I did was add this to "components" after line 18:

  {kind: "XV.ListColumn", classes: "last", components: [
{kind: "XV.ListAttr", attr: "uri"}
]}



I found this at random after a lot of trial and error. It's strange because if you encapsulate that code with

XT.extensions.sip_account.initList = function () {
//Code here
};



as is done with parameter.js and workspace.js (and in the xTuple tutorial you are supposed to do that with a new business object), it doesn't work. I have no idea why. This might be 'wrong' or against xTuple coding norms; I will find out and update this post ASAP. But it does work this way. * shrugs *

That said, as we've created the list.js file, we need to add it to our package by editing /path/to/xtuple-extensions/source/sip_account/client/views/package.js:

enyo.depends(
"list.js",
"workspace.js"
);



That's all. Rebuild the app and restart your server and when you select Setup > User Accounts in the web app you should see the Sip URI displayed on the User Accounts that have the Sip Account data. Add a new User Account to try this out.

25 Jul 2014 3:06pm GMT

#### Steve Kemp: The selfish programmer

Once upon a time I wrote a piece of software for scheduling the classes available to a college.

There was a bug in the scheduler: Students who happened to be named 'Steve Kemp' had a significantly higher chance (>=80% IIRC) of being placed in lessons where the class makeup was more than 50% female.

This bug was never fixed. Which was nice, because I spent several hours both implementing and disguising this feature.

I was a bad coder when I was a teenager.

These days I'm still a bad coder, but in different ways.

25 Jul 2014 1:16pm GMT

#### Wouter Verhelst: Multiarchified eID libraries for Debian

A few weeks back, I learned that some government web interfaces require users to download a PDF file, sign it with their eID, and upload the signed PDF document. On Linux, the only way to do this appeared to be to download Adobe Reader for Linux, install the eID middleware, make sure that the former would use the latter, and from there things would just work.

Except for the bit where Adobe Reader didn't exist in a 64-bit version. Since the eid middleware packages were not multiarch ready, that meant you couldn't use Adobe Reader to create signatures with your eID card on a 64-bit Linux distribution. Which is, pretty much, "just about everything out there".

For at least the Debian packages, that has been fixed now (I still need to handle the RPM side of things, but that's for later). When I wanted to test just now if everything would work right, however...

... I noticed that Adobe no longer provides any downloads of the Linux version of Adobe Reader. They're just gone. There is an ftp.adobe.com containing some old versions, but nothing more recent than a 5.x version.

Well, I suppose that settles that, then.

Regardless, the middleware package has been split up and multiarchified, and is ready for early adopters. If you want to try it out, you should:

• run dpkg --add-architecture i386 if you haven't yet enabled multiarch
• Install the eid-archive package, as usual
• Edit /etc/apt/sources.list.d/eid.list, and enable the continuous repository (that is, remove the # at the beginning of the line)
• run dpkg-reconfigure eid-archive, so that the key for the continuous repository is enabled
• run apt-get update
• run apt-get -t continuous install eid-mw to upgrade your middleware to the version in continuous
• run apt-get -t continuous install libbeidpkcs11-0:i386 to install the 32-bit middleware version.
• run your 32-bit application and sign things.

You should, however, note that the continuous repository is named so because it contains the results of our continuous integration system; that is, every time a commit is done to the middleware, packages in this repository are updated automatically. This means the software in the continuous repository might break. Or it might eat your firstborn. Or it might cause nasal daemons. As such, FedICT does not support these versions of the middleware. Don't try the above if you're not prepared to deal with that...

25 Jul 2014 11:44am GMT

#### Tim Retout: London.pm's July 2014 tech meeting

Last night, I went to the London.pm tech meeting, along with a couple of colleagues from CV-Library. The talks, combined with the unusually hot weather we're having in the UK at the moment, combined with my holiday all last week, make it feel like I'm at a software conference. :)

The highlight for me was Thomas Klausner's talk about OX (and AngularJS). We bought him a drink at the pub later to pump him for information about using Bread::Board, with some success. It was worth the long, late commute back to Southampton.

All very enjoyable, and I hope they have more technical meetings soon. I'm planning to attend the London Perl Workshop later in the year.

25 Jul 2014 7:36am GMT

#### Gunnar Wolf: Nice read: «The Fasinatng … Frustrating … Fascinating History of Autocorrect»

A long time ago, I did some (quite minor!) work on natural language parsing. Most of what I got was the very basic rudiments on what needs to be done to begin with. But I like reading some texts on the subject every now and then.

I am also a member of the ACM - Association for Computing Machinery. Most of you will be familiar with it; it's one of the main scholarly associations for the field of computing. One of the basic perks of being an ACM member is the subscription to a very nice magazine, Communications of the ACM. And, of course, although I enjoy the physical magazine, I like reading some columns and articles as they appear along the month using the RSS feeds. They also often contain pointers to interesting reads on other media - as happened today. I found quite a nice article, I think, worth sharing with whoever thinks I have interesting things to say.

They published a very short blurb titled The Fasinatng … Frustrating … Fascinating History of Autocorrect. I was somewhat skeptical when I saw it merely links to an identically named article, published in Wired. But I gave it a shot, anyway...

The article follows a style that's often abused and not very amusing, but I think was quite well done: The commented interview. Rather than just drily following through an interview, the writer tells us a story about that interview. And this is the story of Gideon Lewis-Kraus interviewing Dean Hachamovitch, the creator of the much hated (but very much needed) autocorrect feature that appeared originally in Microsoft Word.

The story of Hachamovitch's work (and its derivations, to the much maligned phone input predictors) over the last twenty-something years is very light to read, very easy to enjoy. I hope you find it as interesting as I did.

25 Jul 2014 3:18am GMT

## 24 Jul 2014

### Planet Debian

#### Craig Small: PHP uniqid() not always a unique ID

For quite some time modern versions of JFFNMS have had a problem. In large installations hosts would randomly appear as down with the reachability interface going red. All other interface types worked, just this one.

Reachability interfaces are odd, because they call fping or fping6 to do the work. The reason is that to run a ping program you need root access to a socket, and doing that is far too difficult and scary in PHP, which is what JFFNMS is written in.

To capture the output of fping, the program is executed and the output captured to a temporary file. For my tiny setup this worked fine; for a lot of small setups this was also fine. For larger setups, it was not fine at all. Random failed interfaces and, most bizarrely of all, even a file apparently disappearing. The program checked that a file existed and then ran stat in a loop to see if data was there. The file-exists check worked but the stat said file not found.

At first I thought it was some odd load related problem, perhaps the filesystem not being happy and having a file there but not really there. That was, until someone said "Are these numbers supposed to be the same?"

The numbers he was referring to were the filename id of the temporary file. They were most DEFINITELY not supposed to be the same. They were supposed to be unique. Why were they always unique for me and not for large setups?

The problem is with the uniqid() function. It is basically a hex representation of the time. Large setups often have large numbers of child processes for polling devices. As the number of poller children increases, the chance that two child processes start the reachability poll at the same time and have the same uniqid increases. It's why the problem happened, but not all the time.

The stat error was another symptom of this bug, what would happen was:

• Child 1 starts the poll, temp filename abc123
• Child 2 starts the poll in the same microsecond, temp filename is also abc123
• Child 1's and Child 2's wait poller starts, sees that the temp file exists and goes into a loop of stat and wait until there is a result
• Child 1 finishes, grabs the details, deletes the temporary file
• Child 2 loops, tries to run stat but finds no file

Who finishes first is entirely dependent on how quickly the fping returns, and that is dependent on how quickly the remote host responds to pings, so it's kind of random.

A minor patch switched the code to tempnam() instead of uniqid(), adding the interface ID to the mix for good measure (no two children will poll the same interface; the parent's scheduler makes sure of that). The initial response is that it is looking good.
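To make the race and the fix concrete, here is a minimal PHP sketch of the pattern, not the actual JFFNMS patch; the interface ID and the fping invocation are invented for illustration:

<?php
// Racy: two poller children started in the same microsecond can get
// the same uniqid(), and therefore the same temporary file name.
$racy = "/tmp/poll_" . uniqid() . ".txt";

// Safer: tempnam() atomically creates a unique file, and folding in
// the interface ID keeps names distinct across poller children.
$interface_id = 42; // hypothetical interface being polled
$tmpfile = tempnam(sys_get_temp_dir(), "poll_{$interface_id}_");
exec("fping -C 5 192.0.2.1 > " . escapeshellarg($tmpfile) . " 2>&1");
$output = file_get_contents($tmpfile);
unlink($tmpfile);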

24 Jul 2014 12:17pm GMT

#### Martin Pitt: vim config for Markdown+LaTeX pandoc editing

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I've always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That's how it always should have been! ☺
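As a tiny, made-up illustration of that mix, pandoc happily builds a document like this:

Some *emphasis*, a list:

- inline math just works: $e^{i\pi} + 1 = 0$
- and raw LaTeX drops straight through:

\begin{equation}
\int_0^1 x^2\,dx = \tfrac{1}{3}
\end{equation}

Running pandoc -t beamer notes.md -o notes.pdf over the same kind of source produces slides, which is exactly what the shortcuts below bind.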

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX -------------------------------------------

function s:MDSettings()
noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR>
noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR>
noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR>

" adjust syntax highlighting for LaTeX parts
"   inline formulas:
syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$"
"   environments:
syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement
"   commands:
syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement
endfunction

autocmd FileType markdown :call <SID>MDSettings()


That gives me "good enough" (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

24 Jul 2014 9:38am GMT

#### Matthew Palmer: First Step with Clojure: Terror

$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Transferring 5K from central
Transferring 4K from central
Transferring 3311K from central
[...]



Wait… what? lein downloads some random JARs from a website over HTTP[1], with, as far as I can tell, no verification that what I'm asking for is what I'm getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that's no safety net - if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA256 is where all the cool kids are at these days). Finally, jarsigner tells me that there's no signature on the .jar itself, either.

It gets better, though. The repo1.maven.org site is served by the fastly.net[2] pseudo-CDN[3], which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I've seen Debian take a kicking more than once because packages aren't individually signed, or because packages aren't served over HTTPS. But at least Debian's packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there's no way for me (or anyone) to trust the signature - the signature was made by a key that's signed by one other key, which itself has no signatures. If I were an attacker, it wouldn't be hard for me to replace that key chain with one of my own devising.
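To illustrate what checking one of those signatures by hand looks like, a sketch (the artifact is just one Maven Central happens to host; the result depends on your keyring):

$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.6.0/clojure-1.6.0.jar
$ wget http://repo1.maven.org/maven2/org/clojure/clojure/1.6.0/clojure-1.6.0.jar.asc
$ gpg --verify clojure-1.6.0.jar.asc clojure-1.6.0.jar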

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it's pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn't hard to do, and there's no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they're a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a "maven.org distribution root" key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn't be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn't be perfect, but don't make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don't get to ignore it just because of that, when you're creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.

1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that's just icing on the shit cake.

2. At one point in the past, my then-employer (a hosting provider) blocked Fastly's caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn't help me to distinguish between "I'm a professionally-run distributed proxy" and "I'm a pwned box here to hammer your site into the ground").

3. Pretty much all of the new breed of so-called CDNs aren't actually pro-actively distributing content, they're just proxies. That isn't a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they're providing "advanced" capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

24 Jul 2014 12:30am GMT

## 23 Jul 2014

### Planet Debian

#### Russ Allbery: WebAuth 4.6.1

This is a bug-fix release of the WebAuth site-wide web authentication system. As is typical, I accumulated a variety of minor bug fixes and improvements that I wanted to get into a release before starting larger work (in this case, adding JSON support for the user information service protocol).

The most severe bug fix is something that only folks at Stanford would notice: support for AuthType StanfordAuth was broken in the 4.6.0 release. This is for legacy compatibility with WebAuth 2.5. It has been fixed in this release.

In other, more minor bug fixes, build issues when remctl support is disabled have been fixed, expiring password warnings are shown in WebLogin after any POST-based authentication, the confirmation page is forced if authorization identity switching is available, the username field is verified before multifactor authentication to avoid subsequent warnings, newlines and tabs are allowed in the XML sent from the WebKDC for user messages, empty RT and ST parameters are correctly diagnosed, and there are some documentation improvements.

The main new feature in this release is support for using FAST armor during password authentication in mod_webkdc. A new WebKdcFastArmorCache directive can be set to point at a Kerberos ticket cache to use for FAST armor. If set, FAST is required, so the KDC must support it as well. This provides better wire security for the initial password authentication to protect against brute-force dictionary attacks against the password by a passive eavesdropper.
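In Apache configuration terms that presumably amounts to a single new directive next to the existing WebKdc settings; the cache path here is an invented example:

# Kerberos ticket cache whose credentials are used as FAST armor
WebKdcFastArmorCache /var/lib/webkdc/armor_cache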

This release also adds a couple of new factor types, mp (mobile push) and v (voice), that Stanford will use as part of its Duo Security integration.

Note that, for the FAST armor feature, there is also an SONAME bump in the shared library in this release. Normally, I wouldn't bump the SONAME in a minor release, but in this case the feature was fairly minor and most people will not notice the change, so it didn't feel like it warranted a major release. I'm still of two minds about that, but oh well, it's done and built now. (At least I noticed that the SONAME bump was required prior to the release.)

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

23 Jul 2014 10:59pm GMT

#### Lior Kaplan: Testing PHPNG on Debian/Ubuntu

We (at Zend) want to help people get more involved in testing PHPNG (PHP next generation), so we've started to provide binaries for it, although it's still a branch on top of PHP's master branch. See more details about PHPNG in Zeev Suraski's blog post.

The binaries (64bit) are compatible with Debian testing/unstable and Ubuntu Trusty (14.04) and up. The mod_php is built for Apache 2.4 which all three flavors have.

The repository is at http://repos.zend.com/zend-server/early-access/phpng/

Installation instructions:

# wget http://repos.zend.com/zend.key -O- 2> /dev/null | apt-key add -
# echo "deb http://repos.zend.com/zend-server/early-access/phpng/ trusty zend" > /etc/apt/sources.list.d/phpng.list
# apt-get update
# apt-get install php5

For the task of providing these binaries, I had the pleasure of combining my experience as a member of the Debian PHP team and a Debian Developer with stuff more internal to the PHP development process. Using the already existing Debian packaging enabled me to test more build scenarios easily (and report problems accordingly). Hopefully this could also be translated back into providing more experimental packages for Debian and making sure Debian packages are ready for the PHP release after 5.6.


23 Jul 2014 9:01pm GMT

#### Petter Reinholdtsen: 98.6 percent done with the Norwegian draft translation of Free Culture

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally finished translating the book text. There are still some foot/end notes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proof reading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects during the last year, and the progress graph of the translation shows this very well.

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML version available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

23 Jul 2014 8:40pm GMT

#### Michael Prokop: Book Review: The Docker Book

Docker is an open-source project that automates the deployment of applications inside software containers. I'm responsible for a docker setup with Jenkins integration and a private docker-registry setup at a customer and pre-ordered James Turnbull's "The Docker Book" a few months ago.

Recently James - he's working for Docker Inc - released the first version of the book and thanks to being on holidays I already had a few hours to read it AND blog about it. (Note: I've read the Kindle version 1.0.0 and all the issues I found and reported to James have been fixed in the current version already, jey.)

The book is very well written and covers all the basics to get familiar with Docker and in my opinion it does a better job at that than the official user guide because of the way the book is structured. The book is also a more approachable way for learning some best practices and commonly used command lines than going through the official reference (but reading the reference after reading the book is still worth it).

I like James' approach with "ENV REFRESHED_AT $TIMESTAMP" for better controlling the cache behaviour and definitely consider using this in my own setups as well. What I wasn't aware of is that you can directly invoke "docker build $git_repos_url", and I further noted a few command line switches I should get more comfortable with. I also plan to check out the Automated Builds on Docker Hub.
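For context, a sketch of that cache-busting trick as I understand it from the book (base image and package are arbitrary examples):

FROM debian:wheezy
# Bump this value to invalidate Docker's build cache from this line on
ENV REFRESHED_AT 2014-07-23
RUN apt-get update && apt-get install -y nginx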

There are some references to further online resources, which is relevant especially for the more advanced use cases, so I'd recommend to have network access available while reading the book.

What I'm missing in the book are best practices for running a private docker-registry in a production environment (high availability, scaling options,…). The provided Jenkins use cases are also very basic and nothing I personally would use. I'd also love to see how other folks are using the Docker plugin, the Docker build step plugin or the Docker build publish plugin in production (the plugins aren't covered in the book at all). But I'm aware that these are fast-moving parts and specialised use cases - upcoming versions of the book are already supposed to cover orchestration with libswarm, developing Docker plugins and more advanced topics, so I'm looking forward to further updates of the book (which you get for free as an existing customer, another plus).

Conclusion: I enjoyed reading the Docker book and can recommend it, especially if you're either new to Docker or want to get further ideas and inspirations what folks from Docker Inc consider best practices.

23 Jul 2014 8:16pm GMT

#### Tanguy Ortolo: GNU/Linux graphic sessions: suspending your computer

Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically: only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to be usable. With newer updates, it stopped working again, but this time it seems it is gone for good:

\$ dbus-send --system --print-reply \
--dest='org.freedesktop.UPower' \
/org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with
signature "" on interface "org.freedesktop.UPower" doesn't exist


The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
/usr/sbin/pm-suspend-hybrid, \
/usr/sbin/pm-hibernate


That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be used with a key binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
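For instance, a minimal ~/.xbindkeysrc entry could bind the keyboard's sleep key to the sudo rule above (key name and rule assumed as shown):

# Suspend when the sleep key is pressed
"sudo pm-suspend"
    XF86Sleep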

23 Jul 2014 12:45pm GMT

#### Francesca Ciceri: Adventures in Mozillaland #3

Yet another update from my internship at Mozilla, as part of the OPW.

A brief one, this time, sorry.

### Bugs, Bugs, Bugs, Bacon and Bugs

I've continued with my triaging/verifying work and I feel now pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) where to go from now on.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.

Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put in comments basic information on how to better debug their component/product, but trust me: this will make you happy in the long run.
A wiki page with basic information on how to debug problems for your component is also a good idea, as long as that page is easy to find ;).

So, big shout-out for MattN for a very useful comment!

### Community

After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th. The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs and what information to ask of the reporter to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held on #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!

See you on Friday! :)

23 Jul 2014 11:04am GMT

#### Steinar H. Gunderson: The sad state of Linux Wi-Fi

I've been using 802.11 on Linux now for over a decade, and to be honest, it's still a pretty sad experience. It works well enough that I mostly don't care... but when I care, and try to dig deeper, it always ends up in the answer "this is just crap".

I can't say exactly why this is; between the Intel cards I've always been using, the Linux drivers, the firmware, the mac80211 layer, wpa_supplicant and NetworkManager, I have no idea who is supposed to get all these things right, and I have no idea how hard or easy they actually are to pull off. But there are still things annoying me frequently that we should really have gotten right after ten years or more:

• Why does my Intel card consistently pick 2.4 GHz over 5 GHz? The 5 GHz signal is just as strong, and it gives a less crowded 40 MHz channel (twice the bandwidth, yay!) instead of the busy 20 MHz channel the 2.4 GHz one has to share. The worst part is, if I use an access point with band-select (essentially forcing the initial connection to be to 5 GHz; this is of course extra fun when the driver sees ten APs and tries to connect to all of them over 2.4 GHz in turn before trying 5 GHz), the driver still swaps onto 2.4 GHz a few minutes later!
• Rate selection. I can sit literally right next to an AP and get a connection on the lowest basic rate (which I've set to 11 Mbit/sec for the occasion). OK, maybe I shouldn't trust the output of iwconfig too much, since rate is selected per-packet, but then again, when Linux supposedly has a really good rate selection algorithm (minstrel), why are so many drivers using their own instead? (Yes, hello "iwl-agn-rs", I'm looking at you.)
• Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it; it connects to an AP fast. RFC4436 describes some of the tricks they're using, but Linux uses none of them. In any case, even the WPA2 setup is slow for some reason, it's not just DHCP.
• Scanning/roaming seems to be pretty random; I have no idea how much thought really went into this, and I know it is a hard problem, but it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available. (See also 2.4 vs. 5 above.) I'd love to get proper support for CCX (Cisco Client Extensions) here, which makes this tons better in a larger Wi-Fi setting (since the access point can give the client a lot of information that's useful for roaming, e.g. "there's an access point on channel 52 that sends its beacons every 100 ms with offset 54 from mine", which means you only need to swap channel for a few milliseconds to listen instead of a full beacon period), but I suppose that's covered by licensing or patents or something. Who knows.

With now a billion mobile devices running Linux and using Wi-Fi all the time, maybe we should have solved this a while ago. But alas. Instead we get access points trying to layer hacks upon hacks to try to force clients into making the right decisions. And separate ESSIDs for 2.4 GHz and 5 GHz.

Augh.

23 Jul 2014 10:45am GMT