01 Sep 2014

Planet Debian

Lars Wirzenius: 45

45 today. I should stop being childish, but I don't wanna.

01 Sep 2014 9:10pm GMT

Dirk Eddelbuettel: littler 0.2.0

We are happy to announce a new release of littler. A few minor things have changed since the last release.

Full details are provided in the ChangeLog.

The code is available via the GitHub repo, from tarballs off my littler page and the local directory here. A fresh package will go to Debian's incoming queue shortly as well.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

01 Sep 2014 6:13pm GMT

Joseph Bisch: Debconf Wrapup

Debconf14 was the first Debconf I attended. It was an awesome experience.

Debconf14 started with a Meet and Greet before the Welcome Talk. I got to meet people and find out what they do for Debian. I also got to meet other GSoC students that I had only previously interacted with online. During the Meet and Greet I also met one of my mentors for GSoC, Zack, and later in the conference I met another of my mentors, Piotr. Previously I had only interacted with Zack and Piotr online.

On Monday we had the OpenPGP Keysigning. I got to meet people and exchange information so that we could later sign keys. Then on Tuesday I gave my talk about debmetrics as part of the larger GSoC talks.

During the conference I mostly attended talks. Then on Wednesday we had the daytrip. I went hiking at Multnomah Falls, had lunch at Rooster Rock State Park, and then went to Vista House.

Later in the conference, Zack and I did some work on debmetrics. We looked at the tests, which had some issues. I was able to fix most of the issues with the tests while I was there at Debconf. We also moved the debmetrics repository under the qa category of repositories. Previously it was a private repository.

01 Sep 2014 6:02pm GMT

Jo Shields: Xamarin Apt and Yum repos now open for testing

Howdy y'all

Two of the main things I've been working on since I started at Xamarin are making it easier for people to try out the latest bleeding-edge Mono, and making it easier for people on older distributions to upgrade Mono without upgrading their entire OS.

Public Jenkins packages

Every time anyone commits to Mono git master or MonoDevelop git master, our public Jenkins will try to turn those commits into packages and add them to repositories. There's a garbage collection policy - currently the 20 most recent builds are always kept, plus the first build of each month for anything older than those 20.
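For the curious, here's a rough Python sketch of that retention rule as just described - purely an illustration of the policy, not the actual clean-up job running on the Jenkins side:

def builds_to_keep(builds):
    """builds is a list of (build_id, build_datetime) tuples.
    Keep the 20 most recent builds, plus the first build of each
    month among everything older than those 20."""
    newest_first = sorted(builds, key=lambda b: b[1], reverse=True)
    keep = {build_id for build_id, _ in newest_first[:20]}
    seen_months = set()
    # Walk the older builds oldest-first so the first build of each month wins.
    for build_id, when in sorted(newest_first[20:], key=lambda b: b[1]):
        month = (when.year, when.month)
        if month not in seen_months:
            seen_months.add(month)
            keep.add(build_id)
    return keep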

Because we're talking potentially broken packages here, I wrote a simple environment mangling script called mono-snapshot. When you install a Jenkins package, mono-snapshot will also be installed and configured. This allows you to have multiple Mono versions installed at once, for easy bug bisecting.

directhex@marceline:~$ mono --version
Mono JIT compiler version 3.6.0 (tarball Wed Aug 20 13:05:36 UTC 2014)
directhex@marceline:~$ . mono-snapshot mono
[mono-20140828234844]directhex@marceline:~$ mono --version
Mono JIT compiler version 3.8.1 (tarball Fri Aug 29 07:11:20 UTC 2014)

The instructions for setting up the Jenkins packages are on the new Mono web site, specifically here. The packages are built on CentOS 7 x64, Debian 7 x64, and Debian 7 i386 - they should work on most newer distributions or derivatives.

Stable release packages

This has taken a bit longer to get working. The aim is to offer packages in our Apt/Yum repositories for every Mono release, in a timely fashion, more or less around the same time as the Mac installers are released. Info for setting this up is, again, on the new website.

Like the Jenkins packages, these are designed, as far as I am able, to integrate cleanly with different versions of major popular distributions - though there are a few instances of ABI breakage in there which I have opted to fix using one evil method rather than another evil method.

Please note that these are still at "preview" or "beta" quality, and shouldn't be considered usable in major production environments until I get a bit more user feedback. The RPM packages especially are super new, and I haven't tested them exhaustively at this point - I'd welcome feedback.

I hope to remove the "testing!!!" warning labels from these packages soon, but that relies on user feedback, preferably to my xamarin.com account (jo.shields@).

01 Sep 2014 4:46pm GMT

Juliana Louback: Debconf 2014 and How I Became a Debian Contributor

Part 1 - Debconf 2014

This year I went to my first Debconf, which took place in Portland, OR during the last week of August 2014. All in all I have to rate my experience as very enlightening and in the end quite fun.

First of all, it was a little daunting: 1 - I was going to a conference in a city I'd never been to before; 2 - it was a conference with 300+ people, only 3 of whom I knew, and even then I only knew them virtually. Not to mention I was in the presence of some extremely brilliant and well-known contributors in the Debian community, which was somewhat intimidating. Just to give you an idea, Linus Torvalds showed up for a Q&A session last Friday morning! Jealous? Actually I missed that too. It was kind of a last minute thing, booked for coincidentally the exact time I'd be flying out of Portland. I found out about it much too late. But luckily for me and maybe you, the session was filmed and can be seen here. Isn't that a treat?

Point made, there are lots of really talented people there, both techies and non-techies. It's easy to feel you're out of your league; at least I did. But I'd highly encourage you to ignore such feelings if you're ever in the same situation. Debian has been built on for a long time now, and although a lot has been done, a lot still needs to be done. The Debian community is very welcoming of new contributors and users, regardless of their level of expertise. So far I haven't been snubbed by anyone. To the contrary, all my interactions with Debian community members have been extremely positive.

So go ahead and attend the meetings and presentations, even if you think it's not your area of expertise. Debconf was organized (or at least this one was) as a series of talks, meet ups and ad hoc sessions, some of which occurred simultaneously. The sessions were all about different components of the Debian universe, from presenting new features to overviews of accomplishments to discussing issues and how to fix them. A schedule with the location and description of each session was posted on the Debconf wiki. Sometimes none of the sessions at a certain time were on a topic I knew very much about, but I'd sit in anyway. There's no rule to attending the sessions, no 'minimum qualifications' required. You'll likely learn something new and you just might find out there is something you can do to contribute. There are also hackathons, which are quite the thing, or so I heard. Or you could walk about and meet new people, do some networking.

I have to say networking was the highlight of Debconf for me. Remember I said I knew about 3 people who were at the conference? Well, I had actually just corresponded with those people; I didn't really know them. So on my first day I spent quite some time shyly peeking at people's name tags, trying to recognize someone I had 'met' over email or IRC. But with 300 or so people at the conference, I was unsuccessful. So I finally gave up on that strategy and walked up to a random person, stuck out my hand and said, "Hi. My name is Juliana. This is my first Debconf. What's your name and what do you do for Debian?" This may not be according to protocol, but it worked for me. I got to meet lots of people that way, met some Debian contributors from my home country (Brazil), some from my current city (NYC), and yet others with similar interests to mine who I might work with in the near future. For example, I love Machine Learning, and I'm currently beginning my graduate studies on that track. Several Debian contributors offered to introduce me to a well-known Machine Learning researcher and Debian contributor who is in NYC. Others had tried out JSCommunicator and had lots of suggestions for new features and fixes, or wanted to know more about the project and WebRTC in general. Also, not everyone there is a super experienced Debian contributor or user. There are a lot of newbies like me.

I got to do a quick 20-min presentation and demo of the work I had done on JSCommunicator during GSoC 2014. Oh my goodness that was nerve-wracking, but not half as painful as I expected. My mentor (Daniel Pocock) wisely suggested that when confronted with a question I didn't know how to answer, I should redirect the question to the audience. Chances are, there is someone there who knows the answer. If not, it will at least spark a good discussion.

When meeting new people at Debconf, a question almost everyone asked was "How did you start working with/for Debian?". So I thought it would be a good topic to post about.

Part 2 - How I Became a Debian Contributor

Sometime in late October of 2013 (I think) I received an email from one of my professors at UNIRIO forwarding a description of the Outreach Program for Women. OPW is a program organized by the GNOME Foundation which endeavors to get more women involved in FOSS. OPW is similar to Google Summer of Code; you work remotely from home, guided by an assigned mentor. Debian was one of the 8 participating organizations that year. There was a list of project proposals which I perused; a few of them caught my eye, and those projects were all for Debian. I'd already been a fan of FOSS before. I had used the Ubuntu and Debian OS, and I'd migrated to GIMP from Photoshop and to Open Office from Microsoft Office, for example. I'd strongly advocated the use of some of my preferred open source apps and programs to my friends and family. But I hadn't ever contributed to a FOSS project.

There's no time like the present, so I reached out to the mentor responsible for one of the projects I was interested in, Daniel Pocock. Daniel guided me through making a small contribution to a FOSS project, which served as a token demonstration of my abilities and was part of the application process. I added a small feature to JMXetric and suggested a fix for an issue in the xTuple project. Actually, I had forgotten about this. Recently I made another contribution to xTuple; it's funny to see things come full circle. I also had to write a profile-ish description of my experience and how I intended to contribute during OPW on the Debian wiki; if you'd like, you can check it out here.

I wouldn't follow my example to a T, because in the end I didn't make the OPW selection. Actually, I take that back. The fact I wasn't chosen for OPW that year doesn't mean I was incompetent or incapable of making a valuable contribution. OPW and GSoC do not have unlimited resources; they can't include everyone they'd like to. They receive thousands of proposals from very talented engineers and not everyone can participate at a given moment. But even though I wasn't selected, like I said, I could still pitch in. It's good to keep in mind that people usually aren't paid to contribute to FOSS. It's usually volunteer based, which I think is one of the beauties of the FOSS community and, in my opinion, one of the causes of its success and great quality. People contribute because they want to, not because they have to.

I will say I was a little disappointed at not being chosen. But after being reassured that this 'rejection' wasn't due to any lack on my part, I decided to continue contributing to the Debian project I'd applied to. I was beginning the final semester of my undergraduate studies, which included writing a thesis. To be able to focus on my thesis and graduate on time, I'd stopped working temporarily and was studying full time. But I didn't want to lose practice, and contributing to a FOSS project is a great way to stay in coding shape while doing something useful. So continue contributing I did.

It paid off. I gained experience, added value to a FOSS project, and I think my previous contributions added weight to the application I later made for GSoC 2014. I passed this time. To be honest, I really wasn't counting on it. Actually, I was certain I wouldn't pass for some reason - insecure much? But with GSoC I wasn't as anxious as I had been with the OPW application, because by then I was already 'hooked'. I'd learned about all the benefits of becoming a FOSS contributor and I wasn't stopping anytime soon. I had every intention of still working on my FOSS project with or without GSoC. GSoC 2014 ended a week ago (August 18th 2014). There's a list of things I still want to do with JSCommunicator and you can be sure I'll keep working on them.

P.S. This is not to say that programs like OPW and GSoC aren't amazing. Try them out if you can; it's really a great experience.

01 Sep 2014 10:06am GMT

Christian Perrier: Bug #760000

René Mayorga reported Debian bug #760000 on Saturday August 30th, against the pyfribidi package.

Bug #750000 was reported as of May 31st: nearly exactly 3 months for 10,000 bugs. The bug rate has increased a little bit during the last few weeks, probably because of the approaching freeze.

We're therefore getting more clues about when bug #800000, for which we have bets, will be reported. At the current rate, this should happen in about one year. So, the current favorites are Knuth Posern or Kartik Mistry. Still, David Prévot, Andreas Tille, Elmar Heeb and Rafael Laboissiere have their chances, too, if the bug rate increases (I'll watch you guys: any MBF by one of you will be suspect... :-)).
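For what it's worth, a quick back-of-the-envelope check of that estimate in Python, assuming the recent rate of roughly 10,000 bugs per 3 months holds (which is of course a big assumption):

remaining = 800000 - 760000          # 40,000 bugs until the milestone
rate_per_month = 10000 / 3.0         # roughly 3,333 new bugs per month
print(remaining / rate_per_month)    # ~12 months, i.e. about one year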

01 Sep 2014 6:02am GMT

31 Aug 2014

Planet Debian

Junichi Uekawa: I was staring at qemu source for a while last month.

I was staring at qemu source for a while last month. There's a lot of things that I don't understand about the codebase. There's a race but it's hard to tell why a SIGSEGV was received.

31 Aug 2014 10:14pm GMT

Tim Retout: Website revamp

This weekend I moved my blog to a different server. This meant I could:

I've tested it, and it's working. I'm hoping that I can swap out the Node.js modules one-by-one for the Debian-packaged versions.

31 Aug 2014 10:04pm GMT

Stefano Zacchiroli: debsources hacking

Debsources now has a HACKING file

Here at DebConf14 I have given a few talks. The second one was a technical talk about recent and future developments on Debsources. Both the talk slides and video are available.

After the talk, various DebConf participants have approached me and started hacking on Debsources, which is awesome! As a result of their work, new shiny features will probably be announced shortly. Stay tuned.

When discussing with new contributors (hi Luciano, Raphael!), though, it quickly became clear that getting started with Debsources hacking wasn't particularly easy. In particular, doing a local deployment for testing purposes might be intimidating, due to the need to have a (partial) source mirror and whatnot. To fix that, I have now written a HACKING file for Debsources, which you can find at top-level in the Git repo.

Happy Debsources hacking!

31 Aug 2014 8:02pm GMT

Thorsten Alteholz: My Debian activities in August 2014

FTP assistant

By pure chance I was able to accept 237 packages, the same number as last month. 33 times I contacted the maintainer to ask a question about a package and 55 times I had to reject a package. The reject number increased a bit as I also worked on packages that already got a note but had not been fully processed. In contrast I only filed three serious bugs this month.

Currently there are about 200 packages still waiting in the NEW queue. As the freeze for Jessie comes closer every day, I wonder whether all of them can be processed in time. So I don't mind if every maintainer checks the package again and maybe uploads an improved version that can be processed faster.

Squeeze LTS

This was my second month of work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

All in all I was assigned a workload of 16.5h for August. I spent these hours uploading new versions of

As last month I prepared these uploads on the basis of the corresponding DSAs for Wheezy. For these packages backporting the Wheezy patches to Squeeze was rather easy.

I also had a look at python-django and eglibc. Although the python-django patches apply now, the package fails some tests and these issues need some further investigation. In case of eglibc, my small pbuilder didn't have enough resources and trying to build the package resulted in a full disk after more than three hours of work.

For PHP5, Ondřej Surý (the real maintainer) suggested using upstream point releases instead of only applying patches. I am curious about how much effort is needed for this approach. Stay tuned; next month I will share more details!

Anyway, this is still a lot of fun and I hope I can finish python-django, eglibc and php5 in September.

Other packages

This month my meep packages plus mpb have been part of a small hdf5 transition. All five packages needed a small patch and a new upload. As the patch was already provided by Gilles Filippini, this was done rather quickly.

Support

If you would like to support my Debian work you could either be part of the Freexian initiative (see above) or consider sending some bitcoins to 1JHnNpbgzxkoNexeXsTUGS6qUp5P88vHej. Contact me at donation@alteholz.eu if you prefer another way to donate. Every kind of support is most appreciated.

31 Aug 2014 4:42pm GMT

Ritesh Raj Sarraf: apt-offline 1.4

apt-offline 1.4 has been released [1]. This is a minor bug fix release. In fact, one feature, offline bug reports (--bug-reports), has been dropped for now.

The Debian BTS interface seems to have changed over time and the older debianbts.py module (that used the CGI interface) does not seem to work anymore. The current debbugs.py module seems to have switched to the SOAP interface.

There are a lot of changes going on for me personally, and I just haven't had the time to spend on this. If anyone would like to help, please reach out to me. We need to switch to the new debbugs.py module, and it should be cross-platform.
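To give an idea of the direction, here is the kind of query the offline bug-report feature would need, for example via the python-debianbts module, which talks to the BTS over SOAP. This is an untested sketch only; the exact function signatures vary between python-debianbts versions:

import debianbts

# Ask the BTS, over SOAP, for bugs filed against a package,
# then fetch their status records (sketch; check the module docs).
bug_numbers = debianbts.get_bugs("package", "apt-offline")
for report in debianbts.get_status(bug_numbers):
    print(report.bug_num, report.severity, report.subject)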

Also, thanks to Hans-Christoph Steiner for providing the bash completion script.

[1] https://alioth.debian.org/projects/apt-offline/


31 Aug 2014 4:41pm GMT

Russell Coker: Links August 2014

Matt Palmer wrote a good overview of DNSSEC [1].

Sociological Images has an interesting article making the case for phasing out the US $0.01 coin [2]. The Australian $0.01 and $0.02 coins were worth much more when they were phased out.

Multiplicity is a board game that's designed to address some of the failings of SimCity type games [3]. I haven't played it yet but the page describing it is interesting.

Carlos Bueno's article about the Mirrortocracy has some interesting insights into the flawed hiring culture of Silicon Valley [4].

Adam Bryant wrote an interesting article for NY Times about Google's experiments with big data and hiring [5]. Among other things it seems that grades and test results have no correlation with job performance.

Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools [6]. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don't end up earning more.

Kiwix is an offline Wikipedia reader for Android; it needs 9.5G of storage space for the database [7].

Melanie Poole wrote an informative article for Mamamia about the evil World Congress of Families and their connections to the Australian government [8].

The BBC has a great interactive web site about how big space is [9].

The Raspberry Pi Spy has an interesting article about automating Minecraft with Python [10].

Wired has an interesting article about the Bittorrent Sync platform for distributing encrypted data [11]. It's apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things.

ABC news has an interesting article about the failure to diagnose girls with Autism [12].

The AbbottsLies.com.au site catalogs the lies of Tony Abbott [13]. There's a lot of work in keeping up with that.

Racialicious.com has an interesting article about "Moff's Law", which concerns discussions of media where someone asks "why do you have to analyze it?" [14].

Paul Rosenberg wrote an insightful article about conservative racism in the US, it's a must-read [15].

Salon has an interesting and amusing article about a photography project where 100 people were tased by their loved ones [16]. Watch the videos.


31 Aug 2014 1:55pm GMT

Steve Kemp: A diversion - The National Health Service

Today we have a little diversion to talk about the National Health Service. The NHS is the publicly funded healthcare system in the UK.

Actually there are four such services in the UK, only one of which has this name:

  • The national health service (England)
  • Health and Social Care in Northern Ireland.
  • NHS Scotland.
  • NHS Wales.

In theory this doesn't matter, if you're in the UK and you break your leg you get carried to a hospital and you get treated. There are differences in policies because different rules apply, but the basic stuff "free health care" applies to all locations.

(Differences? In Scotland you get eye-tests for free, in England you pay.)

My wife works as an accident & emergency doctor, and has recently changed jobs. Hearing her talk about her work is fascinating.

The hospitals she's worked in (Dundee, Perth, Kirkcaldy, Edinburgh, Livingston) are interesting places. During the week things are usually reasonably quiet, and during the weekend things get significantly more busy. (This might mean there are 20 doctors to hand, versus three at quieter times.)

Weekends are busy largely because people fall down hills, get drunk and fight, and are at home rather than at work - where 90% of accidents occur.

Of course even a "quiet" week can be busy, because folk will have heart-attacks round the clock, and somebody somewhere will always be playing with a power tool, a ladder, or both!

So what was the point of this post? Well, she's recently transferred to working for a children's hospital (still in A&E) and the patients are so very different.

I expected the injuries/patients she'd see to differ. Few 10 year olds will arrive drunk (though it does happen), and few adults fall out of trees or eat washing machine detergent. But when she returns home and talks about her day, it's fascinating how many things are completely different from what I expected.

Adults come to hospital mostly because they're sick, injured, or drunk.

Children come to hospital mostly because their parents are paranoid.

A child has a rash? Doctors are closed? Let's go to the emergency ward!

A child has fallen out of a tree and has a bruise, a lump, or complains of pain? Doctors are closed? Let's go to the emergency ward!

I've not kept statistics, though I wish I could, but it seems that she can go 3-5 days between seeing an actually injured or chronically sick child. It's the first-time parents who bring kids in when they don't need to.

Understandable, completely understandable, but at the same time I'm sure it is more than a little frustrating for all involved.

Finally one thing I've learned, which seems completely stupid, is the NHS-Scotland approach to recruitment. You apply for a role, such as "A&E doctor" and after an interview, etc, you get told "You've been accepted - you will now work in Glasgow".

In short you apply for a post, and then get told where it will be based afterward. There's no ability to say "I'd like to be a Doctor in city X - where I live", you apply, and get told where it is post-acceptance. If it is 100+ miles away you either choose to commute, or decline and go through the process again.

This has led to Kirsi working in hospitals within a radius of about 100km of the city we live in, and has meant she's had to turn down several posts.

And that is all I have to say about the NHS for the moment, except for the implicit pity for people who have to pay (inflated and life-changing) prices for things in other countries.

31 Aug 2014 11:51am GMT

Lucas Nussbaum: Debian trivia

After an intensive evening of brainstorming by the 5th floor cabal, I am happy to release the very first version of the Debian Trivia, modeled after the famous TCP/IP Drinking Game. Only the questions are listed here - maybe they should go (with the answers) into a package? Anyone willing to co-maintain? Any suggestions for additional questions?

31 Aug 2014 8:42am GMT

Alexander Wirt: cgit on alioth.debian.org

Recently I have been doing some work on the alioth infrastructure, like fixing things and cleaning things up.

One of the more visible things I did was the switch from gitweb to cgit. cgit is a lot faster and looks better than gitweb.

The list of repositories is generated every hour. The move also has the nice effect that user repositories are available via the cgit index again.

I don't plan to disable the old gitweb, but I created a bunch of redirect rules that - hopefully - redirect most use cases of gitweb to the equivalent cgit URL.

If I broke something, or if I missed a common use case, please tell me. You can usually reach me on #alioth@oftc or via mail (formorer@d.o).

People also asked me to upload my cgit package to Debian; the package is now waiting in NEW. Thanks to Nicolas Dandrimont (olasd) we also have a patch included that generates proper HTTP return codes if a repository doesn't exist.

31 Aug 2014 4:52am GMT

30 Aug 2014

Planet Debian

Francois Marier: Outsourcing your webapp maintenance to Debian

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem, and of course one way to avoid it is to write everything yourself. But that's neither realistic nor desirable.

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this learning has not propagated to the web where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack:

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
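To make that concrete, here is a minimal Python sketch of what a site would do. This is illustrative only - Libravatar also supports SHA-256 hashes, and a federation-aware client would first look up the email domain's SRV record to pick the base URL:

import hashlib

def avatar_url(email, base="http://cdn.libravatar.org/avatar/"):
    # Hash the trimmed, lower-cased email address and append the
    # hex digest to the base URL (MD5 shown here).
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return base + digest

print(avatar_url("francois@debian.org"))
# -> http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070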

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)

Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that, because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected when the minified version of jQuery was removed from that package. In our setup, there is no way to minify JavaScript files that are provided by other packages, so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent Fedora users (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

30 Aug 2014 9:45pm GMT