08 Dec 2013
My ASUS Transformer TF101 suddenly started flickering in all sorts of funny colors some weeks ago. As tapping it gently on the table at the right angle made the problem go away temporarily, it was clearly a loose cable or some other hardware connection issue.
As I needed to go on a business trip, I didn't look up the warranty expiration date until later that week. Then Murphy struck: the tablet was now 2 years + 1 day old! When I called ASUS, a friendly guy there suggested I still try to get ASUS to accept it for warranty: the tablet had been with them for 5 days last year, so if they added that, it would still be within the warranty period. I filled out the RMA form, but one hour later the reply came that it was rejected as out of warranty. Another guy on the phone then said they would probably only add the time if it had been with them for maybe 10 days, or actually really 30 days, or whatever.
Putting the case back together was actually harder than disassembling it because some plastic bits got stuck, but now everything is back to normal.
08 Dec 2013 7:36pm GMT
There has been a lot of discussion recently about the concept of a Basic Income (Wikipedia), largely due to the efforts to change the Swiss constitution to provide a Basic Income. The concept of a Basic Income is that residents get a fixed payment without having to be sick, disabled, looking for work, or eligible for other forms of social security.
A Basic Income wouldn't replace all other forms of social security; one of the most obvious examples is that sick people will often need money for medical care in addition to living expenses. Also, I believe that it shouldn't be means tested in any way. One of the problems with current payment schemes is that there are complex eligibility criteria which require effort from the applicant and from government agencies to prevent accidental or fraudulent over-payment. Tax rates could be raised slightly to make a Basic Income revenue neutral.
In Australia the main form of social security for unemployed people at the moment is called "Newstart". Currently Newstart payments range from a maximum of $501 per fortnight for a single person ($13,026 per annum), to a maximum of $699.90 per fortnight for someone who is a carer.
The Newstart payments start to decrease if the recipient earns more than $62 per fortnight. The minimum wage in Australia is $16.37 per hour for permanent work or $20.30 for casual work. So if someone works for more than 3 hours per fortnight at the casual rate (and I can't imagine 4 hours a fortnight being anything other than casual) then their Newstart payments will decrease. The decreases are fairly significant: for every dollar earned above the threshold, about 50 cents is deducted from the payments. That's a great incentive to either avoid opportunities for part-time work or to do cash-only work outside the tax system.
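As a worked example, here is the taper arithmetic using the figures quoted above (a simplified sketch in whole dollars; the real rules have more brackets and exceptions):

```shell
#!/bin/sh
# Simplified Newstart taper, using the figures from the post:
# maximum payment $501/fortnight, free area $62/fortnight,
# and roughly 50c deducted per dollar earned above the free area.
earned=300                          # gross earnings per fortnight
deduction=$(( (earned - 62) / 2 ))  # 50 cents per dollar over $62
payment=$(( 501 - deduction ))
echo "earned \$$earned -> payment \$$payment, total \$$(( payment + earned ))"
```

So earning $300 leaves a total income of $682, only about $180 more than the $501 received for not working at all - before even counting travel and other work expenses.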
The most obvious way of implementing a Basic Income would be to replace Newstart. Then anyone who is in that situation would be free to just not get a job - which would be OK IMHO as people who don't want to work probably wouldn't do a good job if the government forced them to get a job. People who are unemployed who want to work could work as much as they want and scale up according to what their employer asks and how much money they need.
Currently the full-time minimum wage is $622.20 per week (I'm not sure exactly how they get that from $16.37). That's almost 2.5 times the Newstart allowance for a single person (but less than twice the Newstart allowance for a carer). While Newstart (and the other forms of social security) doesn't provide a great income, the difference between Newstart and the minimum wage isn't that great - particularly when you consider that working involves some expenses for travel etc. There doesn't seem to be a great financial incentive for someone to leave Newstart for a minimum wage job.
People Who Want Social Security
Some people think it's great to get government payments, while others find it embarrassing to need such payments and won't necessarily apply even when they are eligible. I think that the current system of forcing people to apply for social security discourages people who find themselves unexpectedly in a difficult situation, but doesn't discourage people who are happy not to work. This effectively reduces the incidence of payments to the people whom most tax-payers would regard as the most worthy recipients.
Charles Stross wrote about some ideas related to this. He suggests that as workforce participation has been steadily declining due to technology, we should move to a social model that isn't based around working to live but around working to buy luxuries that aren't covered by the Basic Income.
One of the many economic changes related to a Basic Income is that the minimum wage could be lower than it might otherwise be. For example, if the minimum wage were decreased by the same amount that the Basic Income provided, then the minimum income would remain the same while employers would pay less. This would affect the viability of certain types of contract-work web sites if they were subject to minimum wage laws (currently they just ignore those laws by paying based on job completion instead of hours worked). I don't think that the minimum wage should decrease that much though: employers are currently able to run viable businesses under the minimum wage laws, and I don't think that a Basic Income should be used as a way of helping corporations avoid paying their employees.
If we had a Basic Income, there are many ways it could be used to stabilise the economy. If people could pay their rent even after losing their job, a down-turn in one area of the economy wouldn't immediately affect other areas. Also, if rent payments were deducted automatically from the account used to receive the Basic Income, landlords would be more likely to rent to poor people, as they would be guaranteed to receive rent payments (it would be easy to have a contractual agreement for rent to take priority and have bank computers enforce it).
The Implementation Problem
I don't think that my idea would have any significant negative effects. It wouldn't decrease government revenue if tax rates were adjusted accordingly. It wouldn't make people stop working, as people who don't want to work already avoid it. It would help people who are out of work to find work by reducing the barriers to entry: less paperwork, and no unreasonable cuts to Newstart that make part-time work bad value.
I think that the big problem with implementing it is people who want to prevent poor people from having opportunities. They want to reduce social security and minimum wages even though such changes will, in the long run, only yield less tax revenue and greater law-enforcement expense. It seems rather ironic that such hostility often comes from people at the low end of the middle class, whose jobs are most likely to be at risk from new technology.
As on-going technological development reduces the number of workers required to keep things running, we need some form of payment for the people who aren't doing enough work to survive. A decent Basic Income is a much better option than giving Newstart payments and forcing a significant portion of the population into a degrading search for jobs that don't exist. As that's the inevitable future, I think we should make political changes to deal with it sooner rather than later. However a Basic Income might be implemented now, it's surely going to be a lot better than what might happen if we wait until the majority of the population is unemployed before doing something about it.
-  http://tinyurl.com/mwemen5
-  http://www.humanservices.gov.au/customer/services/centrelink/newstart-allowance
-  http://www.fairwork.gov.au/PAY/NATIONAL-MINIMUM-WAGE/pages/default.aspx
-  http://tinyurl.com/mjq3dwl
08 Dec 2013 12:17pm GMT
I previously wrote about the financial value of a university degree; my general conclusion was that the value is decreasing for most fields of employment that don't have a legal requirement for a degree. In the past I also wrote about some ideas for a home university, basically extending the home-schooling concept to a university level.
I recently read John Scalzi's post about being poor; many of the comments address the difficulty of getting to college and how it impacts career possibilities. Reading that, it seems that my ideas about a "home university" are mostly based around what middle-class people can afford. Also, getting a job afterwards will probably be a lot easier for someone who was born into the middle class.
It seems to me that a large part of the problem with the university system is the expectation that it will both provide for academic research and train people for jobs. Dr. David Helfand has some great ideas for running a university to give the higher education that a university is supposed to provide, rather than the work training that most universities actually provide. His ideas aren't theoretical; they have been implemented and proven to work. Note that Dr Helfand's talk starts slowly; the second half is the best (for those of you with short attention spans). The fact that most people think of a university degree in terms of getting a job seems to be a failure of the university system to fulfill its original aim.
If Dr Helfand's ideas take off then it would really address the problem of universities not educating people. But that still leaves the issue of job training.
Is a Degree Mandatory?
I think that to some degree people expect that a university degree is necessary job training even when it isn't. I wonder what would happen if it was generally agreed that the right thing to do was to search for a job between the end of high school and the start of university; then anyone who got a suitable offer could defer their university course and see what career success they could achieve without it. When I was at school, the general idea was that after completing year 12 everyone just had a holiday until the start of university, as the entire point of school was to get into university. While hiring managers prefer candidates who have degrees, they also prefer to hire people who will accept a lower salary, so hiring an 18yo with no degree may give better value than a 21yo with a degree.
I believe that making university degrees more accessible has reduced inequality, which is a good thing. But making degrees mandatory (which high school students widely believe them to be, and thus is the situation they have to deal with) contributes to greater inequality. While university doesn't cost much by middle-class standards, it is still expensive for poor people.
If a university degree wasn't considered to be mandatory then the number of people employed to teach at a university level would be smaller. This would hopefully mean that the average skill of university lecturers would increase (I hope that the least skillful lecturers would be the ones to find work elsewhere).
-  http://etbe.coker.com.au/2012/06/14/financial-value-degree/
-  http://etbe.coker.com.au/2007/10/04/ideas-for-a-home-university/
-  http://whatever.scalzi.com/2005/09/03/being-poor/
-  http://tedxtalks.ted.com/video/Designing-a-university-for-the
08 Dec 2013 11:39am GMT
The recent news that openSUSE considers btrfs safe for users prompted me to consider using it. And indeed I did. I was already familiar with zfs, so considered this a good opportunity to experiment with btrfs.
btrfs makes an intriguing filesystem for all sorts of workloads. The benefits of btrfs and zfs are well-documented elsewhere. There are a number of features btrfs has that zfs lacks. For instance:
- The ability to shrink a device that's a member of a filesystem/pool
- The ability to remove a device from a filesystem/pool entirely, assuming enough free space exists elsewhere for its data to be moved over.
- Asynchronous deduplication that imposes neither a synchronous performance hit nor a heavy RAM burden
- Copy-on-write copies down to the individual file level with cp --reflink
- Live conversion of data between different profiles (single, dup, RAID0, RAID1, etc)
- Live conversion between on-the-fly compression methods, including none at all
- Numerous SSD optimizations, including alignment and both synchronous and asynchronous TRIM options
- Proper integration with the VM subsystem
- Proper support across the many Linux architectures, including 32-bit ones (zfs is currently only flagged stable on amd64)
- Does not require excessive amounts of RAM
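For the curious, the shrink/remove/convert features above are driven by commands along these lines (the mount point, device names and sizes here are made-up examples; check the btrfs(8) man page before trying this on data you care about):

```shell
# Shrink the filesystem on member device 1 of a mounted filesystem.
btrfs filesystem resize 1:100g /mnt/data

# Remove a device entirely; its data is migrated to the remaining members.
btrfs device delete /dev/sdc /mnt/data

# Convert metadata to a different profile on a live filesystem.
btrfs balance start -mconvert=raid1 /mnt/data

# File-level copy-on-write clone of a single file.
cp --reflink=always big.img big-clone.img
```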
The feature set of zfs that btrfs lacks is well-documented elsewhere, but there are a few odd btrfs missteps:
- There is no way to see how much space a subvolume/filesystem is using without turning on quotas. Even then, it is cumbersome and not reported via df as it should be.
- When a maximum size for a subvolume is set via a quota, it is not reported via df; applications have no idea when they are about to hit the maximum size of a filesystem.
btrfs would be fine if it worked reliably. I should say at the outset that I have never lost any data to it, but it has caused enough kernel panics that I've lost count. Several times I had a file that produced a panic when I tried to delete it; several times it took more than 12 hours to unmount a btrfs filesystem; and hardlink-heavy workloads took days longer to complete than on zfs or ext4 - and those are just the problems I wrote about. I tried to use btrfs balance to change the metadata allocation on the filesystem, and never did get it to complete; it seemed to go into an endless I/O pattern after the first 1GB of metadata and never got past that. I didn't bother trying the live migration of data from one disk to another on this filesystem.
I wanted btrfs to work. I really, really did. But I just can't see it working. I tried it on my laptop, but had to turn off CoW on my virtual machine's disk because of the rm bug. I tried it on my backup devices, but it was unusable there due to being so slow. (Also, the hardlink behavior is broken by default and requires btrfstune -r. Yipe.)
At this point, I don't think it is really worth bothering with, and I think the openSUSE decision is misguided and ill-informed. btrfs will be an awesome filesystem some day - I am quite sure of it, and in time it will probably displace zfs as the most advanced filesystem out there. But that time is not yet here.
In the meantime, I'm going to build a Debian Live Rescue CD with zfsonlinux on it. Because I don't ever set up a system I can't repair.
08 Dec 2013 5:53am GMT
07 Dec 2013
I spent yesterday at the very enjoyable Big Data Summit held at the University of Illinois Research Park at the edge of the University of Illinois at Urbana-Champaign campus. My (short) presentation was part of a panel session on R and Big Data which Doug Simpson of the UIUC Statistics department had put together very well. We heard from a vendor / technology provider, with Christopher Nguyen from Adatao talking about their "Big R"; from industry, with Andy Stevens talking about a number of real-life challenges with big data at John Deere; and from academia, with Jonathon Greenberg talking about R and HPC for geospatial research. I added a few short comments and links about R, HPC and Rcpp. My few slides are now up on my talks / presentations page. Overall, a good day with a number of interesting presentations and of course a number of engaging hallway discussions.
07 Dec 2013 9:47pm GMT
Lumicall is now offering free calls from browser to mobile.
The whole service is powered by free software using open standards.
- The person receiving the call must have the open source Lumicall app on their phone (Android and Cyanogenmod phones supported)
- The person making the call just goes to http://webrtc.lumicall.org and dials the number in international format. For example, for the UK mobile 07123 45678, you need to dial +44712345678
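The national-to-international conversion shown above can be sketched as a tiny shell function (UK-only and purely illustrative; this helper is mine, not part of Lumicall):

```shell
#!/bin/sh
# Convert a UK national number to international format:
# drop spaces, then replace the leading trunk "0" with the +44 prefix.
to_e164() {
    digits=$(printf '%s' "$1" | tr -d ' ')
    case "$digits" in
        0*) printf '+44%s\n' "${digits#0}" ;;
        *)  printf '%s\n' "$digits" ;;
    esac
}

to_e164 "07123 45678"   # prints +44712345678
```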
Various open source projects have made this possible, in particular:
Feedback and discussion
Please come and join us on the mailing list for any of the third-party projects that are involved. Please also join the Free real-time communications list sponsored by the FSF Europe for any general discussion about the future of free communications with free software.
WebRTC Conference this week
I'll be presenting some of my own work with WebRTC at the WebRTC Conference and Exhibition 2013 in Paris this week. Various other free software developers are also in the program, including Ludovic Dubost from xWiki and Emil Ivov from Jitsi.
07 Dec 2013 8:14pm GMT
At this event, I talked a bit about "local community" for Debian (PDF/ODF are in the Debian Wiki).
As you probably know, most Debian contributors are in Europe and the Americas (North and South), not in Asia. But Asia has lots of talented people, which means there is huge potential for Debian :)
I hope we Asian Debian people will unite, make our community work more visible, and hold a "DebConf in Asia" in the future.
07 Dec 2013 3:49pm GMT
Thanks to Steffen Ullrich, this bug is now fixed in LWP::UserAgent and LWP::Protocol::https repositories.
In Debian, I've updated libwww-perl 6.05-2 and liblwp-protocol-https-perl 6.04-2 to include the same patches. This fix is now available in Debian unstable.
See my previous blog post for more details on this story.
All the best
07 Dec 2013 8:52am GMT
06 Dec 2013
I just realised a lot of my projects are deployed in the same way:
- They run under runit.
- They operate directly from git clones.
This includes both Apache-based projects, and node.js projects.
I'm sure I could generalize this, and do clever things with git-hooks. Right now for example I have run-scripts which look like this:
#!/bin/sh
#
# /etc/service/blogspam.js/run - Runs the blogspam.net API.
#

# update the repository.
git pull --quiet

# install dependencies, if appropriate.
npm install

# launch
exec node server.js
It seems the only thing that differs is the name of the directory and the remote git clone URL.
With a bit of scripting magic I'm sure you could push applications to a virgin Debian installation and have it do the right thing.
I think the only obvious thing I'm missing is a list of Debian dependencies. Perhaps, by adding something like the packages.json file, I could add an extra step:
apt-get update -qq
apt-get install --yes --force-yes $(cat packages.apt)
Making deployments easy is a good thing, and consistency helps.
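A generalized run-script might look something like the following; REPO and APP_DIR are placeholder names I made up for this sketch, and a real version would want better error handling:

```shell
#!/bin/sh
# Generic runit run-script: keep a git clone up to date and run the app.
# REPO and APP_DIR would be filled in per-service (placeholders here).
REPO="git://git.example.org/blogspam.js.git"
APP_DIR="/srv/blogspam.js"

# clone on first run, update on every restart.
if [ -d "$APP_DIR/.git" ]; then
    cd "$APP_DIR" || exit 1
    git pull --quiet
else
    git clone --quiet "$REPO" "$APP_DIR" || exit 1
    cd "$APP_DIR" || exit 1
fi

# install any Debian packages the project lists.
if [ -e packages.apt ]; then
    apt-get install --yes --force-yes $(cat packages.apt)
fi

# install node dependencies and launch under runit.
npm install
exec node server.js
```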
06 Dec 2013 9:13pm GMT
It has been a while since I managed to publish the last interview, but the Debian Edu / Skolelinux community is still going strong, and yesterday we even had a new school administrator show up on #debian-edu to share his success story with installing Debian Edu at their school. This time I have been able to get some helpful comments from the creator of Knoppix, Klaus Knopper, who was involved in a Skolelinux project in Germany a few years ago.
Who are you, and how do you spend your days?
I am Klaus Knopper. I have a master's degree in electrical engineering, and am currently a professor of information management at the university of applied sciences Kaiserslautern / Germany, as well as a freelance Open Source software developer and consultant.
That is pretty much the work I spend my days on. Apart from teaching, I'm also conducting some more or less experimental projects like the Knoppix GNU/Linux live system (Debian-based, like Skolelinux), ADRIANE (a blind-friendly talking desktop system) and LINBO (Linux-based network boot console, a fast remote install and repair system supporting various operating systems).
How did you get in contact with the Skolelinux / Debian Edu project?
The credit for this has to go to Kurt Gramlich, who is the German coordinator for Skolelinux. We were looking for an all-in-one open source community-supported distribution for schools, and Kurt introduced us to Skolelinux for this purpose.
What do you see as the advantages of Skolelinux / Debian Edu?
- Quick installation,
- works (almost) out of the box,
- contains many useful software packages for teaching and learning,
- is a purely community-based distro and not controlled by a single company,
- has a large number of supporters and teachers who share their experience and problem solutions.
What do you see as the disadvantages of Skolelinux / Debian Edu?
- Skolelinux is - as we had to learn - not easily upgradable to the next version. Unlike its genuine Debian base, upgrading to a new version means a full new installation from scratch to get it working reliably again.
- Skolelinux is based on Debian/stable, and therefore always a little outdated in terms of program versions compared to Edubuntu or similar educational Linux distros, which rather use Debian/testing as their base.
- Skolelinux has some very opinionated and stubborn default configuration which in my opinion adds unnecessary complexity and is not always suitable for a school's needs. The preset network configuration is actually a core defining feature of Skolelinux and not easy to change, so schools sometimes have to change their network configuration to make it "Skolelinux-compatible".
- Some proposed extensions, which were made available as contributions, like a secure examination mode and lecture material distribution and collection, were not accepted into mainline Skolelinux development and will now be hard to maintain because of Skolelinux's somewhat nondeterministic update scheme.
- Skolelinux has only a very tiny number of base developers compared to Debian.
For these reasons and experience from our project, I would now rather consider using plain Debian for schools next time, until Skolelinux is more closely integrated into Debian and becomes upgradeable without reinstallation.
Which free software do you use daily?
GNU/Linux with LXDE desktop, bash for interactive dialog and programming, texlive for documentation and correspondence, occasionally LibreOffice for document format conversion. Various programming languages for teaching.
Which strategy do you believe is the right one to use to get schools to use free software?
Strong arguments are
- Knowledge is free, and so should be methods and tools for teaching and learning.
- Students can learn with and use the same software at school, at home, and at their workplace without running into license or conversion problems.
- Closed source or proprietary software hides knowledge rather than exposing it, and proprietary software vendors try to bind customers to certain products. But teachers need to teach science, not products.
- If you have everything you need for daily work as open source, what would you need proprietary software for?
06 Dec 2013 8:50am GMT
I've been using POV-Ray off and on for the past decade or so. I've never been extremely talented with graphical stuff, but I've always liked playing around with it; and POV-Ray, with its Turing-complete scene description language, appeals to me as a programmer. I've used it when I needed to do some animation; for instance, I created the FOSDEM 2013 and DebConf13 "wait screen" animations for the video team.
One particular downside of POV-Ray has always been the fact that their license was a custom non-free one. This was a historical accident (POV-Ray has existed for a long time, since before the popularization of FLOSS), and AIUI, the relicensing was impossible for various reasons. However, a rewrite of POV-Ray (as version 3.7) has been in the making for quite a while.
Today, I noticed two things: first, POV-Ray 3.7 was released (under the AGPLv3, thereby becoming Free Software); and second, as of the 3.7 release, the POV-Ray source is maintained in a git repository and available on GitHub.
Also, apart from being free software now, POV-Ray 3.7 has a few new features as well. Most important among them (at least in my opinion): POV-Ray 3.7 is a multithreaded application, in contrast to POV-Ray 3.6 and before, which were not.
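The multithreading can be exercised from the command line; as I understand the 3.7 documentation, +WTn sets the number of render threads (scene.pov here is just a placeholder file name):

```shell
# Render a scene with 4 worker threads; POV-Ray 3.6 and earlier
# rendered on a single core regardless of options.
# (+WT4 is the 3.7 work-threads option, as I read the docs.)
povray scene.pov +W1920 +H1080 +WT4
```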
Building it had some issues with the versions of a few things in Debian unstable; but for one of these a fix has already been merged, and for the other a merge request is out.
Now to decide whether I should package it...
06 Dec 2013 8:00am GMT
Gunnar Wolf: For people in Mexico: Workshop next Wednesday! Video editing from the command line (by Chema Serralde, @joseserralde)
(Yes, yes... Maybe I should post in Spanish... But hey, gotta keep consistency in my blog!)
General, public, open invitation
Are you in Mexico City, or do you plan to be next Wednesday (December 11)?
Are you interested in video editing? In Free Software?
I will have the pleasure of hosting at home the great Chema Serralde, a good friend and a multifaceted guru in both the technical and musical areas. He will present a workshop: Video editing from the command line.
I asked Chema for an outline of his talk, but given he is a busy guy, I will basically translate the introduction he prepared for this same material in FSL Vallarta, held two weeks ago.
With the help of the command line, you can become a multimedia guru. We will edit a video using just a terminal. This skill will surprise your friends - and your partner.
But most importantly, this knowledge is just an excuse to understand, step by step, what a video CODEC is, what a FORMAT is, and how video and audio editors work. With this knowledge, you will be able to lay the foundations for multimedia editing, without the promises and secrets of proprietary editors.
How much does my file weigh, and why? How can I improve a video file's quality? Why can't I read my camera's data from GNU/Linux?
By the end of this workshop, we will see how some libraries can help you develop your first audio and video application, and what their main APIs and uses are.
Everybody is welcome to come for free, no questions asked, no fees collected. I can offer coffee for all, but if you want anything else to eat/drink, you are welcome to bring it.
We do require you to reserve and confirm your place (mail me at my usual address). We have limited space, and I must set an absolute quota of 10 participants.
Some people hide their address... Mine is quite publicly known: Av. Copilco 233, just by Parque Hugo Margain, on the Northern edge of UNAM (Metro Copilco).
The course starts at 16:00, and lasts... As long as we make it last ;-)
So, that said... See you there! :-D
[update]: Chema sent me the list of topics he plans to cover. Translated from his mail:
LIGHTNING WORKSHOP ON AUDIOVISUAL EDITING FROM THE COMMAND LINE
José María Serralde Ruiz, facilitator
- Editing like a caveman.
- Basic manipulation of multimedia files in POSIX environments.
- Be a Bash VJ (videojockey)
- Dumping and piping
- Editing like a scientist.
- Headers and fourcc
- 3 families of video CODECs and their patents
- 3 families of audio CODECs and their patents
- Muxers, demuxers and muxes.
- Editing like an artist.
- Free-software toolboxes for video processing.
- Real-time video processing (whoever fancies themselves an artist loses)
- Melting video and audio with socks: MELT + SOX
(POSIX operating systems; Windows users, come along with the aim of rethinking your lives): mplayer, avconv/ffmpeg (libavcodec), melt, sox, imagemagick
06 Dec 2013 1:56am GMT
05 Dec 2013
This Monday, I attended a workshop on Multi-party Off the Record Messaging and Deniability hosted by the Calyx Institute. The discussion was a combination of legal and technical people, looking at how the characteristics of this particular technology affect (or do not affect) the law.
This is a report-back, since I know other people wanted to attend. I'm not a lawyer, but I develop software to improve communications security, I care about these questions, and I want other people to be aware of the discussion. I hope I did not misrepresent anything below. I'd be happy if anyone wants to offer corrections.
Off the Record Messaging (OTR) is a way to secure instant messaging (e.g. jabber/XMPP, gChat, AIM).
The two most common characteristics people want from a secure instant messaging program are:
- Each participant should be able to know specifically who the other parties are on the chat.
- The content of the messages should only be intelligible to the parties involved with the chat; it should appear opaque or encrypted to anyone else listening in. Note that confidentiality effectively depends on authentication -- if you don't know who you're talking to, you can't make sensible assertions about confidentiality.
As with many other modern networked encryption schemes, OTR relies on each user maintaining a long-lived "secret key", and publishing a corresponding "public key" for their peers to examine. These keys are critical for providing authentication (and by extension, for confidentiality).
But OTR offers several interesting characteristics beyond the common two. Its most commonly cited characteristics are "forward secrecy" and "deniability".
- Forward secrecy: Assuming the parties communicating are operating in good faith, forward secrecy offers protection against a special kind of adversary: one who logs the encrypted chat and subsequently steals either party's long-term secret key. Without forward secrecy, such an adversary would be able to discover the content of the messages, violating the confidentiality characteristic. With forward secrecy, this adversary is stymied and the messages remain confidential.
- Deniability: Deniability only comes into play when one of the parties is no longer operating in good faith (e.g. their computer is compromised, or they are collaborating with an adversary). In this context, if Alice is chatting with Bob, she does not want Bob to be able to cryptographically prove to anyone else that she made any of the specific statements in the conversation. This is the focus of Monday's discussion.
To be clear, this kind of deniability means Alice can correctly say "you have no cryptographic proof I said X", but it does not let her assert "here is cryptographic proof that I did not say X" (I can't think of any protocol that offers the latter assertion). The opposite of deniability is a cryptographic proof of origin, which usually runs something like "only someone with access to Alice's secret key could have said X."
The traditional two-party OTR protocol has offered both forward secrecy and deniability for years. But deniability in particular is a challenging characteristic to provide for group chat, which is the domain of Multi-Party OTR (mpOTR). You can read some past discussion about the challenges of deniability in mpOTR (and why it's harder when there are more than two people chatting) on the otr-users mailing list.
If you're not doing anything wrong...
The discussion was well-anchored by a comment from another participant who cheekily asked "If you're not doing anything wrong, why do you need to hide your chat at all, let alone be able to deny it?"
The general sense of the room was that we'd all heard this question many times, from many people. There are lots of problems with the ideas behind the question from many perspectives. But just from a legal perspective, there are at least two problems with the way this question is posed:
- laws themselves are not always just (e.g. consider chat communications between an interracial couple in the USA before 1967, if instant messaging had existed at the time), and
- law enforcement (or a legal adversary in civil litigation) may have a different understanding or interpretation of the law than you do (e.g. consider chat communications between a corporate or government whistleblower and a journalist).
In these situations, people confront real risk from the law. If we care about these people, we need to figure out if we can build systems to help them reduce that legal risk (of course we also need to fix broken laws, and the legal environment in general, but those approaches were out of scope for this discussion).
The Legal Utility of Deniability
Monday's meeting was called specifically because it wasn't clear how much real-world usefulness there is in the "deniability" characteristic, and whether this feature is worth the development effort and implementation tradeoffs required. In particular, the group was interested in deniability's utility in legal contexts; many (most?) people in the room were lawyers, and it's also not clear that deniability has much utility outside of a formal legal setting. If your adversary isn't constrained by some rule of law, they probably won't care at all whether or not there is cryptographic proof that you wrote a particular message. (In retrospect, one possible exception is exposure in the media, but we did not discuss that scenario.)
Places of possible usefulness
So where might deniability come in handy during civil litigation or a criminal trial? Presumably the circumstance is that a piece of a chat log is offered as incriminating evidence, and the defendant is trying to deny something that they appear to have said in the log.
This denial could take place in two rather different contexts: during arguments over the admissibility of evidence, or (once admitted) in front of a jury.
In legal wrangling over admissibility, apparently a lot of horse-trading can go on -- each side concedes some things in exchange for the other side conceding other things. It appears that cryptographic proof of origin (that is, a lack of deniability) on the chat logs themselves might reduce the amount of leverage a defense lawyer can get from conceding or arguing strongly over that piece of evidence. For example, if the chain of custody of a chat transcript is fuzzy (i.e. the transcript could have been mishandled or modified somehow before reaching trial), then a cryptographic proof of origin would make it much harder for the defense to contest the chat transcript on the grounds of tampering. Deniability would give the defense more bargaining power.
In arguing about already-admitted evidence before a jury, deniability in this sense seems like a job for expert witnesses, who would need to convince the jury of their interpretation of the data. There was a lot of skepticism in the room over this, both around the possibility of most jurors really understanding what OTR's claim of deniability actually means, and on jurors' ability to distinguish this argument from a bogus argument presented by an opposing expert witness who is willing to lie about the nature of the protocol (or who misunderstands it and passes on their misunderstanding to the jury).
The complexity of the tech systems involved in a data-heavy prosecution or civil litigation is itself an opportunity for lawyers to argue (and experts to weigh in) on the general reliability of these systems. Sifting through the quantities of data available and ensuring that the appropriate evidence is actually findable, relevant, and suitably preserved for the jury's inspection is a hard and complicated job, with room for error. OTR's deniability might be one more element in a multi-pronged attack on these data systems.
These are the most compelling arguments for the legal utility of deniability that I took away from the discussion. I confess that they don't seem particularly strong to me, though some level of "avoiding a weaker position when horse-trading" resonates with me.
What about the arguments against its utility?
The most basic argument against OTR's deniability is that courts don't care about cryptographic proof for digital evidence. People are convicted or lose civil cases based on unsigned electronic communications (e.g. normal e-mail, plain chat logs) all the time. OTR's deniability doesn't provide any legal cover stronger than trying to claim you didn't write a given e-mail that appears to have originated from your account. As someone who understands the forgeability of e-mail, I find this overall situation troubling, but it seems to be where we are.
Worse, OTR's deniability doesn't cover whether you had a conversation, just what you said in that conversation. That is, Bob can still cryptographically prove to an adversary (or before a judge or jury) that he had a communication with someone controlling Alice's secret key (which is probably Alice); he just can't prove that Alice herself said any particular part of the conversation he produces.
Additionally, there are runtime tradeoffs depending on how the protocol manages to achieve these features. For example, forward secrecy itself requires an additional round trip or two when compared to authenticated, encrypted communications without forward secrecy (a "round trip" is a message from Alice to Bob followed by a message back from Bob to Alice).
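The reason for that extra round trip can be seen in a toy sketch: forward secrecy comes from generating fresh ephemeral key material for each session, and both sides must exchange their ephemeral public values before any payload can flow. The following Python illustration uses a deliberately toy Diffie-Hellman group (a small prime, with long-term authentication of the ephemerals omitted); real protocols use vetted groups or elliptic curves:

```python
import secrets

# Toy finite-field Diffie-Hellman showing why forward secrecy costs a round
# trip: each session, both parties must swap FRESH ephemeral public values
# before deriving a session key. The ephemeral private values are discarded
# afterwards, so a later compromise of a long-term key reveals nothing about
# past sessions. (Toy-sized parameters -- NOT secure for real use.)

P = 2**127 - 1   # a prime, but far too small for real cryptography
G = 3            # toy generator

def ephemeral_keypair():
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

# Round trip: Alice -> Bob (alice_pub), then Bob -> Alice (bob_pub).
alice_priv, alice_pub = ephemeral_keypair()
bob_priv, bob_pub = ephemeral_keypair()

# Only after that exchange can both sides derive the shared session key
# and start sending encrypted payload.
alice_session = pow(bob_pub, alice_priv, P)
bob_session = pow(alice_pub, bob_priv, P)
assert alice_session == bob_session

# Forward secrecy: delete the ephemeral private values once the session
# key is established; there is nothing left to seize later.
del alice_priv, bob_priv
```

Without forward secrecy, Alice could simply encrypt to Bob's long-term public key and send payload immediately; the round trip is the price of using throwaway keys.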
Getting proper deniability into the mpOTR spec might incur extra latency (imagine having to wait 60 seconds after everyone joins before starting a group chat, or a pause in the chat of 15 seconds when a new member joins), or require extra computational power (meaning that chat might not work well on slower/older devices), or an order of magnitude more bandwidth (meaning that chat might not work at all on a weak connection). There could also simply be complexity that makes it harder to correctly implement a protocol with deniability than an alternate protocol without deniability. Incorrectly-implemented software can put its users at risk.
I don't know enough about the current state of mpOTR to know what the specific tradeoffs are for the deniability feature, but it's clear there will be some. Who decides whether the tradeoffs are worth the feature?
Other kinds of deniability
Further weakening the case for the legal utility of OTR's deniability, there seem to be other ways to get deniability in a legal context over a chat transcript.
There are deniability arguments that can be made from outside the protocol. For example, you can always claim someone else took control of your computer while you were asleep or using the bathroom or eating dinner, or you can claim that your computer had a virus that exported your secret key and it must have been used by someone else.
If you're desperate enough to sacrifice your digital identity, you could arrange to have your secret key published, at which point anyone can make signed statements with it. Having forward secrecy makes it possible to expose your secret key without exposing the content of your past communications to any listener who happened to log them.
My takeaway from the discussion is that the legal utility of OTR's deniability is non-zero, but quite low; and that development energy focused on deniability is probably only justified if there are very few costs associated with it.
Several folks pointed out that most communications-security tools are too complicated or inconvenient to use for normal people. If we have limited development energy to spend on securing instant messaging, usability and ubiquity would be a better focus than this form of deniability.
Secure chat systems that take too long to make, that are too complex, or that are too cumbersome are not going to be adopted. But this doesn't mean people won't chat at all -- they'll just use cleartext chat, or maybe they'll use supposedly "secure" protocols with even worse properties: for example, without proper end-to-end authentication (permitting spoofing or impersonation by the server operator or potentially by anyone else); with encryption that is reversible by the chatroom operator or flawed enough to be reversed by any listener with a powerful computer; without forward secrecy; or so on.
As a demonstration of this, we heard some lawyers in the room admit to using Skype to talk with their clients, even though they know it's not a safe communications channel, because their clients' adversaries might have access to the Skype messaging system itself.
My conclusion from the meeting is that there are a few particular situations where deniability could be useful legally, but that overall, it is not where we as a community should be spending our development energy. Perhaps in some future world where all communications are already authenticated, encrypted, and forward-secret by default, we can look into improving our protocols to provide this characteristic, but for now, we really need to work on usability, popularization, and wide deployment.
Many thanks to Nick Merrill for organizing the discussion, to Shayana Kadidal and Stanley Cohen for providing a wealth of legal insight and legal experience, to Tom Ritter for an excellent presentation of the technical details, and to everyone in the group who participated in the interesting and lively discussion.
05 Dec 2013 11:14pm GMT
Today I should have been heading down to York, to attend the Bytemark Christmas party. Instead I'm here in Edinburgh, because wind/storms basically shut down the rail network in Scotland for the morning.
Technically I could probably have made it, but only belatedly and only at a huge cost to my sanity. The train station was packed with stranded people, and there seemed to be no guarantee the recently-revived service would continue.
So instead I'm sulking at home.
I had a lot of other things scheduled to do in York/London today/tomorrow, for reasons that will become apparent next week, so to say I'm annoyed is an understatement.
In happier news I'm not dead.
Walking to work this morning was horrific: the wind was so strong (70-100mph) that I couldn't actually cross a bridge on Ocean Drive, because I just kept getting blown into the road. (Yeah, that's a road that is very close to the coast. Driving wind. Horrible rain. Storming sea. Fun.)
I ended up retracing my steps, and taking a detour. (PS. My boots leaked.)
Not a good day. Enjoy some software instead - a trivial HTTP / XMPP bridge.
05 Dec 2013 3:27pm GMT
Releasing the shift key is hard.
05 Dec 2013 12:32pm GMT
04 Dec 2013
I'm sure this isn't an original thought of mine, but it just popped into my head and I think it's something of a "fundamental truth" that all software developers need to keep in mind:
Writing software is easy. The hard part is writing software that works.
All too often, we get so caught up in the rush of building something that we forget that it has to work - and, all too often, we fail in some fundamental fashion, whether it's "doesn't satisfy the user's needs" or "you just broke my $FEATURE!" (which is the context I was thinking of).
04 Dec 2013 11:45pm GMT