31 Aug 2015

Planet Debian

Matthew Garrett: Working with the kernel keyring

The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It's convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they're locked down there's no way for even root to modify them.

But there's a corner case that can be somewhat confusing here, and it's one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be "possessed" by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes - if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don't want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn't create a new login session - when you're working with sudo, you're still working with key possession that's tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you're trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0's user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you'll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0's keyring (because the permissions are 0x3f3f0000), we don't possess it - the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
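To make those two masks concrete: each byte of the permission word covers one category (from high byte to low: possessor, user, group, other), and within a byte the flags documented in keyctl(1) are view=0x01, read=0x02, write=0x04, search=0x08, link=0x10 and setattr=0x20, so 0x3f means "everything". A small sketch that splits a mask into its four bytes:

```shell
# Split a keyring permission mask into its four per-category bytes.
# Byte order (high to low): possessor, user, group, other.
decode() {
  printf 'possessor=%02x user=%02x group=%02x other=%02x\n' \
    $(( ($1 >> 24) & 0xff )) $(( ($1 >> 16) & 0xff )) \
    $(( ($1 >> 8) & 0xff ))  $(( $1 & 0xff ))
}
decode 0x3f3f0000   # keyring default: possessor and user may do anything
decode 0x3f010000   # key default: possessor may do anything, user may only view
```

which makes it easy to see that the key default grants the user nothing beyond "view".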

But! There's a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we'll be part of another session and won't be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0's user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user's session keyring, because that's readable/writable by the unprivileged user - they'd be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately - UID 0 can read, modify and delete the key, other users can't.

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down - rkt will then refuse to run any images unless they're signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.


31 Aug 2015 5:18pm GMT

Martín Ferrari: Romania

It's been over 2 years since I decided to start a new, nomadic life. I had the idea of blogging about this experience as it happened, but not only am I incredibly lazy when it comes to writing, most of the time I have been too busy just enjoying this lifestyle!

The TL;DR version of these last 2 years:

And now, I am in Cluj-Napoca, Romania.

View from my window

31 Aug 2015 6:59am GMT

Martín Ferrari: IkiWiki

I haven't posted in a very long time. Not only because I suck at this, but also because IkiWiki decided to stop working with OpenID, so I can't use the web interface any more to post. Very annoying.

I already spent a good deal of time trying to find a solution, without any success. I really don't want to migrate to another piece of software again, but this is becoming a showstopper for me.

31 Aug 2015 4:55am GMT

Russ Allbery: Review: Through Struggle, The Stars

Review: Through Struggle, The Stars, by John J. Lumpkin

Series: Human Reach #1
Publisher: John J. Lumpkin
Copyright: July 2011
ISBN: 1-4611-9544-6
Format: Kindle
Pages: 429

Never let it be said that I don't read military SF. However, it can be said that I read books and then get hellishly busy and don't review them for months. So we'll see if I can remember this well enough to review it properly.

In Lumpkin's future world, mankind has spread to the stars using gate technology, colonizing multiple worlds. However, unlike most science fiction futures of this type, it's not all about the United States, or even the United States and Russia. The great human powers are China and Japan, with the United States relegated to a distant third. The US mostly maintains its independence from either, and joins the other lesser nations and UN coalitions to try to pick up a few colonies of its own. That's the context in which Neil and Rand join the armed services: the former as a pilot in training, and the latter as an army grunt.

This is military SF, so of course a war breaks out. But it's a war between Japan and China: improved starship technology and the most sophisticated manufacturing in the world against a much larger economy with more resources and a larger starting military. For reasons that are not immediately clear, and become a plot point later on, the United States president immediately takes an aggressive tone towards China and pushes the country towards joining the war on the side of Japan.

Most of this book is told from Neil's perspective, following his military career during the outbreak of war. His plans to become a pilot get derailed as he gets entangled with US intelligence agents (and a bad supervisor). The US forces are not in a good place against China, struggling when they get into ship-to-ship combat, and Neil's ship goes on a covert mission to attempt to complicate the war with political factors. Meanwhile, Rand tries to help fight off a Chinese invasion of one of the rare US colony worlds.

Through Struggle, The Stars does not move quickly. It's over 400 pages, and it's a bit surprising how little happens. What it does instead is focus on how the military world and the war feels to Neil: the psychological experience of wanting to serve your country but questioning its decisions, the struggle of working with people who aren't always competent but who you can't just make go away, the complexities of choosing a career path when all the choices are fraught with politics that you didn't expect to be involved in, and the sheer amount of luck and random events involved in the progression of one's career. I found this surprisingly engrossing despite the slow pace, mostly because of how true to life it feels. War is not a never-ending set of battles. Life in a military ship has moments when everything is happening far too quickly, but other moments when not much happens for a long time. Lumpkin does a great job of reflecting that.

Unfortunately, I thought there were two significant flaws, one of which means I probably won't seek out further books in this series.

First, one middle portion of the book switches away from Neil to follow Rand instead. The first part of that involves the details of fighting orbiting ships with ground-based lasers, which was moderately interesting. (All the technological details of space combat are interesting and thoughtfully handled, although I'm not the sort of reader who will notice more subtle flaws. But David Weber this isn't, thankfully.) But then it turns into a fairly generic armed resistance story, which I found rather boring.

It also ties into the second and more serious flaw: the villain. The overall story is constructed such that it doesn't need a personal villain. It's about the intersection of the military and politics, and a war that may be ill-conceived but that is being fought anyway. That's plenty of conflict for the story, at least in my opinion. But Lumpkin chose to introduce a specific, named Chinese character in the villain role, and the characterization is... well.

After he's humiliated early in the story by the protagonists, Li Xiao develops an obsession with killing them, for his honor, and then pursues them throughout the book in ways that are sometimes destructive to the Chinese war efforts. It's badly unrealistic compared to the tone of realism taken by the rest of the story. Even if someone did become this deranged, it's bizarre that a professional military (and China's forces are otherwise portrayed as fairly professional) would allow this. Li reads like pure caricature, and despite being moderately competent apart from his inexplicable (but constantly repeated) obsession, is cast in a mustache-twirling role of personal villainy. This is weirdly out of place in the novel, and started irritating me enough that it took me out of the story.

Through Struggle, The Stars is the first book of a series, and does not resolve much by the end of the novel. That plus its length makes the story somewhat unsatisfying. I liked Neil's development, and liked him as a character, and those who like the details of combat mixed with front-lines speculation about politics may enjoy this. But a badly-simplified, mustache-twirling villain and some extended, uninteresting bits mar the book enough that I probably won't seek out the sequels.

Followed by The Desert of Stars.

Rating: 6 out of 10

31 Aug 2015 3:54am GMT

30 Aug 2015

Planet Debian

Andrew Cater: Rescuing a Windows 10 failed install using GParted Live on CD

Windows 10 is here, for better or worse. As the family sysadmin, I've been tasked to update the Windows machines: ultimately, failure modes are not well documented and I needed Free software and several hours to recover a vital machine.

The "free upgrade for users of prior Windows versions" is a limited time offer for a year from launch. Microsoft do not offer licence keys for the upgrade: once a machine has updated to Windows 10 and authenticated on the 'Net, then a machine can be re-installed and will be regarded by Microsoft as pre-authorised. Users don't get the key at any point.

Although Microsoft have pushed the fact that this can be done through Windows Update, there is the option to use Microsoft's Media Creation tool to do the upgrade directly on the Windows machine concerned. This would be necessary to get the machine to upgrade and register before a full clean install of Windows 10 from media.

This Media Creation Tool failed for me on three machines with "Unable to access System reserved partition".

All the machines have SSDs from various makers: a colleague suggested that resizing the partition might enable the upgrade to continue. Of the machines that failed, all were running Windows 7 - two were running using BIOS, one using UEFI boot on a machine with no Legacy/CSM mode.

Using the GParted Live .iso - itself based on Debian Live - allowed me to resize the System partition from 100MiB to 200MiB by moving the Windows partition, but Windows then became unbootable.

In two cases, I was able to boot from DVD Windows installation media and make Windows bootable again, at which point the Microsoft Media Creation Tool could be used to install Windows 10.

The UEFI boot machine proved more problematic: I had to create a Windows 7 System Repair disk and repair Windows booting before Windows 10 could proceed.

My Windows-using colleague had used only Windows-based recovery disks: using Linux tools allowed me to repair Windows installations I couldn't boot.

30 Aug 2015 8:09pm GMT

Antonio Terceiro: DebConf15, testing debian packages, and packaging the free software web

This is my August update, and by far the coolest thing in it is DebConf.


I don't get tired of saying it is the best conference I ever attended. First, it's a mix of meeting both new people and old friends, having the chance to chat with people whose work you admire but never had a chance to meet before. Second, it's always quality time: an informal environment, interesting and constructive presentations and discussions.

This year the venue was again very nice. Another thing that was very nice was having so many kids and families. This was no coincidence, since this was the first DebConf in which there was organized childcare. As the community gets older, this is a very good way of keeping those who start having kids from being alienated from the community. Of course, not being a parent yet, I have no idea how hard it actually is to bring small kids to a conference like DebConf. ;-)

I presented two talks:

There was also the now traditional Ruby BoF, where we discussed the state and future of the Ruby ecosystem in Debian; and an impromptu Ruby packaging workshop where we introduced the basics of packaging in general, and Ruby packaging specifically.

Besides shak, I was able to hack on a few cool things during DebConf:

Miscellaneous updates

30 Aug 2015 7:12pm GMT

DebConf team: DebConf15: Farewell, and thanks for all the Fisch (Posted by DebConf Team)

A week ago, we concluded our biggest DebConf ever! It was a huge success.

Handwritten feedback note

We are overwhelmed by the positive feedback, for which we're very grateful. We want to thank you all for participating in the talks; speakers and audience alike, in person or live over the global Internet - it wouldn't be the fantastic DebConf experience without you!

Many of our events were recorded and streamed live, and are now available for viewing, as are the slides and photos.

To share a sense of the scale of what all of us accomplished together, we've compiled a few statistics:

Our very own designer Valessio Brito made a lovely video of impressions and images of the conference.


We're collecting impressions from attendees as well as links to press articles, including Linux Weekly News coverage of specific sessions of DebConf. If you find something not yet included, please help us by adding links to the wiki.

DebConf15 group photo (by Aigars Mahinovs)

We tried a few new ideas this year, including a larger number of invited and featured speakers than ever before.

On the Open Weekend, some of our sponsors presented their career opportunities at our job fair, which was very well attended.

And a diverse selection of entertainment options provided the necessary breaks and ample opportunity for socialising.

On the last Friday, the Oscar-winning documentary "Citizenfour" was screened, with some introductory remarks by Jacob Appelbaum and a remote address by its director, Laura Poitras, and followed by a long Q&A session by Jacob.

DebConf15 was also the first DebConf with organised childcare (including a Teckids workshop for kids of age 8-16), which our DPL Neil McGovern standardised for the future: "it's a thing now," he said.

The participants used the week before the conference for intensive work, sprints and workshops, and throughout the main conference, significant progress was made on Debian and Free Software. Possibly the most visible was the endeavour to provide reproducible builds, but the planning of the next stable release "stretch" received no less attention. Groups like the Perl team, the diversity outreach programme and even DebConf organisation spent much time together discussing next steps and goals, and hundreds of commits were made to the archive, as well as bugs closed.

"DebConf15 was an amazing conference; it brought together hundreds of people, some oldtimers as well as plenty of new contributors, and we all had a great time, learning and collaborating with each other," says Margarita Manterola of the organiser team, and continues: "The whole team worked really hard, and we are all very satisfied with the outcome." Another organiser, Martin Krafft, adds: "We mainly provided the infrastructure and space. A lot of what happened during the two weeks was thanks to our attendees. And that's what makes DebConf be DebConf."

Photo of hostel staff wearing DebConf15 staff t-shirts (by Martin Krafft)

Our organisation was greatly supported by the staff of the conference venue, the Jugendherberge Heidelberg International, who didn't take very long to identify with our diverse group, and who left no wishes untried. The venue itself was wonderfully spacious and never seemed too full as people spread naturally across the various conference rooms, the many open areas, the beergarden, the outside hacklabs and the lawn.

The network installed specifically for our conference in collaboration with the nearby university, the neighbouring zoo, and the youth hostel provided us with a 1 Gbps upstream link, which we managed to almost saturate. The connection will stay in place, leaving the youth hostel as one with possibly the fastest Internet connection in the state.

And the kitchen catered high-quality food to all attendees and their special requirements. Regional beer and wine, as well as local specialities, were provided at the bistro.

DebConf exists to bring people together, which includes paying for travel, food and accommodation for people who could not otherwise attend. We would never have been able to achieve what we did without the support of our generous sponsors, especially our Platinum Sponsor Hewlett-Packard. Thank you very much.

See you next year in Cape Town, South Africa!

The DebConf16 logo with white background

30 Aug 2015 6:24pm GMT

Philipp Kern: Automating the 3270 part of a Debian System z install

If you try to install Debian on System z within z/VM you might be annoyed at the various prompts it shows before it lets you access the network console via SSH. We can do better. From within CMS copy the default EXEC and default PARMFILE:


Now edit DEBAUTO EXEC A and replace the DEBIAN in 'PUNCH PARMFILE DEBIAN * (NOHEADER' with DEBAUTO. This will load the alternate kernel parameters file into the card reader, while still loading the original kernel and initrd files.

Replace PARMFILE DEBAUTO A's content with this (note the 80 character column limit):

ro locale=C
s390-netdevice/choose_networktype=qeth s390-netdevice/qeth/layer2=true
netcfg/get_ipaddress=<IPADDR> netcfg/get_netmask=255.255.255.0
netcfg/get_gateway=<GW> netcfg/get_nameservers=<FIRST-DNS>
netcfg/confirm_static=true netcfg/get_hostname=debian

Replace <IPADDR>, <GW>, and <FIRST-DNS> to suit your local network config. You might also need to change the netmask, which I left in for clarity about the format. Adjust the device address of your OSA network card. If it's in layer 3 mode (very likely) you should set layer2=false. Note that mixed case matters, hence you will want to SET CASE MIXED in xedit.

Then there are the two URLs that need to be changed. The authorized_keys_url file contains your SSH public key and is fetched unencrypted and unauthenticated, so be careful what networks you traverse with your request (HTTPS is not supported by debian-installer in Debian).

preseed/url is needed for installation parameters that do not fit into the parameters file - there is an upper character limit that's about two lines longer than my example. This is why this example only contains the bare minimum for the network part; everything else goes into the preseeding file. The file can optionally be protected with an MD5 checksum in preseed/url/checksum.
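Generating that checksum is just md5sum over the preseed file; a minimal sketch (file name and contents here are placeholders, not the real preseed file):

```shell
# Compute the value to pass as preseed/url/checksum: the md5sum of the
# preseed file served at preseed/url.
cfg=$(mktemp)
printf 'd-i debian-installer/locale string en_US\n' > "$cfg"
md5sum "$cfg" | awk '{ print $1 }'
```

The printed 32-character hex digest is what goes into the preseed/url/checksum parameter.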

Both URLs need to be very short. I thought that there was a way to specify a line continuation, but in my tests I was unable to produce one. Hence it needs to fit on one line, including the key. You might want to use an IPv4 address as the hostname.

To skip the initial boilerplate prompts and to skip straight to the user and disk setup you can use this as preseed.cfg:

d-i debian-installer/locale string en_US
d-i debian-installer/country string US
d-i debian-installer/language string en
d-i time/zone string US/Eastern
d-i mirror/country string manual
d-i mirror/http/mirror string httpredir.debian.org
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string

I'm relatively certain that the DASD disk setup part cannot be automated yet. But the other bits of the installation should be preseedable just like on non-mainframe hardware.

30 Aug 2015 5:36pm GMT

Dirk Eddelbuettel: RcppGSL 0.3.0

A new version of RcppGSL just arrived on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using our Rcpp package.

Following on the heels of an update last month we updated the package (and its vignette) further. One of the key additions concerns memory management: given that our proxy classes around the GSL vector and matrix types are real C++ objects, we can monitor their scope and automagically call free() on them rather than insisting on the user doing it. This renders code much simpler, as illustrated below. Dan Dillon added const correctness over a series of pull requests, which allows us to write more standard (and simply nicer) function interfaces. Lastly, a few new typedef declarations further simplify the use of the (most common) double and int vectors and matrices.

Maybe a code example will help. RcppGSL contains a full and complete example package illustrating how to write a package using the RcppGSL facilities. It contains an example of computing a column norm -- which we blogged about before when announcing a much earlier version. In its full glory, it looks like this:

#include <RcppGSL.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

extern "C" SEXP colNorm(SEXP sM) {

  try {

        RcppGSL::matrix<double> M = sM;     // create gsl data structures from SEXP
        int k = M.ncol();
        Rcpp::NumericVector n(k);           // to store results

        for (int j = 0; j < k; j++) {
            RcppGSL::vector_view<double> colview = gsl_matrix_column (M, j);
            n[j] = gsl_blas_dnrm2(colview);
        M.free() ;
        return n;                           // return vector

  } catch( std::exception &ex ) {
        forward_exception_to_r( ex );

  } catch(...) {
        ::Rf_error( "c++ exception (unknown reason)" );
  return R_NilValue; // -Wall

We manually translate the SEXP coming from R, manually cover the try and catch exception handling, manually free the memory etc pp.

Well in the current version, the example is written as follows:

#include <RcppGSL.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_blas.h>

// [[Rcpp::export]]
Rcpp::NumericVector colNorm(const RcppGSL::Matrix & G) {
    int k = G.ncol();
    Rcpp::NumericVector n(k);           // to store results
    for (int j = 0; j < k; j++) {
        RcppGSL::VectorView colview = gsl_matrix_const_column (G, j);
        n[j] = gsl_blas_dnrm2(colview);
    }
    return n;                           // return vector
}

This takes full advantage of Rcpp Attributes automagically creating the interface and exception handler (as per the previous release), adds a const & interface, does away with the tedious and error-prone free(), and uses the shorter typedef forms RcppGSL::Matrix and RcppGSL::VectorView for double variables. Now the function is short and concise and hence easier to read and maintain. The package vignette has more details on using RcppGSL.

The NEWS file entries follows below:

Changes in version 0.3.0 (2015-08-30)

  • The RcppGSL matrix and vector class now keep track of object allocation and can therefore automatically free allocated object in the destructor. Explicit x.free() use is still supported.

  • The matrix and vector classes now support const reference semantics in the interfaces (thanks to PR #7 by Dan Dillon)

  • The matrix_view and vector_view classes are reorganized to better support const arguments (thanks to PR #8 and #9 by Dan Dillon)

  • Shorthand forms such as RcppGSL::Matrix have been added for double and int vectors and matrices including views.

  • Examples such as fastLm can now be written in a much cleaner and shorter way as GSL objects can appear in the function signature and without requiring explicit .free() calls at the end.

  • The included examples, as well as the introductory vignette, have been updated accordingly.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 Aug 2015 3:05pm GMT

Sven Hoexter: 1960 SubjectAlternativeNames on one certificate

tl;dr: You can add 1960+ SubjectAlternativeNames to one certificate, and at least Firefox and Chrome work fine with that. Internet Explorer failed, but I did not investigate why.

So why would you want to have close to 2K SANs on one certificate? While we're working on adopting a more dynamic development workflow at my workplace we're currently bound to a central development system. From there we serve a classic virtual hosting setup with "projectname.username.devel.ourdomain.example" mapped on "/web/username/projectname/". That is 100% dynamic with wildcard DNS entries and you can just add a new project to your folder and use it directly. All of that is served from just a single VirtualHost.

Now our developers have started to go through all our active projects to make them fit for serving via HTTPS. While we can verify the proper usage of HTTPS on our staging system, where we have validating certificates, that's not the way you'd like to work. So someone approached me to look into a solution for our development system. Obvious choices like wildcard certificates do not work here because we have two dynamic components in the FQDN. We'd have to buy a wildcard certificate for every developer, and we'd have to create a VirtualHost entry for every new developer. That's expensive and we don't want all that additional work. So I started to search for documented limits on the number of SANs you can have on a certificate. The good news: there are none. The RFC does not define a limit. So much for the theory. ;)

Following Ivan's excellent documentation I set up an internal CA, and an ugly "find ... | sed ... | tr ..." one-liner later I had a properly formatted openssl config file to generate a CSR with all 1960 "projectname.username..." SAN combinations found on the development system. Two openssl invocations (CSR generation and signing) later I had a signed certificate with 1960 SANs on it. I imported the internal CA I created in Firefox and Chrome, and to my surprise it worked.
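The post doesn't show the one-liner itself, but the idea can be sketched like this (the directory names are invented for the demo, and I've used awk where the post used sed/tr; the output is shaped for an [alt_names] section of an openssl req config):

```shell
# Build a demo /web-style tree (username/projectname), then emit numbered
# SAN entries for the [alt_names] section of an openssl config.
webroot=$(mktemp -d)
mkdir -p "$webroot/alice/shop" "$webroot/alice/blog" "$webroot/bob/wiki"
find "$webroot" -mindepth 2 -maxdepth 2 -type d | sort \
  | awk -F/ '{ printf "DNS.%d = %s.%s.devel.ourdomain.example\n", NR, $NF, $(NF-1) }'
```

Pointing the find at the real /web tree instead of the demo directory yields the full list of projectname.username combinations.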

Noteworthy: To sign with "openssl ca" without interactive prompts you've to use the "-batch" option.

I'm thinking about regenerating the certificate every morning, so our developers just have to create a new project directory and within 24h serving via HTTPS would be enabled. The only thing I'm currently pondering is how to properly run the CA in a corporate Windows world. We could of course ask the Windows guys to include it for everyone, but then we'd have to really invest time in properly running the CA. I'd like to avoid that hassle. So I guess we'll just stick to providing the CA for those developers who need it. This all-or-nothing model is a constant PITA, and you really do not want to get owned via your own badly managed CA. :(

Regarding Internet Explorer: it jumped in my face with a strange error message that recommended enabling TLS 1.0, 1.1 and 1.2 in the options menu. Of course those are already enabled. I'll try to take a look at the handshake next week, but I bet we'll have to accept for the moment that IE will not work with so many SANs. It would be interesting to try out Windows 10 with Spartan, but I'm not interested enough in Windows to invest more time on that front. Other TLS implementations, like Java's, would also be interesting to test.

30 Aug 2015 1:58pm GMT

Ben Hutchings: Securing www.decadent.org.uk

Sven Hoexter replied to my previous entry to say that WoSign also provides free DV TLS certificates. What's more, they allow up to 10 alternate names, unlike StartSSL. So I've gone ahead with a new certificate for www.decadent.org.uk and other virtual servers including git.decadent.org.uk.

WoSign sensibly mandates a key length of 2048 bits, and together with the default TLS configuration for Apache in Debian 'jessie' this resulted in an A- rating from the Qualys SSL Server Test.

I then disabled non-PFS and otherwise weak cipher suites in /etc/apache2/mods-enabled/ssl.conf:

SSLCipherSuite HIGH:!aNULL:!kRSA:!3DES

This resulted in an A rating. Finally, I added redirection of all plaintext HTTP connections to HTTPS (which is easier than working out how to make the virtual server work both with and without TLS, anyway). I enabled HSTS for each VirtualHost:

Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

This resulted in an A+ rating. These web sites will now be inaccessible to Java 6 and IE on Windows XP, but that's no great loss (1 in 1500 hits over the past few weeks).
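Incidentally, a cipher string like the one used above can be sanity-checked before reloading Apache by expanding it with openssl (a quick sketch; the exact list depends on your OpenSSL version):

```shell
# Expand the cipher string Apache will use and show the first few suites;
# the !3DES exclusion means no DES-CBC3 suites should appear anywhere.
openssl ciphers 'HIGH:!aNULL:!kRSA:!3DES' | tr ':' '\n' | head -n 5
```

If the expansion errors out or comes back empty, the string would have left the server with no usable cipher suites.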

30 Aug 2015 11:59am GMT

Russ Allbery: Review: Pound Foolish

Review: Pound Foolish, by Helaine Olen

Publisher: Penguin
Copyright: 2012, 2013
Printing: 2013
ISBN: 1-59184-679-X
Format: Trade paperback
Pages: 241

For at least the last six years, it's not been surprising news that the relationship between the average person and the US financial system is tense at best, and downright exploitative at worst. The collapse of the housing bubble revealed a morass of predatory lending practices, spectacularly poor (or spectacularly cynical) investment decisions, and out-of-control personal debt coupled with erosion of bankruptcy law. Despite this, there's always a second story in all discussions of the finances of the US population: the problem isn't with financial structures and products, but with us. We're too stupid, or naive, or emotional, or uninformed, or greedy, or unprincipled, or impatient. Finances are complicated, yes, but that just means we have to be more thoughtful. All of these complex financial products could have been used properly.

Helaine Olen's Pound Foolish is a systematic, biting, well-researched, and pointed counter to that second story. The short summary of this book is that it's not us. We're being set up for failure, and there is a large and lucrative industry profiting off of that failure. And many (although not all) people in that industry know exactly what they're doing.

Pound Foolish is one of my favorite forms of non-fiction: long-form journalism. This is an investigative essay into the personal finance and investment industry, developed to book length. Readers of Michael Lewis will feel right at home. Olen doesn't have Lewis's skill with characterization and biography, but she makes up for it in big-picture clarity. She takes a wealth of individual facts about who is involved in personal finance, how they make money, what they recommend, and who profits, and develops it into a clear and coherent overview.

If you have paid any attention to US financial issues, you'll know some of this already. Frontline has done a great job of covering administrative fees in mutual funds. Lots of people have warned about annuities. The spectacular collapse of the home mortgage market is old news now. But Olen does a great job of finding the connections between these elements and adding some less familiar ones, including an insightful and damning analysis of financial literacy campaigns and the widespread belief that these problems are caused by lack of consumer understanding. I've read and watched a lot of related material, including several full-book treatments of the mortgage crisis, so I think it's telling that I never got bored in the middle of Olen's treatment.

I find the deep US belief in the power of personal improvement fascinating. It feels like one of the defining characteristics of US culture, for both good and for ill. We're very good at writing compelling narratives of personal improvement, and sometimes act on them. We believe that everyone can and should improve themselves. But that comes coupled to a dislike and distrust of expertise, even when it is legitimate and earned (Hofstadter's Anti-Intellectualism in American Life develops this idea at length). And I believe we significantly overestimate the ability of individuals to act despite systems that are stacked against us, and significantly overestimate our responsibility for the inevitable results.

This was the main message I took from Pound Foolish: we desperately want to believe in the myth of personal control. We want to believe that our financial troubles are something we can fix through personal education, more will power, better decisions, or just the right investment. And so, we turn to gurus like Suze Orman and buy their mix of muddled financial advice and "tough love" that completely ignores broader social factors. We're easy marks for psychologically-trained investment sellers who mix fear, pressure, and a fantasy of inside knowledge and personal control. We're fooled by a narrative of empowerment and stand by while a working retirement system (guaranteed benefit pensions) is undermined and destroyed in favor of much riskier investment schemes... riskier for us, at least, but loaded with guaranteed profits for the people who "advise" us. And we cling to financial literacy campaigns that are funded by exactly the same credit card companies who fight tooth and nail against regulations that would require they provide simple, comprehensible descriptions of loan terms. One wonders if they support them precisely because they know they don't work.

Olen mentions, in passing, the Stanford marshmallow experiment, which is often used as a foundation for arguments about personal responsibility for financial outcomes, but she doesn't do a detailed critique. I wish she had, since I think it's such a good example of the theme of this book.

The Stanford marshmallow experiment was a psychological experiment from the late 1960s and early 1970s in delayed gratification. Children were put in a room in front of some treat (marshmallows, cookies, pretzels) and told that they could eat it if they wished. But if they didn't eat the treat before the monitor came back, they would get two of the treat instead. Long-term follow-up studies found that the children who refrained from eating the treat and got the reward had better life outcomes along multiple metrics: SAT scores, educational attainment, and others.

On the surface, this seems to support everything US culture loves to believe about the power of self-control, self-improvement, and the Protestant work ethic. People who can delay gratification and save for a future reward do better in the world. (The darker interpretation, also common since the experiment was performed on children, is that the ability to delay gratification has a genetic component, and some people are just doomed to make poor decisions due to their inability to exercise self-control.)

However, I can call the traditional interpretation into question with one simple question that the experimenters appeared not to ask: under what circumstances would taking the treat immediately be the rational and best choice?

One answer, of course, is when one does not trust the adult to produce the promised reward. If the adult might come back, take the treat away, and not give any treat, it's to the child's advantage to take the treat immediately. Even if the adult left the treat but wouldn't actually double it, it's to the child's advantage to take the treat immediately. The traditional interpretation assumes the child trusts the adults performing the experiment - a logical assumption for those of us whose childhood experience was that adults could generally be trusted and that promised rewards would materialize. If the child instead came from a chaotic family where adults weren't reliable, or just one where frequent unexpected money problems meant that promised treats often didn't materialize, the most logical assumption may be much different. One has to ask if such a background may have more to do with the measured long-term life outcomes than the child's unwillingness to trust in a future reward.

And this is one of the major themes of this book. Problems the personal finance industry attributes to our personal shortcomings (which they're happy to take our money to remedy) are often systematic, or at least largely outside of our control. We may already be making the most logical choices given our personal situations. We're in worse financial shape because we're making less money. Our retirements are in danger because our retirement systems were dismantled and replaced with risky and expensive alternatives. And where problems are attributed to our poor choices, one can find entire industries that focus on undermining our ability to make good choices: scaring us, confusing us, hiding vital information, and exploiting known weaknesses of human psychology to route our money to them.

These are not problems that can be solved by watching Suze Orman yell at us to stop buying things. These are systematic social problems that demand a sweeping discussion about regulation, automatic savings systems, and social insurance programs to spread risk and minimize the weaknesses of human psychology. Exactly the kind of discussion that the personal finance industry doesn't want us to have.

Those who are well-read in these topics probably won't find a lot new here. Those who aren't in the US will shake their heads at some of the ways that the US fails its citizens, although many of Olen's points apply even to countries with stronger social safety nets. But if you're interested in solid long-form journalism on this topic, backed by lots of data on just how badly a focus on personal accountability is working for us, I recommend Pound Foolish.

Rating: 8 out of 10

30 Aug 2015 1:04am GMT

29 Aug 2015


Tassia Camoes Araujo: Report from the MicroDebconf Brasília 2015

This was an event organized due to a coincidental meeting of a few DDs in the city of Brasília on May 31st, 2015. What a good thing when we can mix vacations, friends and Debian ;-)

Group photo

We called it Micro due to its short duration and planning phase, to be fair to other Mini DebConfs, which take a lot more organization. We also ended up having a translation sprint inside the event, which attracted contributors from other cities.

Our main goal was to boost the local community and bring new contributors to Debian. And we definitely made it!

The meeting happened at the University of Brasília (UnB Gama). It started with a short presentation where each DD and Debian contributor presented their involvement with Debian and their plans for the hacking session. This was an invitation for new contributors to choose the activities they wanted to engage in, taking advantage of being guided by more experienced people.

Then we moved to smaller rooms where participants were split in different groups to work on each track: packaging, translation and community/contribution. We all came together later for the keysigning party.

Some of the highlights of the day:

For more details of what happened, you can read our full report.

The MicroDebconf wouldn't have been possible without the support of prof. Paulo Meirelles from UnB Gama and the whole LAPPIS team, who handled the local organization and student mobilization. We also need to thank the Debian donors who covered the travel costs of one of our contributors.

Last but not least, thanks to our participants and the larger Brazilian community, which is setting a good example of teamwork. A similar meeting happened in July during the Free Software International Forum (FISL), and another one is already planned for October as part of Latinoware.

I hope I can join those folks again in the near future!

29 Aug 2015 11:22pm GMT

Francois Marier: Letting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop, set up a pagekite frontend on my Linode server, and run a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.
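For a one-off connection, the same options can also be passed directly on the ssh command line instead of editing ~/.ssh/config (a sketch using the hostnames and port from above):

```
ssh -o CheckHostIP=no \
    -o 'ProxyCommand=/bin/nc -X connect -x %h:10022 %h %p' \
    roc@pagekite.fmarier.org
```

ssh accepts any ssh_config directive via -o, so this is exactly equivalent to the config-file entry.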

Making settings permanent

I was quite happy setting things up temporarily on the command-line, but it's also possible to make these settings permanent so that both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.

29 Aug 2015 9:20pm GMT

Zlatan Todorić: Interviews with FLOSS developers: Elena Grandi

One of the fresh additions to the Debian family, and thus the wider FLOSS family, is Elena Grandi. She hails from the realms of Valhalla and is leaving her footprint on the community. A hacker mindset, a Free Software lover and a 3D-printing maker, Elena is deeply dedicated to making the world a freer and better place for all. She pushes her limits on a personal level with much care and love, and the FLOSS community will benefit from her work and way of life in the future. So what does the Viking lady have to say about FLOSS? Meet Elena "of Valhalla" Grandi.

Read more…

29 Aug 2015 2:23pm GMT

Norbert Preining: Kobo Glo and GloHD firmware 3.17.3 mega update (KSM, nickel patch, ssh, fonts)

I have updated my mega-update for Kobo to the latest firmware, 3.17.3. Additionally, I have now built (and tested) updates for both Mark4 hardware (Glo) and Mark6 hardware (GloHD). Please see the previous post for details on what is included.

The only important difference is the update to KSM (Kobo Start Menu) version 8, which is still in its testing phase (thus a warning: the layout and setup of KSM8 might change before release). This is an important update, as all versions up to V7 could create database corruption (which I have seen several times!) when used with Calibre and the Kepub driver.


Other things included are, as usual: the Metazoa firmware patches (for the Glo (non-HD) version I have activated the compact layout patch), koreader, pbchess, coolreader, the ssh part of kobohack, custom dictionaries support, and some side-loaded fonts. Again, for details please see the previous post.

You can check for database corruption by selecting tools - nickel diverse.msh - db chk integrity.sh in the Kobo Start Menu. If it returns ok, then all is fine. Otherwise you might see problems.

I solved the corruption of my database by first dumping it to an SQL file and then reloading that file into a new database. Assuming that you have the file KoboReader.sqlite, what I did is:

$ sqlite3  KoboReader.sqlite 
SQLite version 2015-07-29 20:00:57
Enter ".help" for usage hints.
sqlite> PRAGMA integrity_check;
*** in database main ***
Page 5237: btreeInitPage() returns error code 11
On tree page 889 cell 1: 2nd reference to page 5237
Page 4913 is never used
Page 5009 is never used
Error: database disk image is malformed
sqlite> .output foo.sql
sqlite> .dump
sqlite> .quit
$ sqlite3 KoboReader.sqlite-NEW
SQLite version 2015-07-29 20:00:57
Enter ".help" for usage hints.
sqlite> .read foo.sql
sqlite> .quit

The first part of the transcript shows that the database is corrupted. Fortunately dumping it succeeded, as did reloading the dump into a new database. Finally, I replaced (after making a backup) the SQLite file on the device with the new database.
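The same dump-and-reload recovery can be scripted with Python's built-in sqlite3 module (a sketch, not from the original post; iterdump() emits the same SQL statements as the sqlite3 shell's .dump command):

```python
import sqlite3

def recover_db(path):
    """Rebuild a possibly-corrupted SQLite database by dumping it to SQL
    and reloading the dump into a fresh file, mirroring the manual
    .output/.dump/.read steps above."""
    new_path = path + "-NEW"
    src = sqlite3.connect(path)
    dst = sqlite3.connect(new_path)
    # Replay the full SQL dump into the new database
    dst.executescript("\n".join(src.iterdump()))
    src.close()
    dst.close()
    # Verify the rebuilt copy before putting it back on the device
    check = sqlite3.connect(new_path)
    ok = check.execute("PRAGMA integrity_check;").fetchone()[0]
    check.close()
    return new_path, ok
```

As in the manual procedure, this only works if the corruption is mild enough that the dump itself still succeeds.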


Mark6 - Kobo GloHD

firmware: Kobo 3.17.3 for GloHD

Mega update: Kobo-3.17.3-combined/Mark6/KoboRoot.tgz

Mark4 - Kobo Glo, Auro HD

firmware: Kobo 3.17.3 for Glo and AuroHD

Mega update: Kobo-3.17.3-combined/Mark4/KoboRoot.tgz


29 Aug 2015 12:04am GMT