13 Oct 2014


Ashwini Oruganti: Open Source Retreat: Afterword

As I wrap up my time with Stripe's Open Source Retreat, I have quite a few things to be thankful for. (Also, I just always wanted to give a thank-you speech.) So here goes:

I am grateful to Glyph Lefkowitz, Ying Li, and Christopher Armstrong for their long-suffering patience and support. Thanks also to Allen Short, who coaxed work out of me when I badly needed to feel capable of it.

I would like to thank Jessica McKellar, for being generally and specifically brilliant, for teaching me how to be a meticulous manager, and for helping me find my things when I lost them in a scary building.

I am grateful to Stripe, for giving me a place to be. And the monitor. To all the people there who showed me where the good coffee is hidden: thank you! Thanks too to those who went climbing with me, pretended to not hear me when I said I wanted to stop, and patiently helped me improve.

Thanks to friends near and far - for helping me deal with dead things, for teaching me all about alligators and crocodiles, and for helping me pronounce words correctly.

I am grateful to the authors of the good books I steal phrases from.

And lastly, I am grateful to my family, for spoiling me always.

13 Oct 2014 10:54pm GMT

07 Oct 2014


Glyph Lefkowitz: Thank You Lennart

I (along with about 6 million other people, according to the little statistics widget alongside it) just saw this rather heartbreaking post from Lennart Poettering.

I have not had much occasion to interact with Lennart personally, and (like many people) I have experienced bugs in the software he has written. I have been frustrated by those bugs. I may not have always been charitable in my descriptions of his software. I have, however, always assumed good faith on his part and been happy that he continues making valuable contributions to the free software ecosystem. I haven't felt the need to make that clear in the past because I thought it was understood.

Apparently, not only is it not understood, there is active hostility directed against his participation. There are constant, aggressive, bad-faith attempts to get him to stop working on the important problems he has taken on.

So, Lennart,

Thank you for your work on GNOME, for working on the problem of getting free software into the hands of normal people.

Thank you for furthering the cause of free software by creating PulseAudio, so that we can at least attempt to allow users to play sound on their Linux computers from multiple applications simultaneously without writing tedious configuration files.

Thank you for your work on SystemD, attempting to bring modern system-startup and service-management practices to the widest possible free software audience.

Thank you, Lennart, for putting up with all these vile personal attacks while you have done all of these things. I know you could have walked away; I'm sure that at times, you wanted to. Thank you for staying anyway and continuing to do the good work that you've done.

While the abuse is what prompted me to write this, I should emphasize that my appreciation is real. As a long-time user of Linux both on the desktop and in the cloud, I know that my life has been made materially better by Lennart's work.

This shouldn't be read as an endorsement of any specific technical position that Mr. Poettering holds. The point is that it doesn't have to be: this isn't about whether he's right or not, it's about whether we can have the discussion about whether he's right in a calm, civil, technical manner. In fact I don't agree with all of his technical choices, but I'm not going to opine about that here, because he's putting in the work and I'm not, and he's fighting many battles for software freedom (most of them against our so-called "allies") that I haven't been involved in.

The fact that he felt the need to write an article on the hideous state of the free software community is as sad as it is predictable. As a guest on a podcast recently, I praised the Linux community's technical achievements while critiquing its poisonous culture. Now I wonder if "critiquing" is strong enough; I wonder if I should have given any praise at all. We should all condemn this kind of bilious ad-hominem persecution.

Today I am saying "thank you" to Lennart because the toxicity in our communities is not a vague, impersonal force that we can discuss academically. It is directed at specific individuals, in an attempt to curtail their participation. We need to show those targeted people, regardless of how high-profile they are, or whether they're paid for their work or not, that they are not alone, that they have our gratitude. It is bullying, pure and simple, and we should not allow it to stand.

Software is made out of feelings. If we intend to have any more free software, we need to respect and honor those feelings, and, frankly speaking, stop trying to make the people who give us that software feel like shit.

07 Oct 2014 8:32am GMT

25 Sep 2014


Moshe Zadka: A Problem With High School Math Education

When I studied math in high school, I absolutely hated geometry. It pretty much made me give up on math entirely, until I renewed my fascination in university. Geometry and I have had a hate/disgust relationship ever since. I am putting it out there so if you want to blame me for being biased, hey, I just made your day easier.

But it made me question: why are we even teaching geometry? Unlike the ancient Egyptians, few of us will need to collect land taxes on rapidly changing land sizes. The usual answer is that geometry is used to teach the art of mathematical proofs. There are lemmas-upon-lemmas building up to interesting results. They show how mathematical rigor is achieved, and what it is useful for.

This is bullshit.

It is bullshit because there is an algorithm one can run to manufacture a proof for every true theorem, or a counter-example for every non-theorem. True, understanding how the algorithm works takes about two years of math undergrad studies. The hand-waving proof goes like this: first, show that pairs of real numbers (as points) and linear equations (as lines) obey the Euclidean axioms. Then show that the first-order theory of the real numbers is complete. Then show that it admits quantifier elimination. QED.

When we are teaching kids geometry, we are teaching them tricks to solve something, when an algorithm would do it as well. We are teaching them to be slightly worse than computers.

The proof above is actually incorrect: Euclidean geometry is missing an axiom for it to work. (The axiom is "there is no line and a triangle such that the line intersects the triangle in exactly one edge".) Euclid missed it, as have generations after him, because geometry is visually seductive: it is hard to verify proofs formally when the drawings look "correct". Teaching kids rigor through geometry is teaching them something not even Euclid achieved!

Is there an alternative? I think there is. The theory of finite sets is a beautiful little gem of mathematics. The axioms are the classic ZF axioms, with the infinity axiom inverted ("every one-to-one function from a set to itself is onto"). One can start building the theory the same as building the ZF theory: proving that ordinals exist, building up simple sets and the like - and then introduce the infinity axiom and its inversion, show a tiny taste of the infinity side of the theory, and then introduce the "finiteness axiom", and show how you can build the natural numbers on top of it.
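
For concreteness, here is one standard way to write that finiteness axiom as a formula (this rendering is mine, not the original post's; it is essentially the statement that every set is Dedekind-finite):

    \forall x \,\forall f \;\bigl( (f \colon x \to x \text{ is one-to-one}) \;\rightarrow\; f[x] = x \bigr)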

For the coup de grâce, finite set theory gives a more natural language to talk about Gödel's incompleteness theorem, showing that we cannot have an algorithm for deciding questions about finite sets - or the natural numbers.

Achieving rigor is much easier: one uses Venn diagrams as intuitions, but a formal derivation as proof. It's beautiful, it's nice, and it is the actual basis of all math.

25 Sep 2014 6:42pm GMT

22 Sep 2014


Twisted Matrix Laboratories: Twisted 14.0.1 & 14.0.2 Released

On behalf of Twisted Matrix Laboratories, I'm releasing Twisted 14.0.1, a security release for Twisted 14.0. It is strongly suggested that users of 14.0.0 upgrade to this release.

This patches a bug in Twisted Web's Agent, where BrowserLikePolicyForHTTPS would not honour the trust root given to it, and would use the system trust root instead. This would have broken, for example, attempts to pin the issuer for your HTTPS application when you only trust one issuer.

Note: on OS X, with the system OpenSSL, you still can't fully rely on this API for issuer pinning, due to modifications by Apple - please see https://hynek.me/articles/apple-openssl-verification-surprises/ for more details.
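
For readers who want to see what the now-honoured API looks like, here is a minimal sketch of issuer pinning with Agent. It is not taken from the release notes: "ca.pem" is a hypothetical path, and the sketch assumes a Certificate instance is acceptable as the trustRoot argument.

# Minimal sketch: pin HTTPS requests to the single issuer you trust.
from twisted.internet import reactor
from twisted.internet.ssl import Certificate
from twisted.web.client import Agent, BrowserLikePolicyForHTTPS

with open("ca.pem") as f:            # hypothetical path to your one trusted CA
    pinnedIssuer = Certificate.loadPEM(f.read())

# With 14.0.1 the policy verifies against only this trust root; in 14.0.0 the
# given trust root was silently ignored in favour of the system trust root.
policy = BrowserLikePolicyForHTTPS(trustRoot=pinnedIssuer)
agent = Agent(reactor, contextFactory=policy)
d = agent.request(b"GET", b"https://example.com/")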

You can find the downloads at https://pypi.python.org/pypi/Twisted (or alternatively http://twistedmatrix.com/trac/wiki/Downloads). The NEWS file is also available at https://twistedmatrix.com/trac/browser/tags/releases/twisted-14.0.1/NEWS?format=raw.

Thanks to Alex Gaynor for discovering the bug, to Glyph & Alex for developing a patch, and to David Reid for reviewing it.

14.0.2 is a bugfix release for distributors that corrects a failing test in the 14.0.1 patch.

Twisted Regards,
HawkOwl

Edit, 22nd Sep 2014: CVE-2014-7143 has been assigned for this issue.

22 Sep 2014 1:49pm GMT

20 Sep 2014


Glyph Lefkowitz: Ungineering

I am not an engineer.

I am a computer programmer. I am a software developer. I am a software author. I am a coder.

I program computers. I develop software. I write software. I code.

I'd prefer that you not refer to me as an engineer, but this is not an essay about how I'm going to heap scorn upon you if you do so. Sometimes, I myself slip and use the word "engineering" to refer to this activity that I perform. Sometimes I use the word "engineer" to refer to myself or my peers. It is, sadly, fairly conventional to refer to us as "engineers", and avoiding this term in a context where it's what everyone else uses is a constant challenge.

Nevertheless, I do not "engineer" software. Neither do you, because nobody has ever known enough about the process of creating software to "engineer" it.

According to dictionary.com, "engineering" is:

"the art or science of making practical application of the knowledge of pure sciences, as physics or chemistry, as in the construction of engines, bridges, buildings, mines, ships, and chemical plants."

When writing software, we typically do not apply "knowledge of pure sciences". Very little science is germane to the practical creation of software, and the places where it is relevant (firmware for hard disks, for example, or analytics for physical sensors) are highly rarefied. The one thing we might sometimes use that is called "science" - computer science - is a subdiscipline of mathematics, and not a science at all. Even computer science, though, is hardly ever brought to bear; if you're a working programmer, what was the last project where you had to submit formal algorithmic analysis for any component of your system?

Wikipedia has a heaping helping of criticism of the terminology behind software engineering, but rather than focusing on that, let's see where Wikipedia tells us software engineering comes from in the first place:

The discipline of software engineering was created to address poor quality of software, get projects exceeding time and budget under control, and ensure that software is built systematically, rigorously, measurably, on time, on budget, and within specification. Engineering already addresses all these issues, hence the same principles used in engineering can be applied to software.

Most software projects fail; as of 2009, 44% are late, over budget, or out of specification, and an additional 24% are cancelled entirely. Only a third of projects succeed according to those criteria of being under budget, within specification, and complete.

What would that look like if another engineering discipline had that sort of hit rate? Consider civil engineering. Would you want to live in a city where almost a quarter of all the buildings were simply abandoned half-constructed, or fell down during construction? Where almost half of the buildings were missing floors, had rents in the millions of dollars, or both?

My point is not that the software industry is awful. It certainly can be, at times, but it's not nearly as grim as the metaphor of civil engineering might suggest. Consider this: despite the statistics above, is using a computer today really like wandering through a crumbling city where a collapsing building might kill you at any moment? No! The social and economic costs of these "failures" are far lower than most process consultants would have you believe. In fact, the cause of many such "failures" is a clumsy, ham-fisted attempt to apply engineering-style budgetary and schedule constraints to a process that looks nothing whatsoever like engineering. I have to use scare quotes around "failure" because many of these projects classified as failed have actually delivered significant value. If the initial specification for a project is overambitious due to lack of information about the difficulty of the tasks involved - an extremely common problem at the beginning of a software project - that would still be a failure according to the metric of "within specification", but it's a problem with the specification and not the software.

Certain missteps notwithstanding, most of the progress in software development process improvement in the last couple of decades has been in acknowledging that it can't really be planned very far in advance. Software vendors now have to constantly present works in progress to their customers, because the longer they go without doing so, the greater the risk that the software will not meet the somewhat arbitrary goals for being "finished", and may never be presented to customers at all.

The idea that we should not call ourselves "engineers" is not a new one. It is a minority view, but I'm in good company in that minority. Edsger W. Dijkstra points out that software presents what he calls "radical novelty" - it is too different from all the other types of things that have come before to try to construct it by analogy to those things.

One of the ways in which writing software is different from engineering is the matter of raw materials. Skyscrapers and bridges are made of steel and concrete, but software is made out of feelings. Physical construction projects can be made predictable because the part where creative people are creating the designs - the part of that process most analogous to software - is a small fraction of the time required to create the artifact itself.

Therefore, in order to create software you have to have an "engineering" process that puts its focus primarily upon the psychological issue of making your raw materials - the brains inside the human beings you have acquired for the purpose of software manufacturing - happy, so that they may be efficiently utilized. This is not a common feature of other engineering disciplines.

The process of managing the author's feelings is a lot more like what an editor does when "constructing" a novel than what a foreperson does when constructing a bridge. In my mind, that is what we should be studying, and modeling, when trying to construct large and complex software systems.

Consequently, not only am I not an engineer, I do not aspire to be an engineer, either. I do not think that it is worthwhile to aspire to the standards of another entirely disparate profession.

This doesn't mean we shouldn't measure things, or have quality standards, or try to agree on best practices. We should, by all means, have these things, but we authors of software should construct them in ways that make sense for the specific details of the software development process.

While we are on the subject of things that we are not, I'm also not a maker. I don't make things. We don't talk about "building" novels, or "constructing" music, nor should we talk about "building" and "assembling" software. I like software specifically because of all the ways in which it is not like "making" stuff. Making stuff is messy, and hard, and involves making lots of mistakes.

I love how software is ethereal, and mistakes are cheap and reversible, and I don't have any desire to make it more physical and permanent. When I hear other developers use this language to talk about software, it makes me think that they envy something about physical stuff, and wish that they were doing some kind of construction or factory-design project instead of making an application.

The way we use language affects the way we think. When we use terms like "engineer" and "builder" to describe ourselves as creators, developers, maintainers, and writers of software, we are defining our role by analogy and in reference to other, dissimilar fields.

Right now, I think I prefer the term "developer", since the verb develop captures both the incremental creation and ongoing maintenance of software, which is so much a part of any long-term work in the field. The only disadvantage of this term seems to be that people occasionally think I do something with apartment buildings, so I am careful to always put the word "software" first.

If you work on software, whichever particular phrasing you prefer, pick one that really calls to mind what software means to you, and don't get stuck in a tedious metaphor about building bridges or cars or factories or whatever.

To paraphrase a wise man:

I am developer, and so can you.

20 Sep 2014 5:00am GMT

17 Sep 2014


Glyph Lefkowitz: The Most Important Thing You Will Read On This Blog All Year

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

I have two PGP keys.

One, 16F13480, is signed by many people in the open source community.  It is a
4096-bit RSA key.

The other, 0FBC4A07, is superficially worse.  It doesn't have any signatures on
it.  It is only a 3072-bit RSA key.

However, I would prefer that you all use 0FBC4A07.

16F13480 lives encrypted on disk, and occasionally resident in memory on my
personal laptop.  I have had no compromises that I'm aware of, so I'm not
revoking the key - I don't want to lose all the wonderful trust I build up.  In
order to avoid compromising it in the future, I would really prefer to avoid
decrypting it any more often than necessary.

By contrast, aside from backups which I have not yet once had occasion to
access, 0FBC4A07 exists only on an OpenPGP smart card, it requires a PIN, it is
never memory resident on a general purpose computer, and is only plugged in
when I'm actively Doing Important Crypto Stuff.  Its likelyhood of future
compromise is *extremely* low.

If said smart card had supported 4096-bit keys I probably would have just put
the old key on the more secure hardware and called it a day.  Sadly, that is
not the world we live in.

Here's what I'd like you to do, if you wish to interact with me via GnuPG:

    $ gpg --recv-keys 0FBC4A07 16F13480
    gpg: requesting key 0FBC4A07 from hkp server keys.gnupg.net
    gpg: requesting key 16F13480 from hkp server keys.gnupg.net
    gpg: key 0FBC4A07: "Matthew "Glyph" Lefkowitz (OpenPGP Smart Card) <glyph@twistedmatrix.com>" 1 new signature
    gpg: key 16F13480: "Matthew Lefkowitz (Glyph) <glyph@twistedmatrix.com>" not changed
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
    gpg: next trustdb check due at 2015-08-18
    gpg: Total number processed: 2
    gpg:              unchanged: 1
    gpg:         new signatures: 1
    $ gpg --edit-key 16F13480
    gpg (GnuPG/MacGPG2) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.


    gpg: checking the trustdb
    gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
    gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
    gpg: next trustdb check due at 2015-08-18
    pub  4096R/16F13480  created: 2012-11-16  expires: 2016-04-12  usage: SC
                         trust: unknown       validity: unknown
    sub  4096R/0F3F064E  created: 2012-11-16  expires: 2016-04-12  usage: E
    [ unknown] (1). Matthew Lefkowitz (Glyph) <glyph@twistedmatrix.com>

    gpg> disable

    gpg> save
    Key not changed so no update needed.
    $

If you're using keybase, "keybase encrypt glyph" should be pointed at the
correct key.

Thanks for reading,

- -glyph
-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQGcBAEBCgAGBQJUGRfNAAoJEH7CgSUPvEoHwg8L/0MHoG4FLzr1U3Ulu45sX/QO
VDUC4wJp4dpUKW2Yvjyw3LBYtFvsJfqUhM2oBURDPPgVfC5aOz7qevuBndlOYPB+
8dK//lPLZvYMAx2AlTGhz0wQokl0Cdlo+vK5E+Ex5oDJYhaPI9YPsSDbvynb6yhI
DK+EXRBtra7ev4hHDiucLGvqlSQnV+eOSijZfHgm6aBImfMUM7SM3UGFtE8oEJDE
XhTwW93L/c2epZOEFkfSLzQLlIcV5Ll2B6KOQLsdMuvgSkVX3NN+efuLFy4diD0U
HvL8nxxBpM98Jj+0PAucLbw4JwyAwEF6viEwXWiwngFTKeU60kUjUoSMFecMQuEz
IFqR7e9J7OaK9pMLrimwYtfCHVCx5WIXRJShuvcHhRCjwJb1N6rAffGOiwzkY+3w
mvpfEwQoC8F1wu2SOUpgQlvHUxxYbdibNTUvlaO74nyzE9fQZSXbZcGH2Skf9tHi
DjPhd2kLnyOzOy+BAOGcTKJ+ldOpbdmsnpcFTDA/MA==
=k0u9
-----END PGP SIGNATURE-----

17 Sep 2014 5:14am GMT

15 Sep 2014


Glyph Lefkowitz: The Horizon

Sometimes, sea sickness is caused by a sort of confusion. Your inner ear can feel the motion of the world around you, but because your eyes can't see the outside world, your brain can't reconcile its visual input with its proprioceptive input, and the result is nausea.

This is why it helps to see the horizon. If you can see the horizon, your eyes will talk to your inner ears by way of your brain, they will realize that everything is where it should be, and that everything will be OK.

As a result, you'll stop feeling sick.

photo credit: https://secure.flickr.com/people/reallyterriblephotographer/

I have a sort of motion sickness too, but it's not seasickness. Luckily, I do not experience nausea due to moving through space. Instead, I have a sort of temporal motion sickness. I feel ill, and I can't get anything done, when I can't see the time horizon.

I think I'm going to need to explain that a bit, since I don't mean the end of the current fiscal quarter. I realize this doesn't make sense yet. I hope it will, after a bit of an explanation. Please bear with me.

Time management gurus often advise that it is "good" to break up large, daunting tasks into small, achievable chunks. Similarly, one has to schedule one's day into dedicated portions of time where one can concentrate on specific tasks. This appears to be common sense. The most common enemy of productivity is procrastination. Procrastination happens when you are working on the wrong thing instead of the right thing. If you consciously and explicitly allocate time to the right thing, then chances are you will work on the right thing. Problem solved, right?

Except, that's not quite how work, especially creative work, works.

ceci n’est pas un task

I try to be "good". I try to classify all of my tasks. I put time on my calendar to get them done. I live inside little boxes of time that tell me what I need to do next. Sometimes, it even works. But more often than not, the little box on my calendar feels like a little cage. I am inexplicably, involuntarily, haunted by disruptive visions of the future that happen when that box ends.

Let me give you an example.

Let's say it's 9AM Monday morning and I have just arrived at work. I can see that at 2:30PM, I have a brief tax-related appointment. The hypothetical person doing my hypothetical taxes has an office that is a 25 minute hypothetical walk away, so I will need to leave work at 2PM sharp in order to get there on time. The appointment will last only 15 minutes, since I just need to sign some papers, and then I will return to work. With a 25 minute return trip, I should be back in the office well before 4, leaving me plenty of time to deal with any peripheral tasks before I need to leave at 5:30. Aside from an hour break at noon for lunch, I anticipate no other distractions during the day, so I have a solid 3 hour chunk to focus on my current project in the morning, an hour from 1 to 2, and an hour and a half from 4 to 5:30. Not an ideal day, certainly, but I have plenty of time to get work done.

The problem is, as I sit down in front of my nice, clean, empty text editor to sketch out my excellent programming ideas with that 3-hour chunk of time, I will immediately start thinking about how annoyed I am that I'm going to get interrupted in 5 and a half hours. It consumes my thoughts. It annoys me. I unconsciously attempt to soothe myself by checking email and getting to a nice, refreshing inbox zero. Now it's 9:45. Well, at least my email is done. Time to really get to work. But now I only have 2 hours and 15 minutes, which is not as nice of an uninterrupted chunk of time for a deep coding task. Now I'm even more annoyed. I glare at the empty window on my screen. It glares back. I spend 20 useless minutes doing this, then take a 10-minute coffee break to try to re-set and focus on the problem, and not this silly tax meeting. Why couldn't they just mail me the documents? Now it's 10:15, and I still haven't gotten anything done.

By 10:45, I manage to crank out a couple of lines of code, but the fact that I'm going to be wasting a whole hour with all that walking there and walking back just gnaws at me, and I'm slogging through individual test-cases, mechanically filling docstrings for the new API and for the tests, and not really able to synthesize a coherent, holistic solution to the overall problem I'm working on. Oh well. It feels like progress, albeit slow, and some days you just have to play through the pain. I struggle until 11:30 at which point I notice that since I haven't been able to really think about the big picture, most of the test cases I've written are going to be invalidated by an API change I need to make, so almost all of the morning's work is useless. Damn it, it's 2014, I should be able to just fill out the forms online or something, having to physically carry an envelope with paper in it ten blocks is just ridiculous. Maybe I could get my falcon to deliver it for me.

It's 11:45 now, so I'm not going to get anything useful done before lunch. I listlessly click on stuff on my screen and try to relax by thinking about my totally rad falcon until it's time to go. As I get up, I glance at my phone and see the reminder for the tax appointment.

Wait a second.

The appointment has today's date, but the subject says "2013". This was just some mistaken data-entry in my calendar from last year! I don't have an appointment today! I have nowhere to be all afternoon.

For pointless anxiety over this fake chore which never even actually happened, a real morning was ruined. Well, a hypothetical real morning; I have never actually needed to interrupt a work day to walk anywhere to sign tax paperwork. But you get the idea.

To a lesser extent, upcoming events later in the week, month, or even year bother me. But the worst is when I know that I have only 45 minutes to get into a task, and I have another task booked up right against it. All this trying to get organized, all this carving out uninterrupted time on my calendar, all of this trying to manage all of my creative energies and marshal them at specific times for specific tasks, annihilates itself when I start thinking about how I am eventually going to have to stop working on the seemingly endless, sprawling problem set before me.

The horizon I need to see is the infinite time available before me to do all the thinking I need to do to solve whatever problem has been set before me. If I want to write a paragraph of an essay, I need to see enough time to write the whole thing.

Sometimes - maybe even, if I'm lucky, increasingly frequently - I manage to fool myself. I hide my calendar, close my eyes, and imagine an undisturbed millennium in front of my text editor ... during which I may address some nonsense problem with malformed utf-7 in mime headers.

... during which I can complete a long and detailed email about process enhancements in open source.

... during which I can write a lengthy blog post about my productivity-related neuroses.

I imagine that I can see all the way to the distant horizon at the end of time, and that there is nothing between me and it except dedicated, meditative concentration.

That is on a good day. On a bad day, trying to hide from this anxiety manifests itself in peculiar and not particularly healthy ways. For one thing, I avoid sleep. One way I can always extend the current block of time allocated to my current activity is by just staying awake a little longer. I know this is basically the wrong way to go about it. I know that it's bad for me, and that it is almost immediately counterproductive. I know that ... but it's almost 1AM and I'm still typing. If I weren't still typing right now, instead of sleeping, this post would never get finished, because I've spent far too many evenings looking at the unfinished, incoherent draft of it and saying to myself, "Sure, I'd love to work on it, but I have a dentist's appointment in six months and that is going to be super distracting; I might as well not get started".

Much has been written about the deleterious effects of interrupting creative thinking. But what interrupts me isn't an interruption; what distracts me isn't even a distraction. The idea of a distraction is distracting; the specter of a future interruption interrupts me.

This is the part of the article where I wow you with my life hack, right? Where I reveal the one weird trick that will make productivity gurus hate you? Where you won't believe what happens next?

Believe it: the surprise here is that this is not a set-up for some choice productivity wisdom or a sales set-up for my new book. I have no idea how to solve this problem. The best I can do is that thing I said above about closing my eyes, pretending, and concentrating. Honestly, I have no idea even if anyone else suffers from this, or if it's a unique neurosis. If a reader would be interested in letting me know about their own experiences, I might update this article to share some ideas, but for now it is mostly about sharing my own vulnerability and not about any particular solution.

I can share one lesson, though. The one thing that this peculiar anxiety has taught me is that productivity "rules" are not revealed divine truth. They are ideas, and those ideas have to be evaluated exclusively on the basis of their efficacy, which is to say, on the basis of how much of the stuff you want to get done they actually help you get done.

For now, what I'm trying to do is to un-clench the fearful, spasmodic fist in my mind that grips the need to schedule everything into these small boxes and to allocate only the time that I "need" to get something specific done.

Maybe the only way I am going to achieve anything of significance is with opaque, 8-hour slabs of time with no more specific goal than "write some words, maybe a blog post, maybe some fiction, who knows" and "do something work-related". As someone constantly struggling to force my own fickle mind to accomplish any small part of my ridiculously ambitious creative agenda, it's really hard to relax and let go of anything which might help, which might get me a little further a little faster.

Maybe I should be trying to schedule my time into tiny little blocks. Maybe I'm just doing it wrong somehow and I just need to be harder on myself, madder at myself, and I really can get the blood out of this particular stone.

Maybe it doesn't matter all that much how I schedule my own time because there's always some upcoming distraction that I can't control, and I just need to get better at meditating and somehow putting them out of my mind without really changing what goes on my calendar.

Maybe this is just as productive as I get, and I'll still be fighting this particular fight with myself when I'm 80.

Regardless, I think it's time to let go of that fear and try something a little different.

15 Sep 2014 8:27am GMT

04 Sep 2014


Glyph Lefkowitz: Techniques for Actually Distributed Development with Git

The Setup

I have a lot of computers. Sometimes I'll start a project on one computer and get interrupted, then later find myself wanting to work on that same project, right where I left off, on another computer. Sometimes I'm not ready to publish my work, and I might still want to rebase it a few times (because if you're not arbitrarily rewriting history all the time you're not really using Git) so I don't want to push to @{upstream} (which is how you say "upstream" in Git).

I would like to be able to use Git to synchronize this work in progress between multiple computers. I would like to be able to easily automate this synchronization so that I don't have to remember any special steps to sync up; one of the most focused times that I have available to get coding and writing work done is when I'm disconnected from the Internet, on a cross-country train or a plane trip, or sitting near a beach, for 5-10 hours. It is very frustrating to realize, only once I'm settled in and unable to fetch the code, that I don't actually have it on my laptop because I was last doing something on my desktop. I would particularly like to be able to use that offline time to resolve tricky merge conflicts; if there are two versions of a feature, I want to have them both available locally while I'm disconnected.

Completely Fake, Made-Up History That Is A Lie

As everyone knows, Git is a centralized version control system created by the popular website GitHub as a command-line client for its "forking" HTML API. Alternate central Git authorities, such as BitBucket and GitLab, have been created by other startups riding the wave of GitHub's success.

It may surprise many younger developers to know that when GitHub first created Git, it was originally intended to be a distributed version control system, where it was possible to share code with no particular central authority!

Although the feature has been carefully hidden from the casual user, with a bit of trickery you can re-enable it!

Technique 0: Understanding What's Going On

It's a bit confusing to have to actually set up multiple computers to test these things, so one useful thing to understand is that, to Git, the place you can fetch revisions from and push revisions to is a repository. Normally these are identified by URLs which identify hosts on the Internet, but you can also just indicate a path name on your computer. So for example, we can simulate a "network" of three computers with three clones of a repository like this:

$ mkdir tmp
$ cd tmp/
$ mkdir a b c
$ for repo in a b c; do (cd $repo; git init); done
Initialized empty Git repository in .../tmp/a/.git/
Initialized empty Git repository in .../tmp/b/.git/
Initialized empty Git repository in .../tmp/c/.git/
$ 

This creates three separate repositories. But since they're not clones of each other, none of them have any remotes, and none of them can push or pull from each other. So how do we teach them about each other?

$ cd a
$ git remote add b ../b
$ git remote add c ../c
$ cd ../b
$ git remote add a ../a
$ git remote add c ../c
$ cd ../c
$ git remote add a ../a
$ git remote add b ../b

Now, you can go into a and type git fetch --all and it will fetch from b and c, and similarly for git fetch --all in b and c.

To turn this into a practical multiple-machine scenario, rather than specifying a path like ../b, you would specify an SSH URL as your remote URL, and turn on SSH on each of your machines ("Remote Login" in the Sharing preference pane, if you aren't familiar with doing that on a Mac).

So, for example, if you have a home desktop tweedledee and a work laptop tweedledum, you can do something like this:

tweedledee:~ neo$ mkdir foo; cd foo
tweedledee:foo neo$ git init .
# ...
tweedledum:~ m_anderson$ mkdir bar; cd bar
tweedledum:bar m_anderson$ git init .
tweedledum:bar m_anderson$ git remote add tweedledee neo@tweedledee.local:foo
# ...
tweedledee:foo neo$ git remote add tweedledum m_anderson@tweedledum.local:bar

I don't know the names of the hosts on your network. So, in order to make it possible for you to follow along exactly, I'll use the repositories that I set up above, with path-based remotes, in the following examples.

Technique 1 (Simple): Always Only Fetch, Then Merge

Git repositories are pretty boring without any commits, so let's create a commit:

$ cd ../a
$ echo 'some data' > data.txt
$ git add data.txt
$ git commit -m "data"
[master (root-commit) 8dc3db4] data
 1 file changed, 1 insertion(+)
 create mode 100644 data.txt

Now on our "computers" b and c, we can easily retrieve this commit:

$ cd ../b/
$ git fetch --all
Fetching a
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From ../a
 * [new branch]      master     -> a/master
Fetching c
$ git merge a/master
$ ls
data.txt
$ cd ../c
$ ls
$ git fetch --all
Fetching a
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From ../a
 * [new branch]      master     -> a/master
Fetching b
From ../b
 * [new branch]      master     -> b/master
$ git merge b/master
$ ls
data.txt

If we make a change on b, we can easily pull it into a as well.

$ cd ../b
$ echo 'more data' > data.txt 
$ git commit data.txt -m "more data"
[master f3d4165] more data
 1 file changed, 1 insertion(+), 1 deletion(-)
$ cd ../a
$ git fetch --all
Fetching b
remote: Counting objects: 5, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From ../b
 * [new branch]      master     -> b/master
Fetching c
From ../c
 * [new branch]      master     -> c/master
$ git merge b/master
Updating 8dc3db4..f3d4165
Fast-forward
 data.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

This technique is quite workable, except for one minor problem.

The Minor Problem

Let's say you're sitting on your laptop and your battery is about to die. You want to push some changes from your laptop to your desktop. Your SSH key, however, is plugged in to your laptop. So you just figure you'll push from your laptop to your desktop. Unfortunately, if you try, you'll see something like this:

$ cd ../a/
$ echo 'even more data' >> data.txt 
$ git commit data.txt -m "even more data"
[master a9f3d89] even more data
 1 file changed, 1 insertion(+)
$ git push b master
Counting objects: 7, done.
Writing objects: 100% (3/3), 260 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: error: is denied, because it will make the index and work tree inconsistent
remote: error: with what you pushed, and will require 'git reset --hard' to match
remote: error: the work tree to HEAD.
remote: error: 
remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
remote: error: its current branch; however, this is not recommended unless you
remote: error: arranged to update its work tree to match what you pushed in some
remote: error: other way.
remote: error: 
remote: error: To squelch this message and still keep the default behaviour, set
remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
To ../b
 ! [remote rejected] master -> master (branch is currently checked out)
error: failed to push some refs to '../b'

While you're reading this, you fail your will save and become too bored to look at a computer any more.

Too late, your battery is dead! Hopefully you didn't lose any work.

In other words: sometimes it's nice to be able to push changes as well.

Technique 1.1: The Manual Workaround

The problem that you're facing here is that b has its master branch checked out, and is therefore rejecting changes to that branch. Your commits have actually all been "uploaded" to b, and are present in that repository, but there is no branch pointing to them. Doing either of those configuration things that Git warns you about in order to force it to allow it is a bad idea, though; if your working tree and your index and your commits don't agree with each other, you're just asking for trouble. Git is confusing enough as it is.

In order to work around this, you can just push your changes from master on a to a different branch on b, and then merge it later, like so:

$ git push b master:master_from_a
Counting objects: 7, done.
Writing objects: 100% (3/3), 260 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ../b
 * [new branch]      master -> master_from_a
$ cd ../b
$ git merge master_from_a 
Updating f3d4165..a9f3d89
Fast-forward
 data.txt | 1 +
 1 file changed, 1 insertion(+)
$ 

This works just fine; you just always have to manually remember which branches you want to push, and where, and where you're pushing from.

Technique 2: Push To Reverse Pull To Remote Remotes

The astute reader will have noticed at this point that Git already has a way of tracking "other places that changes came from": they're called remotes! And in fact b already has a remote called a pointing at ../a. Once you're sitting in front of your b computer again, wouldn't you rather just have those changes already in the a remote, instead of in some branch you have to specifically look for?

What if you could just push your branches from a into that remote? Well, friend, I'm here today to tell you you can.

First, head back over to a...

$ cd ../a

And now, all you need is this entirely straightforward and obvious command:

$ git config remote.b.push '+refs/heads/*:refs/remotes/a/*'

and now, when you git push b from a, you will push those branches into b's "a" remote, as if you had done git fetch a while in b.

$ git push b
Total 0 (delta 0), reused 0 (delta 0)
To ../b
   8dc3db4..a9f3d89  master -> a/master

So, if we make more changes:

$ echo 'YET MORE data' >> data.txt
$ git commit data.txt -m "You get the idea."
[master c641a41] You get the idea.
 1 file changed, 1 insertion(+)

we can push them to b...

$ git push b
Counting objects: 5, done.
Writing objects: 100% (3/3), 272 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To ../b
   a9f3d89..c641a41  master -> a/master

and when we return to b...

$ cd ../b/

there's nothing to fetch, it's all been pre-fetched already, so

$ git fetch a

produces no output.

But there is some stuff to merge, so if we took b on a plane with us:

$ git merge a/master
Updating a9f3d89..c641a41
Fast-forward
 data.txt | 1 +
 1 file changed, 1 insertion(+)

we can merge those changes in whenever we please!

More importantly, unlike the manual-syncing solution, this allows us to push multiple branches on a to b without worrying about conflicts, since the a remote on b will always only be updated to reflect the present state of a and should therefore never have conflicts (and if it does, it's because you rewrote history and you should be able to force push with no particular repercussions).

Generalizing

Here's a shell function which takes 2 parameters, "here" and "there". "here" is the name of the current repository - meaning, the name of the remote in the other repository that refers to this one - and "there" is the name of the remote which refers to another repository.

function remoteremote () {
    local here="$1"; shift;
    local there="$1"; shift;

    git config "remote.$there.push" "+refs/heads/*:refs/remotes/$here/*";
}

In the above example, we could have used this shell function like so:

$ cd ../a
$ remoteremote a b
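
To get the same convenience in the other direction - so that pushes from b land in a's "b" remote - you would repeat it on the other side (a sketch continuing the toy repositories above):

$ cd ../b
$ remoteremote b a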

I now use this all the time when I check out a repository on multiple machines for the first time; I can then always easily push my code to whatever machine I'm going to be using next.

I really hope at least one person out there has equally bizarre usage patterns of version control systems and finds this post useful. Let me know!

Acknowledgements

Thanks very much to Tom Prince for the information that led to this post being worth sharing, and to Jenn Schiffer for teaching me that it is OK to write jokes sometimes, except about javascript, which is very serious and should not be made fun of ever.

04 Sep 2014 5:44am GMT

12 Aug 2014


Jonathan Lange: Isn't it Pythonic?

I really should be over this by now.

Here's how you test exceptions in Python 2.7 and later:

with self.assertRaises(SomeException) as cm:
    do_something()

the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)

(as recommended in the unittest documentation)

Here's how you test exceptions in earlier Pythons, using libraries like testtools.

the_exception = self.assertRaises(SomeException, do_something)
self.assertEqual(the_exception.error_code, 3)

I still like the second way. It's simpler, uses less code, and doesn't rely on new syntax. I'm sorry it didn't work out that way.

I guess this is how Barry feels about the diamond operator.

12 Aug 2014 11:00pm GMT

05 Aug 2014


Ashwini Oruganti: TLS and Python: Stripe Open Source Retreat

Programmed to Receive

Earlier this year, Stripe announced an Open Source Retreat in which it offered to fund development on a couple of open source projects for 3 months. I sent in a proposal to work on a pure-Python implementation of the TLS protocol and got selected. As a result, I am working on this full time for three months, starting July 28. I just completed my first week of the Open Source Retreat, and this post is about that week.

Heavy Heads and Dim Sights

As Ivan Ristić says in his latest book, though history is full of major cryptographic protocols with critical design flaws, there are even more examples of various implementation problems in well-known projects. Certificate validation flaws and connection authentication failures, insufficient error checking, basic constraints check failures, random number generation failures, protocol downgrade attacks, truncation attacks, and the more recent Heartbleed vulnerability are only a few examples.

The major cryptographic primitives are well understood and, given the choice, no one attacks them first. But the primitives are seldom useful by themselves; they need to be combined into schemes and protocols and then implemented in code. These additional steps then become the main point of failure, which is what this project aims to avoid, by means of a carefully designed Transport Layer Security (TLS) v1.2 implementation in PyCA's Cryptography library for Python. (Note that at present, this project is being worked on in a separate repository under the PyCA group.)

I spent the first couple of days of the retreat setting up the test infrastructure. Glyph, cyli, radix, Alex Gaynor, dreid and I spent a good part of the week discussing and hacking on the project.

One such really useful conversation was about declarative parsers. It was a fun discussion, and so I will summarize it for you.

For a well designed network protocol you should be able to ask two questions, "Are these bytes a valid message?" and "Is this message valid for my current state?"

So, when we talk about parsing a protocol, we're mostly talking about answering the first question. And when we talk about processing we're talking about answering the second question. An explicit state machine makes processing much simpler by specifying all the valid states and transitions and inputs that cause those transitions. A declarative parser makes parsing much simpler by specifying what a valid message looks like (rather than the steps you need to take to parse it). By saying what the protocol looks like, instead of how to parse it, you can more easily recognize and discard invalid inputs. And if you do all the message parsing before you try processing any messages it becomes easier to avoid strange state transitions in your processor, transitions that could lead to bugs.

The gist being that you don't want to breed weird machines. How you describe your protocol depends a lot on the tools you're using. We decided to use Construct to parse TLS record structures.
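
To make that concrete, here is a rough sketch of a declarative description of the TLS record layer. It uses the present-day Construct API rather than the 2014-era syntax the project used, and the field names are mine, not the project's:

# We state what a valid TLS record (TLSPlaintext) looks like, and Construct
# derives the parser from the description.
from construct import Struct, Int8ub, Int16ub, Bytes, this

TLSRecord = Struct(
    "content_type" / Int8ub,          # 22 = handshake, 23 = application data, ...
    "major_version" / Int8ub,
    "minor_version" / Int8ub,
    "length" / Int16ub,
    "fragment" / Bytes(this.length),  # exactly `length` bytes, no more, no less
)

# Parsing answers "are these bytes a valid message?"; anything that does not
# match the description is rejected before any processing happens.
record = TLSRecord.parse(b"\x16\x03\x03\x00\x04" + b"\x0e\x00\x00\x00")
assert record.content_type == 22 and record.length == 4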

We also talked about the history of software licenses, software laws, and guerrilla warfare. That was interesting too, but I won't summarize it for you, because some things are only fun when discussed in person. (Sorry! Be here next time.)

Such a Lovely Place!

Stripe has an awesome office, and I've got the best desk. Really. And a nice big monitor. Oh, and lots of coffee. People have been nice to me so far, and no one has given me a stink-eye for daring to speak of TLS.

My week was all the more fun thanks to all my friends in San Francisco who worked hard to make me feel at home, and ensured I fed myself well. I realized how much I tire everyone out when this Sunday Glyph exclaimed, "Uh, we'll be less energetic tod… OH THANK GOD SHE DOESN'T HAVE HER COMPUTER WITH HER!" (Apologies!)

Another huge chunk of thanks goes to all the folks of PyCA who regularly reviewed my code, making writing software more rewarding.

Up ahead in the distance…

Next up, I'll be focusing on the state machines in TLS. Along the way, I'll also be writing more TLS record parsers. You can follow the development on GitHub, and feel free to participate and contribute!

05 Aug 2014 7:04pm GMT

31 Jul 2014


AmvTek Blog: Extending coverage of the Python serializers benchmark

Our previous attempt to compare the performance of Python implementations of protocol buffers and thrift serialization generated interesting feedback and suggestions. The main request we received was to broaden the coverage of the previous benchmark so as to cover the full range of available options…

We are not there yet, but the benchmark now allows comparing 5 different frameworks:

Comparing apples with oranges

Following our previous post, several people suggested that we have a look at the capnp proto serialization system, which is similar in principle to the thrift and protocol buffers systems we compared earlier. This similarity allowed us to get it covered by the previous benchmark in no time, and at first capnp proto performance looked astonishingly good.

We refrained from publishing such results, however, as something did not look right. If we were to believe what was reported, capnp deserialization time did not depend on the size or type of the messages being processed. Assuming we had misunderstood how to use the pycapnp extension package we were leveraging, we contacted Jason Paryani, who maintains it, to ask if he could suggest anything.

Jason explained that our results did not surprise him: with capnp, real deserialization takes place only when a message's inner content is accessed. He also observed that our previous approach to timing serialization probably favored capnp unrealistically, since part of the serialization happens when the message content is set.

In short, to allow a fair comparison between the different frameworks we wanted to cover and not be fooled by implementation choices made by library developers, Jason advised us to revise our benchmarking approach, replacing:

  • serialize by construct & serialize
  • deserialize by deserialize & traverse

The new approach is probably not a very good one if one wants to establish the absolute performance of a single framework. For example, the full traversal requirement will unnecessarily harm the deserialization performance figure for the json or msgpack libraries, which deliver fully deserialized dictionaries in one shot.

We believe, however, that by having each benchmark perform the same duties we make meaningful comparisons possible; the sketch just below illustrates the shape of the revised measurement. We invite any interested individual to review the current benchmark and let us know what could be done to improve the fairness of the comparisons we are trying to make.
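
As an illustration only - the real harness and message schemas live in the GitHub project, and the fields below are invented - the revised measurement pairs construction with serialization, and deserialization with a full traversal:

# Sketch of the revised benchmark shape, using json as a stand-in framework.
import json
import timeit

def construct_and_serialize():
    # building the message is timed together with serializing it
    message = {"id": 1, "name": "alice", "scores": [3, 1, 4, 1, 5]}
    return json.dumps(message)

def deserialize_and_traverse(payload):
    # touch every field so that lazy decoders pay their full decoding cost
    message = json.loads(payload)
    return message["id"], message["name"], sum(message["scores"])

payload = construct_and_serialize()
print(timeit.timeit(construct_and_serialize, number=100000))
print(timeit.timeit(lambda: deserialize_and_traverse(payload), number=100000))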

Results Overview

You may run the benchmarks on your side and send us the results for publication on the GitHub project. The machines we are relying on are low-end ones… :)

The two kids on the block are thrift and the new entrant msgpack. There is no point in trying to rank those two winners against each other, as they are not playing in the same category (schema versus schemaless systems…).

31 Jul 2014 9:00pm GMT

Duncan McGreggor: OSCON 2014 Theme Song - Andrew Sorensen's Live Coding Keynote

Andrew Sorensen live-coding at OSCON 2014

Keynote

Shortly after Andrew Sorensen began the performance segment of his keynote at OSCON 2014, the #oscon Twitter topic began erupting with posts about the live coding session. Comments, retweets, and additional links persisted for that day and the next. In short, Andrew was a hit :-)

My first encounter with Andrew's work was a few years ago when I was getting back into Lisp. I was playing with generative music with Overtone (and then, a bit later, experimenting with SuperCollider, Hy, and Twisted) and came across his piece A Study in Keith. You might want to take a break from reading this post and watch that now ...

When Andrew started up his presentation, I didn't immediately recognize him. In fact, when the code was displayed on the big screens, I assumed it was Clojure until I looked closely and saw he was using (define ...) and not (defun ...). This seemed very familiar, and then I remembered Impromptu, which ultimately led to my discovery of Extempore (see more links below) and the realization that this is what Andrew was using to live code.

At the end of the performance a bunch of us jumped up and gave a standing ovation. (In fact, you can hear me yell out "YEAH" at the end of his presentation when he says "And there we go."). It was quite a show. It seemed that OSCON 2014 had been given a theme song. The next step was getting the source code ...


Andrew's gist (Dark Github Theme)

Sharing the Code

Andrew gave a presentation on Extempore in the ballroom right after the keynote. This too was fantastic and resulted in much tweeting.

Afterwards a bunch of us went up front and chatted with him, enthusing about his work, the recent presentation, the keynote, and his previously published pieces.

I had Andrew's ear for a moment, and asked him if he was interested in sharing his keynote source -- there had been several requests for it on Twitter (that also got retweeted and/or favourited). Without hesitation, he gave an enthusiastic "yes" and we were off and running for the lounge where we could sit down to create a gist (and grab a cappuccino!). The availability of the source was announced immediately, to the delight of many.


Setting Up Extempore

Sublime Text 3 connected to Extempore

Later that night in my hotel room, I had time to download and run Extempore ... and discovered that I couldn't actually play the keynote code, since there was some implicit setup I was missing. However, after some digging around on the docs site and the mail list, music was pouring forth from my laptop -- to my great joy :-D

To ensure anyone else who is not familiar with Extempore can also have this pleasure, I've put together all the prerequisites and setup necessary in a forked gist, in multiple parts. I will go through those in this blog post. Also: all of my testing and live coding was done using Ben Swift's Extempore Sublime Text plugin.

The first step is getting all the dependencies. You'll want to start the downloads right away, since they are large (the sample files are compressed .wavs). While that's going on, you can install Extempore using Homebrew (this worked for me on Mac OS X with no additional tweaking/configuration necessary):

With Extempore running, let's do some setup. We're going to need to:

The easiest way to use the files below is to clone the gist repo and load them up in Sublime Text, executing blocks of text by highlighting them and then pressing ^x^x.

Here is the file for the first two of those steps:



You will need to edit this file to point to the locations where your samples were downloaded. Also,
at the very end there are some lines of code you can execute to make sure that your samples are working.
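
If you have never seen Extempore code before, these setup files are just a series of Scheme and xtlang forms that you evaluate in order. Purely as an illustration of the general shape (this is not the gist's contents, and the library and macro names are from memory of 2014-era Extempore, so they may differ in your version), wiring up one of the built-in instruments looks roughly like this; the gist's first file does the analogous thing with a sampler instrument pointed at the downloaded sample directories:

  ;; Illustrative sketch only -- not the gist's contents. Library and macro
  ;; names are from memory of 2014-era Extempore and may differ in your version.
  (sys:load "libs/core/instruments.xtm")    ; built-in instruments (synth, fmsynth, sampler, ...)

  ;; create an instrument and wire it into the audio output callback
  (define-instrument fmsynth fmsynth_note_c fmsynth_fx)
  (bind-func dsp:DSP
    (lambda (in time chan dat)
      (fmsynth in time chan dat)))
  (dsp:set! dsp)

  ;; quick sanity check: play middle C (MIDI 60) for one second at 44.1kHz
  (play-note (now) fmsynth 60 80 44100)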

Now let's define the note aliases. You can just highlight the entire contents of this file in Sublime Text and then ^x^x:
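
To give a sense of what that file contains (the defines below are illustrative, not copied from the gist), the note aliases are just ordinary Scheme defines mapping note names to MIDI pitch numbers:

  ;; Illustrative only: note aliases are plain defines from note name to
  ;; MIDI pitch number, so the music code can say c4 instead of 60.
  (define c4 60)
  (define cs4 61)
  (define d4 62)
  (define e4 64)
  (define g4 67)
  (define a4 69)
  (define c5 72)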

At this point, we're ready to play!


Playing the Music

To get started on the music, open up the fourth file from the clone of the gist and ^x^x the root, scale, and left-hand-notes-* constants.

Here is the evolution of the left hand part:
Go ahead and start that first one playing (^x^x the definition as well as the call). Wait for a bit, and then execute the next one, etc. Once you've started playing the final left hand form, you can switch to the wider range of notes defined/updated at the bottom.
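
If the shape of these parts is unfamiliar, they are all variations on Extempore's standard temporal-recursion pattern: a function plays a note (or several), then schedules itself to run again a beat or so in the future. Here is a minimal sketch of that idiom -- not Andrew's actual left-hand code -- assuming the fmsynth instrument from the setup sketch above and an illustrative note list of my own:

  ;; Minimal sketch of the temporal-recursion idiom -- not the keynote's code.
  ;; Assumes the fmsynth instrument from the setup sketch above is running.
  (define *metro* (make-metro 110))          ; a metronome at 110 bpm
  (define left-hand-notes '(36 43 48 52))    ; illustrative pitches, not Andrew's

  (define left-hand
    (lambda (beat dur)
      ;; play one randomly chosen pitch from the note list
      (play-note (*metro* beat) fmsynth (random left-hand-notes) 80 (*metro* 'dur dur))
      ;; schedule this function to run again, one dur's worth of beats later
      (callback (*metro* (+ beat (* 0.5 dur))) 'left-hand (+ beat dur) dur)))

  ;; start it on the next bar boundary, with quarter-note durations
  (left-hand (*metro* 'get-beat 4) 1/4)

The right hand, bassline, fmsynth sparkles, and drum parts in the gist all follow this same pattern; evolving the piece live is a matter of re-evaluating a definition (or a note list) while the callbacks keep firing.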

Next, you'll want to bring in the right hand ... then bassline ... then the higher fmsynth sparkles for the right hand:

Then you'll increase the energy with the drum section:

Finally, you'll bring it to the climax, and then start the gentle fade out:

A slightly modified code listing for the final keynote form is here:


Variation on a Theme

I have recorded a variation of Andrew's keynote based on the code above, for your listening pleasure :-) You can listen to it in your browser or download it.

This version plays part of the left hand piano an octave lower. There's a tiny bit of clipping in places, and I accidentally jazzed it up (and for too long!) with a hi-hat change in the middle. There are also some awkward transitions and volume oddities. However, these should serve as inspiration for you to make your own variation of the OSCON 2014 Theme Song :-)

The "script" used for the recording can found here.


Links of Note

Some of these were mentioned above, some haven't been. All relate to Extempore :-)



31 Jul 2014 3:33am GMT

29 Jul 2014

feedPlanet Twisted

Duncan McGreggor: The Future of Programming - Adopting The Functional Paradigm?

Series Links

Survivors' Breakfast

The previous post covered some thoughts on the future-looking programming themes present at OSCON 2014.

Following that wonderful conference, long-time Open Source advocate, Pythonista, and instructor Steve Holden was kind enough to host his third annual "OSCON Survivors' Breakfast", with tens of esteemed attendees, speakers, and organizers enjoying great company and conversation, relaxing together after the flurry of conference activity, planning a leisurely day in Portland, and -- most immediately -- having some much-needed breakfast.

The view from the 23rd floor was quite an eyeful, and the conversation ranged across equally panoramic topics. Sitting with Alex Martelli, Anna Ravenscroft, and Katie Miller, I found the conversation inevitably turning to thoughts programmatical. One thread of the discussion was so compelling that it helped crystallize this series of blog posts. It was kicked off by Katie's question:

Why [have some large companies] not embraced functional programming to the extent that other large ones have?


Multiple points of discussion spawned from this, some of which still continue. The rest of this post explores these.


Large Companies?

What constitutes a large company? We settled on discussing Fortune 500 companies, which, by definition, are:
  • U.S. companies,
  • ranked by gross revenue (after adjustments for excise taxes).

Afterwards, I looked up the 2013 top 25 tech companies in the Fortune 500. I've listed them below; in parentheses is the Fortune 500 ranking. After the dash are the functional programming languages used on various company projects -- these are listed only if I have talked to someone who has worked on a project (or interviewed for a job that used the language), or if I have read an article by an employee who has stated that they use the listed language(s) [1].
  1. Apple (6) - Swift, Clojure, Scala
  2. AT&T (11) - Haskell
  3. HP (15) - F#, Scala
  4. Verizon Communications (16) - Scala
  5. IBM (20) - Scala
  6. Microsoft (35) - F#, F*
  7. Comcast (46) - Scala
  8. Amazon (49) - Haskell, Scala, Erlang
  9. Dell (51) - Erlang, Scala
  10. Intel (54) - Haskell, SML, PLT Scheme
  11. Google (55) - Haskell [2]
  12. Cisco (60) - Scala
  13. Ingram Micro (76) - ?
  14. Oracle (80) - Scala
  15. Avnet (117) - ?
  16. Tech Data (119) - ?
  17. Emerson Electric (123) - ?
  18. Xerox (131) - Scala
  19. EMC (133) - Scala
  20. Arrow Electronics (141) - ?
  21. Century Link (150) - ?
  22. Computer Sciences Corp. (176) - ?
  23. eBay (196) - Scala
  24. TI (218) - ?
  25. Western Digital (222) - ?

The companies that have committed to projects -- guessed to be of significant business value -- written in FP languages include Apple, HP, and eBay; possibly also Oracle and Intel. So, a rough estimate is that 3 to 5 of the top 25 U.S. tech companies have made a significant investment in FP.

Why not Google?

The next two sections offer summaries of some views on this.


Ideal Use Case?

Are FP languages suitable for large organisations, or are smaller companies better served by them? During breakfast, it was postulated that dealing with such things as immutable data, handling I/O in pure FP languages, and creating/using higher-order functions is easier for small startups, due to the shorter amount of time they need to hire or train a critical mass of skilled programmers.

It is certainly true that it will take larger organisations longer to train their personnel, simply due to sheer numbers and -- even with enough trainers -- logistics. But this argument can be made for any corporate level of instruction; in my book, it cancels out on both sides and is not an argument unique to hard topics, much less one specifically pertinent to FP adoption.


Brain Fit?

I've heard this one a bit: "Some people just don't think in FP terms." They need loops and iteration, not higher-order functions and recursion. Joel Spolsky makes reference to this in his article The Guerrilla Guide to Interviewing. In particular, he says that "For some reason most people seem to be born without the part of the brain that understands pointers." This has been applied to topics in FP as well as C.

To be fair, Joel's comment was probably made with a bit of lightness and not meant to be a statement on the nature of mind or a theory of cognition. The context of the article is a very practical one: hiring. When trying to identify whether a programmer would be an asset for your team, you're not living in the space of cognitive theory, rather you inhabit the realm of quick approximations, gut instincts, and fast, economical decisions.

Regardless, I find this perspective -- Type Physicalism [3] -- fairly objectionable, because I see it as a kind of intellectual "racism." Early social sciences utilized this form of reasoning to justify all sorts of discriminatory thinking in the name of "science", reinforcing a rigid mentality of "us" vs. "them." In my own experience, I've seen this sort of approach used to shut down exploration, enforce elitism, and dismiss ideas that threaten the authority of the status quo.

Rather than seeing the problem of comprehending FP as a physical limitation of the individual, I see instructional failure as the obstacle to overcome. If we start with the proposition that certain brains are deficient, we are essentially abandoning education. It is the responsibility of the instructor to engage creatively with each student's learning style. When adhering to the idea that certain brains are limited, one discards creative engagement; one doesn't even consider working with the students and their learning styles. This is a view that, however implicitly, can be used to shun diversity and dismiss potential.

I believe the essence of what Joel was shooting for can be approached in a much kinder fashion (adapted for an FP discussion):

None of us was born knowing GOTO statements, global state, mutable data, or for loops. There are many programmers alive, though, whose first contact with programming involved one or more of these. That's their "home town", as it were; their programmatic birthplace. Having utilized -- as well as taught -- imperative, OOP, and functional styles of programming, I do not feel that one is intrinsically any harder than another. However, they are sometimes so vastly different from each other in style or syntax or semantics that once a student has solidified around the concepts of a particular paradigm, it can be a challenge to retrain and work easily in another.


Why the Objections?

If both "ideal use case" and "brain fit" are given as arguments against adopting FP (or any other new paradigm) in large organisations, and neither are considered logically or philosophically valid, what's at the root of the resistance?

It is not uncommon for changes in an industry or field of study to be met with resistance. The bigger or more different the change from the status quo, the greater the resistance tends to be. I suspect that this is really what we're seeing when companies take a stance against FP. There are very often valid business concerns: "we've made an investment in OOP" or "it will cost too much to train/hire/migrate to FP."

I would remind those company leaders, though, that new sources of revenue, product innovation, and changes in market adoption do not often come from maintaining or enforcing the current state. Instead, that is an identifying characteristic of companies whose relevance is fading.

Even if your company has market dominance or is a monopoly, there is still a good incentive for exploring alternative paradigms. At the very least, one can uncover inefficiencies and apply new knowledge to remove duplication of efforts, increase margins, etc.


Careers

As a manager, I have found that about half of the senior engineers up for promotion have very little to no interest in taking on different (new to them) programmatic paradigms. They consider current burdens sufficient (or too much) and would rather spend what little free time they have improving existing systems.

Senior engineers who have a more academic or research bent (or are easily bored) are much more likely to embrace this sort of change. Interestingly, senior engineers who have little to no competitive drive will more readily pick up something new if the need arises. This may be due to such things as not perceiving accumulated knowledge as territory to defend, for example.

Younger engineers with less experience (and less of an investment made in a particular school of thought) are much more willing to take on new challenges. I believe there are many reasons for this, one of which may include an interest in becoming more professionally competitive with their peers.

Junior or senior, I have found that programmers who are currently looking to find new employment are nearly invariably not only willing to take on the challenge of learning different paradigms, but are usually going about that proactively and engaging in self-study.

I want to work with programmers who can take on any problem space in any paradigm and find creative solutions, contributing as valued members of a team. This is certainly an ideal set of characteristics, but one that I have seen in the wilds of the workplace on multiple occasions. It has nothing to do with FP or OOP paradigms, but rather with the people themselves.

Even if a company is locked into well-established processes and views on programming, it may find it in its best interests to offer a more open-minded approach to those employees who would enjoy it. Retention rates could very well increase dramatically.


Do We Need To?

Philosophy and hiring strategies aside, do we -- as programmers, software projects, or organizations that support programming -- need to take on the burden of learning or adopting functional programming? Quite possibly not.

If Google's plans around Go involve building a new operating system (in the spirit of 1970s C and UNIX), the systems programmers may find pure functions too cumbersome to work with. FP may be too burdensome a fit for that type of work.

If one is not tied to a historical analogy with UNIX, as Mozilla is not with Rust, doing something like creating a new browser engine (or running a remote services company) may be a good fit for FP, especially if one has data showing reduced error counts when using type systems.

As we shall see illustrated in the next post, the usual advice continues to apply: the decision of which paradigm to employ for any given project should be dictated by the best fit and not ideological inflexibility. The bearing this has on programming is innovation: it is the early adopters who have the best chance of leading us into the future.

Up next: Retrospective on Programming Paradigms
Previously: Themes at OSCON 2014


Footnotes

[1] If anyone has additional information as to which FP languages are used by these top 25 companies, please let me know, and I will include that information. Bonus points for knowing of business-critical applications.

[2] Google Switzerland are using Haskell.

[3] Type physicalism is a form of reductive materialism, also known as the mind-brain identity theory, which does not allow for mental states to be realized in organisms or computational systems that do not have a brain. See "Criticisms of Type Physicality" at http://en.wikipedia.org/wiki/Identity_theory_of_mind#Multiple_realizability.

29 Jul 2014 3:53am GMT

28 Jul 2014

feedPlanet Twisted

Duncan McGreggor: The Future of Programming - Themes at OSCON 2014

Series Links



A Qualitative OSCON Debrief

As you might have noticed from the OSCON Twitter-storm this year, the conference was a blast. Even if you weren't physically present, given the 17 tracks, you can imagine that the presentations -- and subsequent conversations -- were deeply varied.

This was the second OSCON I'd attended; the first was in 2008 as a guest of Michael Bernstein, a friend who was speaking there. OSCON 2008 was a zoo - I'm not sure of the actual body count, but I've heard that attendees + vendors + miscellaneous topped 12,000 people over the course of the week (I would love to hear if someone has hard data on that -- googling didn't reveal much). OSCON 2008 was dominated by Big Data, Hadoop, endless buzzword bingo, and business posturing by all sorts. The most interesting bits of that conference were the outlines that formed around the conversations people weren't having. In fact, over the following 6 months, that's what I spent my spare time pondering: what people didn't say at OSCON.

This year's conference seemed like a completely different animal. It felt like easily a third to a half of the number of attendees of 2008. Where that one had all the anonymizing feel of rush hour in a major metropolitan hub, OSCON 2014 had a distinctly small-town vibe to it -- I was completely charmed. Conversations (overheard as well as participated in) were not littered with examples from the latest bizspeak, but rather focused on essence. The interactions were not continually distracted, but rather steadily focused, allowing people to form, express, and dispute complete thoughts with their peers.


Conversations

So what were people talking about this year? Here are some of the topics I heard covered during lunches, in hallways, and at podiums; at pubs, in restaurants and at parks [1]:


After lots of reflection, here's how I classified most of the conversations I heard:

  • Developing communities,
  • Developing careers and/or personal/professional qualities, and
  • Developing software,

along lines such as:
  • Effective maintenance, maturity, and health,
  • Focusing on the "art", eventual mastery, and investments of time,
  • Tempering bare pragmatism with something resembling science or academic excellence,
  • Learning the new to bolster the old,
  • Inspiring innovation from a place of contemplation and analysis,
  • Mining the past for great ideas, and
  • Figuring out how to better share and spread the adoption of good ideas.


Themes

Generalized to such a degree, this could have been pretty much any congregation of interested, engaged minds since the dawn of civilization. So what does it look like if we don't normalize quite so much? Weighing these with what may well be my own bias (and the bias of like-minded peers), I submit to your review these themes:

  • A very strong interest in programming (thinking and creating) vs. integration (assessing and consuming).
  • An express desire to become better at abstraction (higher-order functions, composition, and types) to better deal with growing systems complexities.
  • An interest in building even more complicated systems.
  • A fear of reimplementing past mistakes or of letting dust gather on past intellectual achievements.

As you might have guessed, these number very highly among the reasons why the conference was such an unexpected pleasure for me. But it should also not come as a surprise that these themes are present:

  • We have had several years of companies such as Google and Amazon (AWS) building and deploying some of the most sophisticated examples of logic-made-manifest in human history. This has created perceived value in our industry and many wish to emulate it. Similarly, we have single purpose distributed systems being purchased for nearly 20 billion USD -- a different kind of complexity, with a different kind of perceived reward.
  • In the 70s and 80s, OOP adoption brought with it the ability to create large software systems in ways that people had not dared to dream of, or that had been impractical to realize. Today's growing adoption of the Functional paradigm is giving early signs of allowing us to better integrate complex systems with more predictability and fewer errors.
  • Case studies of improvements in productivity, or of the capacity to handle highly complex or previously intractable problems with better abstractions, have ignited the passions of many. Not wanting to limit their scope of knowledge or sources of inspiration, people are not simply limiting themselves to the exploration of such things as Category Theory -- they are opening the vaults of computer science with such projects as Papers We Love.

There's a brave new world in the making. It's a world for programmers and thinkers, for philosophers and makers. There's a lot to learn, but it's really not so different from older worlds: the same passions drive us, the same idealism burns brightly. And it's nice to see that these themes arise not only in small, highly specialized venues such as university doctoral programs and StrangeLoop (or LambdaJam), but also in larger intersections of the industry like OSCON (or more general-audience ones like Meetups).

Up next: Adopting the Functional Paradigm?
Previously: An Overview


Footnotes

[1] It goes without saying that any one attendee couldn't possibly be exposed to enough conversations to form a perfectly accurate sense of the total distribution of conversation topics. No claim to the contrary is being made here :-)

[2] I strongly adhere to the multifaceted hypothesis proposed by Bret Victor here, in the section titled "Why did all these ideas happen during this particular time period?"



28 Jul 2014 5:25am GMT

Duncan McGreggor: The Future of Programming - An Overview

Art by Philip Straub

There's a new series of blog posts coming, inspired by on-going conversations with peers, continuous inspection of the development landscape, habitual navel-gazing, and participation at the catalytic OSCON 2014. As you might have inferred, these will be on the topic of "The Future of Programming."

Not to be confused with Bret Victor's excellent talk last year at DBX, these posts will be less about individual technologies or developer user experience, and more about historic trends and viewing the present (and near future) through such a lens.

In this mini-series, the aim is to present posts on the following topics:


I did a similar set of posts on the future of cloud computing, conceived in late 2008 and published in 2009, entitled After the Cloud. It was a very successful series and the cloud industry seems to be heading towards some of the predictions made in it -- ZeroVM and Docker are an incremental step towards the future of distributed processes/functions outlined in To Atomic Computation and Beyond.

In that post, though, are two quotes from industry greats. These provide an excellent context for this series as well, hinting at an overriding theme:
  • Alan Kay, 1998: A crucial key to growing large systems is effective communications between components.
  • Joe Armstrong, 2004: To effectively model and solve problems in a distributed manner, we need concurrency... this is made easier when we isolate processes and do not share data.

In the decade since these statements were made, we have seen individuals, projects, and companies take that vision to heart -- and succeed as a result. But as an industry, we continue to struggle with the definition of our art; we are still tormented by change -- both from within and without -- and do not seem to adapt to it well.

These posts will peer into such places ... in the hope that such inspection might guide us better through the tangled forest of our present into the unimagined forest of our future.

Up next: Themes at OSCON 2014 ...


28 Jul 2014 5:17am GMT

18 Jul 2014

feedPlanet Twisted

Moshe Zadka: Clash of Clans and the Prisoners’ Dilemma

I have recently started playing the online game "Clash of Clans". Clash of Clans has the option of "donating" troops to other clan members. Donated troops stay in a separate area (so they do not take up space needed for one's own troops), and they also participate in base defense (as opposed to one's own troops, which are offensive only). In short, being donated to makes both one's offense and one's defense stronger. However, donating troops means paying the cost of training them and reaping no benefit from it.

So we have the pay-off matrix below; assume the cost of training troops is -1 and the benefit of receiving them is +2 (the benefit has to be bigger than the cost, since players do build troops for themselves, where the benefit is even smaller). Each entry reads as the first player's payoff / the second player's payoff:

  • Donate / Donate: 1 / 1
  • Don't donate / Don't donate: 0 / 0
  • Donate / Don't donate: -1 / 2
  • Don't donate / Donate: 2 / -1

What I love about this is that it is a massive experiment in the prisoners' dilemma on more-or-less average people. As expected, one sees both equilibria: clans where hardly anyone donates, and clans where everyone donates. In the clans where hardly anyone donates, there is still some reciprocity: (some) people will try to donate "back" to people who donated to them. Note that with the toy numbers above, donating only breaks even if reciprocation is at least 50% likely (the expected payoff is 2p - 1 for a reciprocation probability p); in practice, though, the benefit of a donated troop -- free housing space plus help on defense -- is worth considerably more than twice the training cost, so it is sometimes worthwhile to donate even for a .2 chance of a reciprocation, which allows the loop to be fed.
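
To make that arithmetic concrete, here is a tiny illustrative sketch of the donor's expected payoff (the function name is mine, and the b = 6 figure below is my guess at the practical in-game benefit, not the author's number):

  ;; Purely illustrative: a donor's expected payoff, given training cost c,
  ;; benefit b of being donated to, and probability p of reciprocation.
  (define (expected-donation-payoff c b p)
    (- (* p b) c))

  (expected-donation-payoff 1 2 0.5)   ; => 0.0  -- break-even with the toy numbers above
  (expected-donation-payoff 1 6 0.2)   ; => 0.2  -- a 20% chance pays off once the benefit is large enough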

There are reputation effects (the number of troops one has donated and the number one has received appear where everyone can see them), which one would assume improves the donation rate. However, from my anecdotal experience of being in two clans, the most distinguishing feature is "do you know these people in real life?" Clans made up of people whose reputation effects extend beyond the game are in the good equilibrium. Since I joined the Facebook-employee-only clan, my life has been much better.

I started thinking that Facebook is like that too. In situations where you can help a colleague at a cost that is small compared to the benefit to them, we encourage exactly that kind of thinking. Facebook is in the good equilibrium of the dilemma!

[Plug: We're hiring engineers and eng. managers. Please contact me on FB with further questions or resumes.]

18 Jul 2014 4:10pm GMT