19 Apr 2017

Planet Twisted

Glyph Lefkowitz: So You Want To Web A Twisted

As a rehearsal for our upcoming tutorial at PyCon, Creating And Consuming Modern Web Services with Twisted, Moshe Zadka and I are doing a LIVE STREAM WEBINAR. You know, like the kids do, with the video games and such.

As the webinar gods demand, there is an event site for it, and there will be a live stream.

This is a practice run, so expect "alpha" quality content. There will be an IRC channel for audience participation, and the price of admission is good feedback.

See you there!

19 Apr 2017 3:29am GMT

Moshe Zadka: Twisted Tutorial Webinar

Glyph and I are giving a tutorial about Twisted and web services at PyCon. In order to try it out, we are giving a webinar. Please come, learn, and let us know if you like it!

19 Apr 2017 3:14am GMT

17 Apr 2017

Itamar Turner-Trauring: Learning without a mentor: how to become an expert programmer on your own

If you're an intermediate or senior programmer you may hit the point where you feel you're no longer making progress, where you're no longer learning. You're good at what you do, but you don't know what to learn next, or how: there are too many options, it's hard to get feedback or even tell you're making progress.

A mentor can help, if they're good at teaching... but what do you do if you don't have a mentor? How do you become a better programmer on your own?

In order to learn without a mentor you need to be able to recognize when you're learning and when you're not, and then you need to choose a new topic and learn it.

How to tell if you're learning

If you're not getting any feedback from an expert it can be tricky to tell whether you're actually learning or not. And lacking that knowledge it's easy to get discouraged and give up.

Luckily, there's an easy way to tell whether you're learning or not: learning is uncomfortable. If you're in your comfort zone, if you're thinking to yourself "this isn't so hard, I can just do this" you're not learning. You're just going over things you already know; that's why it feels comfortable.

If you're irritated and a little confused, if you feel clumsy and everything seems harder than it should be: now you're learning. That feeling of clumsiness is your brain engaging with new material it doesn't quite understand. You're irritated because you can't rely on existing knowledge. It's hard because it's new. If you feel this way don't stop: push through until you're feeling comfortable again.

You don't want to take this too far, of course. Pick a topic too far outside your experience and it will be so difficult you will fail to learn anything, and the experience may be discouraging enough that you won't want to try again.

Choosing something to learn

When choosing something to learn you want something just outside your comfort zone: close enough to your existing knowledge that it won't overwhelm you, far enough that it's actually new. You also want to pick something you'll be able to practice: without practice you'll never get past the point of discomfort.

Your existing job is a great place to practice new skills because it provides plenty of time to do so, and you'll also get real-world practice. That suggests picking new skills that are relevant to your job. As an added bonus this may give you the opportunity to get your employer to pay for initial training or materials.

Let's consider some of the many techniques you can use to learn new skills on the job.

Teaching

If you work with colleagues, you will occasionally see them do something you think is obviously wrong, or miss what you think is the obviously right thing to do. For example, "obviously you should never do file I/O in a class constructor."
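To make that file-I/O example concrete, here is a minimal sketch; the Config class and its from_file constructor are invented for illustration, not taken from any real codebase:

```python
# Anti-pattern: the constructor touches the filesystem, so every test
# that builds a HardToTestConfig needs a real file on disk.
class HardToTestConfig:
    def __init__(self, path):
        with open(path) as f:
            self.data = f.read()


# Easier to test: construct from plain data, and keep the I/O in a
# separate, clearly named alternate constructor.
class Config:
    def __init__(self, data):
        self.data = data

    @classmethod
    def from_file(cls, path):
        with open(path) as f:
            return cls(f.read())
```

Tests can now build a `Config` directly from a string, while production code uses `Config.from_file()`.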

When this happens the tempting thing to do, especially if you're in charge, is to just tell them to change to the obviously better solution and move on. But it's worth resisting that urge, and instead taking the opportunity to turn this into a learning experience, for them and for you.

The interesting thing here is the obviousness: why is something obvious to you, and not to them? When you learn a subject you go through multiple phases:

When you have unconscious knowledge you are an expert: you've internalized a model so well you apply it automatically. There are two problems with being an expert, however:

Teaching means taking your unconscious model and turning it into an explicit conscious model someone else can understand. And because teaching makes your mental model conscious you also get the opportunity to examine your own assumptions and improve your own understanding, ending up with a better model.

You don't have to teach only colleagues, of course: you can also write a blog post, or give a talk at a meetup or at a conference. The important thing is to notice the places you have unconscious knowledge and try to make it conscious by explaining it to others.

Switching jobs

While learning is uncomfortable, I personally suffer from a countervailing form of discomfort: I get bored really easily. As soon as I become comfortable with a new task or skill I'm bored, and I hate that.

In 2004 I joined a team writing high performance C++. As soon as I'd gotten just good enough to feel comfortable... I was bored. So I came up with slightly different tasks to work on, tasks that involved slightly different skills. And then I was bored again, so I moved on to a different team in the company, where I learned even more skills.

Switching tasks within your own company, or switching jobs to a new company, is a great way to get out of your comfort zone and learn something new. It's daunting, of course, because you always end up feeling clumsy and incompetent, but remember: that discomfort means you're learning. And every time you go through the experience of switching teams or jobs you will become better at dealing with this discomfort.

Learning from experts: skills, not technologies

Another way to learn is to learn from experts that don't work with you. It's tempting to try to find experts who will teach you new technologies and tools, but skills are actually far more valuable.

Programming languages and libraries are tools. Once you're an experienced enough programmer you should be able to just pick them up as needed if they become relevant to your job.

Skills are trickier: testing, refactoring, API design, debugging... skills will help you wherever you work, regardless of technology. But they're also easier to ignore or miss. There are skills we'd all benefit from that we don't even know exist.

So read a book or two on programming, but pick a book that will teach you a skill, not a technology. Or try to find the results of experts breaking down their unconscious models into explicit models, for example:

Conclusion: learning on your own

You don't need a mentor to learn. You can become a better software engineer on your own by:

17 Apr 2017 4:00am GMT

13 Apr 2017

Moshe Zadka: PYTHONPATH Considered Harmful

Another post on my new blog.

13 Apr 2017 1:27am GMT

06 Apr 2017

Itamar Turner-Trauring: You don't need a Computer Science degree

If you never studied Computer Science in school you might believe that's made you a worse programmer. Your colleagues who did study CS know more about algorithms and data structures than you do, after all. What did you miss? Are you really as good?

My answer: you don't need to worry about it, you'll do just fine. Some of the best programmers I've worked with never studied Computer Science at all.

A CS education has its useful parts, but real-world programming includes a broad array of skills that no one person can master. Programming is a team effort, and different developers can and should have different skills and strengths for the team to succeed. Where a CS degree does give you a leg up is when you're trying to get hired early in your career, but that is a hurdle that many people overcome.

What you learn in Comp Sci

I myself did study CS and Mathematics in college, dropped out of school, and then later went back to school and got a liberal arts degree. Personally I feel the latter was more useful for my career.

Much of CS is devoted to theory, and that theory doesn't come up in most programming. Yes, proving that all programming languages are equivalent is interesting, but that one sentence is all that I've taken away from a whole semester's class on the subject. And yes, I took multiple classes on data structures and algorithms... but I'm still no good at implementing new algorithms.

I ended up being bored in most of my CS classes. I dropped out to get a programming job where I could just write code, which I enjoyed much more. In some jobs that theory I took would be quite useful, but for me at least it has mostly been irrelevant.

Writing software in the real world

It's true that lacking a CS education you might not be as good at data structures. But chances are you have some other skill your colleagues lack, a unique strength that you contribute to your team.

Let me give a concrete example: at a previous job we gave all candidates a take home exercise, implementing a simple Twitter-like server. I reviewed many of the solutions, and each solution had different strengths. For example:

Packaging, data structures, network API design, big picture thinking, operational experience: these are just some of the skills that contribute to writing software. No one person can have them all. That means you can focus on your own strengths, because you're part of a team. Here's what that's meant in my case:

Job interviews

The one place where a Computer Science degree is unquestionably useful is getting a job early in your career. When you have no experience the degree will help; once you have experience your degree really doesn't matter.

It's true that many companies have an interview process that focuses on algorithmic puzzles. Unfortunately interviewing skills are often different from job skills, and need to be strengthened separately. And my CS degree doesn't really help me, at least: I've forgotten most of what I've learned, and algorithms were never my strength. So whenever I'm interviewing for jobs I re-read an old algorithms textbook and do some practice puzzles.

In short: don't worry about lacking a CS degree. Remember that you'll never be able to know everything, or do everything: it's the combined skills of your whole team that matters. Focus on your strengths, improve the skills you have, and see what new skills you can learn from your teammates.

06 Apr 2017 4:00am GMT

26 Mar 2017

Itamar Turner-Trauring: Why and how you should test your software

This is a second draft, a major rewrite, of a talk I'll be giving at PyCon 2017. Thanks to all those who commented on the first draft. Once again I would appreciate feedback, comments and contrasting points of view.

Why should you test your software? How should you test your software? Some people have easy answers to these questions.

Yolo programmers don't bother testing at all, happy to live in the moment. More serious programmers will tell you that you test your software in order to produce a Quality Product. To produce a Quality Product you must always write unit tests and integration tests and do QA. Neglect any of these and your code will fall into a bug-infested abyss.

While I'm much more sympathetic to this second view, I don't think it's a sufficient answer. Given how different software projects can be from each other it seems unlikely that one set of answers will fit everyone:

What you need is not a single answer, but a way to choose the answers that match your situation and your needs. We'll start by considering the means you have available to you for testing. Then we'll consider why you would want to test your software. Finally, we'll combine means and goals to see how you can choose to test your software.

What means of testing can you use?

As a starting point, let's consider the different means of testing available to you. Is the following code a test?

def test_add():
    assert add(2, 2) == 5

I would say that, yes, that's clearly a test. Says so right in the function name, even. The test proves that add() does what it ought to do: add two numbers and give us the result.

You've noticed, of course, that this test is wrong. Luckily our development process has another step: code review. You, dear reader, can act as a code reviewer and tell me that my code is wrong, that 2 + 2 is 4, not 5.

Is code review a form of testing? If you're trying to verify your code matches a specification, well, this is a test. You have a specification for arithmetic in your head ("2 + 2 = 4") and you are checking that the code follows it.

Let's consider code review as a form of testing, alongside automated unit tests. Even if they're both tests, they're also quite different. What is the core difference between them?

One form of testing is automated, the other is done by a human.

An automated test is consistent and repeatable. You can write this:

def test_add_twice():
    for i in range(10000000):
        assert add(i, i) == 2 * i

And the computer will run the exact same code every time. The code will make sure add() consistently returns that particular result for those particular inputs. A human would face some difficulties in manually verifying ten million different computations: boredom, distraction, errors, slowness.

On the other hand, a human can read this code and tell you it's buggy:

def add(a, b):
    return a + b + 1

Where the computer does what it's told, for good or for bad, a human can provide meaning. Only a human can tell what the software is for.

Now we can categorize tests by the means used: humans test for meaning, whereas automated tests ensure consistency.

Why should you test your software?

Next, let's consider goals.

The first possible goal of testing your software is to make sure it meets a specification. This goal is what most programmers think of when testing is discussed: it covers things like unit testing and manual testing by QA. And as we saw it also covers code reviews. Your software has certain required functionality, the specification, and you want to make sure it actually does so now and in the future.

Some of the requirements are high-level: an online store wants customers to be able to order a product they've added to their shopping cart. Other requirements are low-level implementation details, mostly of interest only to the programmers. You might want the verify_creditcard() function to accept a credit card number as a string and throw an InvalidCreditCard exception if the credit card is invalid.
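A low-level requirement like that maps naturally onto a unit test. Here's a sketch; verify_creditcard and InvalidCreditCard aren't from any real library, and the validation rule (a string of exactly 16 digits) is a made-up stand-in for real card validation:

```python
class InvalidCreditCard(Exception):
    pass


def verify_creditcard(number):
    # Stand-in rule for illustration only: a "valid" card number
    # is a string of exactly 16 digits.
    if not (isinstance(number, str) and number.isdigit() and len(number) == 16):
        raise InvalidCreditCard(number)


def test_verify_creditcard():
    verify_creditcard("4111111111111111")  # a valid number should not raise
    try:
        verify_creditcard("not-a-card")
    except InvalidCreditCard:
        pass
    else:
        raise AssertionError("expected InvalidCreditCard")
```

The test encodes the low-level requirement: the exact exception type and the string input are part of the specification.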

The sum of all these requirements is the specification. It might be written down in great detail, or it might be a notion in your head (e.g. "2 + 2 is 4"). Regardless, you test your software to make sure it does what it's supposed to.

Sometimes, however, testing can have a different goal. In his book "Lean Startup" Eric Ries talks about building software only to discover that no one actually wanted to use it. Spending much time testing to ensure your software meets the specification is a waste of time if no one will ever use your software.

Ries argues that you first need to figure out if a product will succeed, by testing out what he calls a "Minimum Viable Product" with potential users and customers. This is a very different form of testing: it's not about verifying whether your software meets the specification, it's about learning something you hadn't known before.

The second possible goal of testing your software is in order to gain knowledge. Let's look at another form of testing with this goal. "A/B testing" is a form of testing where you try out two variations and see which produces a better result. Perhaps you are testing a redesign of your website: you show the current design to 90% of your visitors, the new design to 10% of your visitors, and see which results in more signups for your product.

Notice that you have two specifications, and you've already implemented them both. The point of the test is to figure out which works better, to learn something new, not to verify if the implementation meets the specification.
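The 90/10 split described above is commonly implemented by hashing a stable visitor identifier, so each visitor consistently sees the same design across visits. A minimal sketch (the function name and setup are mine, not from any particular A/B testing framework):

```python
import hashlib


def choose_design(visitor_id, new_design_fraction=0.1):
    # Hash the visitor id into a bucket in [0, 1). Hashing, rather than
    # picking randomly on each request, keeps a visitor's assignment stable.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return "new" if bucket < new_design_fraction else "current"
```

With `new_design_fraction=0.1`, roughly 10% of visitors land in the "new" bucket, and the same visitor always gets the same answer.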

Now we have an answer for why you should test: either to verify you meet the specification or to gain new knowledge.

How should you test your software?

Combine the two goals we've come up with (gaining knowledge and matching the specification) and the two means of testing (human and software) and you get four different forms of testing, each providing a more specific testing goal:

Goal: Knowledge
  Means: Humans -> Understanding Users
  Means: Software -> Understanding Runtime Behavior
Goal: Implement the specification
  Means: Humans -> Correct Functionality
  Means: Software -> Stable Functionality

You must choose the appropriate ones to use for your particular needs and situation. Let's go through these four types of testing one by one and see when you should use each.

Understanding Users

These are all questions that cannot be answered by comparing your software to a specification. Instead you need empirical knowledge: you need to observe what actual human beings do when presented with your software.

Relevant testing techniques include:

Understanding Runtime Behavior

These questions can't always be answered by comparing your software to a specification. Once your software is complex enough you can't fully understand or predict how it will behave. You need to observe it actually running to understand its behavior.
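One common way to observe a running system is to instrument the code with logging and timing. A minimal sketch using only the standard library (process_order is a made-up example function):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders")


def process_order(order_id):
    # Time the operation and log the result, so the system's actual
    # behavior in production can be observed rather than predicted.
    start = time.monotonic()
    result = {"order_id": order_id, "status": "processed"}  # placeholder work
    elapsed = time.monotonic() - start
    logger.info("order %s processed in %.6fs", order_id, elapsed)
    return result
```

Aggregating these log lines over time tells you things no specification can: how long operations really take, how often they run, and when they behave unexpectedly.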

Relevant testing techniques include:

Correct Functionality

It's tempting to say that automated tests can prove this, but remember the unit test that checked that 2 + 2 is 5. On a more fundamental level, software can technically match a specification and completely fail to achieve the goal of the specification. Only a human can understand the meaning of the specification and decide if the software matches it.

Relevant testing techniques include:

Stable Functionality

Humans are not a good way to test this. Humans are pretty good at ignoring small changes: if a button changes from "Send Now" to "Send now" you might not even notice at all. In contrast, software will break if your API changes from sendNow() to send_now(), or if the return type changes subtly.

This means a public API, an API that other software relies on, is where automated tests pay off: it needs stability in order to be correct. Writing automated tests for private APIs, or for code that is changing rapidly, will result in high maintenance costs as you continuously update your tests.
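A contract-style unit test can pin down a public API's name and return shape, so a rename like sendNow() to send_now(), or a subtle return-type change, fails the build instead of breaking callers. (send_now here is a made-up example, not a real API:)

```python
def send_now(message):
    # Hypothetical public API: queue a message and report its status.
    return {"status": "queued", "message": message}


def test_send_now_contract():
    # Pin the parts other software relies on: the callable's name,
    # the return type, and the keys and values in the result.
    result = send_now("hello")
    assert isinstance(result, dict)
    assert result["status"] == "queued"
    assert result["message"] == "hello"
```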

Relevant testing techniques include:

Applying the model

So what is all this good for?

Choosing how to test

First, our initial purpose: the model can help you choose what form of testing to do based on your goals.

Consider a startup building a product that it's not sure anyone wants. Writing automated tests may prove a waste of time, since it focuses on implementing a specification before figuring out what users actually want.

One possible alternative is the Lean Startup methodology, which focuses on experiments or tests whose goal is finding what product will meet customers' needs. That means focusing on the Understanding Users quadrant. Once a product is chosen you can spend the time to write more than the most minimal of specifications, at which point much more resources are applied to Correct Functionality and Stable Functionality.

Recognizing when you've chosen the wrong type of testing

Second, the model can help you change course if you're using the wrong type of testing. For example, consider a hypothetical startup that writes tax preparation software (the details are inspired by a real company). They wrote Selenium tests for their web UI at the same time as they were rapidly making significant changes to that UI.

Even with the tests their application was still buggy, and every time they changed the UI the tests would break. The tests didn't seem to improve quality, and they wasted developer time maintaining them. What were they doing wrong?

Their problem was that their system really has two parts:

  1. The tax engine, which is fairly stable: the tax code changes only once per year. Errors in the tax engine are a significant problem for users, and incompatible API changes are a problem for developers. This suggests the need for Stable Functionality tests, e.g. unit tests talking directly to the tax calculation engine. Correct Functionality could be ensured by code review and feedback from a tax accountant.
  2. The web-based user interface. The UI was changing constantly, which suggests Stable Functionality wasn't a goal yet. Correct Functionality was still a goal, so the UI should have been tested manually by humans (e.g. the programmers as they wrote the code.)
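A Stable Functionality test against the tax engine might look like this; the engine's API and the tax rule are invented for illustration:

```python
def calculate_tax(income, year):
    # Invented stand-in for the real tax engine: a flat 25% rate.
    # A real engine would encode that year's tax code.
    if year != 2017:
        raise ValueError("unsupported tax year")
    return round(income * 0.25, 2)


def test_tax_engine_2017():
    # Talks directly to the engine, bypassing the fast-changing web UI,
    # so the tests only break when the tax logic itself changes.
    assert calculate_tax(100000, 2017) == 25000.0
    assert calculate_tax(0, 2017) == 0.0
```

Because the engine only changes once a year, these tests rarely need updating, unlike the Selenium tests that broke with every UI tweak.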

A basis for discussing testing

Finally, the model provides a shared terminology that can help you discuss testing in its broadest sense and its many differing goals.

Summary

Why should you test your software? Either to gain knowledge or to implement a specification.

How can you test your software? With humans or with software.

How should you test your software? Depending on your particular situation, choose the relevant forms of testing for understanding users, understanding runtime behavior, correct functionality or stable functionality.

Questions or suggestions? Add a comment to the Hacker News or Reddit discussions linked below.

26 Mar 2017 4:00am GMT

20 Mar 2017

Itamar Turner-Trauring: Dear recruiter, "open floor space" is not a job benefit

I was recently astonished by a job posting for a company that touted their "open floor space." Whoever wrote the ad appeared to sincerely believe that an open floor office was a reason to work for the company, to be mentioned alongside a real benefit like work/life balance.

While work/life balance and an open floor plan can co-exist, an open floor plan is a management decision much more akin to requiring long hours: both choose control over productivity.

The fundamental problem facing managers is that productivity is hard to measure. Faced with the inability to measure productivity, managers may feel compelled to measure time spent working. Never mind that it's counter-productive: at least it gives management control, even if it's control over the wrong thing.

Here is a manager explaining the problem:

I [would] like to manage [the] team's output rather than managing their time, because if they are forced to spend time inside the office, it doesn't mean they are productive or even working. At the same time it's hard to manage output, because software engineering tasks are hard to estimate and things can go out of the track easily.

In this case at least the manager involved understands that what matters is output, not hours in the office. But not everyone is as insightful.

Choosing control

In cases where management does fall into the trap of choosing control over productivity, the end result is a culture where the only thing that matters is hours in the office. Here's a story I heard from a friend about a startup they used to work at:

People would get in around 8 or 9, because that's when breakfast is served. They work until lunch, which is served in the office, then until dinner, which is served in the office. Then they do social activities in the office, video games or board games, and then keep working until 10PM or later. Their approach was that you can bring your significant other in for free dinner, and therefore why leave work? Almost like your life is at the office.

Most of the low level employees, especially engineers, didn't feel that this was the most productive path. But everyone knew this was the most direct way to progress, to a higher salary, to becoming a team lead. The number of hours in the office is a significant part of how your performance is rated.

I don't think people can work 12 hours consistently. And when you're not working and you're in an open plan office, you distract people. It's not about hours worked, it's about hours in office, there's ping pong tables... so there's always someone asking you to play ping pong or distracting you with a funny story. They're distracting you, their head wasn't in the zone, but they had to be in the office.

A team of 10 achieved what a team of 3 should achieve.

Control through visibility

Much like measuring hours in the office, an open floor office is designed for control rather than productivity. A manager can easily see what each developer is doing: are they coding? Are they browsing the web? Are they spending too much time chatting to each other?

In the company above the focus on working hours was apparently so strong that the open floor plan was less relevant. But I've no doubt there are many companies where you'll start getting funny looks from your manager if you spend too much time appearing to be "unproductive."

To be fair, sometimes open floor plans are just chosen for cheapness, or thoughtlessly copied from other companies because all the cool kids are doing it. Whatever the motivation, they're still bad for productivity. Programming requires focus, and concentrated thought, and understanding complex systems that cannot fit in your head all at once and so constantly need to be swapped in and out.

Open floor spaces create exactly the opposite of the environment you need for programming: they're full of noise and distraction, and headphones only help so much. I've heard of people wearing hoodies to block out not just noise but also visual distraction.

Dear recruiter

Work/life balance is a real job benefit, for both employers and employees: it increases productivity while allowing workers space to live outside their job. But "open office space" is not a benefit for anyone.

At worst it means your company is perversely sabotaging its employees in order to control them better. At best it means your company doesn't understand how to enable its employees to do their job.

20 Mar 2017 4:00am GMT

Moshe Zadka: Shipping Python Applications in Docker

Posted on my new blog as an experiment!

20 Mar 2017 1:36am GMT

12 Mar 2017

Itamar Turner-Trauring: Unit testing, Lean Startup, and everything in-between

This was my first draft of a talk I'll be giving at PyCon 2017. I've since rewritten it extensively, so you should just read the much improved second draft.

12 Mar 2017 5:00am GMT

05 Mar 2017

Itamar Turner-Trauring: Why you're (not) failing at your new job

It's your first month at your new job, and you're worried you're on the verge of getting fired. You don't know what you're doing, everyone is busy and you need to bother them with questions, and you're barely writing any code. Any day now your boss will notice just how bad a job you're doing... what will you do then?

Luckily, you're unlikely to be fired, and in reality you're likely doing just fine. What you're going through happens to almost everyone when they start a new job, and your panicked feeling will eventually pass.

New jobs make you incompetent

Just like you, every time I've started a new job I've had to deal with feeling incompetent.

Do you notice the theme?

Every time you start a new job you are leaving behind the people, processes and tools you understand, and starting from scratch. A new language, new frameworks, new tools, new codebases, new ways of doing things, people you don't know, business logic you don't understand, processes you're unfamiliar with... of course it's scary and uncomfortable.

Luckily, for the most part this is a temporary feeling.

This too will pass

Like a baby taking their first steps, or a child on their first bike ride, those first months on a new job will make you feel incompetent. Soon the baby will be a toddler, the child will be riding with confidence. You too will soon be productive and competent.

Given some time and effort you will eventually learn what you need to know:

...the codebase.

...the processes, how things are done and maybe even why.

...the programming language.

...who to ask and when to ask them.

...the business you are operating in.

Since that's a lot to learn, it will take some time, but unless you are working for an awful company that is expected and normal.

What you can do

While the incompetent phase is normal and unavoidable, there is still something you can do about it: learn how to learn better. Every time you start a new job you're going to be learning new technologies, new processes, new business logic. The most important skill you can learn is how to learn better and faster.

The faster you learn the faster you'll get past the feeling of incompetence when you start a new job. The faster you learn the faster you can become a productive employee or valued consultant.

Some skills are specific to programming. For example, when learning new programming languages, I like skimming a book or tutorial first before jumping in: it helps me understand the syntax and basic concepts. Plus having a mental map of the book helps me know where to go back to when I'm stuck. Other skills are more generic, e.g. there is considerable research on how learning works that can help you learn better.

Finally, another way I personally try to learn faster is by turning my mistakes into educational opportunities. Past and present, coding or career, every mistake I make is a chance to figure out what I did wrong and how I can do better. If you'd like to avoid my past mistakes, sign up to get a weekly email with one of my mistakes and what you can learn from it.

05 Mar 2017 5:00am GMT

19 Feb 2017

Itamar Turner-Trauring: When AI replaces programmers

The year is 2030, and artificial intelligence has replaced all programmers. Let's see how this brave new world works out:

Hi! I'm John the Software Genie, here to help you with all your software needs.

Hi, I'd like some software to calculate the volume of my house.

Awesome! May I have access to your location?

Why do you need to access my location?

I will look up your house in your city's GIS database and use its dimensions to calculate its volume.

Sorry, I didn't quite state that correctly. I want some software that will calculate the dimensions of any house.

Awesome! What is the address of this house?

No, look, I don't want anything with addresses. You can have multiple apartments in a house, and anyway some structures don't have an address, or are just being designed... and the attic and basement don't always count... How about... I want software that calculates the volume of an abstract apartment.

Awesome! What's an abstract apartment?

Grrrr. I want software that calculates the sum of the volumes of some rooms.

Awesome! Which rooms?

You know what, never mind, I'll use a spreadsheet.

I'm sorry Dave, I can't let you do that.

What.

Just a little joke! I'm sorry you decided to go with a spreadsheet. Your usage bill for $153.24 will be charged to your credit card. Have a nice day!

Back to the present: I've been writing software for 20 years, and I find the idea of being replaced by an AI laughable.

Processing large amounts of data? Software's great at that. Figuring out what a human wants, or what a usable UI is like, or what the real problem you need to solve is... those are hard.

Imagine what it would take for John the Software Genie to learn from that conversation. I've made my share of mistakes over the years, but I've learned enough that these days I can gather requirements decently. How do you teach an AI to gather requirements?

We might one day have AI that is as smart as a random human, an AI that can learn a variety of skills, an AI that can understand what those strange and pesky humans are talking about. Until that day comes, I'm not worried about being replaced by an AI, and you shouldn't worry either.

Garbage collection didn't make programmers obsolete just because it automated memory management. In the end automation is a tool to be controlled by human understanding. As a programmer you should focus on building the skills that can't be automated: figuring out the real problems and how to solve them.

19 Feb 2017 5:00am GMT

11 Feb 2017

feedPlanet Twisted

Twisted Matrix Laboratories: Twisted 17.1.0 Released

On behalf of Twisted Matrix Laboratories, I am honoured to announce the release of Twisted 17.1!

The highlights of this release are:


For more information, check the NEWS file (link provided below).

You can find the downloads on PyPI (or alternatively our website). The NEWS file is also available on GitHub.

Many thanks to everyone who had a part in this release - the supporters of the Twisted Software Foundation, the developers who contributed code as well as documentation, and all the people building great things with Twisted!

Twisted Regards,
Amber Brown (HawkOwl)

11 Feb 2017 10:08am GMT

10 Feb 2017

feedPlanet Twisted

Glyph Lefkowitz: Make Time For Hope

Pandora hastened to replace the lid! but, alas! the whole contents of the jar had escaped, one thing only excepted, which lay at the bottom, and that was HOPE. So we see at this day, whatever evils are abroad, hope never entirely leaves us; and while we have THAT, no amount of other ills can make us completely wretched.

It's been a rough couple of weeks, and it seems likely to continue to be so for quite some time. There are many real and terrible consequences of the mistake that America made in November, and ignoring them will not make them go away. We'll all need to find a way to do our part.

It's not just you - it's legit hard to focus on work right now. This is especially true if, as many people in my community are, you are trying to motivate yourself to work on extracurricular, after-work projects that you used to find exciting, and instead find it hard to get out of bed in the morning.

I have no particular position of authority to advise you what to do about this situation, but I need to give a little pep talk to myself to get out of bed in the morning these days, so I figure I'd share my strategy with you. This is as much in the hope that I'll follow it more closely myself as it is that it will be of use to you.

With that, here are some ideas.

It's not over.

The feeling that nothing else is important any more, that everything must now be a life-or-death political struggle, is exhausting. Again, I don't want to minimize the very real problems that are coming or the need to do something about them, but, life will go on. Remind yourself of that. If you were doing something important before, it's still important. The rest of the world isn't going away.

Make as much time for self-care as you need.

You're not going to be of much use to anyone if you're just a sobbing wreck all the time. Do whatever you can do to take care of yourself and don't feel guilty about it. We'll all do what we can, when we can.1

You need to put on your own oxygen mask first.

Make time, every day, for hope.

"You can stand anything for 10 seconds. Then you just start on a new 10 seconds."

Every day, set aside some time - maybe 10 minutes, maybe an hour, maybe half the day, however much you can manage - where you're going to just pretend everything is going to be OK.2

Once you've managed to securely fasten this self-deception in place, take the time to do the things you think are important. Of course, for my audience, "work on your cool open source code" is a safe bet for something you might want to do, but don't make the mistake of always grimly setting your jaw and nose to the extracurricular grindstone; that would just be trading one set of world-weariness for another.

After convincing yourself that everything's fine, spend time with your friends and family, make art, or heck, just enjoy a good movie. Don't let the flavor of life turn to ash on your tongue.

Good night and good luck.

Thanks for reading. It's going to be a long four years3; I wish you the best of luck living your life in the meanwhile.


  1. I should note that self-care includes just doing your work to financially support yourself. If you have a job that you don't feel is meaningful but you need the wages to survive, that's meaningful. It's OK. Let yourself do it. Do a good job. Don't get fired.

  2. I know that there are people who are in desperate situations who can't do this; if you're an immigrant in illegal ICE or CBP detention, I'm (hopefully obviously) not talking to you. But, luckily, this is not yet the majority of the population. Most of us can, at least some of the time, afford to ignore the ongoing disaster.

  3. Realistically, probably more like 20 months, once the Rs in congress realize that he's completely destroyed their party's credibility and get around to impeaching him for one of his numerous crimes.

10 Feb 2017 7:58am GMT

Itamar Turner-Trauring: Buggy Software, Loyal Users: Why Bug Reporting is Key To User Retention

Your software has bugs. Sorry, mine does too.

Doesn't matter how much you've tested it or how much QA has tested it, some bugs will get through. And unless you're NASA, you probably can't afford to test your software enough anyway.

That means your users will be finding bugs for you. They will discover that your site doesn't work on IE 8.2. They clicked a button and a blank screen came up. Where is that feature you promised? WHY IS THIS NOT WORKING?!

As you know from personal experience, users don't enjoy finding bugs. Now you have buggy software and unhappy users. What are you going to do about it?

Luckily, in 1970 the economist Albert O. Hirschman came up with an answer to that question.

Exit, Voice and Loyalty

In his classic treatise Exit, Voice and Loyalty, Hirschman points out that users who are unhappy with a product have exactly two options. Just two, no more:

  1. Exiting, i.e. giving up on your product.
  2. Voicing their concerns.

Someone who has given up on your software isn't likely to tell you about their issues. And someone who cares enough to file a bug is less likely to switch away from your software. Finally, loyal users will stick around and use their voice when otherwise they would choose exit.

Now, your software has no purpose without users. So chances are you want to keep them from leaving (though perhaps you're better off without some of them - see item #2).

And here's the thing: there's only two choices, voice or exit. If you can convince your users to use their voice to complain, and they feel heard, your users are going to stick around.

Now at this point you might be thinking, "Itamar, why are you telling me obvious things? Of course users will stick around if we fix their bugs." But that's not how you keep users around. You keep users around by allowing them to express their voice, by making sure they feel heard.

Sometimes you may not fix the bug, and still have satisfied users. And sometimes you may fix a bug and still fail at making them feel heard; better than not fixing the bug, but you can do better.

To make your users feel heard when they encounter bugs you need to make sure:

  1. They can report bugs with as little effort as possible.
  2. They hear back from you.
  3. If you choose to fix the bug, you can actually figure out the problem from their bug report.
  4. The bug fix actually gets delivered to them.

Let's go through these requirements one by one.

Bug reporting

Once your users notice a problem you want them to communicate it to you immediately. This will ensure they choose the path of voice and don't contemplate exiting to your shiny competitor at the domain next door.

Faster communication also makes it much more likely the bug report will be useful. If the bug occurred 10 seconds ago the user will probably remember what happened. If it's a few hours later you're going to hear something about how "there was a flying moose on the screen? or maybe a hyena?" Remember: users are human, just like you and me, and humans have a hard time remembering things (and sometimes we forget that.)

To ensure bugs get reported quickly (or at all) you want to make it as easy as possible for the user to report the problem. Each additional step, e.g. creating an account in an issue tracker, means more users dropping out of the reporting process.

In practice many applications are designed to make it as hard as possible to report bugs. You'll be visiting a website, when suddenly:

"An error has occurred, please refresh your browser."

And of course the page will give no indication of how or where you should report the problem if it reoccurs, and do the people who run this website even care about your problem?

Make sure that's not how you're treating your users when an error occurs.

Improving bug reporting

So let's say you include a link to the bug report page, and let's say the user doesn't have to jump through hoops and email verification to sign up and then fill out the 200 fields that JIRA or Bugzilla think are really important and would you like to tell us about your most common childhood nightmare from the following list? We'll assume that bug reporting is easy to find and easy to fill it out, as it should be.

But... you need some information from the user. Like, what version of the program they were using, the operating system and so on and so forth. And you do actually need that information.

But the user just wants to get this over with, and every additional piece of information you ask for is likely to make them give up and go away. What to do?

Here's one solution: include it for them.

I've been using the rather excellent Spacemacs editor, and as usual when I use new software I had to report a bug. So it goes.

Anyway, to report a bug in Spacemacs you just hit the bug reporting key combo. You get a bug reporting page. In your editor. And it's filled in the Spacemacs version and the Emacs version and all the configuration you made. And then you close the buffer and it pre-populates a new GitHub issue with all this information and you hit Submit and you're done.

This is pretty awesome. No need to find the right web page or copy down every single configuration and environmental parameter to report a bug: just run the error-reporting command.

Spacemacs screenshot

Also, notice that this is a lot better than automatic crash reporting. I got to participate and explain the bits that were important to me, it's not the "yeah we reported the crash info and let's be honest you don't really believe anyone looks at these do you?"-vibe that one gets from certain software.

You can do much the same thing with a command-line tool, or a web-based application. Every time an error occurs or the user is having issues (e.g. the user ran yourcommand --help):

  1. Ask for the user's feedback in the terminal, or within the context of the web page, and then file the bug report for them.
  2. Gather as much information as you can automatically and include it in the bug report, so the user only has to report the part they care about.
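The two steps above can be sketched in a few lines. Here's a minimal Python example that gathers environment details automatically and builds a pre-filled GitHub issue URL, so the user only has to describe what went wrong. The version string and repository URL are placeholders for illustration, not real values:

```python
import platform
import urllib.parse

APP_VERSION = "1.4.2"  # hypothetical: substitute your application's real version


def bug_report_url(user_description: str) -> str:
    """Build a pre-filled issue URL so the user only adds their own story."""
    # Collect environment details automatically so the user doesn't have to.
    environment = "\n".join([
        f"- App version: {APP_VERSION}",
        f"- Python: {platform.python_version()}",
        f"- OS: {platform.system()} {platform.release()}",
    ])
    body = f"{user_description}\n\n### Environment (auto-collected)\n{environment}"
    # GitHub pre-fills the new-issue form from these query parameters.
    query = urllib.parse.urlencode({"title": "Bug report", "body": body})
    return f"https://github.com/example/app/issues/new?{query}"
```

A command-line tool could print this URL when it crashes; a web app could render it as a "Report this problem" link on the error page. Either way the user's part of the report shrinks to one sentence.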

Responding to the bug report

The next thing you have to do is actually respond to the bug report. As I write this the Spacemacs issue I filed is sitting there all sad and lonely and unanswered, and it's a little annoying. I do understand, open source volunteer-run project and all. (This is where loyalty comes in.)

But for a commercial product I am paying for I want to know that my problem has been heard. And sometimes the answer might be "sorry, we're not going to fix that this quarter." And that's OK, at least I know where I stand. But silence is not OK.

Respond to your bug reports, and tell your users what you're doing about it. And no, an automated response is not good enough.

Diagnosing the bug

If you've decided to fix the bug you can now proceed to do so... if you can figure out what the actual problem is. If diagnosing problems is impossible, or even just too expensive, you're not going to fix the problem. And that means your users will continue to be affected by the bug.

Let's look at a common bug report:

I clicked Save, and your program crashed and deleted a day's worth of my hopes and dreams.

This is pretty bad: an unhappy user, and as is often the case the user simply doesn't have the information you need to figure out what's going on.

Even if the bug report is useful it can often be hard to reproduce the problem. Testing happens in controlled environments, which makes investigation and reproduction easier. Real-world use happens out in the wild, so good luck reproducing that crash.

Remember how we talked about automatically including as much information as possible in the bug report? You did that, right? And included all the relevant logs?

If you didn't, now's the time to think about it: try to ensure that logs, core dumps, and so on and so forth will always be available for bug reports. And try to automate submitting them as much as possible, while still giving the user the ability to report their specific take on the problem.

(And don't forget to respect your user's privacy!)
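One cheap way to have logs ready for any bug report is to keep the most recent records in memory at all times. Here's a rough Python sketch (the class name and capacity are my own choices, not a standard API) using a ring buffer on top of the standard `logging` module:

```python
import logging
from collections import deque


class RingBufferHandler(logging.Handler):
    """Keep the last N log records in memory, ready to attach to a bug report."""

    def __init__(self, capacity: int = 100):
        super().__init__()
        # deque with maxlen silently discards the oldest record when full.
        self.records = deque(maxlen=capacity)

    def emit(self, record: logging.LogRecord) -> None:
        self.records.append(self.format(record))

    def dump(self) -> str:
        """Return the recent history as one string for the bug report."""
        return "\n".join(self.records)


# Wire it into the root logger once, at application startup.
handler = RingBufferHandler(capacity=100)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)

logging.info("saving document")
logging.error("disk full")
# handler.dump() now holds the recent history for the report.
```

When an error dialog or crash handler fires, call `handler.dump()` and include the result in the auto-generated report, after scrubbing anything privacy-sensitive.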

Distributing the fix

So now you've diagnosed and fixed your bug: problem solved! Or rather, problem almost solved. If the user hasn't gotten the bug fix all this work has been a waste of time.

You need fast releases, and you need automated updates where possible. If it takes too long for fixes to reach your users then for practical purposes your users are not being heard. (There are exceptions, as always: in some businesses your users will value stability over all else.)
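Even when you can't auto-update, you can at least tell users a fix exists. Here's a hedged sketch of a startup version check; the endpoint URL and JSON shape are invented for illustration, and the check deliberately fails quietly so it never blocks the user:

```python
import json
import urllib.request

CURRENT_VERSION = (1, 4, 2)  # hypothetical: your shipped version
UPDATE_URL = "https://example.com/app/latest.json"  # hypothetical endpoint


def update_available(fetch=urllib.request.urlopen) -> bool:
    """Return True if the (hypothetical) release endpoint reports a newer version.

    `fetch` is injectable so the check can be tested without network access.
    """
    try:
        with fetch(UPDATE_URL, timeout=5) as resp:
            latest = tuple(json.load(resp)["version"])
    except (OSError, ValueError, KeyError):
        # Never block or crash the app because the update check failed.
        return False
    return latest > CURRENT_VERSION
```

On startup you'd call `update_available()` and, if it returns `True`, show a one-line "a fix is available" notice rather than nagging the user.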

A virtuous circle

If you've done this right:

  1. Your users will feel heard, and choose voice over exit.
  2. You will get the occasional useful bug report, allowing you to improve your product.
  3. All your users will quickly benefit from those improvements.

There is more to be done, of course: many bugs can and should be caught in advance. And many users will never tell you their problems unless you ask.

But it's a good start at making your users happy and loyal, even when they encounter the occasional bug.

10 Feb 2017 5:00am GMT

31 Jan 2017

feedPlanet Twisted

Jonathan Lange: Announcing haskell-cli-template

Last October, I announced servant-template, a cookiecutter template for creating production-ready Haskell web services.

Almost immediately after making it, I wished I had something for building command-line tools quickly. I know stack comes with a heap of them, but:

So I made haskell-cli-template. It's very simple, it just makes a Haskell command-line project with some tests, command-line parsing, and a CircleCI build.

I wanted to integrate logging-effect, but after a few months away from it my tired little brain wasn't able to figure it out. I like command-line tools with logging controls, so I suspect I'll add it again in the future.

Let me know if you use haskell-cli-template to make anything cool, and please feel free to fork and extend.

31 Jan 2017 12:00am GMT

30 Jan 2017

feedPlanet Twisted

Jonathan Lange: Announcing graphql-api: Haskell library for GraphQL

Late last year, my friend Tom tried to convince me that writing REST APIs was boring and repetitive and that I should give this thing called GraphQL a try.

I was initially sceptical. servant, the REST library that I'm most familiar with, is lovely. Its clever use of Haskell's type system means that all the boring boilerplate I'd have to write in other languages just goes away.

However, after watching Lee Byron's Strange Loop talk on GraphQL I began to see his point. Being able to get many resources with the same request is very useful, and as someone who writes & runs servers, I very much want clients to ask only for the data that they need.

The only problem is that there isn't really a way to write GraphQL servers in Haskell... until now.

Introducing graphql-api

Tom and I put together a proof-of-concept GraphQL server implementation called graphql-api, which we released to Hackage today.

It lets you take a GraphQL schema and translate it into a Haskell type that represents the schema. You can then write handlers that accept and return native Haskell types. graphql-api will take care of parsing and validating your user queries, and Haskell's type system will make sure that your handlers handle the right thing.

Using graphql-api

Say you have a simple GraphQL schema, like this:

type Hello {
  greeting(who: String!): String!
}

which defines a single top-level type Hello that contains a single field, greeting, that takes a single, required argument who.

A user would query it with something like this:

{
  greeting(who: "World")
}

And expect to see an answer like:

{
  "data": {
    "greeting": "Hello World!"
  }
}

To do this in Haskell with GraphQL, first we'd define the type:

type Hello = Object "Hello" '[]
  '[ Argument "who" Text :> Field "greeting" Text ]

And then a handler for that type:

hello :: Handler IO Hello
hello = pure greeting
 where
   greeting who = pure ("Hello " <> who <> "!")

We can then run a query like so:

queryHello :: IO Response
queryHello = interpretAnonymousQuery @Hello hello "{ greeting(who: \"World\") }"

And get the output we expect.

There's a lot going on in this example, so I encourage you to check out our tutorial to get the full story.

graphql-api's future

Tom and I put graphql-api together over a couple of months in our spare time because we wanted to actually use it. However, as we dug deeper into the implementation, we found we really enjoyed it and want to make a library that's genuinely good and helps other people do cool stuff.

The only way to do that, however, is to release it and get feedback from our users, and that's what we've done. So please use graphql-api and tell us what you think. If you build something cool with it, let us know.

For our part, we want to improve the error messages, make sure our handling for recursive data types is spot on, and smooth down a few rough edges.

Thanks

Tom and I want to thank J. Daniel Navarro for his great GraphQL parser and encoder, which forms the basis for what we built here.

About the implementation

graphql-api is more-or-less a GraphQL compiler hooked up to a type-based execution (aka "resolving") engine that's heavily inspired by Servant and uses various arcane type tricks from GHC 8.

We tried to stick to implementing the GraphQL specification. The spec is very well written, but implementing it has taught us that GraphQL is not at all as simple as it looks at first.

I can't speak very well to the type chicanery, except to point you at the code and at the Servant paper.

The compiler mostly lives in the GraphQL.Internal.Validation module. The main idea is that it takes the AST and translates it into a value that cannot possibly be wrong.

All the syntax stuff is from the original graphql-haskell, with a few fixes, and a tweak to guarantee that Name values are correct.

30 Jan 2017 12:00am GMT