16 Oct 2018


Jonathan Lange: Notes on test coverage

These are a few quick notes to self, rather than a cogent thesis. I want to get this out while it's still fresh, and I want to lower my own mental barrier to publishing here.

I've been thinking about test coverage recently, inspired by conversations that followed DRMacIver's recent post.

Here's my current working hypothesis: every line of code should be covered by a test, unless someone has explicitly, deliberately opted it out.

The justification is that "the test of all knowledge is experiment" [0]. While we should absolutely make our code easy to reason about, and prove as much as we can about it, we need to check what it does against actual reality.

Simple testing really can prevent most critical failures. It's OK to not test some part of your code, but that should be a conscious, local, recorded decision. You have to explicitly opt out of test coverage. The tooling should create a moment where you either write a test, or you turn around and say "hold my beer".

Switching to this for an existing project can be prohibitively expensive, though, so a ratchet is a good idea. The ratchet should be "lines of uncovered code", and that should only be allowed to go down. Don't ratchet on percentages, as that will let people add new lines of uncovered code.

Naturally, all of this has to be enforced in CI. No one is going to remember to run the coverage tool, and no one is going to remember to check for it during code review. Also, it's almost always easier to get negative feedback from a robot than a human.

I tagged this post with Haskell, because I think all of this is theoretically possible to achieve on a Haskell project, but requires way too much tooling to set up.

As a bit of an experiment, I set up a test coverage ratchet with graphql-api. I wanted both to test out my new enthusiasm for aiming for 100% coverage, and I wanted to make it easier to review PRs.

The ratchet script is some ad hoc Python, but it's working. External contributors are actually writing tests, because the computer tells them to do so. I need to think less hard about PRs, because I can look at the tests to see what they actually do. And we are slowly improving our test coverage.
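For the curious, here's a minimal sketch of what such a ratchet can look like in Python. To be clear, this is not the actual graphql-api script: it assumes a JSON coverage report shaped like the one Python's coverage.py emits, and a checked-in coverage-ratchet.txt file recording how many uncovered lines are currently allowed.

import json
import sys

THRESHOLD_FILE = "coverage-ratchet.txt"

def count_uncovered(report_path="coverage.json"):
    # The report is assumed to map each file to its uncovered
    # ("missing") line numbers, as coverage.py's JSON report does.
    with open(report_path) as f:
        report = json.load(f)
    return sum(
        len(info["missing_lines"]) for info in report["files"].values())

def main():
    with open(THRESHOLD_FILE) as f:
        allowed = int(f.read().strip())
    uncovered = count_uncovered()
    if uncovered > allowed:
        # The ratchet only goes one way: fail CI on any new uncovered lines.
        sys.exit("%d uncovered lines, but only %d allowed: write a test, "
                 "or explicitly record the decision not to." % (uncovered, allowed))
    if uncovered < allowed:
        print("Coverage improved: lower %s to %d." % (THRESHOLD_FILE, uncovered))

if __name__ == "__main__":
    main()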

I want to build on this tooling to provide something genuinely good, but I honestly don't have the budget for it at present. I hope to at least write a good README or user guide that illustrates what I'm aiming for. Don't hold your breath.

[0] Richard Feynman, The Feynman Lectures on Physics

16 Oct 2018 11:00pm GMT

10 Oct 2018


Itamar Turner-Trauring: The next career step for Senior Software Engineers (that isn't management)

You've been working as a programmer for a few years, you've been promoted once or twice, and now you're wondering what's next. The path until this point was straightforward: you learned how to work on your own, and then you got promoted to Senior Software Engineer or some equivalent job title.

But now there's no clear path ahead.

Do you become a manager and stop coding?

Do you just learn new technologies, or is that not enough?

What should you be aiming for?

In this post I'd like to present an alternative career progression, an alternative that will give you more autonomy, and more bargaining power. And unlike becoming a manager, it will still allow you to write code.

From coding to solving problems

In the end, your job as a programmer is solving problems, not writing code. Solving problems requires:

  1. Finding and identifying the problem.
  2. Coming up with a solution.
  3. Implementing the solution.

Each of these can be thought of as a skill tree: a set of related skills that can be developed separately and in parallel. In practice, however, you'll often start in reverse order with the third skill tree, and add the others on one by one as you become more experienced.

Randall Koutnik describes these as job titles of a sort, a career progression: Implementers, Solvers, and Finders.

As an Implementer, you're an inexperienced programmer, and your tasks are defined by someone else: you just implement small, well-specified chunks of code.

Let's imagine you work for a company building a website for animal owners. You go to work and get handed a task: "Add a drop-down menu over here listing all iguana diseases, which you can get from the IGUANA_DISEASE table. Selecting a menu item should redirect you to the appropriate page."

You don't know why a user is going to be listing iguana diseases, and you don't have to spend too much time figuring out how to implement it. You just do what you're told.

As you become more experienced, you become a Solver: you're able to come up with solutions to less well-defined problems.

You get handed a problem: "We need to add a section to the website where pet owners can figure out if their pet is sick." You figure out what data you have and which APIs you can use, you come up with a UI together with the designer, and then you create an implementation plan. Then you start coding.

Eventually you become a Finder: you begin identifying problems on your own and figuring out their underlying causes.

You go talk to your manager about the iguanas: almost no one owns iguanas, why are they being given equal space on the screen as cats and dogs? Not to mention that writing iguana-specific code seems like a waste of time, shouldn't you be writing generic code that will work for all animals?

After some discussion you figure out that the website architecture, business logic, and design are going to have to be redone so that you don't have to write new code every time a new animal is added. If you come up with the right architecture, adding a new animal will take just an hour's work, so the company can serve many niche animal markets at low cost. Designing and implementing the solution will likely be enough work that you're going to have to work with the whole team to do it.

The benefits of being a Finder

Many programmers end up as Solvers and don't quite know what to do next. If management isn't your thing, becoming a Finder is a great next step, for two reasons: autonomy and productivity.

Koutnik's main point is that each of these three stages gives you more autonomy. As an Implementer you have very little autonomy, as a Solver you have more, and as a Finder you have lots: you're given a pile of vague goals and constraints and it's up to you to figure out what to do. And this can be a lot of fun.

But there's another benefit: as you move from Implementer to Solver to Finder you become more productive, because you're doing less unnecessary work.

The better you are at diagnosing and identifying underlying problems, coming up with solutions, and working with others, the less unnecessary work you'll do, and the more productive you'll be.

Leveraging your productivity

If you're a Finder you're vastly more productive, which makes you a far more valuable employee. You're the person who finds the expensive problems, who identifies the roadblocks no one knew were there, who discovers what your customers really wanted.

And that means you have far more negotiating leverage.

So if you want to keep coding, and you still want to progress in your career, start looking for problems. If you pay attention, you'll find them everywhere.




10 Oct 2018 4:00am GMT

02 Oct 2018


Moshe Zadka: Why No Dry Run?

(Thanks to Brian for his feedback. All mistakes and omissions that remain are mine.)

Some commands have a --dry-run option, which simulates running the command but without taking effect. Sometimes the option exists for speed reasons: just pretending to do something is faster than doing it. However, more often this is because doing it can cause big, possibly detrimental, effects, and it is nice to be able to see what would happen before running the script.

For example, ansible-playbook has the --check option, which will not actually have any effect: it will just report what ansible would have done. This is useful when editing a playbook or changing the configuration.

However, this is the worst possible default. If we have already decided that our command can cause much harm, and one way to mitigate the harm is to run it in a "dry run" mode and have a human check that this makes sense, why is "cause damage" the default?

As someone who has worked SRE/DevOps jobs, I run many utilities that can cause great harm when used carelessly. They are built to destroy whole environments in one go, or to upgrade several services, or to clean out unneeded data. Running one against the wrong database, or against the wrong environment, can wreak all kinds of havoc: from disabling a whole team for a day to actual financial harm to the company.

For this reason, every tool I write defaults to dry run mode; to actually have an effect, I must explicitly specify --no-dry-run. This means that my finger accidentally slipping on the enter key just causes something to appear on my screen. After I am satisfied with the command, I up-arrow and add --no-dry-run to the end.
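Concretely, here's a minimal sketch of the pattern in Python with argparse; the tool and its destructive action are made up for illustration:

import argparse

def destroy_environment(name):
    print("Destroying environment %s..." % (name,))
    # ... the actual dangerous work would go here ...

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("environment")
    parser.add_argument("--no-dry-run", action="store_true",
                        help="actually perform the destructive actions")
    args = parser.parse_args()
    if args.no_dry_run:
        destroy_environment(args.environment)
    else:
        print("[dry run] Would destroy environment %s" % (args.environment,))
        print("Re-run with --no-dry-run to do it for real.")

if __name__ == "__main__":
    main()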

I now do it as a matter of course, even for cases where the stakes are lower. For example, the utility that publishes this blog has a --no-dry-run that publishes the blog. When run without arguments, it renders the blog locally so I can check it for errors.

So I really have no excuses... When I write a tool for a serious production system, I always implement a --no-dry-run option, and have dry runs by default. What about you?

02 Oct 2018 7:00am GMT

27 Sep 2018


Itamar Turner-Trauring: Avoiding burnout: lessons learned from a 19th century philosopher

You're hard at work writing code: you need to ship a feature on time, or release a whole new product, and you're pouring all your time and energy into it, your heart and your soul. And then, an uninvited and dangerous question insinuates itself into your consciousness.

If you succeed, if you ship your code, if you release your product, will you be happy? Will all your time and effort be worth it?

And you realize the answer is "no". And suddenly your work is worthless, your goals are meaningless. You just can't force yourself to work on something that doesn't matter.

Why bother? Why work at all?

This is not a new experience. Almost 200 years ago, John Stuart Mill went through this crisis. And being a highly verbose 19th century philosopher, he also wrote a highly detailed explanation of how he managed to overcome what we would call depression or burnout.

And this explanation is useful not just to his 19th century peers, but to us programmers as well.

"Intellectual enjoyments above all"

At the core of Mill's argument is the idea that rational thought, "analysis" he calls it, is corrosive: "a perpetual worm at the root both of the passions and of the virtues". He never rejected rational thought, but he concluded that on its own it was insufficient, and potentially dangerous.

Mill's education had, from an early age, focused him solely on rational analysis. As a young child Mill was taught by his father to understand-not just memorize-Greek, arithmetic, history, mathematics, political economy, far more than even many well-educated adults learned at the time. And since he was taught at home without outside influences, he internalized his father's ideas prizing intellect over emotions.

In particular, Mill's father "never varied in rating intellectual enjoyments above all others… For passionate emotions of all sorts, and for everything which has been said or written in exaltation of them, he professed the greatest contempt." Thus Mill learned to prize rational thought and analysis over other feelings, as many programmers do-until he discovered the cost of focusing on those alone.

"The dissolving influence of analysis"

One day, things went wrong:

I was in a dull state of nerves, such as everybody is occasionally liable to; unsusceptible to enjoyment or pleasurable excitement; one of those moods when what is pleasure at other times, becomes insipid or indifferent…

In this frame of mind it occurred to me to put the question directly to myself: "Suppose that all your objects in life were realized; that all the changes in institutions and opinions which you are looking forward to, could be completely effected at this very instant: would this be a great joy and happiness to you?" And an irrepressible self-consciousness distinctly answered, "No!"

From this point on Mill suffered from depression, for months on end. And being of an analytic frame of mind, he was able to intellectually diagnose his problem.

On the one hand, rational logical thought is immensely useful in understanding the world: "it enables us mentally to separate ideas which have only casually clung together". But this ability to analyze also has its costs, since "the habit of analysis has a tendency to wear away the feelings". In particular, analysis "fearfully undermine[s] all desires, and all pleasures".

Why should this make you happy? You try to analyze it logically, and eventually conclude there is no reason it should-and now you're no longer happy.

"Find happiness by the way"

Eventually an emotional, touching scene in a book he was reading nudged Mill out of his misery, and when he fully recovered he changed his approach to life in order to prevent a recurrence.

Mill's first conclusion was that happiness is a side-effect, not a goal you can achieve directly, nor verify directly by rational self-interrogation. Whenever you ask yourself "can I prove that I'm happy?" the self-consciousness involved will make the answer be "no". Instead of choosing happiness as your goal, you need to focus on some other thing you care about:

Those only are happy (I thought) who have their minds fixed on some object other than their own happiness; on the happiness of others, on the improvement of mankind, even on some art or pursuit, followed not as a means, but as itself an ideal end. Aiming thus at something else, they find happiness by the way.

It's worth noticing that Mill is suggesting focusing on something you actually care about. If you're spending your time working on something that is meaningless to you, you will probably have a harder time of it.

"The internal culture of the individual"

Mill's second conclusion was that logical thought and analysis are not enough on their own. He still believed in the value of "intellectual culture", but he also aimed to become a more balanced person by "the cultivation of the feelings". And in particular, he learned the value of "poetry and art as instruments of human culture".

For example, Mill discovered Wordsworth's poetry:

These poems addressed themselves powerfully to one of the strongest of my pleasurable susceptibilities, the love of rural objects and natural scenery; to which I had been indebted not only for much of the pleasure of my life, but quite recently for relief from one of my longest relapses into depression….

What made Wordsworth's poems a medicine for my state of mind, was that they expressed, not mere outward beauty, but states of feeling, and of thought coloured by feeling, under the excitement of beauty. They seemed to be the very culture of the feelings, which I was in quest of. In them I seemed to draw from a Source of inward joy, of sympathetic and imaginative pleasure, which could be shared in by all human beings…

Both nature and art cultivate the feelings, an additional and distinct way of being human beyond logical analysis:

The intensest feeling of the beauty of a cloud lighted by the setting sun, is no hindrance to my knowing that the cloud is vapour of water, subject to all the laws of vapours in a state of suspension…

The practice of happiness

Mill's advice is not a universal panacea; among other flaws, it starts from a position of immense privilege. But I do think Mill hits on some important points about what it means to be human.

If you wish to put it into practice, here is Mill's advice, insofar as I can summarize it (I encourage you to go and read his Autobiography on your own):

  1. Aim in your work not for happiness, but for a goal you care about: improving the world, or even just applying and honing a skill you value.
  2. Your work-and the rational thought it entails-will not suffice to make you happy; rational thought on its own will undermine your feelings.
  3. You should therefore also cultivate your feelings: through nature, and through art.




27 Sep 2018 4:00am GMT

26 Sep 2018


Jp Calderone: Asynchronous Object Initialization - Patterns and Antipatterns

I caught Toshio Kuratomi's post about asyncio initialization patterns (or anti-patterns) on Planet Python. This is something I've dealt with a lot over the years using Twisted (one of the sources of inspiration for the asyncio developers).

To recap, Toshio wondered about a pattern involving asynchronous initialization of an instance. He wondered whether it was a good idea to start this work in __init__ and then explicitly wait for it in other methods of the class before performing the distinctive operations required by those other methods. Using asyncio (and using Toshio's example with some omissions for simplicity) this looks something like:


import asyncio

class Microblog:
    def __init__(self, ...):
        loop = asyncio.get_event_loop()
        self.init_future = loop.run_in_executor(None, self._reading_init)

    def _reading_init(self):
        # ... do some initialization work,
        # presumably expensive or otherwise long-running ...

    @asyncio.coroutine
    def sync_latest(self):
        # Don't do anything until initialization is done
        yield from self.init_future
        # ... do some work that depends on that initialization ...

It's quite possible to do something similar to this when using Twisted. It only looks a little bit different:


from twisted.internet.defer import inlineCallbacks
from twisted.internet.threads import deferToThread

class Microblog:
    def __init__(self, ...):
        self.init_deferred = deferToThread(self._reading_init)

    def _reading_init(self):
        # ... do some initialization work,
        # presumably expensive or otherwise long-running ...

    @inlineCallbacks
    def sync_latest(self):
        # Don't do anything until initialization is done
        yield self.init_deferred
        # ... do some work that depends on that initialization ...

Despite the differing names, these two pieces of code do basically the same thing: they kick off some presumably expensive initialization work in __init__, and then have each method that depends on that work explicitly wait for it to finish before proceeding.

Maintenance costs

One thing this pattern gives you is an incompletely initialized object. If you write m = Microblog() then m refers to an object that's not actually ready to perform all of the operations it supposedly can perform. It's either up to the implementation or the caller to make sure to wait until it is ready. Toshio suggests that each method should do this implicitly (by starting with yield self.init_deferred or the equivalent). This is definitely better than forcing each call-site of a Microblog method to explicitly wait for this event before actually calling the method.

Still, this is a maintenance burden that's going to get old quickly. If you want full test coverage, it means you now need twice as many unit tests (one for the case where the method is called before initialization is complete and another for the case where the method is called after this has happened). At least. Toshio's _reading_init method actually modifies attributes of self which means there are potentially many more than just two possible cases. Even if you're not particularly interested in having full automated test coverage (... for some reason ...), you still have to remember to add this yield statement to the beginning of all of Microblog's methods. It's not exactly a ton of work but it's one more thing to remember any time you maintain this code. And this is the kind of mistake that creates a race condition you might not immediately notice - which means you may ship the broken code to clients and you get to discover the problem when they start complaining about it.

Diminished flexibility

Another thing this pattern gives you is an object that does things as soon as you create it. Have you ever had a class with a __init__ method that raised an exception as a result of a failing interaction with some other part of the system? Perhaps it did file I/O and got a permission denied error or perhaps it was a socket doing blocking I/O on a network that was clogged and unresponsive. Among other problems, these cases are often difficult to report well because you don't have an object to blame the problem on yet. The asynchronous version is perhaps even worse since a failure in this asynchronous initialization doesn't actually prevent you from getting the instance - it's just another way you can end up with an incompletely initialized object (this time, one that is never going to be completely initialized and use of which is unsafe in difficult-to-reason-about ways).

Another related problem is that it removes one of your options for controlling the behavior of instances of that class. It's great to be able to control everything a class does just by the values passed in to __init__ but most programmers have probably come across a case where behavior is controlled via an attribute instead. If __init__ starts an operation then instantiating code doesn't have a chance to change the values of any attributes first (except, perhaps, by resorting to setting them on the class - which has global consequences and is generally icky).

Loss of control

A third consequence of this pattern is that instances of classes which employ it are inevitably doing something. It may be that you don't always want the instance to do something. It's certainly fine for a Microblog instance to create a SQLite3 database and initialize a cache directory if the program I'm writing which uses it is actually intent on hosting a blog. It's most likely the case that other useful things can be done with a Microblog instance, though. Toshio's own example includes a post method which doesn't use the SQLite3 database or the cache directory. His code correctly doesn't wait for init_future at the beginning of his post method - but this should leave the reader wondering why we need to create a SQLite3 database if all we want to do is post new entries.

Using this pattern, the SQLite3 database is always created - whether we want to use it or not. There are other reasons you might want a Microblog instance that hasn't initialized a bunch of on-disk state too - one of the most common is unit testing (yes, I said "unit testing" twice in one post!).

A very convenient thing for a lot of unit tests, both of Microblog itself and of code that uses Microblog, is to compare instances of the class. How do you know you got a Microblog instance that is configured to use the right cache directory or database type? You most likely want to make some comparisons against it. The ideal way to do this is to be able to instantiate a Microblog instance in your test suite and use its == implementation to compare it against an object given back by some API you've implemented. If creating a Microblog instance always goes off and creates a SQLite3 database then at the very least your test suite is going to be doing a lot of unnecessary work (making it slow) and at worst perhaps the two instances will fight with each other over the same SQLite3 database file (which they must share since they're meant to be instances representing the same state).

Another way to look at this is that inextricably embedding the database connection logic into your __init__ method has taken control away from the user. Perhaps they have their own database connection setup logic. Perhaps they want to re-use connections or pass in a fake for testing. Saving a reference to that object on the instance for later use is a separate operation from creating the connection itself. They shouldn't be bound together in __init__ where you have to take them both or give up on using Microblog.

Alternatives

You might notice that these three observations I've made all sound a bit negative. You might conclude that I think this is an antipattern to be avoided. If so, feel free to give yourself a pat on the back at this point.

But if this is an antipattern, is there a pattern to use instead? I think so. I'll try to explain it.

The general idea behind the pattern I'm going to suggest comes in two parts. The first part is that your object should primarily be about representing state and your __init__ method should be about accepting that state from the outside world and storing it away on the instance being initialized for later use. It should always represent complete, internally consistent state - not partial state as asynchronous initialization implies. This means your __init__ methods should mostly look like this:


class Microblog(object):
    def __init__(self, cache_dir, database_connection):
        self.cache_dir = cache_dir
        self.database_connection = database_connection

If you think that looks boring - yes, it does. Boring is a good thing here. Anything exciting your __init__ method does is probably going to be the cause of someone's bad day sooner or later. If you think it looks tedious - yes, it does. Consider using Hynek Schlawack's excellent attrs package (full disclosure - I contributed some ideas to attrs' design and Hynek occasionally says nice things about me (I don't know if he means them, I just know he says them)).
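For instance, the boring __init__ above collapses to a few declarative lines with attrs (a sketch; attrs also generates a useful __repr__ and comparison methods, which helps with the ==-based testing mentioned earlier):

import attr

@attr.s
class Microblog(object):
    cache_dir = attr.ib()
    database_connection = attr.ib()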

The second part of the idea is an acknowledgement that asynchronous initialization is a reality of programming with asynchronous tools. Fortunately __init__ isn't the only place to put code. Asynchronous factory functions are a great way to wrap up the asynchronous work sometimes necessary before an object can be fully and consistently initialized. Put another way:


class Microblog(object):
    # ... __init__ as above ...

    @classmethod
    @asyncio.coroutine
    def from_database(cls, cache_dir, database_path):
        # ... or make it a free function, not a classmethod, if you prefer
        loop = asyncio.get_event_loop()
        database_connection = yield from loop.run_in_executor(
            None, cls._reading_init)
        return cls(cache_dir, database_connection)

Notice that the setup work for a Microblog instance is still asynchronous but initialization of the Microblog instance is not. There is never a time when a Microblog instance is hanging around partially ready for action. There is setup work and then there is a complete, usable Microblog.
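Calling code might then look something like this (a sketch, using the same pre-3.5 coroutine style as the examples above, and assuming the sync_latest method from earlier; the paths are made up):

import asyncio

@asyncio.coroutine
def main():
    microblog = yield from Microblog.from_database(
        "/var/cache/microblog", "/var/lib/microblog.sqlite3")
    # By this point the instance is fully initialized: no method
    # needs to wait on an init_future before doing its real work.
    yield from microblog.sync_latest()

asyncio.get_event_loop().run_until_complete(main())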

This addresses the three observations I made above.

I hope these points have made a strong case for the first of these approaches being an anti-pattern to avoid (in Twisted, in asyncio, or in any other asynchronous programming context), and for the second being a useful pattern that provides convenient, expressive constructors while keeping object initializers unsurprising and maximizing their usefulness.

26 Sep 2018 11:39pm GMT

21 Sep 2018


Itamar Turner-Trauring: Never use the word "User" in your code

You're six months into a project when you realize a tiny, simple assumption you made at the start was completely wrong. And now you need to fix the problem while keeping the existing system running-with far more effort than it would've taken if you'd just gotten it right in the first place.

Today I'd like to tell you about one common mistake, a single word that will cause you endless trouble. I am speaking, of course, about "users".

There are two basic problems with this word:

  1. "User" is almost never a good description of your requirements.
  2. "User" encourages a fundamental security design flaw.

The concept "user" is dangerously vague, and you will almost always be better off using more accurate terminology.

You don't have users

To begin with, no software system actually has "users". At first glance "user" is a fine description, but once you look a little closer you realize that your business logic actually has more complexity than that.

We'll consider three examples, starting with an extreme case.

Airline reservation systems don't have "users"

I once worked on the access control logic for an airline reservation system. Here's a very partial list of the requirements:

And so on and so forth. Some of the basic concepts that map to humans are "Traveler", "Agent" (the website might also be an agent), and "Purchaser". The concept of "user" simply wasn't useful, and we didn't use the word at all - in many requests, for example, we had to include credentials for both the Traveler and the Agent.

Unix doesn't have "users"

Let's take a look at a very different case. Unix (these days known as POSIX) has users: users can log in and run code. That seems fine, right? But let's take a closer look.

If we actually go through all the things we call users, we have:

These are four fairly different concepts, but in POSIX they are all "users". As we'll see later on, smashing all these concept into one vague concept called "user" can lead to many security problems.

But operationally, we don't even have a way to say "only Alice and Bob can log in to the shared admin account" within the boundaries of the POSIX user model.

SaaS providers don't have "users"

Jeremy Green recently tweeted about the user model in Software-as-a-Service, and that is what first prompted me to write this post. His basic point is that SaaS services virtually always have:

  1. A person at an organization who is paying for the service.
  2. One or more people from that organization who actually use the service, together.

If you combine these into a single "User" at the start, you will be in a world of pain later. You can't model teams, you can't model payment for multiple people at once - and now you need to retrofit your system. Now, you could learn this lesson for the SaaS case, and move on with your life.

But this is just a single instance of a broader problem: the concept "User" is too vague. If you start out being suspicious of the word "User", you are much more likely to end up realizing you actually have two concepts at least: the Team (unit of payment and ownership) and the team Members (who actually use the service).
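As a sketch of what that split might look like in code (using attrs; the field names are illustrative, not from any particular SaaS):

import attr

@attr.s
class Team(object):
    # The unit of payment and ownership:
    name = attr.ib()
    billing_contact = attr.ib()

@attr.s
class Member(object):
    # A person who actually uses the service, always as part of a Team:
    name = attr.ib()
    email = attr.ib()
    team = attr.ib()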

"Users" as a security problem

The word "users" isn't just a problem for business logic: it also has severe security consequences. The word "user" is so vague that it conflates two fundamentally different concepts:

To see why this is a problem, let's say you visit a malicious website which hosts an image that exploits a buffer overflow in your browser. The remote site now controls your browser, and starts uploading all your files to their server. Why can it do that?

Because your browser is running as your operating system "user", which is presumed to be identical to you, a human being, a very different kind of "user". You, the user, don't want to upload those files. The operating system account, also the user, can upload those files, and since your browser is running under your user all its actions are presumed to be what you intended.

This is known as the Confused Deputy Problem. It's a problem that's much more likely to be part of your design if you're using the word "user" to describe two fundamentally different things as being the same.

The value of up-front design

The key to being a productive programmer is getting the same work done with less effort. Using vague terms like "user" to model your software will take huge amounts of time and effort to fix later on. It may seem productive to start coding immediately, but it's actually just the opposite.

Next time you start a new software project, spend a few hours up-front nailing down your terminology and concepts: you still won't get it exactly right, but you'll do a lot better. Your future self will thank you for all the wasteful workaround work you've prevented.




21 Sep 2018 4:00am GMT

10 Sep 2018


Itamar Turner-Trauring: Work/life balance and challenging work: you can have both

You want to work on cutting edge technology, you want challenging problems, you want something interesting. Problem is, you also want work/life balance: you don't want to deal with unrealistic deadlines from management, or pulling all-nighters to fix a bug.

And the problem is that when you ask around, people tell you that you need to work long hours if you want to work on challenging problems. That's just how it is, they say.

To which I say: bullshit.

You can work on challenging problems and still have work/life balance. In fact, you'll do much better that way.

My apparently impossible career so far

Just as a counter-example, let me tell you how I've spent the past 14 years. Among other things, I've worked on:

All of these were hard problems, and interesting problems, and challenging problems, and none of them required working long hours.

Maybe those past 14 years are some sort of statistical aberration, but I rather doubt it. You can, for example, go work on some really tricky distributed systems problems over at Cockroach Labs, and have Fridays off to do whatever you want. (Not a personal endorsement: I know nothing about them other than those two points.)

Long hours have nothing to do with interesting problems

There is no inherent relationship between interesting problems and working long hours. You're actually much more likely to solve hard problems if you're well rested, and have plenty of time off to relax and let your brain do its thing off in the background.

The real origin of this connection is a marketing strategy for a certain subset of startups: "Yes, we'll pay you jack shit and have you work 70 hours a week, but that's the only way you can work on challenging problems!"

This is nonsense.

The real problem that these companies are trying to solve is "how do I get as much work out of these suckers with as little pay as possible." It's an incompetent self-defeating strategy, but there's enough VCs who think exploitation is a great business model that you're going to encounter it at least at some startups.

The reality is that working long hours is the result of bad management. Which is to say, it's completely orthogonal to how interesting the problem is.

You can just as easily find bad management in enterprise companies working on the most pointless and mind-numbingly soul-crushing problems (and failing to implement them well). And because of that bad management you'll be forced to work long hours, even though the problems aren't hard.

Luckily, you can also find good management in plenty of organizations, big and small-and some of them are working on hard, challenging problems too.

Avoiding bad workplaces

So how do you avoid exploitative workplaces and find the good ones? By asking some questions up front. You shouldn't be relying on luck to keep you away from bad jobs; I made that mistake once, but never again.

Long ago I was interviewing for a job in NYC, and I mentioned that I wanted to continue working on open source software in my spare time. Here's how the rest of the conversation went:

Interviewer: "Well, that's fine, but… we used to have an employee here who did some non-profit work. We could never tell if their mind was here or on their volunteering, and it didn't really work out. So we want to make sure you'll be really focused on your job."

Me: "Did they do their volunteering during work hours?"

Interviewer: "Oh, no, they only did that on their own time, it was just that they left at 5 o'clock every day."

At that point I realized that, while I was willing to exchange 40 hours a week for a salary, I was not willing to exchange my whole life. I escaped that company by accident because they were so blatant about it, but you can do better.

Finding the job you want

When you're interviewing for a job, don't just ask about the problems they're working on. You should also be asking about the work environment and work/life balance.

You can do so tactfully and informatively by asking things like "What's a typical work day like here?" or "How are deadlines determined?" (You can get a good list of questions over at Culture Queries.)

There are companies out there that do interesting work and have work/life balance: do your research, ask the right questions, and you too will be able to find them.




10 Sep 2018 4:00am GMT

04 Sep 2018


Itamar Turner-Trauring: Stabbing yourself with a fork() in a multiprocessing.Pool full of sharks

It's time for another deep-dive into Python brokenness and the pain that is POSIX system programming, this time with exciting and not very convincing shark-themed metaphors! Most of what you'll learn isn't really Python-specific, so stick around regardless and enjoy the sharks.

Let's set the metaphorical scene: you're swimming in a pool full of sharks. (The sharks are a metaphor for processes.)

Next, you take a fork. (The fork is a metaphor for fork().)

You stab yourself with the fork. Stab stab stab. Blood starts seeping out, the sharks start circling, and pretty soon you find yourself-dead(locked) in the water!

In this journey through space and time you will encounter:

Let's begin!

Introducing multiprocessing.Pool

Python provides a handy module that allows you to run tasks in a pool of processes, a great way to improve the parallelism of your program. (Note that none of these examples were tested on Windows; I'm focusing on the *nix platform here.)

from multiprocessing import Pool
from os import getpid

def double(i):
    print("I'm process", getpid())
    return i * 2

if __name__ == '__main__':
    with Pool() as pool:
        result = pool.map(double, [1, 2, 3, 4, 5])
        print(result)

If we run this, we get:

I'm process 4942
I'm process 4943
I'm process 4944
I'm process 4942
I'm process 4943
[2, 4, 6, 8, 10]

As you can see, the double() function ran in different processes.

Some code that ought to work, but doesn't

Unfortunately, while the Pool class is useful, it's also full of vicious sharks, just waiting for you to make a mistake. For example, the following perfectly reasonable code:

import logging
from threading import Thread
from queue import Queue
from logging.handlers import QueueListener, QueueHandler
from multiprocessing import Pool

def setup_logging():
    # Logs get written to a queue, and then a thread reads
    # from that queue and writes messages to a file:
    _log_queue = Queue()
    QueueListener(
        _log_queue, logging.FileHandler("out.log")).start()
    logging.getLogger().addHandler(QueueHandler(_log_queue))

    # Our parent process is running a thread that
    # logs messages:
    def write_logs():
        while True:
            logging.error("hello, I just did something")
    Thread(target=write_logs).start()

def runs_in_subprocess():
    print("About to log...")
    logging.error("hello, I did something")
    print("...logged")

if __name__ == '__main__':
    setup_logging()

    # Meanwhile, we start a process pool that writes some
    # logs. We do this in a loop to make race condition more
    # likely to be triggered.
    while True:
        with Pool() as pool:
            pool.apply(runs_in_subprocess)

Here's what the program does:

  1. In the parent process, log messages are routed to a queue, and a thread reads from the queue and writes those messages to a log file.
  2. Another thread writes a continuous stream of log messages.
  3. Finally, we start a process pool, and log a message in one of the child subprocesses.

If we run this program on Linux, we get the following output:

About to log...
...logged
About to log...
...logged
About to log...
<at this point the program freezes>

Why does this program freeze?

How subprocesses are started on POSIX (the standard formerly known as Unix)

To understand what's going on you need to understand how you start subprocesses on POSIX (which is to say, Linux, BSDs, macOS, and so on).

  1. A copy of the process is created using the fork() system call.
  2. The child process replaces itself with a different program using the execve() system call (or one of its variants, e.g. execl()).

The thing is, there's nothing preventing you from just doing fork(). For example, here we fork() and then print the current process' process ID (PID):

from os import fork, getpid

print("I am parent process", getpid())
if fork():
    print("I am the parent process, with PID", getpid())
else:
    print("I am the child process, with PID", getpid())

When we run it:

I am parent process 3619
I am the parent process, with PID 3619
I am the child process, with PID 3620

As you can see both parent (PID 3619) and child (PID 3620) continue to run the same Python code.

Here's where it gets interesting: fork()-only is how Python creates process pools by default.

The problem with just fork()ing

So OK, Python starts a pool of processes by just doing fork(). This seems convenient: the child process has access to a copy of everything in the parent process' memory (though the child can't change anything in the parent anymore). But how exactly is that causing the deadlock we saw?

The cause is two problems with continuing to run code after a fork()-without-execve():

  1. fork() copies everything in memory.
  2. But it doesn't copy everything.

fork() copies everything in memory

When you do a fork(), it copies everything in memory. That includes any globals you've set in imported Python modules.

For example, your logging configuration:

import logging
from multiprocessing import Pool
from os import getpid

def runs_in_subprocess():
    logging.info(
        "I am the child, with PID {}".format(getpid()))

if __name__ == '__main__':
    logging.basicConfig(
        format='GADZOOKS %(message)s', level=logging.DEBUG)

    logging.info(
        "I am the parent, with PID {}".format(getpid()))

    with Pool() as pool:
        pool.apply(runs_in_subprocess)

When we run this program, we get:

GADZOOKS I am the parent, with PID 3884
GADZOOKS I am the child, with PID 3885

Notice how child processes in your pool inherit the parent process' logging configuration, even if that wasn't your intention! More broadly, anything you configure on a module level in the parent is inherited by processes in the pool, which can lead to some unexpected behavior.

But fork() doesn't copy everything

The second problem is that fork() doesn't actually copy everything. In particular, one thing that fork() doesn't copy is threads. Any threads running in the parent process do not exist in the child process.

from threading import Thread, enumerate
from os import fork
from time import sleep

# Start a thread:
Thread(target=lambda: sleep(60)).start()

if fork():
    print("The parent process has {} threads".format(
        len(enumerate())))
else:
    print("The child process has {} threads".format(
        len(enumerate())))

When we run this program, we see the thread we started didn't survive the fork():

The parent process has 2 threads
The child process has 1 threads

The mystery is solved

Here's why that original program is deadlocking-with their powers combined, the two problems with fork()-only create a bigger, sharkier problem:

  1. Whenever the thread in the parent process writes a log message, it adds it to a Queue. That involves acquiring a lock.
  2. If the fork() happens at the wrong time, the lock is copied in an acquired state.
  3. The child process copies the parent's logging configuration-including the queue.
  4. Whenever the child process writes a log message, it tries to write it to the queue.
  5. That means acquiring the lock, but the lock is already acquired.
  6. The child process now waits for the lock to be released.
  7. The lock will never be released, because the thread that would release it wasn't copied over by the fork().

In simplified form:

from os import fork
from threading import Lock

# Lock is acquired in the parent process:
lock = Lock()
lock.acquire()

if fork() == 0:
    # In the child process, try to grab the lock:
    print("Acquiring lock...")
    lock.acquire()
    print("Lock acquired! (This code will never run)")

Band-aids and workarounds

There are some workarounds that could make this a little better.

For module state, the logging library could have its configuration reset when child processes are started by multiprocessing.Pool. However, this doesn't solve the problem for all the other Python modules and libraries that set some sort of module-level global state. Every single library that does this would need to fix itself to work with multiprocessing.

For threads, locks could be set back to released state when fork() is called (Python has a ticket for this.) Unfortunately this doesn't solve the problem with locks created by C libraries, it would only address locks created directly by Python. And it doesn't address the fact that those locks don't really make sense anymore in the child process, whether or not they've been released.
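Relatedly, Python 3.7 added os.register_at_fork(), which lets a library install a callback that gives the child process fresh locks. Here's a minimal sketch of that kind of band-aid, with all the caveats above still applying (the lock name is illustrative):

import os
from threading import Lock

_log_lock = Lock()

def _reinit_lock():
    # Replace the copied lock - which may have been acquired by a
    # thread that doesn't exist in the child - with a fresh one.
    global _log_lock
    _log_lock = Lock()

os.register_at_fork(after_in_child=_reinit_lock)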

Luckily, there is a better, easier solution.

The real solution: stop plain fork()ing

In Python 3 the multiprocessing library added new ways of starting subprocesses. One of these does a fork() followed by an execve() of a completely new Python process. That solves our problem, because module state isn't inherited by child processes: it starts from scratch.

Enabling this alternate configuration requires changing just two lines of code in your program:

from multiprocessing import get_context

def your_func():
    with get_context("spawn").Pool() as pool:
        # ... everything else is unchanged

That's it: do that and all the problems we've been going over won't affect you. (See the documentation on contexts for details.)
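Alternatively, if you control the whole program, you can change the default start method once, globally, at startup; every Pool created afterwards will use it (set_start_method() may only be called once per process):

import multiprocessing

if __name__ == '__main__':
    multiprocessing.set_start_method("spawn")
    # ... create your Pools as usual from here on ...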

But this still requires you to do the work. And it requires every Python user who trustingly follows the examples in the documentation to get confused about why their program sometimes breaks.

The current default is broken, and in an ideal world Python would document that, or better yet change it to no longer be the default.

Learning more

My explanation here is of course somewhat simplified: for example, there is state other than threads that fork() doesn't copy. Here are some additional resources:

Stay safe, fellow programmers, and watch out for sharks and bad interactions between threads and processes! 🦈🦑

(Want more stories of software failure? I write a weekly newsletter about 20+ years of my mistakes as a programmer.)

Thanks to Terry Reedy for pointing out the need for if __name__ == '__main__'.




04 Sep 2018 4:00am GMT

03 Sep 2018


Moshe Zadka: Managing Dependencies

(Thanks to Mark Rice for his helpful suggestions. Any mistakes or omissions that remain are my responsibility.)

Some Python projects are designed to be libraries, consumed by other projects. These are most of the things people consider "Python projects": for example, Twisted, Flask, and most other open source tools. However, things like mu are sometimes installed as an end-user artifact. More commonly, many web services are written as deployable Python applications. A good example is the issue tracking project trac.

Projects that are deployed must be deployed with their dependencies, and with the dependencies of those dependencies, and so forth. Moreover, at deployment time, a specific version must be deployed. If a project declares a dependency of flask>=1.0.1, for example, something needs to decide whether to deploy flask 1.0.1 or flask 1.0.2.

For clarity, in this text, we will refer to the declared compatibility statements in something like setup.py (e.g., flask>=1.0.1) as "intent" dependencies, since they document programmer intent. The specific dependencies that are eventually deployed will be referred as the "expressed" dependencies, since they are expressed in the actual deployed artifact (for example, a Docker image).

Usually, "intent" dependencies are defined in setup.py. This does not have to be the case, but it almost always is: since there is usually some "glue" code at the top, keeping everything together, it makes sense to treat it as a library -- albeit, one that sometimes is not uploaded to any package index.

When producing the deployed artifact, we need to decide how to generate the expressed dependencies. There are two competing forces. One is the desire to be current: using the latest version of Django means getting all the latest bug fixes, and it means that picking up fixes to future bugs will require jumping across fewer versions. The other is the desire to avoid changes: when deploying a small bug fix, changing all library versions to the newest ones might introduce a lot of change.

For this reason, most projects will check the "artifact" (often called requirements.txt) into source control, produce actual deployed versions from it, and define some procedure to update it.

A similar story can be told about the development dependencies, often defined as extra [dev] dependencies in setup.py, and resulting in a file dev-requirements.txt that is checked into source control. The pressures are a little different, and indeed, sometimes nobody bothers to check in dev-requirements.txt even when checking in requirements.txt, but the basic dynamic is similar.

The worst procedure is probably "when someone remembers to". This is not usually anyone's top priority, and most developers are busy with their regular day-to-day tasks. When an upgrade is necessary for some reason -- for example, because a bug fix is available -- this can mean a lot of disruption. Often this disruption manifests as discovering that just upgrading one library does not work: it now depends on newer versions of other libraries, so the entire dependency graph has to be updated, all at once. All the intermediate deprecation warnings that might have been there for several months have been skipped over, and developers are suddenly faced with several breaking upgrades, all at once. The size of the change only grows with time, becoming less and less surmountable and less and less likely to be done, until it ends in a case of complete bitrot.

Sadly, however, "when someone remembers to" is the default procedure in the absence of any explicit procedure.

Some organizations, having suffered through the disadvantages of "when someone remembers to", decide to go to the other extreme: never checking in requirements.txt at all, and regenerating it on every artifact build. However, this causes a lot of unnecessary churn. It is impossible to fix a small bug without making sure that the code is compatible with the latest versions of all libraries.

A better way to approach the problem is to have an explicit process of recalculating the expressed dependencies from the intent dependencies. One approach is to manufacture, with some cadence, code change requests that update the requirements.txt. This means they are resolved like all code changes: review, running automated tests, and whatever other local processes are implemented.

Another is to do those on a calendar-based event. This can be anything from a manually-strongly-encouraged "update Monday", where on Monday morning one of a developer's tasks is to generate requirements.txt updates for all projects they are responsible for, to including it as part of a time-based release process: for example, generating it on a cadence that aligns with agile "sprints", as part of releasing the code changes in a particular sprint.

When updating does reveal an incompatibility it needs to be resolved. One way is to update the local code: this certainly is the best thing to do when the problem is that the library changed an API or changed an internal implementation detail that was being used accidentally (...or intentionally). However, sometimes the new version has a bug in it that needs to be fixed. In that case, the intent is now to avoid that version. It is best to express the intent exactly as that: !=<bad version>. This means that when an even newer version is released, hopefully fixing the bug, it will be used. If a new version is released without the bug fix, we add another != clause. This is painful, and intentionally so. Either we need to get the bug fixed in the library, stop using the library, or fork it. Since we are falling further and further behind the latest version, this is introducing risk into our code, and the growing list of != clauses will indicate this pain -- and encourage us to resolve it.
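For example, if (hypothetically) flask 1.0.2 shipped a bug that broke us, the intent dependencies in setup.py would grow a != clause like this:

from setuptools import setup

setup(
    name="myservice",  # a hypothetical deployable application
    install_requires=[
        # 1.0.2 (hypothetically) has a bug that breaks us; any newer
        # release will be picked up automatically once it appears.
        "flask>=1.0.1,!=1.0.2",
    ],
)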

The most important thing is to choose a specific process for updating the expressed dependencies, clearly document it and consistently follow it. As long as such a process is chosen, documented and followed, it is possible to avoid the bitrot issue.

03 Sep 2018 3:00am GMT

22 Aug 2018


Itamar Turner-Trauring: Guest Post: How to engineer a raise

You've discovered you're underpaid. Maybe you found out a new hire is making more than you. Or maybe you've been doing a great job at work, but your compensation hasn't changed.

Whatever the reason, you want to get a higher salary.

Now what?

To answer that question, the following guest post by Adrienne Bolger will explain how you can negotiate a raise at your current job. As you'll see, she's successfully used these strategies to negotiate 20-30% raises on multiple occasions.

This article will answer some common questions, and explain some useful strategies, to help you-a software engineer-engineer a raise from your employer. I'll cover:

  1. Researching your worth and options.
  2. Expectation setting.
  3. Strategies that I have used-and helped others use-to ask for a raise.

How much are you "worth"?

At the end of the day, an optimized salary in a more-or-less capitalist market is the highest salary you think you can get that passes the "laugh test." If you ask for a salary or bonus, and your (theoretical) boss or HR head laughs in your face, then the number is too high.

Note that this number isn't your laugh test number: out of fear of rejection, many people ask for a more "modest" sounding 5% raise rather than a 25% raise. But sometimes the 25% value is the right increase! Your number should not be based on fear: it should be based on research.

There are several ways to calculate your "market value" to an employer. To start, take 2 or 3 of the following quizzes to calculate median/mean salaries based on your demographics:

How much could you be worth in the future?

Take the surveys a second time. However, this time, give yourself a small imaginary promotion: 2 years more experience and the next job title you want-Senior Engineer, Engineer II, Software Architect, Engineering Manager, Director, whatever it is. How far away is that yearly salary amount from the first one? A little? A lot?

This is an important number, because the pay market for software engineers is not linear. Check out this graph created by ArsTechnica from the 2017 Stack Overflow salary data.

This graph shows the economics of a very hot job market: people with relatively little experience still make a good living, because their skills are in high demand. However, the median salary for a developer between 15 and 20 years of experience is completely flat. This isn't the best news for experienced developers who haven't kept learning (and some languages pay more than others), but for early career professionals, this external market factor is fantastic.

With data to back you up, you can ask for a 20 to 30% raise after only a year or two on the job with a completely straight face. I did it in my own career at the 2 and 4 year marks at the same company, and received the raise both times.

Adjust expectations for your company and industry

If you've come to the conclusion you are underpaid because you know what your colleagues earn, then you can skip this step. Otherwise, you have a little more research to do.

Ask your company's HR department and recruiters: when hiring in general, does your company go for fair market prices, under-market bargains, or above-market talent? Industries like finance pay better than non-profits and civil service organizations whether you are an engineer or an accountant.

The bigger the company, the more likely you are to get standard yearly pay adjustments for things like cost-of-living expenses, but a bigger company is also likely more rigid in salary bands for a specific job title. HR at your company may or may not be willing to share the exact high and low range for a job title. If they are not, Glassdoor can provide a decent estimate for similarly sized companies.

When to ask

Again, know your company. Does it have a standard financial cycle, with cost-of-living adjustments and raises allocated yearly, 1-2 months after reviews are in?

If so, time your "ask" before your formal review by 3-8 weeks. That might be November if your yearly reviews are in December, or it might be January if company yearly performance reviews occur in March, after the fiscal year results from last year are in.

Why do this?

The problem with waiting until a formal review is scheduled is that it ruins plans you can't see or aren't privy to. Even in the best case, where you were getting a raise anyway, the manager giving your review already has a planned number in their head and in their accounting software. Asking a month beforehand gives your boss time to budget your raise into a yearly plan, which is much easier than trying to fight bureaucracy out-of-cycle.

You should not ask for a raise more frequently than every 2 years. If you feel like you have to, then you probably didn't ask for enough last time. Keep that in mind if you find yourself afraid to ask for a big enough number.

If you are debating between asking for a raise and going job hunting because you feel undervalued, ask for the raise first. I suggest this because job searching is a huge time sink, especially if you don't really want to change jobs.

You owe it to yourself to proactively seek happiness. If what you really want is more money and to stay at your current company, then give your employer a chance to make you happy. If you ask and are denied, then at least you've done all the research into compensation when you go looking.

How to ask

Ask for a raise both in writing and in person.

As email is still considered informal, this is one of those cases where an actual letter-printed out and hand-delivered at a scheduled meeting with your manager-is a good idea. The meeting gives you the chance to explain what you want in a little more detail, while the letter is a written record of what you want that goes to HR, as well as a way to keep yourself from backing out due to nerves or stress.

I once requested a raise from a manager who (unbeknownst to me) was let go 2 weeks later. However, because my raise request was also in writing, I received the raise from my new boss with no confusion after the transfer.

The letter should be 2-3 paragraphs long and:

The letter (and subsequent meeting) should not:

The meeting

Once you ask for a raise and a meeting to talk about it, nerves may kick in. Do your homework ahead of time and come in prepared. Bring a copy of your letter and, during the meeting, re-iterate exactly what it is you want and why you deserve it.

It's fine to be nervous, but do not attempt any weird "Hollywood caricature of a car salesman" negotiating tactics. Don't be short-sighted; remember that you have to perform your day job with your manager once the meeting is over.

If your employer declines

If you asked for your "laugh test" number and your employer can only meet you halfway or can't increase your compensation at all, your response should be "Why? And what can I do to change that?"

Be proactive in determining where the problem is. At a big company, if there's a salary band, you may need a promotion before you can get the raise. If the company isn't making enough money for raises for anyone, it may be time to discreetly look for another job anyway.

Whether you choose to accept a compromise or counteroffer is up to you-but make sure you can live with your choice, at least in the short term, because it won't make sense to ask again for another few months.

And that's Adrienne's post. I hope you found it useful: I certainly learned a lot from it.

Of course, reading this article isn't enough. You still need to go and do the work to get the raise. So why not start today?

  1. Do your research.
  2. Pick the right moment.
  3. Go ask for that raise!




22 Aug 2018 4:00am GMT

16 Aug 2018

feedPlanet Twisted

Itamar Turner-Trauring: How to say "no" to your boss, your boss's boss, and even the CEO

You've got plenty of work to do already, when your boss (or their boss, or the CEO) comes by and asks you to do yet another task. If you take on yet another task, you're going to be working long hours or delivering your code late, and someone is going to be unhappy.

You don't want to say no to your boss (let alone the CEO!). You don't want to say yes and spend your weekend working.

What do you do? How do you keep everyone happy?

What you need is for your management to trust your judgment. If they did, you could focus on the important work, the work that really matters. And when you had to say "no", your boss (or the CEO!) would listen and let you continue on your current task.

To get there, you don't immediately say "no", and don't immediately say "yes".

Here's what you do instead:

  1. Start with your organizational and project goals.
  2. Listen and ask questions.
  3. Make a decision.
  4. Communicate your decision in terms of organizational and project goals.

Step 1: Start with your goals

If you want people to listen to you, you need a strong understanding of why you're doing the work you're doing.

You should be able to connect your individual action to project success, and connect that to organizational success. For example, "Our goal is to increase recurring revenue, customer churn is too high and it's decreasing revenue, so I am working on this bug because it's making our product unusable for at least 7% of users."

When you're just starting out as an employee this can be difficult to do, but as you grow in experience you can and should make sure you understand this.

(Starting with your goals is useful in other ways as well, e.g. helping you stay focused).

Step 2: Listen and ask questions

Your lead developer/product manager/teammate/CEO/CTO has just stopped by your desk and given you a new task. No doubt you already have many existing tasks. How should you handle this situation?

To begin with, don't immediately give an answer:

Instead of immediately agreeing or disagreeing, take the time to find out why the task needs to be done. Make sure you demonstrate that you actually care about the request and are seriously considering it.

That means first, listening to what they have to say.

And second, asking some questions: why does this need to be done? What is the deadline? How important is it to them?

Sometimes the CEO will come by and ask for something they don't really care about: they only want you to do it if you have the spare time. Sometimes your summer intern will come by and point out a problem that turns out to be a critical production-impacting bug.

You won't know unless you listen, and ask questions to find out what's really going on.

Step 3: Decide based on your goals

Is the new task more important to project and organizational goals than your current task? You should probably switch to working on it.

Is the new task less important? You don't want to do it.

Not sure? Ask more questions.

Still not sure? Talk to your manager about it: "Can I get back to you in a bit about this? I need to talk this over with Jane."

Step 4: Communicate your decision

Once you've made a decision, you need to communicate it in a meaningful, respectful way, and in a way that reflects organizational and project goals.

If you decided to take the task on:

  1. Tell the person asking you that you'll take it on.
  2. Explain to the people who requested your previous tasks that those tasks will be late. Make sure it's clear why you took on a new task: "That feature is going to have to wait: it's fairly low on the priority list, and the CEO asked me to throw together a demo for the sales meeting on Friday."

If you decided not to take it on:

  1. Explain why you're not going to do it, in the context of project and organizational goals. "That's a great feature idea, and I'd love to do it, but this bug is breaking the app for 10% of our customers and so I really need to focus on getting it done."
  2. Provide an alternative, which can include:
    • Deflection: "Why don't you talk to the product manager about this?"
    • Queuing: "Why don't you add it to the backlog, and we can see if we have time to do it next sprint?"
    • Promise: "I'll do it next, as soon as I'm done with my current task."
    • Reminder: "Can you remind me again in a couple of weeks?"
    • Different solution: "Your original proposal would take me too long, given the release-blocker backlog, but maybe if we did this other thing instead I could fit it in. It seems like it would get us 80% of the functionality in a fraction of the time-what do you say?"

Becoming a more valuable employee

Saying "no" the right way makes you more valuable, because it ensures you're working on important tasks.

It also ensures your managers know you're more valuable, because you've communicated that:

  1. You've carefully and respectfully considered their request.
  2. You've taken existing requests you're already working on into account.
  3. You've made a decision not based on personal whim, but on your best understanding of what is important to your project and organization.

Best of all, saying "no" the right way means no evenings or weekends spent working on tasks that don't really need doing.




16 Aug 2018 4:00am GMT

10 Aug 2018

feedPlanet Twisted

Itamar Turner-Trauring: There's always more work to do—but you still don't need to work long hours

Have you ever wished you could reduce your working hours, or even just limit yourself to 40 hours a week, but came up against all the work that just needs doing? There's always more work to do, always more bugs, always some feature that's important to someone-

How can you limit yourself to 40 hours a week, let alone a shorter workweek, given all this work?

The answer: by planning ahead. And planning ahead the right way.

The wrong way to plan

I was interviewing for a job at a startup, and my first interviewer was the VP of Engineering. He explained that he'd read my blog posts about the importance of work/life balance, and he just wanted to be upfront about the fact they were working 50-60 hours each week. And this wasn't a short-term emergency: in fact, they were going to be working long hours for months.

I politely noted that I felt good prioritization and planning could often reduce the need for long hours.

The VP explained the problem: they'd planned all their tasks in detail. But then-to their surprise-an important customer asked for more features, and that blew through their schedule, which is why they needed to work long hours.

I kept my mouth shut and went through the interview process. But I didn't take the job.

Here's what's wrong with this approach:

  1. Important customers asking for more features should not be a surprise. Customers ask for changes; that's how it goes.
  2. More broadly, the original schedule was apparently created with the presumption that everything would go perfectly. In the real world nothing ever goes perfectly.
  3. When it became clear that that there was too much work to do, their solution was to work longer hours, even though research suggests that longer hours do not increase output over the long term.

The better way: prioritization and padding

So how do you keep yourself from blowing through your schedule without working long hours?

  1. Prioritize your work.
  2. Leave some padding in your schedule for unexpected events.
  3. Set your deadlines shorter than they need to be.
  4. If you run out of time, drop the least important work.

1. Prioritize your work

Not all work is created equal. By starting with your goals, you can divide tasks into three buckets:

  1. Critical to your project's success.
  2. Really nice to have-but not critical.
  3. Clearly not necessary.

Start by dropping the third category, and minimizing the second. You'll have to say "no" sometimes, but if you don't say "no" you'll never get anything delivered on time.

2. Leave some padding in your schedule

You need to assume that things will go wrong and you'll need extra time to do any given task. And you need to assume other important tasks will also become critical; you don't know which, but this always happens. So never give your estimate as the actual delivery date: always pad it with extra time for unexpected difficulties and unexpected interruptions.

If you think a task will take a day, promise to deliver it in three days.

3. Set shorter deadlines for yourself

Your own internal deadline, the one you don't communicate to your boss or customer, should be shorter than your estimate. If you think a task will take a day, try to finish it in less time.

Why? Because work expands to fill the time available, and because finishing early preserves your padding for the unexpected problems that will inevitably show up.

4. When you run out of time, drop the less important work

Inevitably things will still go wrong and you'll find yourself running low on time. Now's the time to drop all the nice-to-haves, and rethink whether everything you thought was critical really is (quite often, it's not).

Long hours are the wrong solution

Whenever you feel yourself with too much work to do, go back and apply these principles: underpromise, limit your own time, prioritize ruthlessly. With practice you'll learn how to deliver the results that really matter-without working long hours.

When you've reached that point, you can work a normal 40-hour workweek without worrying. Or even better, you can start thinking about negotiating a 3-day weekend.




10 Aug 2018 4:00am GMT

09 Aug 2018

feedPlanet Twisted

Hynek Schlawack: Hardening Your Web Server’s SSL Ciphers

There are many wordy articles on configuring your web server's TLS ciphers. This is not one of them. Instead I will share a configuration which is both compatible enough for today's needs and scores a straight "A" on Qualys's SSL Server Test.

09 Aug 2018 6:00pm GMT

03 Aug 2018

feedPlanet Twisted

Moshe Zadka: Tests Should Fail

(Thanks to Avy Faingezicht and Donald Stufft for giving me encouragement and feedback. All mistakes that remain are mine.)

"eyes have they, but they see not" -- Psalms, 135:16

Eyes are expensive to maintain. They require protection from the elements, constant lubrication, behavioral adaptations to protect them and more. However, they give us a benefit. They allow us to see: to detect differences in the environment. Eyes register different signals when looking at an unripe fruit and when looking at a ripe fruit. This allows us to eat the ripe fruit, and wait for the unripe fruit to ripen: to behave differently, in a way that ultimately furthers our goals (eat yummy fruits).

If our eyes did not get different signals that influenced our behavior, they would not be cost effective. Evolution is a harsh mistress, and the eyes would be quickly gone if the signals from them were not valuable.

Writing tests is expensive. It takes time to write them, time to review them, time to modify them as code evolves. A test that never fails is like an eye that cannot see: it always sends the same signal, "eat that fruit!". In order to be valuable, a test must be able to fail, and that failure must modify our behavior.

The only way to be sure that a test can fail is to see it fail. Test-driven-development does it by writing tests that fail before modifying the code. But even when not using TDD, making sure that tests fail is important. Before checking in, break your code. Best of all is to break the code in a way that would be realistic for a maintenance programmer to do. Then run the tests. See them fail. Check it in to the branch, and watch CI fail. Make sure that this CI failure is clearly communicated: something big must be red, and merging should be impossible, or at least require using a clearly visible "override switch".
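To make the drill concrete, here is a minimal sketch -- the function, the tests, and the mutation are all invented for illustration, using pytest:

    def is_ripe(color):
        """Ripe fruit is red or orange; everything else should wait."""
        return color in ("red", "orange")

    def test_unripe_fruit_is_rejected():
        assert not is_ripe("green")

    def test_ripe_fruit_is_accepted():
        assert is_ripe("red")

    # Before checking in, break is_ripe() the way a maintenance programmer
    # realistically might -- say, add "green" to the ripe tuple -- rerun
    # pytest, and confirm that test_unripe_fruit_is_rejected goes red.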

If there is no code modification that makes the test fail, or if such a modification is weird or unrealistic, it is not a good test. If a test failure does not halt the CI with a visible message, it is not a good CI. These are false gods, with eyes that do not see, and mouths that do not speak.

Real tests have failures.

03 Aug 2018 5:30am GMT

Moshe Zadka: Thank you, Guido

When I was in my early 20s, I was OK at programming, but I definitely didn't like it. Then, one evening, I read the Python tutorial. That evening changed my mind. I woke up the next morning, like Neo in The Matrix, and knew Python.

I was doing statistics at the time. Python, with Numeric, was a powerful tool. It definitely could do things that SPSS could only dream about. Suddenly, something happened that had never happened before -- I started to enjoy programming.

I had to spend six years in the desert of programming in languages that were not Python, before my workplace, and soon afterwards the world, realized what an amazing tool Python is. I have not had to struggle to find a Python position since.

I started with Python 1.4. I grew up with Python. Now I am...no longer in my 20s, and Python version 3.7 was recently released.

I owe much of my career, many of my friends, and much of my hobby time to that one evening, sitting down and reading the Python tutorial -- and to the man who made the language and wrote the first version of that tutorial, Guido van Rossum.

Python, like all open source projects -- like, indeed, all software projects -- is not a one-man show. A whole team, with changing personnel, works on core Python and its ecosystem. But it was all started by Guido.

As Guido is stepping down to take a less active role in Python's future, I want to offer my eternal gratitude. For my amazing career, for my friends, for my hobby. Thank you, Guido van Rossum. Your contribution to humanity, and to this one human in particular, is hard to overestimate.

03 Aug 2018 4:30am GMT

29 Jul 2018

feedPlanet Twisted

Itamar Turner-Trauring: Bad at whiteboard puzzles? You can still get a programming job

Practicing algorithm puzzles stresses you out: just looking at a copy of Cracking the Coding Interview makes you feel nervous.

Interviewing is worse. When you do interview, you freeze up: you don't have IDE error checking and auto-completion, you can't Google like a real programmer would, and there's a stranger staring you down. You screw up, you make typos, you don't know what to say, you make a bad impression.

If this happens to you, it's not your fault! Whiteboard puzzles are a bad way to hire programmers.

They're not realistic: unless you're Jeff Goldblum haxoring the alien mothership's computer just in time for Will Smith to blow up some invaders, you're probably not coding on a 5-minute deadline.

And the skills they're testing aren't used by 95% of programmers 95% of the time. I recently had to do a graph traversal in dependency order-which meant I was all prepared to dig out my algorithms textbook from college. But then I found the library I was using already had a utility called toposort, and vague memories of classes 19 years ago reminded me that this was called a "topological sort". I didn't actually have to implement it, but if I had, I would have done it with textbook in hand, over the course of a couple of hours (gotta write tests!).
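For the curious, here's roughly what that looks like: a minimal sketch with invented task names, using the toposort package from PyPI.

    from toposort import toposort_flatten

    # Map each task to the set of tasks it depends on.
    dependencies = {
        "deploy": {"build", "test"},
        "test": {"build"},
        "build": {"checkout"},
    }

    # Returns the tasks in an order that respects every dependency:
    # ['checkout', 'build', 'test', 'deploy']
    print(toposort_flatten(dependencies))

No textbook required.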

Unfortunately, many companies still use whiteboard puzzles, and you need a job. A programming job. What should you do?

Here are some ideas to help you find a job-even if you hate whiteboard puzzles.

1. Interview at companies with a better process

Not all companies do on-the-spot programming puzzles. The last three companies I worked at didn't-one had a take-home exercise that wasn't about algorithms (a decision I was involved in, and which I now regret because of the burden it puts on people with no free time). The other two just had talking interviews: I talked about myself, they talked about the company, all very relaxed and civilized.

To find such companies:

  1. Here's one list of 500+ companies that don't do whiteboard puzzles.
  2. The invaluable Key Values job board also tells you about the interview process at the covered companies (see the column on the right when looking at a particular company).

2. Offer an alternative

If you are interviewing at a company with whiteboard puzzles, you don't have to accept their process without pushing back. Just like your salary and working hours, the interview process is also something you can negotiate.

If you have some code you've written that you're particularly proud of and have the ability to share, ask the company if you can share it with them in lieu of a whiteboard puzzle. I once made the mistake of only suggesting this during the interview, and the guy who was interviewing me said he would have accepted it if I'd asked earlier. So make sure to suggest this before the day of the interview, so they have time to review the code in advance.

3. Take control of the process

If all else fails and you're stuck doing a puzzle, there are ways to take control of the process and make a good impression, even if the puzzle is too hard for you. I cover this in more detail in another post.

4. Don't give up

Finally, remember whiteboard puzzles have nothing to do with actual programming, even when the work you're doing is algorithmic. They're a hazing ritual you may be forced to go through, but they in no way reflect on your ability as a programmer.




29 Jul 2018 4:00am GMT