19 Sep 2017


Moshe Zadka: Announcing NColony 17.9.0

I have released NColony 17.9.0, available in a PyPI near you.

New this version:

Thanks to Mark Williams for reviewing many pull requests.

19 Sep 2017 10:00pm GMT

18 Sep 2017


Itamar Turner-Trauring: Join our startup, we'll cut your pay by 40%!

Have you ever thought to yourself, "I need to get paid far, far less than I'm worth?" Me neither. And yet some companies not only pay less, they're proud of it. Allow me to explain.

I recently encountered a job posting from one such startup. My usual response would be to roll my eyeballs and move on, but this particular posting was so egregious that had I done so I would've ended up looking at the back of my skull.

So in an effort to avoid the pain of over-rolled eyeballs, and more importantly to help you avoid the pain of working for this kind of company, let me share the key sentence from the job posting:

"It's not unusual to see some team members in the office late into the evening; many of us routinely work and study 70+ hours a week."

In this post I will work through the implications of that sentence. I made sure not to drink anything while writing it, because if I had I'd be spitting my drink out every time I reread that sentence. The short version is that should you join such a company, you'd be working for people who are:

Cutting your salary by 40%

Let's start with your salary. The standard workweek in the US is 40 hours a week. If you're going to be working 70 hours a week that means you're working 75% more hours than usual. Or, to put it another way, the company is offering to pay you 40% less than market rate for your time.

Instead of hiring more engineers, they're trying to get their engineers to do far more for the same amount of money. This is exploitation, and there's no reason you should put up with it.

It's not that hard to find companies where you can work a normal 40 hour workweek. I've done so at the past five companies I've worked at, ranging from tiny startups to Google. Sometimes you need to push back, it's true, but it's certainly possible. And even if you can't find such a job, there are many more companies where you can work 45 hours, or 50 hours. Even an awful workweek of 60 hours is better than 70.

When programming is your hobby

Now, it may be that you love programming so much that you're thinking, "I'd be coding 70 hours a week anyway, why not do it at work?" As I'll mention below, I don't think working 70 hours a week is going to produce much, but even if it did you still shouldn't do it on your employer's behalf.

Let's imagine you're coding 70 hours a week. You could work 70 hours for your employer, getting paid nothing extra for your time, or you could stick to 40 hours and use those remaining 30 hours to:

And you'd also have some optional slack time, which is useful when life gets in the way of programming.

"Work not just smart, but also hard"

Encouraging 70 hour workweeks is an extraordinary level of exploitation, but sadly it's also a rather common form of stupidity. The problem is encapsulated in another statement from the job posting:

"[We] work not just smart, but also hard."

If your starting point is exploitation, if you're setting out to extract as much work as possible from your employees, you lose sight of the purpose of work. Work has no inherent value: what matters is the results. The problems solved, the value created, this is what you're trying to maximize.

And it turns out there's decades of research showing that consistently working more than 40 hours a week results in less output. But presumably the people running this startup don't believe that, or they wouldn't be pushing for it. And maybe you don't believe that either. But even if we assume 70 hours of work produce 75% more output than 40 hours of work, it's still a fundamentally bad idea for the company.

When an organization tries to maximize inputs, rather than outputs, the result is a whole series of bad judgments. Hiring, for example, as you can see from this job ad. A junior programmer working 70 hours a week will produce far less valuable output than an experienced programmer working 40 hours a week. But a company that wants to maximize exploitation, to maximize work, will write job ads that ensure the latter will never apply.

Emergencies: when long hours are necessary

Beyond reduced output, and beyond a confused hiring policy, encouraging long hours also implies a lack of project management skills. Long work hours are both a cause and a symptom of this particular failure.

70 hours a week means 7 days a week, from 9AM to 7PM. That doesn't leave much slack time for life, and it also leaves no slack time for the project. Sooner or later every project has an emergency. If a production server crashes, someone is going to have to bring it back up. And more broadly, extra work comes up: a customer asks for more features, or a seemingly simple task turns out to be far more difficult than expected.

To help deal with these situations you need some advance planning. Scheduling everything down to the minute won't help, and pushing everyone to work at the absolute limit won't help. The problem is unexpected work, after all. What you need is planned slack time, time that hasn't been budgeted, that's available for all the inevitable unexpected problems.

But a manager who is pushing you to work 70 hours a week isn't a manager who plans ahead for unexpected work. No, this is a manager who solves problems by telling you to work harder and longer. So when the unexpected happens, when an emergency happens, your manager will be saying "who coulda knowed? ¯\_(ツ)_/¯" and before you know it you're working 80 hours a week.

Maybe that will fix things. But I doubt it. More plausibly you'll eventually burn out and quit, taking your business knowledge with you.

"Strong willingness to help junior engineers"

The job posting that led to this post also suggested that a "strong willingness to help junior engineers" would be helpful, though not required. So here's my advice to all you junior engineers out there: avoid companies that want you to work crazy hours.

  1. It's bad for you.
  2. It's bad for the company.
  3. And you don't want to work for a manager who isn't competent enough to realize what's bad for the company.

And if you are stuck working for such a company, you might want to read my book, The Programmer's Guide to a Sane Workweek.

18 Sep 2017 4:00am GMT

15 Sep 2017


Jp Calderone: SSH to EC2 (Refrain)

Recently Moshe wrote up a demonstration of the simple steps needed to retrieve an SSH public key from an EC2 instance to populate a known_hosts file. Moshe's example uses the highly capable boto3 library for its EC2 interactions. However, since his blog is syndicated on Planet Twisted, reading it left me compelled to present an implementation based on txAWS instead.

First, as in Moshe's example, we need argv and expanduser so that we can determine which instance the user is interested in (accepted as a command line argument to the tool) and find the user's known_hosts file (conventionally located in ~):


from sys import argv
from os.path import expanduser

Next, we'll get an abstraction for working with filesystem paths. This is commonly used in Twisted APIs because it saves us from many path manipulation mistakes committed when representing paths as simple strings:


from twisted.python.filepath import FilePath

Now, get a couple of abstractions for working with SSH. Twisted Conch is Twisted's SSH library (client & server). KnownHostsFile knows how to read and write the known_hosts file format. We'll use it to update the file with the new key. Key knows how to read and write SSH-format keys. We'll use it to interpret the bytes we find in the EC2 console output and serialize them to be written to the known_hosts file.


from twisted.conch.client.knownhosts import KnownHostsFile
from twisted.conch.ssh.keys import Key

And speaking of the EC2 console output, we'll use txAWS to retrieve it. AWSServiceRegion is the main entrypoint into the txAWS API. From it, we can get an EC2 client object to use to retrieve the console output.


from txaws.service import AWSServiceRegion

And last among the imports, we'll write the example with inlineCallbacks to minimize the quantity of explicit callback-management code. Due to the simplicity of the example and the lack of any need to write tests for it, I won't worry about the potential problems with confusing tracebacks or hard-to-test code this might produce. We'll also use react to drive the whole thing so we don't need to explicitly import, start, or stop the reactor.


from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import react

With that sizable preamble out of the way, the example can begin in earnest. First, define the main function using inlineCallbacks and accepting the reactor (to be passed by react) and the EC2 instance identifier (taken from the command line later on):


@inlineCallbacks
def main(reactor, instance_id):

Now, get the EC2 client. This usage of the txAWS API will find AWS credentials in the usual way (looking at AWS_PROFILE and in ~/.aws for us):


region = AWSServiceRegion()
ec2 = region.get_ec2_client()

Then it's a simple matter to get an object representing the desired instance and that instance's console output. Notice these APIs return Deferreds, so we use yield to let inlineCallbacks suspend this function until the results are available.


[instance] = yield ec2.describe_instances(instance_id)
output = yield ec2.get_console_output(instance_id)

Some simple parsing logic, much like the code in Moshe's implementation (since this is exactly the same text now being operated on). We do take the extra step of deserializing the key into an object that we can use later with a KnownHostsFile object.


keys = (
    Key.fromString(key)
    for key in extract_ssh_key(output.output)
)

Then write the extracted keys to the known hosts file:


known_hosts = KnownHostsFile.fromPath(
    FilePath(expanduser("~/.ssh/known_hosts")),
)
for key in keys:
    for name in [instance.dns_name, instance.ip_address]:
        known_hosts.addHostKey(name, key)
known_hosts.save()

There's also the small matter of actually parsing the console output for the keys:


def extract_ssh_key(output):
    return (
        line for line in output.splitlines()
        if line.startswith(u"ssh-rsa ")
    )

And then kicking off the whole process:


react(main, argv[1:])

Putting it all together:


from sys import argv
from os.path import expanduser

from twisted.python.filepath import FilePath

from twisted.conch.client.knownhosts import KnownHostsFile
from twisted.conch.ssh.keys import Key

from txaws.service import AWSServiceRegion

from twisted.internet.defer import inlineCallbacks
from twisted.internet.task import react


@inlineCallbacks
def main(reactor, instance_id):
    region = AWSServiceRegion()
    ec2 = region.get_ec2_client()

    [instance] = yield ec2.describe_instances(instance_id)
    output = yield ec2.get_console_output(instance_id)

    keys = (
        Key.fromString(key)
        for key in extract_ssh_key(output.output)
    )

    known_hosts = KnownHostsFile.fromPath(
        FilePath(expanduser("~/.ssh/known_hosts")),
    )
    for key in keys:
        for name in [instance.dns_name, instance.ip_address]:
            known_hosts.addHostKey(name, key)
    known_hosts.save()


def extract_ssh_key(output):
    return (
        line for line in output.splitlines()
        if line.startswith(u"ssh-rsa ")
    )


react(main, argv[1:])

So, there you have it. Roughly equivalent complexity to using boto3, and on its own there's little reason to prefer this to what Moshe has written about. However, if you have a larger Twisted-based application then you may prefer the natively asynchronous txAWS to blocking boto3 calls or managing boto3 in a thread somehow.
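For completeness, here's a rough sketch of that last alternative -- wrapping the blocking boto3 call with Twisted's deferToThread so that callers get a Deferred. This is illustrative only (get_console_output_async is a made-up name), and it assumes boto3 finds credentials in its usual way:

import boto3

from twisted.internet.threads import deferToThread


def get_console_output_async(instance_id):
    # Run the blocking boto3 call on the reactor's thread pool; the
    # returned Deferred fires with the boto3 response dictionary.
    client = boto3.client('ec2')
    return deferToThread(
        client.get_console_output, InstanceId=instance_id)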

Also, I'd like to thank LeastAuthority (my current employer and operator of the Tahoe-LAFS-based S4 service which just so happens to lean heavily on txAWS) for originally implementing get_console_output for txAWS (which, minor caveat, will not be available until the next release of txAWS is out).

As always, if you like this sort of thing, check out the support links on the right.

15 Sep 2017 2:34pm GMT

09 Sep 2017


Itamar Turner-Trauring: The better way to learn a new programming language

Have you ever failed to learn a new programming language in your spare time? You pick a small project to implement, get a few functions written… and then you run out of time and motivation. So you give up, at least until the next time you give it a try.

There's a better way to learn new programming languages, a method that I've applied multiple times. Where starting a side project often ends in failure and little knowledge gained, this method starts with success. For example, the last time I did this was with Ruby: I started by publishing a whole new Ruby Gem, and getting a bug fix accepted into the Sinatra framework.

In this post I will:

Side projects: the hard way to learn a language

Creating a new software project from scratch in your spare time is a tempting way to learn a new language. You get to build something new: building stuff is fun. You get to pick your language: you have the freedom to choose.

Unfortunately, learning via a side project is a difficult way to learn a new language. When you're learning a new programming language you need to learn:

This is a huge amount of knowledge, and you're doing so with multiple handicaps:

Learning on your own: You have to figure out everything on your own.

Blank slate: You're starting from scratch. Quite often there's no scaffolding to help you, no good starting point to get you going.

Simultaneous learning: You are trying to learn everything in the list above at the same time.

Limited time: You're doing this in your spare time, so you may have only limited amounts of spare time to apply to the task.

Lack of motivation: If you care about the side project's success, you probably will be motivated to switch back to a language you know. If you just care about learning the language, you'll be less motivated to do all the boring work to make the side project succeed.

Vague goals: "Learning a language" is an open-ended task, since there's always more to learn. How will you know you've achieved something?

Personally I have very limited free time: I can't start a new side project in a language I already know, let alone a new one. But I do occasionally learn a new language.

That time I learned some Ruby

Rather than learning new languages at home, I use a better method: learning a language by solving problems at my job.

For example, I know very little Ruby, and when I started learning it I knew even less. One day, however, I joined a company that was publishing an SDK in multiple languages, one of which was Ruby.

A tiny gem

My first task involving Ruby was integrating the SDK with popular Ruby HTTP clients and servers. Which is to say, I started learning a new language with a specific goal, motivation, and time to learn at work. Much better than a personal side project!

I started by learning one thing, not multiple things simultaneously: which 3rd party HTTP libraries were popular. Once I'd found the popular HTTP clients and servers, my next task was implementing the SDK integration. One integration was with Sinatra, a popular Ruby HTTP server framework.

As a coding task this was pretty simple:

  1. The Sinatra docs pointed me towards a library called Rack, a standard way to write HTTP server middleware for Ruby.
  2. Rack has documentation and tutorials on how to create middleware.
  3. There are lots of pre-existing middleware packages I could use as examples, for both the code itself and for tests.
  4. I only needed to learn just enough Ruby syntax and semantics to write the middleware. Googling tutorials was enough for that.

I learned just enough to implement the middleware: 40 lines of trivial code.

Next I needed to package the middleware as a gem, Ruby's packaging format. Once again, I was only working on a single task, a well-documented task with many examples. And I had motivation, specific goals, examples to build off of, and the time to do it.

At this point I'd learned: a tiny bit of syntax and semantics, some 3rd party libraries, packaging, and a little bit of the toolchain.

A bugfix to an existing project

Shortly after creating our SDK integration I discovered a bug in Sinatra: Sinatra middleware was only initialized after the first request. So I tracked down the bug in Sinatra… which gave me an opportunity to learn more of the language's syntax, semantics, and idioms by reading a real-world existing code base. And, of course, the all-important skill of knowing how to add debug print statements to the code.

Reading code is a lot easier than writing code. And since Sinatra was a pre-existing code base, I could rely on pre-existing tests as examples when I wrote a test for my patch. I didn't need to figure out how to structure a large project, or every edge case of the syntax that wasn't relevant to the bug. I had a specific goal, and I learned just enough to reach it.

At the end of the process above I still couldn't start a Ruby project from scratch, or write more than tiny amounts of Ruby. And I haven't done much with it since. But I do know enough to deal with packaging, and if I ever started writing Ruby again I'd start with a lot more knowledge of the toolchain, to the point where I'd be able to focus purely on syntax and semantics.

But I've used a similar method to learn other languages to a much greater extent: I learned C++ by joining a company that used it, and I became a pretty fluent C++ programmer for a while.

Learning a new language: a better method

How should you learn a new programming language? As in my story above, the best way to do so is at work, and ideally by joining an existing project.

Existing projects

The easiest way to learn a new language is to join an existing project or company that uses a language you don't know. None of the problems you would have with a side project apply:

New projects

Lacking an existing project to join, look out for opportunities where there's a strong motivation for your project to add a new language. Some examples:

Starting a new project is not quite as easy a learning experience, unfortunately. But you're still starting with specific goals in mind, and with time at work to learn the language.

Make sure to limit yourself to only learning one thing at a time. In my example above I sequentially learned about: which 3rd party libraries existed, the API for one library, writing miniscule amounts of trivial integration code, packaging, and then how to read a lot more syntax and semantics. If you're doing this with co-workers you can split up tasks: you do the packaging while your co-worker builds the first prototype, and then you can teach each other what you've learned.

Learning at work is the best learning

More broadly, your job is a wonderful place to learn. Every task you do at work involves skills, skills you can practice and improve. You can get better at debugging, or notice a repetitive task and automate it, or learn how to write better bug reports. Perhaps you could figure out what needs changing so you can get changes done faster (processes? architecture? APIs?). Maybe you can figure out how to test your code better to reduce the number of bugs you ship. And if that's not enough, Julia Evans has even more ideas.

In all these cases you'll have motivation, specific goals, time, and often an existing code base to build off of. And best of all, you'll be able to learn while you're getting paid.

09 Sep 2017 4:00am GMT

31 Aug 2017


Moshe Zadka: SSH to EC2

(Thanks to Donald Stufft for reviewing this post, and to Glyph Lefkowitz for inspiring much of it.)

It is often the case that after creating an EC2 instance in AWS, the next step is SSHing. This might be because the machine is a development machine, or it might be tilling the ground for a different remote control: for example, setting up a salt minion.

In those cases, many either press y when SSH prompts them about an unknown host key, or even turn off host key verification altogether. This is convenient, quick, and very insecure. A man in the middle can use this to steal credentials -- maybe not permanently, but enough to log in into any other machine with the same SSH key.

The correct thing to do is to prepare the SSH configuration by retrieving the host key via the AWS API. Unfortunately, doing it is not trivial.

Fortunately, it is a good example of how to use the AWS API from Python.

import os
import sys
import boto3

client = boto3.client('ec2', region_name='us-west-2')
resource = boto3.resource('ec2', region_name='us-west-2')

output = client.get_console_output(InstanceId=sys.argv[1])
result = output['Output']

rsa = [line for line in result.splitlines()
            if line.startswith('ssh-rsa')][0]

instance = resource.Instance(sys.argv[1])
known_hosts = '{},{} {}\n'.format(instance.public_dns_name,
                                  instance.public_ip_address,
                                  rsa)

with open(os.path.expanduser('~/.ssh/known_hosts'), 'a') as fp:
    fp.write(known_hosts)

Let's go through this script section by section.

import os
import sys
import boto3

We import the os and sys standard library modules and the first-party AWS module boto3.

client = boto3.client('ec2', region_name='us-west-2')
resource = boto3.resource('ec2', region_name='us-west-2')

It is often confusing what functionality is in client and what is in resource. The only rule I learned in a year of using the AWS API is to look in both places, and create both a client and a resource. In general, client maps directly to AWS low-level REST API, while resource gives higher level abstractions.

output = client.get_console_output(InstanceId=sys.argv[1])
result = output['Output']

This is the meat of the script -- we use the API to get the console output. These are the boot up messages from all services. When the SSH server starts up, it prints its key. All that is left now is to find it.

rsa = [line for line in result.splitlines()
            if line.startswith('ssh-rsa')][0]

This is a little hacky, but there is no nice way to do it. There are other possible heuristics. The nice thing is that if the heuristic fails, this will result in connection failure -- not an insecure connection!
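As an aside, one alternative heuristic (not part of the script above) is to look for the marker lines that cloud-init prints around the host keys on many images. This is a sketch under that assumption -- check your image's console output format before relying on it:

def extract_host_keys(console_output):
    # Many AMIs bracket the host keys between these marker lines in the
    # console output; adjust if your image formats things differently.
    lines = console_output.splitlines()
    begin = lines.index('-----BEGIN SSH HOST KEY KEYS-----')
    end = lines.index('-----END SSH HOST KEY KEYS-----')
    return lines[begin + 1:end]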

instance = resource.Instance(sys.argv[1])
known_hosts = '{},{} {}\n'.format(instance.public_dns_name,
                                  instance.public_ip_address,
                                  rsa)

We grab the IP and name through the resource, and format them in the right way for SSH to understand.

with open(os.path.expanduser('~/.ssh/known_hosts'), 'a') as fp:
    fp.write(known_hosts)

I chose to update known_hosts like this because originally this script was in a throw-away Docker image. In other cases, it might be wise to have a separate known hosts file for EC2 instances, or have an atomic update methodology.
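For instance, an atomic update could be sketched like this -- append_atomically is a hypothetical helper, and it assumes the known hosts file already exists:

import os
import tempfile


def append_atomically(path, text):
    # Write the old contents plus the new entry to a temporary file in
    # the same directory, then rename it into place. rename() is atomic
    # on POSIX filesystems, so readers never see a half-written file.
    with open(path) as fp:
        content = fp.read()
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    with os.fdopen(fd, 'w') as fp:
        fp.write(content + text)
    os.rename(tmp, path)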

After running this code, it is possible to SSH without having to verify the host key. It is best to set the SSH options to fail if the host key is not there, for extra safety.

An alternative approach is to use the AWS API to set the SSH secret key. However, this is, in general, even less trivial to do securely.

31 Aug 2017 4:30am GMT

17 Aug 2017


Duncan McGreggor: NASA/EOSDIS Earthdata

Update

It's been a few years since I posted on this blog -- most of the technical content I've been contributing to in the past couple years has been in the following:

But since the publication of the Mastering matplotlib book, I've gotten more and more into satellite data. The book, it goes without saying, focused on Python for the analysis and interpretation of satellite data (in one of the many topics covered). After that I spent some time working with satellite and GIS data in general using Erlang and LFE. Ultimately though, I found that more and more projects were using the JVM for this sort of work, and in particular, I noted that Clojure had begun to show up in a surprising number of Github projects.

EOSDIS

Enter NASA's Earth Observing System Data and Information System (see also earthdata.nasa.gov and EOSDIS on Wikipedia), a key part of the agency's Earth Science Data Systems Program. It's essentially a concerted effort to bring together the mind-blowing amounts of earth-related data being collected throughout, around, and above the world so that scientists may easily access and correlate earth science data for their research.

Related NASA projects include the following:

The acronym menagerie can be bewildering, but digging into the various NASA projects is ultimately quite rewarding (greater insights, previously unknown resources, amazing research, etc.).

Clojure

Back to the Clojure reference I made above: I've been contributing to the nasa/Common-Metadata-Repository open source project (hosted on Github) for a few months now, and it's been amazing to see how all this data from so many different sources gets added, indexed, updated, and generally made so much more available to any who want to work with it. The private sector always seems to be so far ahead of large projects in terms of tech and continuously improving updates to existing software, so it's been pretty cool to see a large open source project in the NASA Github org make so many changes that find ways to keep helping their users do better research. More so in that users are regularly delivered new features in a large, complex collection of libraries and services thanks in part to the benefits that come from using a functional programming language.

It may seem like nothing to you, but the fact that there are now directory pages for various data providers (e.g., GES_DISC, i.e., Goddard Earth Sciences Data and Information Services Center) makes a big difference for users of this data. The data provider pages now also offer easy access to collection links such as UARS Solar Ultraviolet Spectral Irradiance Monitor. Admittedly, the directory pages still take a while to load, but there are improvements on the way for page load times and other related tasks. If you're reading this a month after this post was written, there's a good chance it's already been fixed by now.

Summary

In summary, it's been a fun personal journey from looking at Landsat data for writing a book to working with open source projects that really help scientists to do their jobs better :-) And while I have enjoyed using the other programming languages to explore this problem space, Clojure in particular has been a delightfully powerful tool for delivering new features to the science community.

17 Aug 2017 2:05pm GMT

16 Aug 2017


Itamar Turner-Trauring: The tragic tale of the deadlocking Python queue

This is a story about how very difficult it is to build concurrent programs. It's also a story about a bug in Python's Queue class, a class which happens to be the easiest way to make concurrency simple in Python. This is not a happy story: this is a tragedy, a story of deadlocks and despair.

This story will take you on a veritable roller coaster of emotion and elucidation, as you:

Join me, then, as I share this tale of woe.

Concurrency is hard

Writing programs with concurrency, programs with multiple threads, is hard. Without threads code is linear: line 2 is executed after line 1, with nothing happening in between. Add in threads, and now changes can happen behind your back.

Race conditions

The following counter, for example, will become corrupted if increment() is called from multiple threads:

from threading import Thread

class Counter(object):
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

c = Counter()

def go():
    for i in range(1000000):
        c.increment()

# Run two threads that increment the counter:
t1 = Thread(target=go)
t1.start()
t2 = Thread(target=go)
t2.start()
t1.join()
t2.join()
print(c.value)

Run the program, and:

$ python3 racecondition.py
1686797

We incremented 2,000,000 times, but that's not what we got. The problem is that self.value += 1 actually takes three distinct steps:

  1. Getting the attribute,
  2. incrementing it,
  3. then setting the attribute.

If two threads call increment() on the same object around the same time, the following series of steps may happen:

  1. Thread 1: Get self.value, which happens to be 17.
  2. Thread 2: Get self.value, which happens to be 17.
  3. Thread 1: Increment 17 to 18.
  4. Thread 1: Set self.value to 18.
  5. Thread 2: Increment 17 to 18.
  6. Thread 2: Set self.value to 18.

An increment was lost due to a race condition.
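You can see those three steps directly with the dis module from the standard library. The exact opcodes vary between CPython versions, so treat the output below as illustrative:

import dis

class Counter(object):
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

dis.dis(Counter.increment)
# On CPython 3.5 this prints something like:
#   LOAD_FAST      0 (self)
#   DUP_TOP
#   LOAD_ATTR      0 (value)    # 1. get the attribute
#   LOAD_CONST     1 (1)
#   INPLACE_ADD                 # 2. increment it
#   ROT_TWO
#   STORE_ATTR     0 (value)    # 3. set the attribute

A thread switch can happen between any two of those opcodes, which is exactly the window the race condition needs.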

One way to solve this is with locks:

from threading import Lock

class Counter(object):
    def __init__(self):
        self.value = 0
        self.lock = Lock()
    def increment(self):
        with self.lock:
            self.value += 1

Only one thread at a time can hold the lock, so only one increment happens at a time.

Deadlocks

Locks introduce their own set of problems. For example, you start having potential issues with deadlocks. Imagine you have two locks, L1 and L2, and one thread tries to acquire L1 followed by L2, whereas another thread tries to acquire L2 followed by L1.

  1. Thread 1: Acquire and hold L1.
  2. Thread 2: Acquire and hold L2.
  3. Thread 1: Try to acquire L2, but it's in use, so wait.
  4. Thread 2: Try to acquire L1, but it's in use, so wait.

The threads are now deadlocked: no execution will proceed.
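Here's a minimal script demonstrating this -- the sleeps make the unlucky interleaving nearly certain, so running it will almost always hang:

from threading import Lock, Thread
import time

L1 = Lock()
L2 = Lock()

def thread1():
    with L1:
        time.sleep(0.1)  # give the other thread time to acquire L2
        with L2:         # waits forever: thread 2 holds L2
            pass

def thread2():
    with L2:
        time.sleep(0.1)  # give the other thread time to acquire L1
        with L1:         # waits forever: thread 1 holds L1
            pass

Thread(target=thread1).start()
Thread(target=thread2).start()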

Queues make concurrency simpler

One way to make concurrency simpler is by using queues, and trying to have no other shared data structures. If threads can only send messages to other threads using queues, and threads never mutate data structures shared with other threads, the result is code that is much closer to single-threaded code. Each function just runs one line at a time, and you don't need to worry about some other thread interrupting you.

For example, we can have a single thread whose job it is to manage a collection of counters:

from collections import defaultdict
from threading import Thread
from queue import Queue

class Counter(object):
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1


counter_queue = Queue()


def counters_thread():
    counters = defaultdict(Counter)
    while True:
        # Get the next command out of the queue:
        command, name = counter_queue.get()
        if command == "increment":
            counters[name].increment()

# Start a new thread:
Thread(target=counters_thread).start()

Now other threads can safely increment a named counter by doing:

counter_queue.put(("increment", "shared_counter_1"))

A buggy program

Unfortunately, queues have some broken edge cases. Consider the following program, a program which involves no threads at all:

from queue import Queue

q = Queue()


class Circular(object):
    def __init__(self):
        self.circular = self

    def __del__(self):
        print("Adding to queue in GC")
        q.put(1)


for i in range(1000000000):
    print("iteration", i)
    # Create an object that will be garbage collected
    # asynchronously, and therefore have its __del__
    # method called later:
    Circular()
    print("Adding to queue regularly")
    q.put(2)

What I'm doing here is a little trickery involving a circular reference, in order to add an item to the queue during garbage collection.

By default CPython (the default Python VM) uses reference counting to garbage collect objects. When an object is created the count is incremented, when a reference is removed the count is decremented. When the reference count hits zero the object is removed from memory and __del__ is called on it.

However, an object with a reference to itself -- like the Circular class above -- will always have a reference count of at least 1. So Python also runs a garbage collection pass every once in a while that catches these objects. By using a circular reference we are causing Circular.__del__ to be called asynchronously (eventually), rather than immediately.
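Here's a minimal sketch of that asynchrony in isolation (on Python 3.4 and later, where the cycle collector will run __del__ on cyclic objects):

import gc

class Circular(object):
    def __init__(self):
        self.circular = self
    def __del__(self):
        print("collected!")

c = Circular()
del c         # the refcount never reaches zero: the object references itself
gc.collect()  # the cycle collector finds it and runs __del__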

Let's run the program:

$ python3 bug.py 
iteration 0
Adding to queue regularly
Adding to queue in GC

That's it: the program continues to run, but prints out nothing more. There are no further iterations, no progress.

What's going on?

Debugging a deadlock with gdb

Modern versions of the gdb debugger have some neat Python-specific features, including ability to print out a Python traceback. Setup is a little annoying, see here and here and maybe do a bit of googling, but once you have it set up it's extremely useful.

Let's see what gdb tells us about this process. First we attach to the running process, and then use the bt command to see the C backtrace:

$ ps x | grep bug.py
28920 pts/4    S+     0:00 python3 bug.py
$ gdb --pid 28920
...
(gdb) bt
#0  0x00007f756c6d0946 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x6464e96e00) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  do_futex_wait (sem=sem@entry=0x6464e96e00, abstime=0x0) at sem_waitcommon.c:111
#2  0x00007f756c6d09f4 in __new_sem_wait_slow (sem=0x6464e96e00, abstime=0x0) at sem_waitcommon.c:181
#3  0x00007f756c6d0a9a in __new_sem_wait (sem=<optimized out>) at sem_wait.c:29
#4  0x00007f756ca7cbd5 in PyThread_acquire_lock_timed () at /usr/src/debug/Python-3.5.3/Python/thread_pthread.h:352
...

Looks like the process is waiting for a lock. I wonder why?

Next, we take a look at the Python backtrace:

(gdb) py-bt
Traceback (most recent call first):
  <built-in method __enter__ of _thread.lock object at remote 0x7f756cef36e8>
  File "/usr/lib64/python3.5/threading.py", line 238, in __enter__
    return self._lock.__enter__()
  File "/usr/lib64/python3.5/queue.py", line 126, in put
    with self.not_full:
  File "bug.py", line 12, in __del__
    q.put(1)
  Garbage-collecting
  File "/usr/lib64/python3.5/threading.py", line 345, in notify
    waiters_to_notify = _deque(_islice(all_waiters, n))
  File "/usr/lib64/python3.5/queue.py", line 145, in put
    self.not_empty.notify()
  File "bug.py", line 21, in <module>
    q.put(2)

Do you see what's going on?

Reentrancy!

Remember when I said that, lacking concurrency, code just runs one line at a time? That was a lie.

Garbage collection can interrupt Python functions at any point, and run arbitrary other Python code: __del__ methods and weakref callbacks. So can signal handlers, which happen e.g. when you hit Ctrl-C (your process gets the SIGINT signal) or a subprocess dies (your process gets the SIGCHLD signal).
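A sketch of the signal-handler variant of the same hazard: the handler runs in the main thread between two bytecodes, so if it calls q.put() while the interrupted code is already inside q.put(), it blocks on a lock its own thread holds:

import signal
from queue import Queue

q = Queue()

def on_sigint(signum, frame):
    # If this runs while the main thread is inside q.put(), this put()
    # tries to acquire a lock the main thread already holds: deadlock.
    q.put("got SIGINT")

signal.signal(signal.SIGINT, on_sigint)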

In this case:

  1. The program was calling q.put(2).
  2. This involves acquiring a lock.
  3. Half-way through the function call, garbage collection happens.
  4. Garbage collection calls Circular.__del__.
  5. Circular.__del__ calls q.put(1).
  6. q.put(1) tries to acquire the lock… but the lock is already held, so it waits.

Now q.put(2) is stuck waiting for garbage collection to finish, and garbage collection can't finish until q.put(2) releases the lock.

The program is deadlocked.

Why this is a real bug…

The above scenario may seem a little far-fetched, but it has been encountered by multiple people in the real world. A common cause is logging.

If you're writing logs to disk you have to worry about the disk write blocking, i.e. taking a long time. This is particularly the case when log writes are followed by syncing-to-disk, which is often done to ensure logs aren't lost in a crash.

A common pattern is to create log messages in your application thread or threads, and do the actual writing to disk in a different thread. The easiest way to communicate the messages is, of course, a queue.Queue.

This use case is in fact directly supported by the Python standard library:

from queue import Queue
import logging
from logging.handlers import QueueListener, QueueHandler

# Write out queued logs to a file:
_log_queue = Queue()
QueueListener(
    _log_queue, logging.FileHandler("out.log")).start()

# Push all logs into the queue:
logging.getLogger().addHandler(QueueHandler(_log_queue))

Given this common setup, all you need to do to trigger the bug is to log a message in __del__, a weakref callback, or a signal handler. This happens in real code. For example, if you don't explicitly close a file, Python will warn you about it inside file.__del__, and Python also has a standard API for routing warnings to the logging system.
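That routing is a single line of configuration; combined with the QueueHandler setup above, an unclosed file (with warnings enabled) can be enough to make garbage collection call Queue.put:

import logging

# Route warnings.warn() messages -- including the "unclosed file"
# ResourceWarning emitted from file.__del__ -- into the logging system,
# and from there, via the QueueHandler above, into the Queue.
logging.captureWarnings(True)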

It's not just logging, though: the bug was also encountered among others by the SQLAlchemy ORM.

…and why Python maintainers haven't fixed it

This bug was originally reported in 2012, and in 2016 it was closed as "wont fix" because it's a "difficult problem".

I feel this is a cop-out. If you're using an extremely common logging pattern, where writes happen in a different thread, a logging pattern explicitly supported by the Python standard library… your program might deadlock. In particular, it will deadlock if any of the libraries you're using writes a log message in __del__.

This can happen just by using standard Python APIs like files and warning→logging routing. This happened to one of the users of my Crochet library, due to some logging in __del__ by the Twisted framework. I had to implement my own queuing system to ensure users weren't impacted by this problem. If I can fix the problem, so can the Python maintainers. For example, Queue.get and Queue.put could be atomic operations (which can be done in CPython by rewriting them in C).

Now, you could argue that __del__ shouldn't do anything: it should schedule stuff that is run outside it. But scheduling from reentrant code is tricky, and in fact not that different from mutating a shared data structure from multiple threads. If only there was a queue of some sort that we could call from __del__… but there isn't, because of this bug.

Some takeaways

  1. Concurrency is hard to deal with, but queue.Queue helps.
  2. Reentrancy is hard to deal with, and Python helps you a lot less.
  3. If you're using queue.Queue on Python, beware of interacting with the queue in __del__, weakref callbacks, or signal handlers.

And by the way, if you enjoyed reading this and would like to hear about all the many ways I've screwed up my own software, sign up for my Software Clown newsletter. Every week I share one of my mistakes and how you can avoid it.

Update: Thanks to Maciej Fijalkowski for suggesting actually demonstrating the race condition, and pointing out that __del__ probably really shouldn't do anything. Thanks to Ann Yanich for pointing out a typo in the code.

16 Aug 2017 4:00am GMT

10 Aug 2017


Duncan McGreggor: Mastering matplotlib: Acknowledgments

The Book

Well, after nine months of hard work, the book is finally out! It's available both on Packt's site and Amazon.com. Getting up early every morning to write takes a lot of discipline, it takes even more to say "no" to enticing rabbit holes or herds of Yak with luxurious coats ripe for shaving ... (truth be told, I still did a bit of that).

The team I worked with at Packt was just amazing. Highly professional and deeply supportive, they were a complete pleasure with which to collaborate. It was the best experience I could have hoped for. Thanks, guys!

The technical reviewers for the book were just fantastic. I've stated elsewhere that my one regret was that the process with the reviewers did not have a tighter feedback loop. I would have really enjoyed collaborating with them from the beginning so that some of their really good ideas could have been integrated into the book. Regardless, their feedback as I got it later in the process helped make this book more approachable by readers, more consistent, and more accurate. The reviewers have bios at the beginning of the book -- read them, and look them up! These folks are all amazing!

The one thing that slipped in the final crunch was the acknowledgements, and I hope to make up for that here, as well as through various emails to everyone who provided their support, either directly or indirectly.

Acknowledgments

The first two folks I reached out to when starting the book were both physics professors who had published very nice matplotlib problems -- one set for undergraduate students and another from work at the National Radio Astronomy Observatory. I asked for their permission to adapt these problems to the API chapter, and they graciously granted it. What followed were some very nice conversations about matplotlib, programming, physics, education, and publishing. Thanks to Professor Alan DeWeerd, University of Redlands and Professor Jonathan W. Keohane, Hampden-Sydney College. Note that Dr. Keohane has a book coming out in the fall from Yale University Press entitled Classical Electrodynamics -- it will contain examples in matplotlib.

Other examples adapted for use in the API chapter included one by Professor David Bailey, University of Toronto. Though his example didn't make it into the book, it gets full coverage in the Chapter 3 IPython notebook.

For one of the EM examples I needed to derive a particular equation for an electromagnetic field in two wires traveling in opposite directions. It's been nearly 20 years since my post-Army college physics, so I was very grateful for the existence and excellence of SymPy which enabled me to check my work with its symbolic computations. A special thanks to the SymPy creators and maintainers.

Please note that if there are errors in the equations, they are my fault! Not that of the esteemed professors or of SymPy :-)

Many of the examples throughout the book were derived from work done by the matplotlib and Seaborn contributors. The work they have done on the documentation in the past 10 years has been amazing -- the community is truly lucky to have such resources at their fingertips.

In particular, Benjamin Root is an astounding community supporter on the matplotlib mail list, helping users of every level with all of their needs. Benjamin and I had several very nice email exchanges during the writing of this book, and he provided some excellent pointers, as he was finishing his own title for Packt: Interactive Applications Using Matplotlib. It was geophysicist and matplotlib savant Joe Kington who originally put us in touch, and I'd like to thank Joe -- on everyone's behalf -- for his amazing answers to matplotlib and related questions on StackOverflow. Joe inspired many changes and adjustments in the sample code for this book. In fact, I had originally intended to feature his work in the chapter on advanced customization (but ran out of space), since Joe has one of the best examples out there for matplotlib transforms. If you don't believe me, check out his work on stereonets. There are many of us who hope that Joe will be authoring his own matplotlib book in the future ...

Olga Botvinnik, a contributor to Seaborn and PhD candidate at UC San Diego (and BioEng/Math double major at MIT), provided fantastic support for my Seaborn questions. Her knowledge, skills, and spirit of open source will help build the community around Seaborn in the years to come. Thanks, Olga!

While on the topic of matplotlib contributors, I'd like to give a special thanks to John Hunter for his inspiration, hard work, and passionate contributions which made matplotlib a reality. My deepest condolences to his family and friends for their tremendous loss.

Quite possibly the tool that had the single-greatest impact on the authoring of this book was IPython and its notebook feature. This brought back all the best memories from using Mathematica in school. Combined with the Python programming language, I can't imagine a better platform for collaborating on math-related problems or producing teaching materials for the same. These compliments are not limited to the user experience, either: the new architecture using ZeroMQ is a work of art. Nicely done, IPython community! The IPython notebook index for the book is available in the book's Github org here.

In Chapters 7 and 8 I encountered a bit of a crisis when trying to work with Python 3 in cloud environments. What was almost a disaster ended up being rescued by the work that Barry Warsaw and the rest of the Ubuntu team did in Ubuntu 15.04, getting Python 3.4.2 into the release and available on Amazon EC2. You guys saved my bacon!

Chapter 7's fictional case study examining the Landsat 8 data for part of Greenland was based on one of Milos Miljkovic's tutorials from PyData 2014, "Analyzing Satellite Images With Python Scientific Stack". I hope readers have just as much fun working with satellite data as I did. Huge thanks to NASA, USGS, the Landsat 8 teams, and the EROS facility in Sioux Falls, SD.

My favourite section in Chapter 8 was the one on HDF5. This was greatly inspired by Yves Hilpisch's presentation "Out-of-Memory Data Analytics with Python". Many thanks to Yves for putting that together and sharing with the world. We should all be doing more with HDF5.

Finally, and this almost goes without saying, the work that the Python community has done to create Python 3 has been just phenomenal. Guido's vision for the evolution of the language, combined with the efforts of the community, have made something great. I had more fun working on Python 3 than I have had in many years.

10 Aug 2017 4:12am GMT

Itamar Turner-Trauring: Python decorators, the right way: the 4 audiences of programming languages

Python decorators are a useful but flawed language feature. Intended to make source code easier to write, and a little more readable, they neglect to address another use case: that of the programmer who will be calling the decorated code.

If you're a Python programmer, the following post will show you why decorators exist, and how to compensate for their limitations. And even if you're not a Python programmer, I hope to demonstrate the importance of keeping in mind all of the different audiences for the code you write.

Why decorators exist: authoring and reading code

A programming language needs to satisfy four different audiences:

  1. The computer which will run the source code.
  2. The author, the programmer writing the source code.
  3. A future reader of the source code.
  4. A future caller of the source code, a programmer who will write code that calls functions and classes in the source code.

Python decorators were created for authors and readers, but neglect the needs of callers. Let's start by seeing what decorators are, and how they make it easier to author and read code.

Imagine you want to emulate the Java synchronized keyword: you want to run a method of a class with a lock held, so only one thread can call the method at a time. You can do so with the following code, where the synchronized function creates a new, replacement method that wraps the given one:

from threading import Lock

def synchronized(function):
    """
    Given a method, return a new method that acquires a
    lock, calls the given method, and then releases the
    lock.
    """
    def wrapper(self, *args, **kwargs):
        """A synchronized wrapper."""
        with self._lock:
            return function(self, *args, **kwargs)
    return wrapper

You can then use the synchronized utility like so:

class ExampleSynchronizedClass:
    def __init__(self):
        self._lock = Lock()
        self._items = []

    # Problematic usage:
    def add(self, item):
        """Add a new item."""
        self._items.append(item)
    add = synchronized(add)

As an author this usage is problematic: you need to type "add" twice, leading to a potential for typos. As a reader of the code you also only learn that add() is synchronized at the end, rather than the beginning. Python therefore provides the decorator syntax, which does the exact same thing as the above but more succinctly:

class ExampleSynchronizedClass:
    def __init__(self):
        self._lock = Lock()
        self._items = []

    # Nicer decorator usage:
    @synchronized
    def add(self, item):
        """Add a new item."""
        self._items.append(item)

Where decorators fail: calling code

The problem with decorators is that they fail to address the needs of programmers calling the decorated functions. As a user of ExampleSynchronizedClass you likely want your editor or IDE to show the docstring for add, and to detect the appropriate signature. Likewise if you're writing documentation and want to automatically generate an API reference from the source code.

But in fact, what you get is the signature, name and docstring for the wrapper function:

>>> help(ExampleSynchronizedClass.add)
Help on method wrapper in module synchronized:

wrapper(self, *args, **kwargs) unbound synchronized.ExampleSynchronizedClass method
    A synchronized wrapper.

To solve this Python provides a utility decorator called functools.wraps, that copies attributes like name and docstring from the wrapped function. We change the implementation of the decorator:

from threading import Lock
from functools import wraps

def synchronized(function):
    """
    Given a method, return a new method that acquires a
    lock, calls the given method, and then releases the
    lock.
    """
    @wraps(function)
    def wrapper(self, *args, **kwargs):
        """A synchronized wrapper."""
        with self._lock:
            return function(self, *args, **kwargs)
    return wrapper

And now we get better help:

Help on method add in module synchronized:

add(self, item) unbound synchronized.ExampleSynchronizedClass method
    Add a new item.

In versions of Python before 3.4, the signature will still be wrong: it's still the signature of the wrapper, not the underlying function. If you want to support older versions of Python, one solution is to use a 3rd party library called wrapt. We redefine our decorator once more, this time using wrapt instead of functools.wraps:

import wrapt
from threading import Lock

@wrapt.decorator
def synchronized(function, self, args, kwargs):
    """
    Given a method, return a new method that acquires a
    lock, calls the given method, and then releases the
    lock.
    """
    with self._lock:
        return function(*args, **kwargs)

Beyond supporting older versions of Python, wrapt also has the benefit of being more succinct.
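As a quick check, with the wrapt version an interactive session should now report the real signature (Python 3 shown):

>>> from inspect import signature
>>> signature(ExampleSynchronizedClass.add)
<Signature (self, item)>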

Addressing all audiences

While functools.wraps and wrapt do the trick, they still require you to remember to use them every time you define a new decorator. Arguably this is a failure in the Python language: it would've been more elegant to build the equivalent functionality into the @ syntax in the language itself, rather than relying on library code to fix it.

When you are writing a library, or perhaps even designing a programming language, it's always worth keeping in mind that you need to support four distinct audiences: the computer, authors, readers and callers. And if you're a Python programmer creating a decorator, do use wrapt: it'll make your callers happier, and since it's also more succinct it will also make life a little easier for your readers.

Updated: Noted Python 3.4 does do signatures, and tried to make the issue with the flaw more explicit. Thanks to Kevin Granger and hwayne for suggestions.

10 Aug 2017 4:00am GMT

08 Aug 2017


Moshe Zadka: Python as a DSL

This is a joint post by Mark Williams and Moshe Zadka. You are probably reading it on one of our blogs -- if so, feel free to look at the other blog. We decided it would be fun to write a post together and see how it turns out. We definitely had fun writing it, and we hope you have fun reading it.

Introduction

A Domain Specific Language is a natural solution to many problems. However, creating a new language from whole cloth is both surprisingly hard and, more importantly, surprisingly hard to get right.

One needs to come up with a syntax that is easy to learn, easy to get right, hard to get wrong, and has the ability to give meaningful errors when incorrect input is given. One needs to carefully document the language, supplying at least a comprehensive reference, a tutorial, and a best practices guide, all with examples.

On top of this, one needs to write a toolchain for the language that is as high quality as the one users are used to from other languages.

All of this raises a tempting question: can we use an existing language? In this manner, many languages have been used, or abused, as domain specific languages -- Lisp variants (such as Scheme) were among the first to be drafted, but were quickly followed by languages like TCL, Lua, and Ruby.

Python, being popular in quite a few niches, has also been used as a choice for things related to those niches -- the configuration format for Jupyter, the website structure specification in Pyramid, the build directives for SCons, and the target specification for Pants.

In this post, we will show examples of Python as a Domain Specific Language (or DSL) and explain how to do it well -- and how to avoid doing it badly.

As programmers we use a variety of languages to solve problems. Usually these are "general purpose" languages, or languages whose design allows them to solve many kinds of problems equally well. Python certainly fits this description. People use it to solve problems in astronomy and biology, to answer questions about data sets large and small, and to build games, websites, and DNS servers.

Python programmers know how much value there is in generality. But sometimes that generality makes solving a problem tedious or otherwise difficult. Sometimes, a problem or class of problems requires so much set up, or has so many twists and turns, that its obvious solution in a general purpose language becomes complicated and hard to understand.

Domain specific languages are languages that are tailored to solve specific problems. They contain special constructions, syntax, or other affordances that organize patterns common to the problems they solve.

Emacs Lisp, or Elisp, is a Domain Specific Language focused on text editing. Emacs users can teach Emacs to do novel things by extending the editor with Elisp.

Here's an example of an Elisp function that swaps ' with " and vice-versa when the cursor is inside a Python string:

(defun python-swap-quotes ()
  "Swap single and double quotes."
  (interactive)
  (save-excursion
    (let ((state (syntax-ppss)))
      (when (eq 'string (syntax-ppss-context state))
        (let* ((left (nth 8 state))
               (right (1- (scan-sexps left 1)))
               (newquote (if (= ?' (char-after left))
                             ?\" ?')))
          (dolist (loc (list left right))
            (goto-char loc)
            (delete-char 1)
            (insert-char newquote 1)))))))

This is clearly Lisp code, and parts of it, such as defining a function with defun or variables with let, is not specific to text editing or even Emacs.

(interactive), however, is a special extension to Elisp that makes the function that encloses it something a user can assign to a keyboard shortcut or select from a menu inside Emacs. Similarly, (save-excursion ...) ensures that the file the user is editing and the location of the cursor are restored after the code inside is run. This allows the function to jump around within a file or even multiple files without disturbing a user's place.

Lots of Elisp code makes use of special extensions, but Python programmers don't complain about their absence, because they're of no use outside Emacs. That specialization makes Elisp a DSL.

The language of Dockerfiles is also a domain specific language. Here's a simple hello world Dockerfile:

FROM scratch
COPY hello /
ENTRYPOINT ["/hello"]

The word that begins each line instructs Docker to perform some action on the arguments that follow, such as copying the file hello from the current directory into the image's root directory. Some of these commands have meaning specifically to Docker, such as the FROM command to underlay the image being built with a base image.

Note that unlike Elisp, Dockerfiles are not Turing complete, but both are DSLs. Domain specificity is distinct from mathematical concepts like decidability. It's a term we use to describe how specialized a language is to its problem domain, not a theoretical Computer Science term.

Code written in a domain specific language should be clearer and easier to understand because the language focuses on the domain, while the programmer focuses on the specific problem.

The Elisp code won't win any awards for elegance or robustness, but it benefits from the brevity of (interactive) and (save-excursion ..). Most of the function consists of the querying and computation necessary to find and rewrite Python string literals. Similarly, the Dockerfile doesn't waste the reader's attention on irrelevant details, like how a base image is found and associated with the current image. These DSLs keep their programs focused on their problem domains, making them easier to understand and extend.

Naive Usage of Python as a DSL

Programmers describe things that hide complexity behind a dubiously simple facade as magic. For some reason, when the idea of using Python as a DSL first comes up, many projects choose the strategy we will call "magical execution context". It is more common in projects written in C/C++ which embed Python, but happens quite a bit in pure-Python projects.

The prototypical code that creates a magical execution context might look something like:

namespace = dict(DomainFunction1=my_domain_function1,
                 DomainFunction2=my_domain_function2)
with open('Domainspecificfile') as fp:
    source = fp.read()
exec(source, namespace)
do_something_with(namespace['special_name'])
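
For concreteness, a hypothetical Domainspecificfile consumed by the sketch above might contain something like this (the function names come from the injected namespace; the arguments are made up):

# No imports: DomainFunction1 and DomainFunction2 exist only because
# the runtime injected them into this file's execution namespace.
DomainFunction1("some input")
special_name = DomainFunction2("other input")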

Real-life DSLs usually have more names in their magical execution contexts (often ranging in the tens or sometimes hundreds), and DSL runtimes often have more complicated logic around finding the files they parse. However, this platonic example is useful to keep in mind when reading through the concrete examples.

While various other projects were automatable with Python, SCons might be the oldest surviving project where Python is used as a configuration language. It also happens to be implemented in Python -- but aside from making the choice of Python as a DSL easier to implement, it has no bearing on our discussion today.

An SCons file might look like this:

src_files = Split("""main.c
                     file1.c
                     file2.c""")
Program('program', src_files)

Code can also be imported from other files:

SConscript(['drivers/SConscript',
            'parser/SConscript',
            'utilities/SConscript'])

Note that it is not possible, via this method, to reuse any logic other than build settings across the files -- a function defined in one of them is not available anywhere else.

At 12 years old, Django is another venerable Python project, and like the similarly venerable Ruby on Rails, it's no stranger to magic. Once upon a time, Django's database interaction APIs were magical enough that they constituted a kind of domain-specific language with a magical execution context.

Like modern Django, you would define your models by subclassing a special class, but unlike modern Django, they were more than just plain old Python classes.

A Django application in a module named best_sellers.py might have had a model that looked like this:

from django.core import meta

class Book(meta.Model):
      name = meta.CharField(maxlength=70)
      author = meta.CharField(maxlength=70)
      sold = meta.IntegerField()
      release_date = meta.DateTimeField(default=meta.LazyDate())

      def get_best_selling_authors(self):
          cursor = db.cursor()
          cursor.execute("""
          SELECT author FROM books WHERE release_date > '%s'
          GROUP BY author ORDER BY sold DESC
          """ % (db.quote(datetime.datetime.now() - datetime.timedelta(weeks=1)),))
          return [row[0] for row in cursor.fetchall()]

      def __repr__(self):
          return self.full_name

A user would then use it like so:

from django.models.best_sellers import books
print books.get_best_selling_authors()

Django transplanted the Book model into its own magic models module and renamed it books. Note the subtle transformation in the midst of more obvious magic: the Book model was lowercased and automatically pluralized.

Two magic globals were injected into the model's instance methods: db, the current database connection, and datetime, the Python standard library module. That's why our example module doesn't have to import them.

The intent was to reduce boilerplate by exploiting Python's dynamism. The result, however, diverged from Python's expected behaviors and also invented new, idiosyncratic boilerplate; in particular, the injection of special globals prevented methods from accessing variables defined in their own source modules, so methods had to directly import any module they used, forcing programmers to repeat themselves.

Django's developers came to see these features as "warts" and removed them before the 0.95 release. It's safe to say that the "magic-removal" process succeeded in improving Django's usability.

Python has well-documented built-ins. People who read Python code are, usually, familiar with those. Any symbol which is not a built-in or a reserved word is imported.

Any DSL will have its own, extra built-ins. Ideally, those are well documented -- but even when they are, this is a source of documentation separate from the host language. Moreover, code written against a magical execution context can never be used by anything outside the DSL; a good example of such potential reuse is unit testing the code. Once a DSL catches on, it often inspires the creation of vast amounts of code. The example of Elisp is particularly telling.

Another problem with such code is that it's often not obvious what the code itself is allowed to import or call. Is it safe to do long-running operations? If logging to a file, is logging properly set up? Will the code double-log messages, or does the runtime cache the first time it is used? As an example, there are a number of questions on Stack Overflow about how to share code between SCons files, with explanations of the trade-offs between using an SConscript file and using Python modules and import.

Last, but not least, other Python code often implicitly assumes that functions and classes are defined in modules. This means either that it is ill-advised to define such things in the DSL -- perhaps defining classes leads to a memory leak, because the contents are exec'd multiple times -- or, worse, that random functionality will break. For example, do tracebacks work correctly? Does pickle?
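
As a minimal sketch of the pickle problem (a hypothetical illustration, not code from any of the projects above): a class defined inside an exec'd namespace cannot be found by the module-and-name lookup pickle performs, so pickling its instances fails:

import pickle

source = """
class Foo(object):
    pass
"""
namespace = {}
exec(source, namespace)
foo = namespace["Foo"]()
# Raises PicklingError: pickle looks the class up by module and name,
# and no importable module actually defines Foo.
pickle.dumps(foo)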

A New Hope

As seen from the examples of SCons and old, magical Django, naively using Python as a DSL is problematic. It gives up a lot of the benefits of using a pre-existing language, and results in something that is in the Python uncanny valley -- just close enough to Python that the distinctions result in a sense of horror, not cuteness.

One way to avoid the uncanny valley is to step further away and avoid confusion -- implement a little language using PyParsing that is nothing like Python. But valleys have two sides. We can solve the problem by just using pure, unadulterated Python. It turns out that removing an import statement at the top of the file does not reduce much overhead when specializing to a domain.

We explore, by example, good ways to use Python as a DSL. We start by showing how even a well-written module, taking advantage of some of the power of Python, can create a de-facto DSL. Taking it to the next level, frameworks (which call user code) can also be used to build DSLs in Python. Most powerfully, especially when combined with libraries and frameworks, Python plugin systems can be used to avoid even the need for a user-controlled entry point, allowing DSLs that can be invoked from an arbitrary program.

Python is a flexible language and subtle use of its features can result in a flexible DSL that still looks like Python.

We explore four examples of such DSLs -- NumPy, Stan, Django ORM, and Pyramid.

NumPy

NumPy has the advantage of having been around since the dawn of Python, preceded by the Numeric library, on which it was based. Thanks to that long lineage, it has managed to exert some influence on Python core's syntax, motivating the addition of the Ellipsis type and the @ operator.

Taking advantage of both of those, as well as combinations of things that already exist in Python, NumPy is essentially a DSL for performing multi-dimensional calculations.

As an example,

x[4,...,5,:]

lowers the dimension of x by 2, killing the first and next-to-last dimension. How does it work? We can explore what happens using this proof-of-concept:

class ItemGetterer(object):
    def __getitem__(self, idx):
        return idx

x = ItemGetterer()
print(x[4,...,5,:])

This prints (4, Ellipsis, 5, slice(None, None, None)).

In NumPy, the __getitem__ method expects tuples, and will parse them for numbers, the Ellipsis object, and slice objects -- and then apply them to the array.

In addition, overriding the methods corresponding to the arithmetic operators, known as operator overloading, allows users of NumPy to write code that looks like the corresponding math expression.
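
For example (using nothing but real NumPy; the array values are arbitrary), arithmetic on arrays reads like the underlying math:

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

print(a * 2 + b)  # elementwise arithmetic, via __mul__ and __add__
print(a @ b)      # matrix multiplication, via __matmul__, the @ operator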

Stan

Stan is a way to produce XML documents using pure Python syntax. This is often useful in web frameworks, which need to produce HTML.

For illustration, here is an example stan-based program:

from nevow import flat, tags, stan

video = stan.Tag('video')

aDocument = tags.html[
                tags.head[
                    tags.title["Title"]
                ],
                tags.body[
                    tags.h1["Heading" ],
                    tags.p(class_="life")["A paragraph about life."],
                    video["Your video here!"],
                ]
            ]
with open('output.html', 'w') as fp:
    fp.write(flat.flatten(aDocument))

The tags module has a few popular tags. Those are instances of the stan.Tag class. If a new tag is needed, for example the <video> tag above, one can be added locally.

This is completely valid Python, without any magical execution contexts, in a regular importable module -- which allows easy generation of HTML.

As an example of the advantages of making this a regular Python execution context, we can see the benefits of dynamically generating HTML:

from nevow import flat, tags
bullets = [tags.li["bullet {}".format(i)] for i in range(10)]
aDocument = tags.html[
                tags.body[
                    tags.ul[bullets]
                ]
            ]
with open('output.html', 'w') as fp:
    fp.write(flat.flatten(aDocument))

In more realistic scenarios, this would be based on a database call, or a call to some microservice. Because stan is just pure Python code, it is easy to integrate into whatever framework expects it -- it can be returned from a function, or set as an object attribute.

The line between "taking advantage of Python syntax and magic method overriding" and "abusing Python syntax" is sometimes subtle and always at least partially subjective. However, Python does allow surprising flexibility when it comes to using pieces of the syntax for new purposes.

This gives great powers to mere library authors, without any need for esoterica such as pushing and pulling variables into dictionaries before or after exec'ing code. The with keyword, which we have not covered here, also often comes in handy for building DSLs in Python that do not need magic to work.
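
As a hypothetical sketch of such a with-based DSL (not from any real library), a context manager can scope nested structure so the code reads like the document it builds:

import contextlib

class Document(object):
    def __init__(self):
        self.lines = []
        self.depth = 0

    @contextlib.contextmanager
    def section(self, title):
        # Entering the block opens a section; leaving it closes the scope.
        self.lines.append("  " * self.depth + title)
        self.depth += 1
        yield
        self.depth -= 1

    def text(self, body):
        self.lines.append("  " * self.depth + body)

doc = Document()
with doc.section("Outline"):
    with doc.section("Introduction"):
        doc.text("Why DSLs?")
print("\n".join(doc.lines))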

Django ORM

Operator overloading is one way Python allows programmers to imbue existing syntax with new, domain-specific semantics. When those semantics describe data with a repeated structure, Python's class system provides a natural model, and metaclasses allow you to extend that model to suit your purpose. This makes them a power tool for implementing Python DSLs.

Object-relational mapping (ORM) libraries often use metaclasses to ease defining and querying database tables. Django's Model class is the canonical example. Note that the API we're about to describe is part of modern, post-magic-removal Django!

Consider the models defined in Django's tutorial:

from django.db import models


class Question(models.Model):
    question_text = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')


class Choice(models.Model):
    question = models.ForeignKey(Question, on_delete=models.CASCADE)
    choice_text = models.CharField(max_length=200)
    votes = models.IntegerField(default=0)

Each class encapsulates knowledge about, and actions on, a database table. The class attributes map to columns and inter-table relationships, which power data manipulation and from which Django derives migrations. Django's models turn classes into a domain-specific language for database definitions and logic.

Here's what the generated DDL might look like:

--
-- Create model Choice
--
CREATE TABLE "polls_choice" (
    "id" serial NOT NULL PRIMARY KEY,
    "choice_text" varchar(200) NOT NULL,
    "votes" integer NOT NULL
);
--
-- Create model Question
--
CREATE TABLE "polls_question" (
    "id" serial NOT NULL PRIMARY KEY,
    "question_text" varchar(200) NOT NULL,
    "pub_date" timestamp with time zone NOT NULL
);

A metaclass plays a critical role in this DSL by instrumenting Model subclasses. It's this metaclass that adds the objects class attribute, a Manager instance that mediates ORM queries, and the class-specific DoesNotExist and MultipleObjectsReturned exceptions.
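
To see the mechanism in miniature, here is a toy sketch (hypothetical, and far simpler than Django's real metaclass) of injecting class-level attributes at definition time:

class ModelMeta(type):
    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        # Give every class its own exception, the way Django's metaclass
        # gives every Model subclass its own DoesNotExist.
        cls.DoesNotExist = type("DoesNotExist", (Exception,), {})
        return cls

class Model(metaclass=ModelMeta):
    pass

class Question(Model):
    pass

# Each class got a distinct exception injected at definition time:
assert Question.DoesNotExist is not Model.DoesNotExist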

Because metaclasses control class creation, they're an obvious way to inject these kinds of class-level attributes. For the same reason, but less obviously, they also provide a place to run initialization hooks that should run only once in a program's lifetime. Classes are generally defined at module level, so classes are created when their modules are first imported. Because of Python's module caching, this means that metaclasses usually run early and rarely. Django's DSL makes use of this assumption to register models with their applications upon creation.

Running code this early can lead to strange issues, which makes metaclasses tricky to use correctly. They also rely on subclassing, which is often considered harmful; that, and their use in ORMs, which are also often considered harmful, might seem to limit their usefulness. However, a base class whose purpose is to inject a metaclass avoids many of the problems associated with subclassing, since little to no functionality is actually inherited. Django weighs the benefits of familiar syntax against the costs of subclassing, resulting in a data-definition DSL that's ergonomic for Python programmers.

Despite their complexity and shortcomings, metaclasses provide a succinct way to describe and manipulate all kinds of data, from wire protocols to XML documents. They can be just the trick for data-focused DSLs.

Pyramid

Pyramid allows defining web application logic, as opposed to the routing details, anywhere. It matches a function to a route based on the route name, as registered with the Configurator.

from pyramid.config import Configurator
from pyramid.view import view_config

## The function definition can go anywhere
@view_config(route_name='home', renderer='string')
def my_home(context, request):
    return 'OK'

## This goes in whatever file we pass to our WSGI host
config = Configurator()
config.add_route('home', '/')
config.scan('.')
app = config.make_wsgi_app()

The builder pattern, as seen here, allows gradually creating an application. The methods on Configurator, as well as the decorators such as view_config, are effectively a DSL that helps build web applications.

Plugins

When code lives in real Python modules, and uses real Python APIs, it is sometimes useful for it to be executed automatically based on context. After all, one thing that DSL systems like SCons give us is automatically executing the SConscript when we run scons at the command line.

One tool that can be used for this is a plugin system. While a comprehensive review of plugin systems is beyond our scope here, we will give a few examples of using such systems for specific domains.

One of the oldest plugin systems is twisted.plugin. While it can be used as a generic plugin system, the main usage of it -- and a good case study of using it as a plugin system for DSLs -- is to extend the twist command line. These are also known as tap plugins, for historical reasons.

Here is a minimal example of a Twisted tap plugin:

from zope.interface import implementer
from twisted.application import internet
from twisted.application.service import IServiceMaker
from twisted.internet.protocol import Factory
from twisted.plugin import IPlugin
from twisted.protocols.wire import Echo
from twisted.python import usage
@implementer(IServiceMaker, IPlugin)
class SimpleServiceMaker(object):
    tapname = "simple-dsl"
    description = "The Simplest DSLest Plugin"

    class options(usage.Options):
        optParameters = [["port", "p", 1235, "Port number."]]

    def makeService(self, options):
        return internet.TCPServer(int(options["port"]),
                                  Factory.forProtocol(Echo))

serviceMaker = SimpleServiceMaker()

In order to be a valid plugin, this file must be placed under twisted.plugins. The usage.Options class defines a DSL, of sorts, for describing command-line options. We used only a small part of it here, but it is both powerful and flexible.

Note that this is completely valid Python code -- in fact, it will be imported as a module. This allows us to import it as well, and to test it using unit tests.
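
For instance, assuming the plugin above lives at twisted/plugins/simple_dsl.py (the module name is hypothetical), a trial test might look like this:

from twisted.trial import unittest
from twisted.plugins import simple_dsl

class SimpleServiceMakerTests(unittest.TestCase):
    def test_makeService_respects_port(self):
        options = simple_dsl.SimpleServiceMaker.options()
        options.parseOptions(["--port", "1234"])
        service = simple_dsl.serviceMaker.makeService(options)
        # TCPServer keeps its positional arguments; the first is the port.
        self.assertEqual(service.args[0], 1234)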

In fact, because this is regular Python code, usually serviceMakers are created using a helper class -- twisted.application.service.ServiceMaker. The definition above, while correct, is not idiomatic.

The gather library does not have a DSL. It does, however, function well as an agnostic plugin discovery mechanism. Because of that, it can be built into other systems -- that do provide a Pythonic DSL -- to serve as the autodiscovery mechanism.

# In a central module:
ITS_A_DSLISH_FUNCTION = gather.Collector()

## Define the DSL as
## -- functions that get two parameters
##    -- conf holds some general configuration
##    -- send_result is used to register the result
def run_the_function_named(name, conf, send_result):
    res = ITS_A_DSLISH_FUNCTION.collect()
    return res[name](conf, send_result)

# In a module registering a DSL function
@ITS_A_DSLISH_FUNCTION.register(name='my_dslish_name')
def some_func(conf, send_result):
    with conf.temp_stuff() as some_thing:
         send_result(some_thing.get_value())

Conclusion

Python is a good language to use for DSLs. So good, in fact, that attrs, a DSL for defining classes, has achieved enormous popularity. Operator overloading, decorators, the with statement, and generators, among other things, combine to allow novel usage of the syntax in specific problem domains. The existence of a big body of documentation of the language and its best practices, along with a thriving community of practitioners, is also an asset.

In order to take advantage of all of those, it is important to use Python as Python -- avoid magical execution contexts and novel input search algorithms in favor of the powerful code organization model Python already has -- modules.

Most people who want to use Python as a DSL are also Python programmers. Consider allowing your program's users to use the same tools that have made you successful.

As Glyph said in a related discussion, "do you want to confuse, surprise, and annoy people who may be familiar with Python from elsewhere?" Assuming the answer is "no", consider using real modules as your DSL mechanism.

08 Aug 2017 4:30am GMT

07 Aug 2017

feedPlanet Twisted

Itamar Turner-Trauring: Can we please do useful things with software?

If you want to read something that goes from depressing to exciting and back again every other paragraph, read the monthly Who's Hiring thread on Hacker News.

When I was younger I didn't think much about what software I was writing: I wanted to work on interesting problems, and get paid for doing it. So at one point I accepted a job at a financial trading platform, a job I would never take today; luckily I ended up walking away from that offer, in the end.

These days technical problems are still just as fun, but they're no longer sufficient: I want to do something useful, something that makes the world a tiny bit better. And so it's sad to see some of the software we programmers are spending our time writing, and exciting to see the useful ways in which software is being applied.

Less of this, please

If you need a job, you need a job, and as long as you're not doing something you consider unethical or immoral you do what you need to to get by. But if you have the opportunity, why not also do something useful, something that makes the world better?

Adtech? I mean, yes, advertising is kinda sorta maybe useful, if you squint, but at this point I find browsing without an ad blocker positively unpleasant. I miss the days when Google ad results were actually helpful.

Do we need to spend more time making it easy for brands to do anything? Can brands do anything? Does Coca-Cola have a giant glowing disembodied Coke avatar, ensconced deep within the bowels of Coca-Cola Worldwide Headquarters, sending out red ectoplasmic tentacles to type out text into a SaaS written by programmers passionate about user engagement? I'd love to sit in on some of your customer interviews, if so, or perhaps just watch a recording from a safe distance.

While we're at it, can we stop being passionate?

Cryptocurrencies? Do you want to be responsible when the bubble bursts and it turns out capital flight from China is not a good basis for a currency? With tulips at least you had flowers at the end, even if they were bad investments; with cryptocurrencies people will be left with some digits on a USB drive. Woo.

Is it actually necessary to take an existing financial product (annuities, let's say) and call them something else (pensions, just for example)? Annuities and pensions have very different risk profiles; the former has individual company risk that the latter doesn't. Is this really worthwhile innovation?

Having previously lived in a different country I do realize the American medical system is a total and utter fuckup. But couldn't we just switch to single-payer like every other developed country, instead of writing software to put bandaids on a chest wound?

And the world probably doesn't need another startup whose business model involves taking VC money and giving it to poorly paid contractors in order to make the lives of the upper middle-class a minuscule increment more comfortable. How about a business model involving paying good wages to do something more valuable?

Do something useful

My definition of usefulness is personal and idiosyncratic, of course; I expect you will disagree with at least some of the list above. But there are also plenty of companies that sound like they're building something almost anyone would find worthwhile.

"Technology to investigate pressure transients and flow instabilities in water supply networks"? May your hiring pipeline always be full.

"Reducing paperwork"? Sign me up (as long as the form is short).

Next time you're looking for job, spend a little time upfront thinking about what you think makes a company useful. Interesting technical problems are great, getting paid well is pretty damn good, and a short commute is a joy. But working on something that makes the world a better place will make your own job that much better.

07 Aug 2017 4:00am GMT

03 Aug 2017

feedPlanet Twisted

Itamar Turner-Trauring: Staying focused: it's not just your environment

To be a productive programmer you need to stay focused. Deep-diving into TV Tropes, chatting with your friends, or reading up on that fancy new web framework might be fun, often even educational, but they won't get that feature you're working on out the door.

And there are harder to spot distractions, digressions masquerading as necessary work: a fun bug that is less important than the one you're working on, a technical detail that doesn't really matter, a task that can be put off until later. In a world full of distractions, how can you stay focused?

One obvious influence on your ability to focus is your environment. Is it noisy or quiet, are you constantly interrupted or do you get time to yourself? But whatever environment is best for you, even working in your ideal environment may not suffice: you can still suffer from distraction and lack of focus.

If you want to stay focused you will need, beyond a good environment:

  1. The motivation to do your work, which requires you to understand both yourself and your task.
  2. Coping techniques to help you deal with the fact that focus is a finite resource.

Motivation: why are you doing this?

If you don't care about your task, then you'll have a hard time focusing. But once you do understand why you're doing what you do, you'll have an easier time staying on task, and you'll have an easier time distinguishing between necessary subtasks and distracting digressions.

Why are you doing what you're doing at work? In part, there are general motivations that apply to all your work on the job: getting paid, for example, or keeping your manager happy.

The problem with these motivations is that they are extrinsic: they come from the outside. Intrinsic motivations tend to work better: believing in the organization's goal, for example, or enjoying the problem you're solving.

These general motivations will not suffice, however, if you don't understand why you're doing a particular task. Why does this data need to be collected? Why do you need to debug this seemingly impossible edge case; does it really matter?

Applying motivation: will this further your goal?

So how do you use motivation to stay focused?

  1. Figure out the motivations for your task.
  2. Strengthen your motivation.
  3. Judge each part of your work based on your motivations.

1. Discovering your motivations

Start with the big picture: why are you working this job? Probably for the money, hopefully because you believe in the organization's goal, and perhaps for other reasons as well.

Then focus down on your particular task: why is it necessary? It may be that to answer this question you'll need to do more research, talking to the product owner who requested a feature, or the user who reported a bug. This research will, as an added bonus, also help you solve the problem more effectively.

Combine all of these and you will get a list of motivations that applies to your particular task. For example, let's say you're working on a bug in a flight search engine. Your motivations might be:

  1. Money: I work to make money.
  2. Organizational goal: I work here because I think helping people find cheap, convenient flights is worth doing.
  3. Task goal: This bug should be fixed because it prevents users from finding the most convenient flight on certain popular routes.
  4. Fun: This bug involves a challenging C++ problem I enjoy debugging.

2. Strengthening your motivations

Keeping your motivations in mind will help you avoid distractions, and the stronger your motivations the better you'll do. If your motivations are weak, there are different ways you can try to strengthen them.

3. Judging your work

As you go about solving your task you can use your motivations to judge whether a new potential subtask is worth doing. That is, your motivations can help prevent digressions, seemingly useful tasks that shouldn't actually be worked on.

Going back to the example above, imagine you encounter some interesting C++ language feature while working on the bug: it can be tempting to dive in. But judged by the four motivations it only serves the fourth, having fun, and likely won't further your other goals. So if the bug is urgent then you should probably wait until it's fixed to play around.

On the other hand, if you're working on a pointless feature, your sole motivation might be "keep my manager happy so I can keep getting paid." If you have two days to do the task, and it'll only take two hours to implement it, spending some time getting "distracted" learning a technical skill might help with a different motivation: switching to a more interesting position or job.

Coping with lack of focus

Even if you have an ideal environment and plenty of motivation, you will eventually run out of focus. This happens in two different dimensions:

  1. Time: Many programming tasks will take days or weeks to complete, and won't fit in the limited window you can stay focused at a time.
  2. Space: There's only so much code you can keep in your head at once, and most software projects will quickly exceed your limits. That means you can only focus on part of the code at a time.

You can only work around these limitations, using a variety of coping techniques.

Another coping technique I don't see used quite as often is writing everything down.

Write everything down

You're working on a hard bug: you're not sure what's going on or why the problem occurs, and when you do figure it out it's going to take a few days to implement. Along the way you will be interrupted by scheduled meetings, coworkers asking questions, your bladder, email, going home for the evening, a weekend vacation, two quick bugs, and a few hundred other distractions. Write everything down and distractions and interruptions will matter far less.

You start by trying out different hypotheses: maybe the bug is in this function, perhaps it's in the environment, maybe it's a difference in library versions… Write down all your hypotheses. That way when you get interrupted you won't forget about them.

You try one hypothesis, and it turns out to be wrong. Write that down so you don't forget and test it again. Eventually you figure out the real problem: write that down too. That way when you come in the next day you'll remember what you learned.

Discover another bug along the way? Write that down by filing a ticket, and move on. Have an idea for a feature? Write that down too.

Next you come up with a list of subtasks to actually implement the fix, and then write them down, marking them off as you implement them. You'll be grateful to your past self when you come back from the weekend and try to remember where you were.

In short: write everything down.

How to stay focused

To stay focused you need a good environment, motivation grounded in an understanding of yourself and your task, and coping techniques for when focus runs out.

PS: Want to learn more software engineering skills and techniques? I write a weekly email covering one of my mistakes and what you can learn from it.

03 Aug 2017 4:00am GMT

26 Jul 2017

feedPlanet Twisted

Moshe Zadka: Image Editing with Jupyter

With the news about MS Paint going away from the default MS install, it might be timely to look at other ways to edit images. The most common edit I need to do is to crop images -- and this is what we will use as an example.

My favorite image editing tool is Jupyter. Jupyter needs some encouragement to be an image editor -- and to easily open images. As is often the case, I have a non-pedagogical, but useful, preamble. The preamble turns Jupyter into an image editor.

from matplotlib.pyplot import imshow
import numpy
import PIL
import os

%matplotlib inline

def inline(some_image):
    imshow(numpy.asarray(some_image))

def open(file_name):
    return PIL.Image.open(os.path.expanduser(file_name))

With the boring part done, it is time to edit some images! At the Shopkick birthday party, I had my caricature drawn. I love it -- but it carries a lot of baggage about the birthday party, which is irrelevant for uploading to Facebook.

I have downloaded the image from the blog. I use Pillow (the packaging fork of PIL) to open the image.

a=open("~/Downloads/weeeee.jpg")

Then I want to visually inspect the image inline:

inline(a)

I use the crop method, and directly inline it:

inline(a.crop((0,0,1500,1600)))

If this were longer, and more realistic, it would involve playing with the numbers back and forth -- and maybe resizing the image, or combining it with other images.
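
For example, a resize chains naturally onto the crop (the target size here is made up):

# Crop, then scale the result down by half before inspecting it inline
inline(a.crop((0, 0, 1500, 1600)).resize((750, 800)))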

The Pillow library is great, and this way we can inspect the results as we are modifying the image, allowing iterative image editing. For people like me, without a strong steady artist's hand to perfectly select the right circle, this solution works just great!

26 Jul 2017 5:20am GMT

21 Jul 2017

feedPlanet Twisted

Itamar Turner-Trauring: Incremental results, not incremental implementation

Update: Added section on iterative development.

You're working on a large project, bigger than you've ever worked on before: how do you ship it on time? How do you ensure it has all the functionality it needs? How do you design something that is too big to fit in your head?

My colleague Rafi Schloming, speaking in the context of the transition to microservices, suggests that focusing on incremental results is fundamentally better than focusing on incremental implementation. This advice will serve you well in most large projects, and to explain why I'd like to tell you the story of a software project I built the wrong way.

A real world example

The wrong way…

I once built a system for efficiently sending streams of data from one source to many servers; the resulting software was run by the company's ops team. Since I was even more foolish than I am now, I implemented it in the following order, based on the architecture I had come up with:

  1. First I implemented a C++ integration layer for the Python networking framework I was using, so I could write higher performance code.
  2. Then I implemented the messaging protocol and system, based on a research paper I'd found.
  3. Finally, I handed the code over to ops.

As you can see, I implemented my project based on its architecture: first the bottom layer, then the layers that built on top of it. Unfortunately, since I hadn't consulted ops enough about the design they then had to make some changes on their own. As a result, it took six months to a year until the code was actually being used in production.

…and the right way

How would I have built my tool to deliver incremental results?

  1. Build a working tool in pure Python. This would probably have been too slow for some of the higher-speed message streams.
  2. Hand initial tool over to ops. Ops could then start using it for slower streams, and provide feedback on the design.
  3. Next I would have fixed any problems reported by ops.
  4. Finally, I would rewrite the core networking in C++ for performance.

Notice that this is seemingly less efficient than my original plan, since it involves re-implementing some code. Nonetheless I believe it would have resulted in the project going live much sooner.

Why incremental results are better

Incremental results means you focus on getting results as quickly as possible, even if you can't get all the desired results with initial versions. That means delivering value earlier, reducing risk, and getting feedback sooner.

Beyond iterative development

"Iterative development" is a common, and good, suggestion for software development, but it's not quite the same as focusing on incremental results. In iterative development you build your full application end-to-end, and then in each released iteration you make the functionality work better. In that sense, the better alternative I was suggesting above could be seen as simply suggesting iterative development. But incremental results is a more broadly applicable idea than iterative development.

Incremental results are the goal; iterative development is one possible technique to achieve that goal. Sometimes you can achieve incremental results without iterative development.

Whenever you can, aim for incremental results: it will reduce the risks, and make your project valuable much earlier. It may mean some wasted effort, yes, as you re-implement certain features, but that waste is usually outweighed by the reduced risk and faster feedback you'll get from incremental results.

PS: I've made lots of other mistakes in my career. If you'd like to learn how to avoid them, sign up for my newsletter, where every week I write up one of my mistakes and how you can avoid it.

21 Jul 2017 4:00am GMT

20 Jul 2017

feedPlanet Twisted

Moshe Zadka: Anatomy of a Multi-Stage Docker Build

Docker, in recent versions, has introduced multi-stage build. This allows separating the build environment from the runtime environment much more easily than before.

In order to demonstrate this, we will write a minimal Flask app and run it with Twisted using its WSGI support.

The Flask application itself is the smallest demo app, straight from any number of Flask tutorials:

# src/msbdemo/wsgi.py
from flask import Flask
app = Flask("msbdemo")
@app.route("/")
def hello():
    return "If you are seeing this, the multi-stage build succeeded"

The setup.py file, similarly, is the minimal one from any number of Python packaging tutorials:

import setuptools
setuptools.setup(
    name='msbdemo',
    version='0.0.1',
    url='https://github.com/moshez/msbdemo',
    author='Moshe Zadka',
    author_email='zadka.moshe@gmail.com',
    packages=setuptools.find_packages(),
    install_requires=['flask'],
)

The interesting stuff is in the Dockerfile. It is interesting enough that we will go through it line by line:

FROM python:2.7.13

We start from a "fat" Python docker image -- one with the Python headers installed, and the ability to compile extensions.

RUN virtualenv /buildenv

We create a custom virtual environment for the build process.

RUN /buildenv/bin/pip install pex wheel

We install the build tools -- in this case, wheel, which will let us build wheels, and pex, which will let us build single file executables.

RUN mkdir /wheels

We create a custom directory to put all of our wheels. Note that we will not install those wheels in this docker image.

COPY src /src

We copy our minimal Flask-based application's source code into the docker image.

RUN /buildenv/bin/pip wheel --no-binary :all: \
                            twisted /src \
                            --wheel-dir /wheels

We build the wheels. We take care to manually build wheels ourselves, since pex, right now, cannot handle manylinux binary wheels.

RUN /buildenv/bin/pex --find-links /wheels --no-index \
                      twisted msbdemo -o /mnt/src/twist.pex -m twisted

We build the twisted and msbdemo wheels, together with any recursive dependencies, into a Pex file -- a single file executable.

FROM python:2.7.13-slim

This is where the magic happens. A second FROM line starts a new docker image build. The previous images are available -- but only inside this Dockerfile -- for copying files from. Luckily, we have a file ready to copy: the output of the Pex build process.

COPY --from=0 /mnt/src/twist.pex /root

The --from=0 indicates copying from a previously built image, rather than the so-called "build context". In theory, any number of builds can take place in one Dockerfile. While only the last one will actually result in a permanent image, the others are all available as targets for --from copying. In practice, two stages are usually enough.

ENTRYPOINT ["/root/twist.pex", "web", "--wsgi", "msbdemo.wsgi.app", \
            "--port", "tcp:80"]

Finally, we use Twisted as our WSGI container. Since we bound the Pex file to the -m twisted package execution, all we need to do is run the web plugin, ask it to run a WSGI container, and give it the logical (module) path to our WSGI app.

Using Docker multi-stage builds has allowed us to create a production Docker container based on a slim base image, containing little more than our single-file Pex executable.

The biggest benefit is that it let us do so with one Dockerfile, with no extra machinery.

20 Jul 2017 4:30am GMT

18 Jul 2017

feedPlanet Twisted

Glyph Lefkowitz: Beyond ThunderDock

This weekend I found myself pleased to receive a Kensington SD5000T Thunderbolt 3 Docking Station.

Some of its functionality was a bit of a weird surprise.

The Setup

Due to my ... accretive history with computer purchases, I have 3 things on my desk at home: a USB-C macbook pro, a 27" Thunderbolt iMac, and an older 27" Dell display, which is old enough at this point that I can't link it to you. Please do not take this to be some kind of totally sweet setup. It would just be somewhat pointlessly expensive to replace this jumble with something nicer. I purchased the dock because I want to have one cable to connect me to power & both displays.

For those not familiar, iMacs of a certain vintage [1] can be jury-rigged to behave as Thunderbolt displays with limited functionality (no access from the guest system to the iMac's ethernet port, for example), using Target Display Mode, which extends their useful lifespan somewhat. (This machine is still, relatively speaking, a powerhouse, so it's not quite dead yet; but it's nice to be able to swap in my laptop and use the big screen.)

The Link-up

On the back of the Thunderbolt dock, there are 2 Thunderbolt 3 ports. I plugged the first one into a Thunderbolt 3 to Thunderbolt 2 adapter which connects to the back of the iMac, and the second one into the Macbook directly. The Dell display plugs into the DisplayPort; I connected my network to the Ethernet port of the dock. My mouse, keyboard, and iPhone were plugged into the USB ports on the dock.

The Problem

I set it up and at first it seemed to be delivering on the "one cable" promise of thunderbolt 3. But then I switched WiFi off to test the speed of the wired network and was surprised to see that it didn't see the dock's ethernet port at all. Flipping wifi back on, I looked over at my router's control panel and noticed that a new device (with the expected manufacturer) was on my network. nmap seemed to indicate that it was... running exactly the network services I expected to see on my iMac. VNCing into the iMac to see what was going on, I popped open the Network system preference pane, and right there alongside all the other devices, was the thunderbolt dock's ethernet device.

The Punch Line

Despite the miasma of confusion surrounding USB-C and Thunderbolt 3 [2], the surprise here is that apparently Thunderbolt is Thunderbolt, and (for this device at least) Thunderbolt devices connected across the same bus can happily drive whatever they're plugged in to. The Thunderbolt 2 to 3 adapter isn't just a fancy way of plugging in hard drives and displays with the older connector; as far as I can tell all the functionality of the Thunderbolt interface remains intact as both "host" and "guest". It's like having an ethernet switch for your PCI bus.

What this meant is that when I unplugged everything and then carefully plugged in the iMac before the Macbook, it happily lit up the Dell display, and connected to all the USB devices plugged into the USB hub. When I plugged the laptop in, it happily started charging, but since it didn't "own" the other devices, nothing else connected to it.

Conclusion

This dock works a little bit too well; when I "dock" now I have to carefully plug in the laptop first, give it a moment to grab all the devices so that it "owns" them, then plug in the iMac, then use this handy app to tell the iMac to enter Target Display mode.

On the other hand, this does also mean that I can quickly toggle between "everything is plugged in to the iMac" and "everything is plugged in to the MacBook" just by disconnecting and reconnecting a single cable, which is pretty neat.


  1. Sadly, not the most recent fancy 5K ones.

  2. which are, simultaneously, both the same thing and not the same thing.

18 Jul 2017 7:11am GMT