11 Nov 2024
Planet Twisted
Glyph Lefkowitz: It’s Time For Democrats To Get More Annoying
Kamala Harris lost. Here we are. So it goes.
Are you sad? Are you scared?
I am very sad. I am very scared.
But, like everyone else in this position, most of all, I want to know what to do next.
A Mission For Progress
I believe that we should set up a missionary organization for progressive and liberal values.
In 2017, Kayla Chadwick wrote the now-classic article, "I Don't Know How To Explain To You That You Should Care About Other People". It resonated with millions of people, myself included. It expresses an exasperation with a populace that seems ignorant of economics, history, politics, and indeed unable to read the news. It is understandable to be frustrated with people who are exercising their electoral power callously and irresponsibly.
But I think in 2024, we need to reckon with the fact that we do, in fact, need to explain to a large swathe of the population that they should care about other people.
We had better figure out how to explain it soon.
Shared Values - A Basis for Hope
The first question that arises when we start considering outreach to the conservative-leaning or undecided independent population is, "are these people available to be convinced?".
To that, I must answer an unqualified "yes".
I know that some of you are already objecting. For those of us with an understanding of history and the mechanics of bigotry in the United States, it might initially seem like the answer is "no".
As the Nazis came to power in the 1920s, they were campaigning openly on a platform of antisemitic violence. Everyone knew what the debate was, and it was hard to claim that you didn't; in spite of some breathtakingly cowardly contemporaneous journalism, they weren't fooling anyone.
It feels ridiculous to say this, but Hitler did not have support among Jews.
Yet, after campaigning on a platform of defaming immigrants, and Mexican immigrants specifically, for a decade, a large part of what drove his victory is that Trump enjoyed a shockingly huge surge of support among the Hispanic population. Even some undocumented migrants - the ones most likely to be herded into concentration camps starting in January - are supporting him.
I believe that this is possible because, in order to maintain support of the multi-ethnic working-class coalition that Trump has built, the Republicans must maintain plausible deniability. They have to say "we are not racist", "we are not xenophobic". Incredibly, his supporters even say "I don't hate trans people" with startling regularity.
Most voters must continue to believe that hateful policies with devastating impacts are actually race-neutral, and are simply going to get rid of "bad" people. Even the ones motivated by racial resentment are mostly motivated by factually incorrect beliefs about racialized minorities receiving special treatment and resources which they are not in fact receiving.
They are victims of a disinformation machine. One that has rendered reality incomprehensible.
If you listen to conservative messaging, you can hear them referencing this all the time. Remember when JD Vance made that comment about Democrats calling Diet Mountain Dew racist?
Many publications wrote about this joke "bombing"1, but the kernel of truth within it is this: understanding structural bigotry in the United States is difficult. When we progressives talk about it, people who don't understand it think that our explanations sound ridiculous and incoherent.
There's a reason that the real version of critical race theory is a graduate-level philosophy-of-law course, and not a couple of catch phrases.
If, without context, someone says that "municipal zoning laws are racist", this makes about as much sense as "Diet Mountain Dew is racist" to someone who doesn't already know what "redlining" is.
Conservatives prey upon this confusion to their benefit. But they prey on this because they must do so. They must do so because, despite everything, hate is not actually popular among the American electorate. Even now, they have to be deceived into it.
The good news is that all we need to do is stop the deception.
Politics Matter
If I have sold you on the idea that a substantial plurality of voters are available to be persuaded, the next question is: can we persuade them? Do we, as progressives, have the resources and means to do so? We did lose, after all, and it might seem like nothing we did had much of an impact.
Let's analyze that assumption.
Across the country, Trump's margins increased. However, in the swing states, where Harris spent money on campaigning, his margins increased less than elsewhere. At time of writing, we project that the safe-state margin shift will be 3.55% towards Trump, and the swing-state margin shift will be 1.69%.
This margin was, sadly, too small for a victory, but it does show that the work mattered. Perhaps given more time, or more resources, it would have mattered just a little bit more, and that would have been decisive.
This is to say, in the places where campaign dollars were spent, even against the similar spending of the Trump campaign, we pushed the margin of support 1.86% higher within 107 days. So yes: campaigning matters. Which parts and how much are not straightforward, but it definitely matters.
This is a bit of a nonsensical comparison for a whole host of reasons2, but just for a ballpark figure, if we kept this pressure up continuously during the next 4 years, we could increase support for a democratic candidate by 25%.
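The extrapolation above can be made explicit. This is a sketch of the arithmetic only, not a rigorous model (the caveats in the footnote apply in full); the four-year linear projection is the same simplifying assumption the text makes:

```python
# Projected margin shifts toward Trump, per the figures above (percentage points).
safe_state_shift = 3.55   # no sustained campaign spending
swing_state_shift = 1.69  # heavy campaign spending

# The gap between the two is the apparent effect of 107 days of campaigning.
campaign_effect = safe_state_shift - swing_state_shift
print(round(campaign_effect, 2))  # 1.86

# Naive linear extrapolation: keep that pressure up continuously for 4 years.
four_years_in_days = 4 * 365.25
projected_shift = campaign_effect * (four_years_in_days / 107)
print(round(projected_shift))  # 25
```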
We Can Teach, Not Sell
Political junkies tend to overestimate the knowledge of the average voter; even when we are trying to compensate for that tendency, we still vastly overestimate how much the average voter knows about politics and policy. I suspect that you, dear reader, are a political junkie even if you don't think of yourself as one.
To give you a sense of what I mean, across the country, on Election day and the day after, there was a huge spike in interest for the Google query, "did Joe Biden drop out".
Consistently over the last decade, Democratic policies have been more popular than their Republican counterparts. Even deep red states, such as Kansas, often vote for policies supported by Democrats and opposed by Republicans.
This confusion about policy is not organic; it is not voters' fault. It is because Republicans constantly lie.
All this ignorance might seem discouraging, but it presents an opportunity: people will not sign up to be persuaded, but people do like being informed. Rather than proselytizing via a hard sales pitch, it should be possible to offer to explain how policy connects to elections. And this is made so much the easier if so many of these folks already generally like our policies.
The Challenge Is Enormous
I've listed some reasons for optimism, but that does not mean that this will be easy.
Republicans have a tremendously powerful, decentralized media apparatus that reinforces their culture-war messaging all the time.
After some of the post-election analysis, "The Left Needs Its Own Joe Rogan" is on track to become a cliché within the week.3 While I am deeply sympathetic to that argument, the right-wing media's success is not organic; it is funded by petrochemical billionaires.
We cannot compete via billionaire financing, and as such, we have to have a way to introduce voters to progressive and liberal media. Which means more voters need social connections to liberals and progressives.
Good Works
The Democratic presidential campaign alone spent a billion and a half dollars. And, as shown above, this spending can be persuasive, but it buys the persuasion itself and nothing more.
Better than spending all this money on telling people what good stuff we would do for them if we were in power, we could just show them, by doing good stuff. We should live our values, not just endlessly reiterate them.
A billion dollars is a significant amount of power in its own right.
For historical precedent, consider the Black Panthers' Free Breakfast For Children program. This program absolutely scared the shit out of the conservative power structure, to the point that Nixon's FBI literally raided them for giving out free food to children.
Religious missionaries, who are famously annoying, often offset their annoying-ness by doing charitable work in the communities they are trying to reach. A lot of the country that we need to reach are religious people, and nominally both Christians and leftists share a concern for helping those in need, so we should find some cultural common ground there.
We can leverage that overlap in values by partnering with churches. This immediately makes such work culturally legible to many who we most need to reach.
Jobs Jobs Jobs
When I raised this idea with Philip James, he had been mulling over similar ideas for a long time, but with a slightly different tack: free career skills workshops from folks who are obviously "non-traditional" with respect to the average rural voter's cultural expectations. Recruit trans folks, black folks, women, and non-white immigrants from our tech networks.
Run the trainings over remote video conferencing to make volunteering more accessible. Run those workshops through churches as a distribution network.
There is good evidence that this sort of prolonged contact and direct exposure to outgroups, helping people see others as human beings, is very effective politically.
However, job skills training is by no means the only benefit we could bring. There are lots of other services we could offer remotely, particularly with the skills that we in the tech community could offer. I offer this as an initial suggestion; if you have more ideas I'd love to hear them. I think the best ideas are ones where folks can opt in, things that feel like bettering oneself rather than receiving charity; nobody likes getting handouts, particularly from the outgroup, but getting help to improve your own skills feels more participatory.
I do think that free breakfast for children, specifically, might be something to start with because people are far more willing to accept gifts to benefit others (particularly their children, or the elderly!) rather than themselves.
Take Credit
Doing good works in the community isn't enough. We need to do visible good works. Attributable good works.
We don't want to be assholes about it, but we do want to make sure that these benefits are clearly labeled. We do not want to attach an obligation to any charitable project, but we do want to attach something to indicate where it came from.
I don't know what that "something" should be. The most important thing is that whatever "something" is appeals to a set of partially-overlapping cultures that I am not really a part of - Midwestern, rural, southern, exurban, working class, "red state" - and thus, I would want to hear from people from those cultures about what works best.
But it's got to be something.
Maybe it's a little sticker, "brought to you by progressives and liberals. we care about you!". Maybe it's a subtle piece of consistent branding or graphic design, like a stylized blue stripe. Maybe we need to avoid the word "democrats", or even "progressive" or "liberal", and need some independent brand for such a thing, that is clearly tenuously connected but not directly; like the Coalition of Liberal and Leftist Helpful Neighbors or something.
Famously, when Trump sent everybody a check from the government, he put his name on it. Joe Biden did the same thing, but without his name on it, and Democrats seem to think it's a good thing that he didn't take credit because it "wasn't about advancing politics", even though this obviously backfired. Republicans constantly take credit for the benefits of Democratic policies, which is one reason why voters don't know they're Democratic policies.
Our broad left-liberal coalition is attempting to improve people's material conditions. Part of that is, and must be, advancing a political agenda. It's no good if we provide job trainings and free lunches to a community if that community is just going to be reduced to ruin by economically catastrophic tariffs and mass deportations.
We cannot do this work just for the credit, but getting credit is important.
Let's You And Me - Yes YOU - Get Started
I think this is a good idea, but I am not the right person to lead it.
For one thing, building this type of organization requires a lot of organizational and leadership skills that are not really my forte. Even the idea of filing the paperwork for a new 501(c)(3) right now sounds like rolling Sisyphus's rock up the hill to me.
For another, we need folks who are connected to this culture, in ways that I am not. I would be happy to be involved - I do have some relevant technical skills to help with infrastructure, and I could always participate in some of the job-training stuff, and I can definitely donate a bit of money to a nonprofit, but I don't think I can be in charge.
You can definitely help too, and we will need a wide variety of skills to begin with, and it will definitely need money. Maybe you can help me figure out who should be in charge.
This project will be weaker without your support. Thus: I need to hear from you.
You can email me, or, if you'd prefer a more secure channel, feel free to reach out over Signal, where my introduction code is glyph.99. Please start the message with "good works:" so I can easily identify conversations about this.
If I receive any interest at all, I plan to organize some form of meeting within the next 30 days to figure out concrete next steps.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more things like it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! My aspirations for this support are more in the directions of software development than activism, but needs must, when the devil drives. Thanks especially to Philip James for both refining the idea and helping to edit this post, and to Marley Myrianthopoulos for assistance with the data analysis.
1. Personally I think that the perception of it "bombing" had to do with the microphones during his speech not picking up much in the way of crowd noise. It sounded to me like there were plenty of claps and laughs at the time. But even if it didn't land with most of the audience, it definitely resonated for some of them.
2. A brief, non-exhaustive list of the most obvious ones:
- This is a huge amount of money raised during a crisis with an historic level of enthusiasm among democrats. There's no way to sustain that kind of momentum.
- There are almost certainly diminishing returns at some point; people harbor conservative (and, specifically, bigoted) beliefs to different degrees, and the first million people will be much easier to convince than the second million, etc.
- Support share is not fungible; different communities will look different, and some will be saturated much more quickly than others. There is no reason to expect the rate over time to be consistent, nor the rate over geography.
3. I mostly agree with this take, and in the interest of being the change I want to see in the world, let me just share a brief list of some progressive and liberal sources of media that you might want to have a look at and start paying attention to:
- If Books Could Kill
- Some More News
- Behind The Bastards
- Crooked Media, the publishers of Pod Save America, but you should check out everything they have on offer
- Bryan Tyler Cohen
- Hasan Piker
- PhilosophyTube
- Hbomberguy
- FD Signifier
- Citation Needed
- Platformer
Please note that not all of these are to my taste and not all of them may be to yours. They are all at different places along the left-liberal coalition spectrum, but find some sources that you enjoy and trust, and build from there.
11 Nov 2024 4:01am GMT
03 Nov 2024
Glyph Lefkowitz: The Federation Deathmatch
It's the weekend, and I have some Thoughts about federated social media. So, buckle up, I guess, it's time to start some fights.
Recently there has been some discourse about Bluesky's latest fundraising round. I've been participating in conversations about this on Mastodon, and I think I might sometimes come across as a Mastodon partisan, but my feelings are complex and I really don't want to be boosting the ActivityPub Fediverse without qualification.
So here are some qualifications.
Bluesky Is Evil
To the extent that I am an ActivityPub partisan in the discourse between ActivityPub and ATProtocol, it is because I do not believe that Bluesky is a meaningfully decentralized social network. It is a social network, run by a company, which has a public API with some elements that might, one day, make it possible for it to be decentralized. But today, it is not, either practically or theoretically.
The Bluesky developers are putting in a ton of effort to maybe make it decentralized, hypothetically, someday. A lot of people think they will succeed. But ActivityPub (and, of course, Mastodon specifically) is already, today, meaningfully decentralized: as you can see on FediDB, there are instances with hundreds of thousands of people on them, before we even get to esoterica like the integrations Threads, Wordpress, Flipboard, and Ghost are doing.
The inciting incident for this post - that a lot of people are also angry about Bluesky raising millions of dollars from Evil Guys Doing Evil Stuff Capital - is indeed a serious concern. It lights the fuse that burns towards their eventual, inevitable incredible journey. ATProtocol is just an API, and that API will get shut off one day, whenever their funders get bored of the pretense of their network being "decentralized".
At time of writing, it is also interesting that in 3 of the 4 times that the CEO of Bluesky has even skeeted the word "blockchain", it was to say "no blockchain", to reassure users that the scam magnet of "Blockchain" is not actually near their product or protocol, which is a much harder position to maintain when your lead investor is "Blockchain Capital".
I think these are all valid criticisms of Bluesky. But I also think that the actual engineers working on the product are aware of these issues, and are making a significant effort to address them or mitigate them in any way they can. All that work can still be easily incinerated by a slow quarter in terms of user growth numbers or a missed revenue forecast when the VCs are getting impatient, but it's not nothing, it is a life's work.
Really, who among us could not have our life's ambitions trivially destroyed in an afternoon, simply because a billionaire decided that they should be? If you feel like you are safe from this, I have some bad news about how money works. So we are all doing our best in an imperfect system and maybe Bluesky is on to something here. That's eminently possible. They're certainly putting forth an earnest effort.
Mastodon Is Stupid
Meanwhile, not nearly as much has been made recently of Mastodon refusing funding from a variety of sources, even though all indications are that funding is low, and plummeting, far below the level required to actually sustain the site. They haven't done a financial transparency report for over a year, and that report was already nearly a year late.
Mastodon and the fediverse are not nearly in a position to claim moral superiority over Bluesky. Sure, taking blockchain VC money might seem like a rookie mistake, but going out of business because you are spurning every possible source of funding is not that wise either.
Some might think that, sure, Mastodon the company might die but at least the Fediverse as a whole will keep going strong, right? Lots of people run their own instances! I even find elements of this argument convincing, and I think there is probably some truth to it. But to really believe this argument as claimed, that it's a fait accompli that the fediverse will survive in some form, that all those self-run servers will be a robust network that will self-repair, requires believing some obviously false stuff. It is frankly unprofitable to run a Fediverse instance. Realistically, if you want to operate a mastodon server for yourself, it is going to cost at least $100/year once you include stuff like having a domain name, and managing the infrastructure costs is a complex problem that keeps getting harder to manage as the software itself gets slower.
Cory Doctorow has recently argued that this is all worth it, because at least on Mastodon, you're in control, not at the whims of centralized website operators like Bluesky. In his words,
On Mastodon (and other services based on Activitypub), you can easily leave one server and go to another, and everyone you follow and everyone who follows you will move over to the new server. If the person who runs your server turns out to be imperfect in a way that you can't endure, you can find another server, spend five minutes moving your account over, and you're back up and running on the new server
He concludes:
Any system where users can leave without pain is a system whose owners have high switching costs and whose users have none
(Emphasis mine).
This is a beautiful vision. It is, however, an incorrect assessment of the state of the Fediverse as it stands today. It's not true in two important ways:
First, if you look at any account of a user's fediverse account migration, like this one from Steve Bate or this one from the Ente project or this one from Erin Kissane, you will see that it is "painful for the foreseeable future" or "wasn't as seamless as advertised", and that "the best time to […] migrate instances […] is never". This language does not presage a pleasant experience, as Doctorow puts it, "without pain".
Second, migration is an active process that requires engagement from the instance that hosts you. If you have been blocked or banned, or had your account terminated, you are just out of luck. You do not have control over your data or agency over your online identity unless you've shelled out the relatively exorbitant amount of money to actually operate your own instance.
In short, ActivityPub is no panacea. A federated system is not really a "decentralized" system, as much as it is a bunch of smaller centralized systems that all talk to each other. You still need to know, and care, about your social and financial relationship to the operators of your instance. There is probably no getting away from this, like, just generally on the Internet, no matter how much peer-to-peer software we deploy, but there certainly isn't in the incomplete mess that is ActivityPub.
JOIN, or DIE.
Neither Mastodon (or ActivityPub) nor Bluesky (or ATProtocol) has a comprehensive solution to the problem of decentralized social media. These companies, and these protocols, are both deeply flawed and if everything keeps bumping along as it is, I believe both are likely to fail. At different times, on different timelines, and for different reasons, but fail nonetheless.
However, these networks are both small and growing, and we are not yet in the phase of enshittification where margins are shrinking and audiences are captured and the screws must be tightened to juice revenue. There are still possibilities. Mastodon is crowdfunded and what they lack in resources they make up for in flexibility and scrappiness. Bluesky has money and while there will eventually be a need to monetize somehow, they have plenty of runway to come up with that answer, and a lot of sophisticated protocol work has been done. Not enough to make a complete circuit and allow users true, practical decentralization, but it's not nothing, either.
Mastodon and Bluesky are both organizations with humans in them, and piles of data that is roughly schema-compatible even if the nuances and details are different. I know that there is a compatible model because, thanks to both platforms being relatively open, there is a functioning ActivityPub/ATProtocol bridge in the form of Bridgy Fed. You can use it today, and I highly recommend that you do so, so that "choice of protocol" does not fully define your audience. If you're on Bluesky, follow this account, and if you're on Mastodon or elsewhere on the Fediverse, search for and follow @bsky.brid.gy@bsky.brid.gy.
The reality that fans of decentralized, independent social media must confront is that we are a tiny audience right now. Whichever site we are looking at, we are talking about a few million monthly active users at best, in a world where even the pathetic husk of Twitter still has hundreds of millions and Facebook has billions. Internecine fights are not going to get us anywhere. We need to build bridges and links and connect our networks as densely as possible. If I'm being honest, Bridgy Fed looks like a pretty janky solution, but it's something, and we need to start doing something soon, so we do not collectively become a permanent minority that mass markets can safely ignore.
As users, we need to set an example, so that the developers of the respective platforms get their shit together and work together directly so that workarounds like Bridgy are not required. Frankly, this is mostly on the ActivityPub and Mastodon devs, as far as I can tell. Unfortunately, not a lot of this seems to be public, or at least I haven't witnessed a lot of it directly, but I have heard repeatedly that the ActivityPub developers are prickly, and this is one high-profile public example where an ActivityPub partisan is incredibly, pointlessly hostile and borderline harassing towards someone - Mike Masnick, a long-time staunch advocate for open protocols and open patents, someone with a Mastodon account, and thus as good a prospective ally as the ActivityPub fediverse might reasonably find - explaining some of the relative benefits of Bluesky.
Most of us are technology nerds in one way or another. In that way we can look at signifiers like "ActivityPub" and "ATProtocol", and feel like these are hard boundaries around different all-encompassing structures for the future, and thus tribes we must join and support.
A better way to look at this, however, is to see social entities like Mastodon gGmbH and Bluesky PBC - or, more to the point, Fosstodon, SFBA Social, Hachyderm (and maybe, one day, even an instance which isn't fully just for software development nerds), as groups that deploy these protocols to access some data that they publish, just as they might publish their website over HTTP or their newsletters over SMTP. There are technical challenges involved in bridging between mutually unintelligible domain models, but that is, like, network software's whole deal. Most software is just some kind of translation from one format or context to another. The best possible future for the fediverse is the one where users care as much about the distinction between ATProtocol and ActivityPub as they do about the distinction between POP3 and IMAP.
To both developers and users of these systems, I say: get it together. Be nice to each other. Because the rest of the social media ecosystem is sure as shit not going to be nice to us if we ever see even a hint of success and start to actually cut into their user base.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!
03 Nov 2024 9:49pm GMT
24 Sep 2024
Hynek Schlawack: Production-ready Python Docker Containers with uv
Starting with 0.3.0, Astral's uv brought many great features, including support for cross-platform lock files (uv.lock). Together with subsequent fixes, it has become Python's finest workflow tool for my (non-scientific) use cases. Here's how I build production-ready containers, as fast as possible.
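The general shape of such a build is worth sketching. The following is an illustrative Dockerfile, not Hynek's exact one; the image tags and the `myapp` module name are assumptions, while `uv sync --frozen` is uv's documented way of installing strictly from the lock file:

```dockerfile
# Illustrative multi-stage uv build; image tags and paths are assumptions.
FROM python:3.12-slim-bookworm AS build

# Copy the uv binary from Astral's official distroless image.
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv

WORKDIR /app

# Install dependencies exactly as pinned in the cross-platform uv.lock,
# without dev dependencies, before copying the application code (so the
# dependency layer caches well).
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev

COPY . .

# "myapp" is a placeholder module name for whatever the project ships.
CMD ["/app/.venv/bin/python", "-m", "myapp"]
```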
24 Sep 2024 12:00am GMT
23 Sep 2024
Hynek Schlawack: Python Project-Local Virtualenv Management Redux
One of my first TIL entries was about how you can imitate Node's node_modules semantics in Python on UNIX-like operating systems. A lot has happened since then (for the better!) and it's time for an update. direnv still rocks, though.
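For context, the trick the entry refers to can be sketched like this. direnv's standard library really does ship a `layout python3` function that creates and activates a project-local virtualenv on `cd`; the project name here is made up:

```shell
# Create a throwaway project whose environment direnv will manage.
mkdir -p demo-project

# "layout python3" tells direnv to create a project-local virtualenv
# (under .direnv/) and activate it automatically whenever you enter the
# directory, once you have approved it with `direnv allow demo-project`.
echo 'layout python3' > demo-project/.envrc

cat demo-project/.envrc  # prints: layout python3
```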
23 Sep 2024 12:00am GMT
11 Sep 2024
Glyph Lefkowitz: Python macOS Framework Builds
When you build Python, you can pass various options to ./configure that change aspects of how it is built. There is documentation for all of these options, and they are things like --prefix to tell the build where to install itself, --without-pymalloc if you have some esoteric need for everything to go through a custom memory allocator, or --with-pydebug.
One of these options only matters on macOS, and its effects are generally poorly understood. The official documentation just says "Create a Python.framework rather than a traditional Unix install." But… do you need a Python.framework? If you're used to running Python on Linux, then a "traditional Unix install" might sound pretty good; more consistent with what you are used to.
If you use a non-Framework build, most stuff seems to work, so why should anyone care? I have mentioned it as a detail in my previous post about Python on macOS, but even I didn't really explain why you'd want it, just that it was generally desirable.
The traditional answer to this question is that you need a Framework build "if you want to use a GUI", but this is demonstrably not true. At first it might not seem so, since the go-to Python GUI test is "run IDLE"; many non-Framework builds also omit Tkinter because they don't ship a Tk dependency, so IDLE won't start. But other GUI libraries work fine. For example, uv tool install runsnakerun / runsnake will happily pop open a GUI window, Framework build or not. So it bears some explaining.
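Incidentally, "is my interpreter a Framework build?" can be answered mechanically: CPython records the configure-time choice in its build configuration, and the PYTHONFRAMEWORK variable is non-empty only for framework builds. A minimal probe, using only the standard library:

```python
import sys
import sysconfig


def is_framework_build() -> bool:
    """True if this interpreter was configured with --enable-framework.

    PYTHONFRAMEWORK holds the framework name (normally "Python") on macOS
    framework builds, and is empty or absent everywhere else.
    """
    return bool(sysconfig.get_config_var("PYTHONFRAMEWORK"))


print(sys.platform, is_framework_build())
```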
Wait, what is a "Framework" anyway?
Let's back up and review an important detail of the mac platform.
On macOS, GUI applications are not just an executable file, they are organized into a bundle, which is a directory with a particular layout, that includes metadata, that launches an executable. A thing that, on Linux, might live in a combination of /bin/foo for its executable and /share/foo/ for its associated data files, is instead on macOS bundled together into Foo.app, and those components live in specified locations within that directory.
A framework is also a bundle, but one that contains a library. Since they are directories, Applications can contain their own Frameworks and Frameworks can contain helper Applications. If /Applications is roughly equivalent to the Unix /bin, then /Library/Frameworks is roughly equivalent to the Unix /lib.
App bundles are contained in a directory with a .app suffix, and frameworks are a directory with a .framework suffix.
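As a toy illustration of that naming convention (the helper function and the example paths are hypothetical, purely restating the suffix rule above; real bundle detection would also inspect the internal layout, e.g. Contents/Info.plist):

```python
from pathlib import Path


# Hypothetical helper: classify a path by the macOS bundle suffix
# convention described above.
def bundle_kind(path: str) -> str:
    return {
        ".app": "application bundle",
        ".framework": "framework bundle",
    }.get(Path(path).suffix, "not a bundle")


print(bundle_kind("/Applications/Foo.app"))                 # application bundle
print(bundle_kind("/Library/Frameworks/Python.framework"))  # framework bundle
print(bundle_kind("/bin/foo"))                              # not a bundle
```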
So what do you need a Framework for in Python?
The truth about Framework builds is that there is not really one specific thing that you can point to that works or doesn't work, where you "need" or "don't need" a Framework build. I was not able to quickly construct an example that trivially fails in a non-framework context for this post, but I didn't try that many different things, and there are a lot of different things that might fail.
The biggest issue is not actually the Python.framework itself. The metadata on the framework is not used for much outside of a build or linker context. However, Python's Framework builds also ship with a stub application bundle, which places your Python process into a normal application(-ish) execution context all the time, which allows for various platform APIs like [NSBundle mainBundle] to behave in the normal, predictable ways that all of the numerous, various frameworks included on Apple platforms expect.
Various Apple platform features might want to ask a process questions like "what is your unique bundle identifier?" or "what entitlements are you authorized to access?" and even beginning to answer those questions requires information stored in the application's bundle.
Python does not ship with a wrapper around the core macOS "Cocoa" API itself, but we can use pyobjc to interrogate this. After installing pyobjc-framework-cocoa, I can do this:

```
>>> from Foundation import NSBundle
>>> NSBundle.mainBundle()
```

On a non-Framework build, it might look like this, with the main bundle resolving to whatever directory happens to contain the plain executable:

```
NSBundle </usr/local/bin> (loaded)
```

But on a Framework build (even in a venv in a similar location), it might look like this, pointing at the stub application bundle:

```
NSBundle </Library/Frameworks/Python.framework/Versions/Current/Resources/Python.app> (loaded)
```
This is why, at various points in the past, GUI access required a Framework build: connections to the window server would just be rejected for Unix-style executables. But that was an annoying restriction, so it was removed at some point, or at least the behavior was changed; as far as I can tell, this change was not documented. But other things like user notifications or geolocation might need to identify an application for preferences or permissions purposes, respectively. Even something as basic as "what is your app icon", for what to show in alert dialogs, is information contained in the bundle. So if you use a library that wants to make use of any of these features, it might work, or it might behave oddly, or it might silently fail in an undocumented way.
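If you want to check at runtime which kind of build you are on, CPython's build configuration exposes this: the PYTHONFRAMEWORK configure variable is set (typically to "Python") on Framework builds and is empty on non-Framework builds and on other platforms entirely. A small helper, as a sketch:

```python
import sysconfig

def is_framework_build() -> bool:
    # On a macOS Framework build, PYTHONFRAMEWORK is set at configure
    # time (typically to "Python"); on non-Framework builds, and on
    # Linux or Windows, it is empty or absent.
    return bool(sysconfig.get_config_var("PYTHONFRAMEWORK"))

print(is_framework_build())
```

This can be handy in a test suite or setup script that needs to skip or enable macOS-GUI-dependent behavior.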
This might seem like undocumented, unnecessary cruft, but it is there because the platform expects it for a lot of different features.
/etc builds
Still, this might seem like a strangely vague description of this feature, so it might be helpful to examine it by analogy to something you are more familiar with. If you're more familiar with Unix-style application development, consider a junior developer - let's call him Jim - asking you whether he should use an "/etc build" or not as a basis for his Docker containers.
What is an "/etc build"? Well, base images like ubuntu come with a bunch of files in /etc, and Jim just doesn't see the point of any of them, so he likes to delete everything in /etc just to make things simpler. It seems to work so far. More experienced Unix engineers whom he has asked react negatively and make a face when he tells them this, and seem to think that things will break. But his app seems to work fine, and none of these engineers can demonstrate some simple function breaking, so what's the problem?
Off the top of your head, can you list all the features that the files in /etc are needed for? Why not? Jim thinks it's weird that all this stuff is undocumented, and that it must just be unnecessary cruft.
If Jim were to come back to you later with a problem like "it seems like hostname resolution doesn't work sometimes" or "ls says all my files are owned by 1001 rather than the user name I specified in my Dockerfile", you'd probably say "please, put /etc back; I don't know exactly what file you need, but lots of things just expect it to be there".
This is what a Framework vs. a non-Framework build is like. A Framework build just includes all the pieces of the build that the macOS platform expects to be there. What pieces do what features need? It depends. It changes over time. And the stub that Python's Framework builds include may not be sufficient for some more esoteric stuff anyway. For example, if you want to use a feature that needs a bundle that has been signed with custom entitlements to access something specific, like the virtualization API, you might need to build your own app bundle. To extend our analogy with Jim, the fact that /etc exists and has the default files in it won't always be sufficient; sometimes you have to add more files to /etc, with quite specific contents, for some features to work properly. But "don't get rid of /etc (or your application bundle)" is pretty good advice.
Do you ever want a non-Framework build?
macOS does have a Unix subsystem, and many Unix-y things work for Unix-y tasks. If you are developing a web application that mostly runs on Linux anyway, and you never care about using any features that touch the macOS-specific parts of your mac, then you probably don't have to care all that much about Framework builds. You're not going to be surprised one day by non-Framework builds suddenly being unable to use some basic Unix facility like sockets or files. As long as you are aware of these limitations, it's fine to install non-Framework builds. I have a dozen or so Pythons on my computer at any given time, and many of them are not Framework builds.
Framework builds do have some small drawbacks. They tend to be larger, can be a bit more annoying to relocate, and typically want to live in a location like /Library or ~/Library. You can move Python.framework into an application bundle according to certain rules, as any bundling tool for macOS will have to do, but it might not work in random filesystem locations. This may make managing a really large number of Python versions more annoying.
The main reason to use a non-Framework build is if you are building a tool that manages a fleet of Python installations to perform some automation that needs to know about Python installs, and you want to write one simple tool that does stuff on Linux and on macOS. If you know you don't need any platform-specific features, don't want to spend the (not insignificant!) effort to cover those edge cases, and you get a lot of value from that level of consistency (for example, a teaching environment or an interdisciplinary development team with a lot of platform diversity), then a non-Framework build might be a better option.
Why do I care?
Personally, I think it's important for Framework builds to be the default for most users, because I think that as much stuff should work out of the box as possible. Any user who sees a neat library that lets them get control of some chunk of data stored on their mac - map data, health data, game center high scores, whatever it is - should be empowered to call into those APIs and deal with that data for themselves.
Apple already makes it hard enough with their thicket of code-signing and notarization requirements for distributing software, aggressive privacy restrictions which prevent API access to some of this data in the first place, all these weird Unix-but-not-Unix filesystem layout idioms, sandboxing that restricts access to various features, and the use of esoteric abstractions like mach ports for communications behind the scenes. We don't need to make it even harder by making the way that you install your Python a surprise gotcha variable that determines whether or not you can use an API like "show me a user notification when my data analysis is done" or "don't do a power-hungry data analysis when I'm on battery power" - especially if it kinda-sorta works most of the time, but only fails on certain patch-releases of certain versions of the operating system, because an implementation detail of a proprietary framework changed in the meanwhile to require an application bundle where it didn't before, or vice versa.
More generally, I think that we should care about empowering users with local computation and platform access on all platforms, Linux and Windows included. This just happens to be one particular quirk of how native platform integration works on macOS specifically.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. For this one, thanks especially to long-time patron Hynek who requested it specifically. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like "how can we set up our Mac developers' laptops with Python".
11 Sep 2024 7:43pm GMT
03 Sep 2024
Planet Twisted
Hynek Schlawack: How to Ditch Codecov for Python Projects
Codecov's unreliability breaking CI on my open source projects has been a constant source of frustration for me for years. I have found a way to enforce coverage over a whole GitHub Actions build matrix that doesn't rely on third-party services.
03 Sep 2024 12:00am GMT
02 Sep 2024
Planet Twisted
Hynek Schlawack: Why I Still Use Python Virtual Environments in Docker
Whenever I publish something about my Python Docker workflows, I invariably get challenged about whether it makes sense to use virtual environments in Docker containers. As always, it's a trade-off, and I err on the side of standards and predictability.
02 Sep 2024 11:00am GMT
16 Aug 2024
Planet Twisted
Glyph Lefkowitz: On The Defense Of Heroes
If a high-status member of a community that you participate in is accused of misbehavior, you may want to defend them. You may even write a long essay in their defense.
In that essay, it may seem only natural to begin with a lengthy enumeration of the accused's positive personal qualities. To extol the quality of their career and their contributions to your community. To talk about how nice they are. To be a character witness in the court of public opinion.
If you do this, you are not defending them. You are proving the point. This is exactly how missing stairs come to exist. People don't get away with bad behavior if they don't have high status and a good reputation already.
Sometimes, someone with antisocial inclinations seeks out status, in order to facilitate their bad behavior. Sometimes, a good but flawed person does a lot of really good work and thereby accidentally ends up with more status than they were expecting to have, and they don't know how to handle it. In either case, bad behavior may ensue.
If you truly believe that your fave is being accused or punished unjustly, focus on the facts. What, specifically, has been alleged? How are these allegations substantiated? What verifiable evidence exists to the contrary? If you feel that someone is falsely accusing them to ruin their reputation, is there evidence to support your claim that the accusation is false? Ask yourself the question: what information do you have, that is leading to your correct analysis of the situation, that the people making the accusations do not have, which might be leading them into error?
But, also, maybe just… don't?
The urge to defend someone like this is much more likely to come from a sense of personal grievance than justice. Consider: does it feel like you are being attacked, when your fave has been attacked? Is there a tightness in your chest, heat rising on your cheeks? Do you feel suddenly defensive?
Do you think that defensiveness is likely to lead to you making good, rational decisions about what steps to take next?
Let your heroes face accountability. If they are really worth your admiration, they might accept responsibility and make amends. Or they might fight the accusations with their own real evidence - evidence that you, someone peripheral to their situation, are unlikely to have - and prove the accusations wrong.
They might not want your defense. Even if they feel like they do want it in the moment - they are human too, after all, and facing accountability does not feel good to us humans - is the intensified feeling that they can't let down their supporters who believe in them likely to make them feel less defensive and panicked?
In either case, your character defense is unlikely to serve them. At best it helps them stay on an ego trip, at worst it muddies the waters and might confuse the collection of facts that would, if considered dispassionately, properly exonerate them.
Do you think that I am pretending to speak in generalities but really talking about one specific recent event?
Wrong!
Just in this last week, I have read two different blog posts about two completely different people in completely unrelated communities, and both of their authors need to read this. But each of those was already of a type, one that I've read dozens of instances of in the past.
It is a very human impulse to perceive a threat to someone we think well of, and to try to defend against that threat. But the consequences of someone's own actions are not a threat you can defend them from.
16 Aug 2024 7:53pm GMT
04 Jul 2024
Planet Twisted
Glyph Lefkowitz: Against Innovation Tokens
Updated 2024-07-04: After some discussion, added an epilogue going into more detail about the value of the distinction between the two types of tokens.
In 2015, Dan McKinley laid out a model for software teams selecting technologies. He proposed that each team have a limited supply of "innovation tokens", and, when selecting a technology, they can choose boring ones for free but "innovative" ones cost a token. This implies that we all know which technologies are innovative, and we assume that they are inherently costly, so we want to restrict their supply.
That model has become popular to the point that it is now part of the vernacular. In many discussions, it is accepted as received wisdom, or even common sense.
In this post I aim to show you that despite being superficially helpful, this model is wrong, and in fact, may be counterproductive. I believe it is an attractive nuisance in computer programming discourse.
In fairness to Mr. McKinley, the model he described in his post is:
- nearly a decade old at this point, and
- much more nuanced in its description of the problem with "innovation" than the subsequent memetic mutation of the concept.
While I will be referencing McKinley's post, and I do take some issue with it, I am reacting more strongly to the life of its own that this idea has taken on once it escaped its original context. There are a zillion worse posts rehashing this concept, on blogs and LinkedIn, but I won't be linking to them because the goal is not to call anybody out.
To some extent I am re-raising McKinley's own caveats and reinforcing them. So I may be arguing with a strawman, but it's a strawman I have seen deployed with some regularity over the years.
To reduce it to its core, this strawman is "don't use new or interesting technology, and if you have to, only use a little bit".
Within the broader culture of programmers, an "innovation token" has become a shorthand to smear any technology perceived - almost always based on vibes, not data - as risky, and the adoption of novel approaches as pretentious and unserious. Speaking of programmer culture though, I do have to acknowledge there is also a pervasive tendency for us to get distracted by novelty and waste time on puzzles rather than problem-solving, so I understand where the reactionary attitude represented by the concept of an innovation token comes from.
But it is reactionary.
At its worst, it borders on anti-intellectualism. I have heard it used on more than one occasion as a thought-terminating cliche to discard a potentially promising new tool. But before I get into that, let me try to give a sympathetic summary of the idea, because the model is not entirely bad.
It has been popular for a long time because it does work okay as a heuristic.
The real problem that McKinley is describing is operational overhead. When programmers make a technology selection, we are often considering how difficult it will make the programming. Innovative technology selections are, by definition, less mature.
That lack of maturity - particularly in the open source world - often means that the project is in a part of its lifecycle where it is concerned with development affordances more than operational ones. Therefore, the stereotypical innovative project, even one which might legitimately be a big improvement to development velocity, will create more operational overhead. That operational overhead creates a hidden cost for the operations team later on.
This is a point I emphatically agree with. When selecting a technology, you should consider its ease of operation more than its ease of development. If your team is successful, they will be operating and maintaining it far longer than they are initially integrating and deploying it.
Furthermore, some operational overhead is inevitable. You will need to hire people to mitigate it. More popular, more mature projects will have a bigger talent pool to hire from, so your training costs will be lower, and those training costs are part of your operational cost too.
Rationing innovation tokens therefore can work as a reasonable heuristic, or proxy metric, for avoiding a mess of complex operational problems associated with dependencies that are expensive to operate and hard to hire for.
There are some minor issues I want to point out before getting to the overarching one.
- "has a lot of operational overhead" is a stereotype of a new technology, not an inherent property. If you want to reject a technology on the basis of being too high-overhead, at least look into its actual overhead a little bit. Sometimes, especially in 2024 as opposed to 2015, the point of a new, shiny piece of tech is to address operational issues that the more boring, older one had.
- "hard to learn" is also a stereotype; if "newer" meant "harder" then we would all be using troff rather than Google Docs. Actually ask if the innovativeness is making things harder or easier; don't assume.
- You are going to have to train people on your stack no matter what. If a technology is adding a lot of value, it's absolutely worth hiring for general ability and making a plan to teach people about it. You are going to have to do this with the core technology of your product anyway.
As I said, though, these are minor issues. The big problem with modeling operational overhead as an "innovation token" is that an even bigger concern than selecting an innovative tool is selecting too many tools.
The impulse to select more tools and make your operational environment more complex can be made worse by trying to avoid innovative tools. The important thing is not "less innovation", but more consistency. To illustrate this, let's do a simple thought experiment.
Let's say you're going to make a web app. There's a tool in Haskell that you really like for a critical part of your app's problem domain. You don't want to spend more than one innovation token though, and everything in Haskell is inherently innovative, so you write a little service that just does that one part and you write the rest of your app in Ruby, calling into that service whenever you need to use that thing. This will appropriately restrict your "innovation token" expenditure.
Does doing this actually reduce your operational overhead, though?
First, you will have to find a team that likes both Ruby and Haskell and sees no problem using both. If you are not familiar with the cultural proclivities of these languages, suffice it to say that this is unlikely. Hiring for Haskell programmers is hard because there are fewer of them than Ruby programmers, but hiring for polyglot Haskell/Ruby programmers who are happy to do either is going to be really hard.
Since you will need to find different people to write in the different languages, even in the best case scenario, you will have two teams: the Haskell team and the Ruby team. Even if you are incredibly disciplined about inter-service responsibilities, there will be some areas where duplication of code is necessary across those services. Disagreements will arise and every one of these disagreements will be a source of social friction and software defects.
Then, you need to set up separate CI pipelines for each language, separate deployment systems, and of course, separate databases. Right away you are effectively doubling your workload.
In the worse, and unfortunately more likely scenario, there will be enormous infighting between these two teams. Operational incidents will be more difficult to manage because rather than learning the Haskell tools for operational visibility and disseminating that institutional knowledge amongst your team, you will be half-learning the lessons from two separate ecosystems and attempting to integrate them. Every on-call engineer will be frantically trying to learn a language ecosystem they don't use regularly, or you will double the size of your on-call rotation. The Ruby team may start to resent the Haskell team for getting to exclusively work on the fun parts of the problem while they are doing things that look more like rote grunt work.
A better way to think about the problem of managing operational overhead is, rather than "innovation tokens", consider "boundary tokens".
That is to say, rather than evaluating the general sense of weird vibes from your architecture, consider the consistency of that architecture. If you're using Haskell, use Haskell. You should be all-in on Haskell web frameworks, Haskell ORMs, Haskell OAuth integrations, and so on.1 To cross the boundary out of Haskell, you need to spend a boundary token, and you shouldn't have many of those.
I submit that the increased operational overhead that you might experience with an all-Haskell tool selection will be dwarfed by the savings that you get by having a team that is aligned with each other, that can communicate easily, and that can share programs with each other without needing to first strategize about a channel for the two pieces of work to establish bidirectional communication. The ability to simply call a function when you need to call it is very powerful, and extremely underrated.
Consistency ought to apply at each layer of the stack; it is perhaps most obvious with programming languages, but it is true of web frameworks, test frameworks, cryptographic libraries, you name it. Make a choice and stick with it, because every deviation from that choice carries a significant cost. Moreover this cost is a hidden cost, in the same way that the operational downsides of an "innovative" tool that hasn't seen much production use might be hidden.
Discarding a more standard tool in favor of a tool more consistent with your architecture extends even to fairly uncontroversial, ubiquitous tools. For example, one of my favorite architectural patterns is to forego the use of the venerable - and very boring - Cron, the UNIX task-scheduler. Instead of Cron, it can make a lot of sense to have hand-written bespoke code for scheduling tasks within the application. Within the "innovation tokens" model, this is a very silly waste of a token!
Just use Cron! Everybody knows how to use Cron!
Except… does everybody know how to use Cron? Here are some questions to consider, if you're about to roll out a big dependency on Cron:
- How do you write a unit test for a scheduling rule with Cron?
- Can you even remember how to write a cron rule that runs at the times you want?
- How do you inject secrets and configuration variables into the distinct and somewhat idiosyncratic runtime execution environment of Cron?
- How do you know that you did that variable-injection properly until the job actually runs, possibly in the middle of the night?
- How do you deploy your monitoring and error-logging frameworks to observe your scripts run under Cron?
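By contrast, a scheduling rule that lives inside the application can be exercised like any other function. A minimal sketch, assuming an invented "run daily at 3 a.m." rule, just to show the testability argument:

```python
from datetime import datetime, timedelta

def next_run(now: datetime, hour: int = 3) -> datetime:
    """Return the next daily run time at the given hour.

    Unlike a crontab entry, this rule can be exercised directly in a
    unit test with any 'now' you like - no Cron runtime environment,
    no waiting until the middle of the night.
    """
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

# An ordinary unit test: already past 3 a.m., so the next run is tomorrow.
assert next_run(datetime(2024, 1, 1, 4, 0)) == datetime(2024, 1, 2, 3, 0)
```

And because the rule is just code, your existing secrets handling, configuration, logging, and monitoring apply to it with no extra ceremony.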
Granted, this architectural choice is less controversial than it once was. Cron used to be ambiently available on whatever servers you happened to be running. As container-based deployments have increased in popularity, this sense that Cron is just kinda around has gone away, and if you need to run a container that just runs Cron, much of the jankiness of its deployment is a lot more immediately visible.
There is friction at the boundary between things. That friction is a cost, but sometimes it's a cost worth paying.
If there's a really good library in Haskell and a really good library in Ruby and you really do want to use them both, maybe it makes sense to actually have multiple services. As your team gets larger and more mature, the need to bring in more tools, and the ability to handle the associated overhead, will only increase over time. But the place that the cost comes in the most is at the boundary between tools, not in the operational deficiencies of any one particular tool.
Even in a bog-standard web application with the most boring, least innovative tech stack imaginable (PHP, MySQL, HTML, CSS, JavaScript), many of the annoying points of friction are where different, inconsistent technologies make contact. If you are a programmer working on the web yourself, consider your own impression of the level of controversy of these technologies:
- CSS frameworks that attempt to bridge the gap between CSS and JavaScript, like Bootstrap or Tailwind.
- ORMs that attempt to bridge the gap between SQL and your backend language of choice
- RPC systems that attempt to connect disparate services together using simple abstractions.
Consider that there are far more complex technical tools in terms of required skills to implement them, like computer vision or physics simulation, tools which are also pretty widely used, which consistently generate lower levels of controversy. People do have strong feelings about these things as well, of course, and it's hard to find things to link to that show "this isn't controversial", but, like, search your feelings, you know it to be true.
You can see the benefits of the boundary token approach in programming language design. Many of the most influential and best-loved programming languages had an impact not by bundling together lots of tools, but by making everything into one thing:
- LISP: everything is a list
- Smalltalk: everything is an object
- ML: everything is an algebraic data type
- Forth: everything is a stack
There is a tremendous power in thinking about everything as a single kind of thing, because then you don't have to juggle lots of different ideas about different kinds of things; you can just think about your problem.
When people complain about programming languages, they're often complaining about how many different kinds of thing they have to remember in order to use it.
If you keep your boundary-token budget small, and allow your developers to accomplish as much as possible while staying within a solution space delineated by a single, clean cognitive boundary, I promise you can innovate as much as you want and your operational costs will remain manageable.
Epilogue
In subsequent Mastodon discussion of this post with Matt Campbell and Meejah, I realized that I may not have made it entirely clear why I feel the distinction between "boundary" and "innovation" tokens is important. I do say above that the "innovation token" model can be a useful heuristic, so why bother with a new, but slightly different heuristic? Especially since most experienced engineers - indeed, McKinley himself - would budget "innovation" quite similarly to "boundaries", and might even consider the use of more "innovative" Haskell tools in my hypothetical scenario to not even be an expenditure of innovation tokens at all.
To answer that, I need to highlight the purpose of having heuristics like this in the first place. These are vague, nebulous guidelines, not hard and fast rules. I cannot give you a token calculator to plug your technical decisions into. The purpose of either token heuristic is to facilitate discussions among a team.
With a team of skilled and experienced engineers, the distinction is meaningless. Senior and staff engineers (at least, the ones who deserve their level) will intuit the goals behind "innovation tokens" and inherently consider things like operational overhead anyway. In practice, a high-performing, well-aligned team discussing innovation tokens and one discussing boundary tokens will look functionally indistinguishable.
The distinction starts to be important when you have management pressures, nervous executives, inexperienced engineers, a fresh team without existing consensus about core technology choices, and so on. That is to say, most teams that exist in the messy, perpetually in medias res world of the software industry.
If you are just getting started on a project and you have a bunch of competent but disagreeable engineers, the words "innovation" and "boundaries" function very differently.
If you ask, "is this an innovation" about a particular technical tool, you are asking your interlocutor to pull in a bunch of their skills and experience to subjectively evaluate the relative industry-wide, or maybe company-wide, or maybe team-wide2 newness of the thing being discussed. The discussion of whether it counts as boring or innovative is immediately fraught with a ton of subjective, difficult-to-quantify information about costs of hiring, difficulty of learning, and your impression of the feelings of hundreds or thousands of people outside of your team. And, yes, ultimately you do need to have an estimate of all that stuff, but starting your "is it OK to use this" conversation by simultaneously arguing about all those subjective judgments is setting yourself up for failure.
Instead, if you ask "does this introduce a boundary between two different technologies with different conceptual models", while that is not a perfectly objective question, it is much easier for your team to answer, with much crisper intermediary factual questions. What are the two technologies? What are the models? How much do they differ? You can just hash out the answers to each one within the team directly, rather than needing to sift through the last few years of Stack Overflow developer surveys to determine relative adoption or popularity of technologies in the world at large.
Restricting your supply of either boundary or innovation tokens is a good idea, but achieving unanimity within your team about what your boundaries are is always going to be easier than deciding what your innovations are.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like "how can we make our architecture more consistent".
-
I gave a talk about this once, a very long time ago, where Haskell was Python. ↩
-
It's not clear, that's a big part of the problem. ↩
04 Jul 2024 7:54pm GMT
20 Jun 2024
Planet Twisted
Moshe Zadka: The Prehistory of Kubernetes: From Cubic Equations to Cloud Orchestration
The story of Kubernetes, the leading container orchestration platform, is a tale of mathematical innovation, wartime necessity, and the open-source revolution. It begins, perhaps unexpectedly, with the work of Omar Khayyam, the 11th-century Persian polymath known for his contributions to mathematics, astronomy, and poetry.
Khayyam's work on solving cubic equations laid the foundation for the development of algebraic geometry, which in turn led to the invention of Cartesian coordinates by René Descartes in the 17th century. This "algebrization of geometry" allowed for the mathematical description of physical phenomena, such as planetary motion, and paved the way for Isaac Newton's development of calculus.
However, Newton's calculus, while groundbreaking, lacked a rigorous mathematical foundation. It took the work of 19th-century mathematicians like Augustin-Louis Cauchy and Karl Weierstrass to establish the epsilon-delta definition of limits and place calculus on a solid footing. This development also opened up new questions about infinity, leading to Georg Cantor's work on set theory and the discovery of paradoxes, such as Russell's paradox by Bertrand Russell, that threatened the foundations of mathematics.
The quest to resolve these paradoxes and establish a secure foundation for mathematics led to Kurt Gödel's incompleteness theorems, published in 1931. Gödel's first incompleteness theorem showed that in any consistent axiomatic system that includes arithmetic, there are statements that can neither be proved nor disproved within the system. The second incompleteness theorem demonstrated that such a system cannot prove its own consistency.
Crucially, Gödel's theorems relied on the concept of computability, which he used to construct a formal system representing arithmetic. However, Gödel's definition of computability was not entirely convincing, as it relied on the intuitive notion of a "finite procedure." This left open the possibility that a non-computable axiomatization of number theory, capturing "all that is true about the natural numbers," could exist and potentially sidestep the incompleteness theorems.
It was Alan Turing who took up the challenge of formalizing the concept of computability. In his groundbreaking 1936 paper "On Computable Numbers, with an Application to the Entscheidungsproblem," Turing introduced the Turing machine, a simple yet powerful mathematical model of computation. Turing's work not only provided a more rigorous foundation for Gödel's ideas but also proved that certain problems, such as the halting problem, are undecidable by Turing machines.
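The flavor of Turing's model can be conveyed in a few lines of code: a finite control reads the symbol under a tape head, then writes, moves, and changes state according to a fixed transition table. The machine below is an illustrative toy (not one from Turing's paper) that inverts a string of binary digits and halts at the first blank cell.

```python
# A minimal Turing machine simulator. The tape is stored sparsely as a
# dict mapping position -> symbol; unwritten cells read as the blank.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run a machine given as {(state, symbol): (new_state, write, move)}."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Toy machine: invert each bit while moving right; halt on the blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(invert, "10110"))  # → 01001
```

Despite its simplicity, this model is universal: a suitable transition table can simulate any computation, which is exactly what makes results like the undecidability of the halting problem so far-reaching.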
Turing's formalization of computability had far-reaching implications beyond the foundations of mathematics. It laid the groundwork for the development of modern computer science and played a crucial role in the birth of the digital age. Turing's work took on new urgency with the outbreak of World War II, as the need to break the German Enigma machine led to the development of early computing machines based on the principles he had established.
After the war, the individuals who had worked on these machines helped to establish the first computing companies, leading to the industrialization of computing and the development of programming languages and operating systems. One notable example is Alan Turing himself, who joined the National Physical Laboratory (NPL) in London, where he worked on the design of the Automatic Computing Engine (ACE), one of the first stored-program computers.
Another key figure was John von Neumann, a mathematician and physicist who made significant contributions to the design of the EDVAC (Electronic Discrete Variable Automatic Computer), an early stored-program computer. Von Neumann's work on the EDVAC and his subsequent report, "First Draft of a Report on the EDVAC," laid the foundation for the von Neumann architecture, which became the standard design for modern computers.
In the United Kingdom, Maurice Wilkes, who had worked on radar systems during the war, led the development of the EDSAC (Electronic Delay Storage Automatic Calculator) at the University of Cambridge. The EDSAC, which became operational in 1949, was the first practical stored-program computer and inspired the development of similar machines in the United States and elsewhere.
In the United States, J. Presper Eckert and John Mauchly, who had worked on the ENIAC (Electronic Numerical Integrator and Computer) during the war, founded the Eckert-Mauchly Computer Corporation in 1946. The company developed the UNIVAC (Universal Automatic Computer), which became the first commercially available general-purpose computer in the United States.
The UNIVAC was followed by the Multiprocessing Automatic Computer (Multivac), developed by IBM in the late 1950s. The Multivac introduced several innovative features, such as multiprogramming and memory protection, which allowed multiple users to share the same machine and provided a degree of isolation between their programs. These features would later inspire both positive and negative lessons for the creators of Unix, the influential operating system developed at Bell Labs in the 1970s.
Due to antitrust pressures, Bell Labs made Unix available to universities, where it became a standard teaching tool. This decision led to the development of Minix, a simplified Unix-like system, and eventually to the creation of Linux by Linus Torvalds.
As Linux grew in popularity, thanks to its open-source nature and ability to run on cheap hardware, it caught the attention of Google, which was looking for an operating system to power its "cloud-native" approach to computing. Google's engineers contributed key features to the Linux kernel, such as cgroups and namespaces, which laid the groundwork for the development of containerization technologies like Docker.
Google, recognizing the potential of containers and the need for a robust orchestration platform, developed Kubernetes as an open-source system based on its experience with Borg and other orchestration tools. By establishing Kubernetes as the standard for container orchestration, Google aimed to reduce the barrier to entry for users looking to switch between cloud providers, challenging the dominance of Amazon Web Services.
Today, Kubernetes has become the de facto standard for managing containerized applications. As with previous improvements, this led to a new open problem: generating and deploying manifests. Tools for generating manifests range from general templating solutions like Bash variable substitution, Sed, and Jinja, through full-fledged programming languages, like using Jsonnet and Python, all the way to using dedicated tools like Kustomize and Helm. Meanwhile, deploying the manifests to Kubernetes can be done through continuous integration platforms running "helm upgrade" or "kubectl apply" or using dedicated platforms like ArgoCD or FluxCD. ArgoCD or Flagger also support gradual roll-outs.
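As a concrete illustration of the templating end of that spectrum, here is a minimal Python sketch using the standard library's `string.Template` as a stand-in for a real templating engine like Jinja. The deployment name, replica count, and image are all made-up examples.

```python
# Render a Kubernetes Deployment manifest from a template. Real setups
# would more likely use Jinja, Kustomize, or Helm; this shows the shape
# of the approach with nothing but the standard library.
from string import Template

DEPLOYMENT_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
  selector:
    matchLabels:
      app: $name
  template:
    metadata:
      labels:
        app: $name
    spec:
      containers:
      - name: $name
        image: $image
""")

manifest = DEPLOYMENT_TEMPLATE.substitute(
    name="example-web", replicas=3, image="example.invalid/web:1.2.3"
)
print(manifest)
```

The rendered text could then be written to a file (or piped to `kubectl apply -f -`), or committed to a Git repository for a tool like ArgoCD or FluxCD to reconcile.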
From cubic equations to cloud orchestration, the story of Kubernetes is a reminder that the path of progress is rarely straightforward, but rather a winding journey through the realms of mathematics, computer science, and human ingenuity.
20 Jun 2024 4:30am GMT
22 May 2024
Planet Twisted
Glyph Lefkowitz: A Grand Unified Theory of the AI Hype Cycle
The Cycle
The history of AI goes in cycles, each of which looks at least a little bit like this:
- Scientists do some basic research and develop a promising novel mechanism, N. One important detail is that N has a specific name; it may or may not be carried out under the general umbrella of "AI research" but it is not itself "AI". N always has a few properties, but the most common and salient one is that it initially tends to require about 3x the specifications of the average computer available to the market at the time; i.e., it requires three times as much RAM, CPU, and secondary storage as is shipped in the average computer.
- Research and development efforts begin to get funded on the hypothetical potential of N. Because N is so resource intensive, this funding is used to purchase more computing capacity (RAM, CPU, storage) for the researchers, which leads to immediate results, as the technology was previously resource constrained.
- Initial successes in the refinement of N hint at truly revolutionary possibilities for its deployment. These revolutionary possibilities include a dimension of cognition that has not previously been machine-automated.
- Leaders in the field of this new development - specifically leaders, like lab administrators, corporate executives, and so on, as opposed to practitioners like engineers and scientists - recognize the sales potential of referring to this newly-"thinking" machine as "Artificial Intelligence", often speculating about science-fictional levels of societal upheaval (specifically in a period of 5-20 years), now that the "hard problem" of machine cognition has been solved by N.
- Other technology leaders, in related fields, also recognize the sales potential and begin adopting elements of the novel mechanism to combine with their own areas of interest, also referring to their projects as "AI" in order to access the pool of cash that has become available to that label. In the course of doing so, they incorporate N in increasingly unreasonable ways.
- The scope of "AI" balloons to include pretty much all of computing technology. Some things that do not even include N start getting labeled this way.
- There's a massive economic boom within the field of "AI", where "the field of AI" means any software development that is plausibly adjacent to N in any pitch deck or grant proposal.
- Roughly 3 years pass, while those who control the flow of money gradually become skeptical of the overblown claims that recede into the indeterminate future, where N precipitates a robot apocalypse somewhere between 5 and 20 years away. Crucially, because of the aforementioned resource-intensiveness, the gold owners' skepticism grows slowly over this period, because their own personal computers or the ones they have access to do not have the requisite resources to actually run the technology in question and it is challenging for them to observe its performance directly. Public critics begin to appear.
- Competent practitioners - not leaders - who have been successfully using N in research or industry quietly stop calling their tools "AI", or at least stop emphasizing the "artificial intelligence" aspect of them, and start getting funding under other auspices. Whatever N does that isn't "thinking" starts getting applied more seriously as its limitations are better understood. Users begin using more specific terms to describe the things they want, rather than calling everything "AI".
- Thanks to the relentless march of Moore's law, the specs of the average computer improve. The CPU, RAM, and disk resources required to actually run the software locally come down in price, and everyone upgrades to a new computer that can actually run the new stuff.
- The investors and grant funders update their personal computers, and they start personally running the software they've been investing in. Products with long development cycles are finally released to customers as well, but they are disappointing. The investors quietly get mad. They're not going to publicly trash their own investments, but they stop loudly boosting them and they stop writing checks. They pivot to biotech for a while.
- The field of "AI" becomes increasingly desperate, as it becomes the label applied to uses of N which are not productive, since the productive uses are marketed under their application rather than their mechanism. Funders lose their patience, and the polarity of the "AI" money magnet rapidly reverses. Here, the AI winter is finally upon us.
- The remaining AI researchers who still have funding via mechanisms less vulnerable to hype, who are genuinely thinking about automating aspects of cognition rather than simply N, quietly move on to the next impediment to a truly thinking machine, and in the course of doing so, they discover a new novel mechanism, M. Go to step 1, with M as the new N, and our current N as a thing that is now "not AI", called by its own, more precise name.
The History
A non-exhaustive list of previous values of N:
- Neural networks and symbolic reasoning in the 1950s.
- Theorem provers in the 1960s.
- Expert systems in the 1980s.
- Fuzzy logic and hidden Markov models in the 1990s.
- Deep learning in the 2010s.
Each of these cycles has been larger and lasted longer than the last, and I want to be clear: each cycle has produced genuinely useful technology. It's just that each follows the progress of a sigmoid curve that everyone mistakes for an exponential one. There is an initial burst of rapid improvement, followed by gradual improvement, followed by a plateau. Initial promises imply or even state outright "if we pour more {compute, RAM, training data, money} into this, we'll get improvements forever!" The reality is always that these strategies inevitably have a limit, usually one that does not take too long to find.
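That confusion is easy to reproduce numerically. The toy model below (all parameters invented for illustration) compares a logistic curve to the exponential that matches it at the start: for the first several steps the two are nearly indistinguishable, and only later does the sigmoid reveal its plateau.

```python
# Early sigmoid growth looks exponential; the divergence only appears
# once the curve approaches its ceiling.
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    """Sigmoid growth toward a hard ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def exponential(t, rate=1.0):
    """The exponential curve that agrees with logistic() at t=0."""
    return logistic(0) * math.exp(rate * t)

for t in [0, 2, 4, 10, 14, 18]:
    print(f"t={t:2d}  logistic={logistic(t):8.2f}  exponential={exponential(t):12.2f}")
```

Early on, extrapolating either curve gives the same optimistic forecast; the difference between "improvements forever" and "a plateau" only becomes visible in the data after the inflection point.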
Where Are We Now?
So where are we in the current hype cycle?
- Here's a Computerphile video which explains some recent research into LLM performance. I'd highly encourage you to have a look at the paper itself, particularly Figure 2, "Log-linear relationships between concept frequency and CLIP zero-shot performance".
- Here's a series of posts by Simon Willison explaining the trajectory of the practicality of actually-useful LLMs on personal devices. He hasn't written much about it recently because it is now fairly pedestrian for an AI-using software developer to have a bunch of local models, and although we haven't quite broken through the price floor of the gear-acquisition-syndrome prosumer market in terms of the requirements of doing so, we are getting close.
- The Rabbit R1 and Humane AI Pin were both released; were they disappointments to their customers and investors? I think we all know how that went at this point.
- I hear Karius just raised a series C, and they're an "emerging unicorn".
- It does appear that we are all still resolutely calling these things "AI" for now, though, much as I wish, as a semasiology enthusiast, that we would be more precise.
Some Qualifications
History does not repeat itself, but it does rhyme. This hype cycle is unlike any that have come before in various ways. There is more money involved now. It's much more commercial; I had to phrase things above in very general ways because many previous hype waves have been based on research funding, some really being exclusively a phenomenon at one department in DARPA, and not, like, the entire economy.
I cannot tell you when the current mania will end and this bubble will burst. If I could, you'd be reading this in my $100,000 per month subscribers-only trading strategy newsletter and not a public blog. What I can tell you is that computers cannot think, and that the problems of the current instantiation of the nebulously defined field of "AI" will not all be solved within "5 to 20 years".
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. Special thanks also to Ben Chatterton for a brief pre-publication review; any errors remain my own. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like "what are we doing that history will condemn us for". Or, you know, Python programming.
22 May 2024 4:58pm GMT
15 May 2024
Planet Twisted
Glyph Lefkowitz: How To PyCon
These tips are not the "right" way to do PyCon, but they are suggestions based on how I try to do PyCon. Consider them reminders to myself, an experienced long-time attendee, which you are welcome to overhear.
See Some Talks
The hallway track is awesome. But the best version of the hallway track is not just bumping into people and chatting; it's the version where you've all recently seen the same thing, and thereby have a shared context of something to react to. If you aren't going to talks, you aren't going to get a good hallway track. Therefore: choose talks that interest you, attend them and pay close attention, then find people to talk to about them.
Given that you will want to see some of the talks, make sure that you have the schedule downloaded and available offline on your mobile device, or printed out on a piece of paper.
Make a list of the talks you think you want to see, but have that schedule with you in case you want to call an audible in the middle of the conference, switching to a different talk you didn't notice based on some of those "hallway track" conversations.
Participate In Open Spaces
The name "hallway track" itself is antiquated, in a way which is relevant and important to modern conferences. It used to be that conferences were exclusively oriented around their scheduled talks; it was called the "hallway" track because the way to access it was to linger in the hallways, outside the official structure of the conference, and just talk to people.
However, at PyCon and many other conferences, this unofficial track is now much more of an integrated, official part of the program. In particular, open spaces are not only a more official hallway track, they are considerably better than the historical "hallway" experience, because these ad-hoc gatherings can be convened with a prepared topic and potentially a loose structure to facilitate productive discussion.
With open spaces, sessions can have an agenda and so conversations are easier to start. Rooms are provided, which is more useful than you might think; literally hanging out in a hallway is actually surprisingly disruptive to speakers and attendees at talks; us nerds tend to get pretty loud and can be quite audible even through a slightly-cracked door, so avail yourself of these rooms and don't be a disruptive jerk outside somebody's talk.
Consult the open space board, and put up your own proposed sessions. Post them as early as you can, to maximize the chance that they will get noticed. Post them on social media, using the conference's official hashtag, and ask any interested folks you bump into to help boost it.1
Remember that open spaces are not talks. If you want to give a mini-lecture on a topic and you can find interested folks you could do that, but the format lends itself to more peer-to-peer, roundtable-style interactions. Among other things, this means that, unlike proposing a talk, where you should be an expert on the topic that you are proposing, you can suggest open spaces where you are curious - but ignorant - about something, in the hopes that some experts will show up and you can listen to their discussion.
Be prepared for this to fail; there's a lot going on and it's always possible that nobody will notice your session. Again, maximize your chances by posting it as early as you can and promoting it, but be prepared to just have a free 30 minutes to check your email. Sometimes that's just how it goes. The corollary here is to balance proposing your own spaces with attending others'. After all, if someone else proposed it, you know at least one other person is gonna be there.
Take Care of Your Body
Conferences can be surprisingly high-intensity physical activities. It's not a marathon, but you will be walking quickly from one end of a large convention center to another, potentially somewhat anxiously.
Hydrate, hydrate, hydrate. Bring a water bottle, and have it with you at all times. It might be helpful to set repeating timers on your phone to drink water, since it can be easy to forget in the middle of engaging conversations. If you take advantage of the hallway track as much as you should, you will talk more than you expect; talking expels water from your body. All that aforementioned walking might make you sweat a bit more than you realize.
Hydrate.
More generally, pay attention to what you are eating and drinking. Conference food isn't always the best, and in a new city you might be tempted to load up on big meals and junk food. You should enjoy yourself and experience the local cuisine, but do it intentionally. While you enjoy the local fare, do so in whatever moderation works best for you. Similarly for boozy night-time socializing. Nothing stings quite as much as missing a morning of talks because you've got a hangover or a migraine.
This is worth emphasizing because in the enthusiasm of an exciting conference experience, it's easy to lose track and overdo it.
Meet Both New And Old Friends: Plan Your Socializing
A lot of the advice above is mostly for first-time or new-ish conferencegoers, but this one might be more useful for the old heads. As we build up a long-time clique of conference friends, it's easy to get a bit insular and lose out on one of the bits of magic of such an event: meeting new folks and hearing new perspectives.
While open spaces can address this a little bit, there's one additional thing I've started doing in the last few years: dinners are for old friends, but lunches are for new ones. At least half of the days I'm there, I try to go to a new table with new folks that I haven't seen before, and strike up a conversation. I even have a little canned icebreaker prompt, which I would suggest to others as well, because it's worked pretty nicely in past years: "what is one fun thing you have done with Python recently?"2.
Given that I have a pretty big crowd of old friends at these things, I actually tend to avoid old friends at lunch, since it's so easy to get into multi-hour conversations, and meeting new folks in a big group can be intimidating. Lunches are the time I carve out to try and meet new folks.
I'll See You There
I hope some of these tips were helpful, and I am looking forward to seeing some of you at PyCon US 2024!
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!
-
In PyCon2024's case, #PyConUS on Mastodon is probably the way to go. Note, also, that it is #PyConUS and not #pyconus, which is much less legible for users of screen-readers. ↩
-
Obviously that is specific to this conference. At the O'Reilly Software Architecture conference, my prompt was "What is software architecture?" which had some really fascinating answers. ↩
15 May 2024 9:12am GMT
07 May 2024
Planet Twisted
Glyph Lefkowitz: Hope
Humans are pattern-matching machines. As a species, it is our superpower. To summarize the core of my own epistemic philosophy, here is a brief list of the activities in the core main-loop of a human being:
- stuff happens to us
- we look for patterns in the stuff
- we weave those patterns into narratives
- we turn the narratives into models of the world
- we predict what will happen based on those models
- we do stuff based on those predictions
- based on the stuff we did, more stuff happens to us; return to step 1
While this ability lets humans do lots of great stuff, like math and physics and agriculture and so on, we can just as easily build bad stories and bad models. We can easily trick ourselves into thinking that our predictive abilities are more powerful than they are.
The existence of magic-seeming levels of prediction in fields like chemistry and physics and statistics, in addition to the practical usefulness of rough estimates and heuristics in daily life, itself easily creates a misleading pattern. "I see all these patterns and make all these predictions and I'm right a lot of the time, so if I just kind of wing it and predict some more stuff, I'll also be right about that stuff."
This leaves us very vulnerable to things like mean world syndrome. Mean world syndrome itself is specifically about danger, but I believe it is a manifestation of an even broader phenomenon which I would term "the apophenia of despair".
Confirmation bias is an inherent part of human cognition, but the internet has turbocharged it. Humans have immediate access to more information than we ever had in the past. In order to cope with that information, we have also built ways to filter that information. Even disregarding things like algorithmic engagement maximization and social media filter bubbles, the simple fact that when you search for things, you are a lot more likely to find the thing that you're searching for than to find arguments refuting it, can provide a very strong sense that you're right about whatever you're researching.
All of this is to say: if you decide that something in the world is getting worse, you can very easily convince yourself that it is getting much, much worse, very rapidly. Especially because there are things which are, unambiguously, getting worse.
However, Pollyanna-ism is just the same phenomenon in reverse and I don't want to engage in that. The ice sheets really are melting, globally, fascism really is on the rise. I am not here to deny reality or to cherry pick a bunch of statistics to lull people into complacency.
While dwelling on a negative reality is bad, I also believe that in the face of constant denial, it is sometimes necessary to simply emphasize those realities, however unpleasant they may be. Distinguishing between unhelpful rumination on negativity and repetition of an unfortunate but important truth to correct popular perception is subjective and subtle, but the difference is nevertheless important.
As our ability to acquire information about things getting worse has grown, our ability to affect those things has not. Knowledge is not power; power is power, and most of us don't have a lot of it, so we need to be strategic in the way that we deploy our limited political capital and personal energy.
Overexposure to negative news can cause symptoms of depression; depressed people have reduced executive function and find it harder to do stuff. One of the most effective interventions against this general feeling of malaise? Hope. Not "hope" in the sense of wishing. As this article in the American Psychological Association's "Monitor on Psychology" puts it:
"We often use the word 'hope' in place of wishing, like you hope it rains today or you hope someone's well," said Chan Hellman, PhD, a professor of psychology and founding director of the Hope Research Center at the University of Oklahoma. "But wishing is passive toward a goal, and hope is about taking action toward it."
Here, finally, I can get around to my point.
If you have an audience, and you have some negative thoughts about some social trend, talking about it in a way which is vague and non-actionable is potentially quite harmful. If you are doing this, you are engaged in the political project of robbing a large number of people of hope. You are saying that the best should have less conviction, while the worst will surely remain just as full of passionate intensity.
I do not mean to say that it is unacceptable to ever publicly share thoughts of sadness, or even hopelessness. If everyone in public is forced to always put on a plastic smile and pretend that everything is going to be okay if we have grit and determination, then we have an Instagram culture of fake highlight reels where anyone having their own struggles with hopelessness will just feel even worse in their isolation. I certainly posted my way through my fair share of pretty bleak mental health issues during the worst of the pandemic.
But we should recognize that while sadness is a feeling, hopelessness is a problem, a bad reaction to that feeling, one that needs to be addressed if we are going to collectively dig ourselves out of the problem that creates the sadness in the first place. We may not be able to conjure hope all the time, but we should always be trying.
When we try to address these feelings, as I said earlier, Pollyanna-ism doesn't help. The antidote to hopelessness is not optimism, but curiosity. If you have a strong thought like "people these days just don't care about other people1", yelling "YES THEY DO" at yourself (or worse, your audience) is unlikely to make much of a change, and certainly not likely to be convincing to an audience. Instead, you could ask yourself some questions, and use them for a jumping-off point for some research:
- Why do I think this - is the problem in my perception, or in the world?
- If there is a problem in my perception, is this a common misperception? If it's common, what is leading to it being common? If it's unique to me, what sort of work do I need to do to correct it?
- If the problem is real, what are its causes? Is there anything that I, or my audience, could do to address those causes?
The answers to these questions also inform step 6 of the process I outlined above: the doing stuff part of the process.
At some level, all communication is persuasive communication. Everything you say that another person might hear is part of a sprachspiel where you are attempting to achieve something. There is always an implied call to action; even "do nothing, accept the status quo" is itself an action. My call to action right now is to ask you to never make your call to action "you should feel bad, and you should feel bad about feeling bad". When you communicate in public, your words have power.
Use that power for good.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! Special thanks also to Cassandra Granade, who provided some editorial feedback on this post; any errors, of course, remain my own.
-
I should also note that vague sentiments of this form, "things used to be better, now they're getting worse", are at their core a reactionary yearning for a prelapsarian past, which is both not a good look and also often wrong in a very common way. Complaining about how "people" are getting worse is a very short journey away from complaining about kids these days, which has a long and embarrassing history of being comprehensively incorrect in every era. ↩
07 May 2024 10:26pm GMT
30 Mar 2024
Planet Twisted
Glyph Lefkowitz: Software Needs To Be More Expensive
The Cost of Coffee
One of the ideas that James Hoffmann - probably the most influential… influencer in the coffee industry - works hard to popularize is that "coffee needs to be more expensive".
The coffee industry is famously exploitative. Despite relatively thin margins for independent café owners1, there is no shortage of horrific stories about labor exploitation and even slavery in the coffee supply chain.
To summarize a point that Mr. Hoffmann has made over a quite long series of videos and interviews2, some of this can be fixed by regulatory efforts. Enforcement of supply chain policies both by manufacturers and governments can help spot and avoid this type of exploitation. Some of it can be fixed by discernment on the part of consumers. You can try to buy fair-trade coffee and avoid brands that you know have problematic supply-chain histories.
Ultimately, though, even if there is perfect, universal, zero-cost enforcement of supply chain integrity… consumers still have to be willing to, you know, pay more for the coffee. It costs more to pay wages than to have slaves.
The Price of Software
The problem with the coffee supply chain deserves your attention in its own right. I don't mean to claim that the problems of open source maintainers are as severe as those of literal child slaves. But the principle is the same.
Every tech company uses huge amounts of open source software, which they get for free.
I do not want to argue that this is straightforwardly exploitation. There is a complex bargain here for the open source maintainers: if you create open source software, you can get a job more easily. If you create open source infrastructure, you can make choices about the architecture of your projects which are more long-term sustainable from a technology perspective, but would be harder to justify on a shorter-term commercial development schedule. You can collaborate with a wider group across the industry. You can build your personal brand.
But, in light of the recent xz Utils / SSH backdoor scandal, it is clear that while the bargain may not be entirely one-sided, it is not symmetrical, and significant bad consequences may result, both for the maintainers themselves and for society.
To fix this problem, open source software needs to get more expensive.
A big part of the appeal of open source is its implicit permission structure, which derives both from its zero up-front cost and its zero marginal cost.
The zero up-front cost means that you can just get it to try it out. In many companies, individual software developers do not have the authority to write a purchase order, or even a corporate credit card for small expenses.
If you are a software engineer and you need a new development tool or a new library that you want to purchase for work, it can be a maze of bureaucratic confusion in order to get that approved. It might be possible, but you are likely to get strange looks, and someone, probably your manager, is quite likely to say "isn't there a free option for this?" At worst, it might just be impossible.
This makes sense. Dealing with purchase orders and reimbursement requests is annoying, and it only feels worth the overhead if you're dealing with a large enough block of functionality that it is worth it for an entire team, or better yet an org, to adopt. This means that most of the purchasing is done by management types or "architects", who are empowered to make decisions for larger groups.
When individual engineers need to solve a problem, they look at open source libraries and tools specifically because it's quick and easy to incorporate them in a pull request, where a proprietary solution might be tedious and expensive.
That's assuming that a proprietary solution to your problem even exists. In the infrastructure sector of the software economy, squeezed between free options from your operating system provider (Apple, Microsoft, maybe Amazon if you're in the cloud) and open source developers, small commercial options have been marginalized or outright destroyed by zero-cost alternatives, for exactly this reason.
If the zero up-front cost is a paperwork-reduction benefit, then the zero marginal cost is almost a requirement. One of the perennial complaints of open source maintainers is that companies take our stuff, build it into a product, and then make a zillion dollars and give us nothing. It seems fair that they'd give us some kind of royalty, right? Some tiny fraction of that windfall? But once you realize that individual developers don't have the authority to put $50 on a corporate card to buy a tool, they super don't have the authority to make a technical decision that encumbers the intellectual property of their entire core product to give some fraction of the company's revenue away to a third party. Structurally, there's no way that this will ever happen.
Despite these impediments, keeping those dependencies maintained does cost money.
Some Solutions Already Exist
There are various official channels developing to help support the maintenance of critical infrastructure. If you work at a big company, you should probably have a corporate Tidelift subscription. Maybe ask your employer about that.
But, as they will readily admit, there are a LOT of projects that even Tidelift cannot cover, with no official commercial support and no practical way to offer it in the short term. That leaves individual maintainers, like yours truly, trying to figure out how to maintain their projects, either by making a living from them or by incorporating them into our jobs somehow: people with a Ko-Fi or a Patreon, or maybe just an Amazon wish-list to let you say "thanks" for occasional maintenance work.
Most importantly, there's no path for them to transition to actually making a living from their maintenance work. For most maintainers, Tidelift pays a sub-hobbyist amount of money, and even setting it up (and GitHub Sponsors, etc) is a huge hassle. So even making the transition from "no income" to "a little bit of side-hustle income" may be prohibitively bureaucratic.
Let's take myself as an example. If you're a developer who is nominally profiting from my infrastructure work in your own career, there is a very strong chance that you are also a contributor to the open source commons, and perhaps you've even contributed more to that commons than I have, and more to my career success than I have to yours. I can ask you to pay me³, but really you shouldn't be paying me; your employer should.
What To Do Now: Make It Easy To Just Pay Money
So if we just need to give open source maintainers more money, and it's really the employers who ought to be giving it, then what can we do?
Let's not make it complicated. Employers should Just Give Maintainers Money. Let's call it the "JGMM" benefit.
Specifically, every employer of software engineers should immediately institute the following benefits program: each software engineer should have a monthly discretionary budget of $50 to distribute to whatever open source dependency developers they want, in whatever way they see fit. Venmo, Patreon, PayPal, Kickstarter, GitHub Sponsors, whatever, it doesn't matter. Put it on a corp card, put the project name on the line item, and forget about it. It's only for open source maintenance, but it's a small enough amount that you don't need intense levels of approval-gating process. You can do it on the honor system.
This preserves zero up-front cost. To start using a dependency, you still just use it⁴. It also preserves zero marginal cost: your developers choose which dependencies to support based on perceived need and popularity. It's a fixed overhead which doesn't scale with revenue or with profit, just headcount.
Because the whole point here is to match the easy, implicit, no-process, no-controls way in which dependencies can be added in most companies. It should be easier to pay these small tips than it is to use the software in the first place.
This sub-1% overhead to your staffing costs will massively de-risk the open source projects you use. By leaving the discretion up to your engineers, you will end up supporting those projects which are really struggling and which your executives won't even hear about until they end up on the news. Some of it will go to projects that you don't use, things that your engineers find fascinating and want to use one day but don't yet depend upon, but that's fine too. Consider it an extremely cheap, efficient R&D expense.
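To put a number on that "sub-1%" claim, here's a quick back-of-the-envelope calculation. The $50/month budget comes from the proposal above; the fully-loaded annual cost per engineer is an assumption I'm picking purely for illustration:

```python
# Back-of-the-envelope check on the "sub-1% overhead" claim.
# MONTHLY_BUDGET is the figure from the proposal; the fully-loaded
# engineer cost below is an illustrative assumption, not a quoted figure.

MONTHLY_BUDGET = 50              # dollars per engineer, per month
FULLY_LOADED_ANNUAL = 250_000    # assumed salary + benefits + overhead

annual_benefit = MONTHLY_BUDGET * 12
overhead = annual_benefit / FULLY_LOADED_ANNUAL

print(f"${annual_benefit}/year = {overhead:.2%} of staffing cost")
# → $600/year = 0.24% of staffing cost
```

Even if your fully-loaded cost is half that assumption, the overhead is still well under one percent.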
A lot of the options for developers to support open source infrastructure are already tax-deductible, so if they contribute to something like one of the PSF's fiscal sponsorees, it's probably even more tax-advantaged than a regular business expense.
I also strongly suspect that if you're one of the first employers to do this, you can get a round of really positive PR out of the tech press, and attract engineers, so, the race is on. I don't really count as the "tech press" but nevertheless drop me a line to let me know if your company ends up doing this so I can shout you out.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics such as "How do I figure out which open source projects to give money to?".
1. I don't have time to get into the margins for Starbucks and friends, their relationship with labor, economies of scale, etc. ↩
2. While this is a theme that pervades much of his work, the only place I can easily find where he says it in so many words is on a podcast that sometimes also promotes right-wing weirdos and pseudo-scientific quacks spreading misinformation about autism and ADHD. So, I obviously don't want to link to them; you'll have to take my word for it. ↩
3. And I will, since, as I just recently wrote about, I need to make sure that people are at least aware of the option. ↩
4. Pending whatever legal approval program you have in place to vet the license. You do have a nice streamlined legal approvals process, right? You're not just putting WTFPL software into production, are you? ↩
30 Mar 2024 11:00pm GMT
29 Mar 2024
Glyph Lefkowitz: The Hat
This year I will be going to two conferences: PyCon 2024, and North Bay Python 2024.
At PyCon, I will be promoting my open source work and my writing on this blog. As I'm not giving a talk this year, I am looking forward to organizing and participating in some open spaces about topics like software architecture, open source sustainability, framework interoperability, and event-driven programming.
At North Bay Python, though, I will either be:
- doing a lot more of that, or
- looking for new full-time employment, pausing the Patreon, and winding down this experiment.
If you'd like to see me doing the former, now would be a great time to sign up to my Patreon to support the continuation of my independent open source work and writing.
The Bad News
It has been nearly a year since I first mentioned that I have a Patreon on this blog. That year has been a busy one, with consulting work and personal stuff consuming more than enough time that I have never been full time on independent coding & blogging. As such, I've focused more on small infrastructure projects and less on user-facing apps than I'd like, but I have spent the plurality of my time on it.
For that to continue, let alone increase, this work needs to, at the very least, pay for health insurance. At my current consulting rate, a conservative estimate based on some time tracking is that I am currently doing this work at something like a 98.5% discount. I do love doing it! I would be happy to continue doing it at a significant discount! But "significant" and "catastrophic" are different thresholds.
As I have said previously, this is an appeal to support my independent work; not to support me. I will be fine; what you will be voting for with your wallet is not my well-being but a choice about where I spend my time.
Hiding The Hat
When people ask me what I do these days, I sometimes struggle to explain. It is confusing. I might say I have a Patreon, I do a combination of independent work and consulting, or if I'm feeling particularly sarcastic I might say I'm an ✨influencer✨. But recently I saw the very inspiring Matt Ricardo describing the way he thinks about his own Patreon, and I realized what I am actually trying to do, which is software development busking.
Previously, when I've been thinking about making this "okay, it's been a year of Patreon, let's get serious now" post, I've been thinking about adding more reward products to my Patreon, trying to give people better value for their money before asking for anything more, trying to finish more projects to make a better sales pitch, maybe making merch available for sale, and so on. So aside from irregular weekly posts on Friday and acknowledgments sections at the bottom of blog posts, I've avoided mentioning this while I think about adding more private rewards.
But busking is a public performance, and if you want to support my work then it is that public aspect that you presumably want to support. And thus, an important part of the busking process is to actually pass the hat at the end. The people who don't chip in still get to see the performance, but everyone needs to know that they can contribute if they liked it.¹
I'm going to try to stop hiding the metaphorical hat. I still don't want to overdo it, but I will trust that you'll tell me if these reminders get annoying. For my part today, in addition to this post, I'm opening up a new $10 tier on Patreon for people who want to provide a higher level of support, and officially acknowledging the rewards that I already provide.
What's The Deal?
So, what would you be supporting?
What You Give (The Public Part)
- I have tended to focus on my software, and there has been a lot of it! You'd be supporting me writing libraries and applications and build infrastructure to help others do the same with Python, as well as sometimes maintaining existing libraries (like the Twisted ecosystem libraries). If I can get together enough support to do more than barely support myself, I'd really like to be able to pay others to help with aspects of applications that I would struggle to complete myself, like accessibility or security audits.
- I also do quite a bit of writing though, about software and about other things. To make it easier to see what sort of writing I'm talking about, I've collected just the stuff that I've written during the period where I have had some patrons, under the supported tag.
- Per my earlier sarcastic comment about being an "influencer", I also do quite a bit of posting on Mastodon about software and the tech industry.
What You Get (Just For Patrons)
As I said above, I will be keeping member benefits somewhat minimal.
- I will add you to SponCom so that your name will be embedded in commit messages like this one on a cadence appropriate to your support level.
- You will get access to my private Patreon posts where I discuss what I've been working on. As one of my existing patrons put it:
I figure I'm getting pretty good return on it, getting not only targeted project tracking, but also a peek inside your head when it comes to Sores Business Development. Maybe some of that stuff will rub off on me :)
- This is a somewhat vague and noncommittal benefit, but it might be the best one: if you are interested in the various different projects that I am doing, you can help me prioritize! I have a lot of things going on. What would you prefer that I focus on? You can make suggestions in the comments of Patreon posts, which I pay a lot more attention to than other feedback channels.
- In the near future² I am also planning to start doing some "office hours" type live-streaming, where I will take audience questions and discuss software design topics, or maybe do some live development to showcase my process and the tools I use. When I figure out the mechanics of this, I plan to add some rewards to the existing tiers to select topics or problems for me to work on there.
If that sounds like a good deal to you, please sign up now. If you're already supporting me, sharing this and giving a brief testimonial of something good I've done would be really helpful. GitHub is not an algorithmic platform like YouTube; despite my occasional jokey "remember to like and subscribe", nobody is getting recommended DBXS, or Fritter, or Twisted, or Automat, or this blog unless someone goes out and shares it.
1. A year into this, after what feels like endlessly repeating this sales pitch to the point of obnoxiousness, I still routinely interact with people who do not realize that I have a Patreon at all. ↩
2. Not quite comfortable putting this on the official Patreon itemized inventory of rewards yet, but I do plan to add it once I've managed to stream for a couple of weeks in a row. ↩
29 Mar 2024 11:56pm GMT
Glyph Lefkowitz: DBXS 0.1.0
New Release
Yesterday I published a new release of DBXS for you all. It's still ZeroVer, but it has graduated from double-ZeroVer as this is the first nonzero minor version.
More to the point, though, the meaning of that version increment is that this version introduces some critical features that I think most people would need in order to give it a spin on a hobby project.
What's New
- It has support for MySQL and PostgreSQL using native asyncio drivers, which means you don't need to take a Twisted dependency in production.
- While Twisted is still used for some of the testing internals, Deferred is no longer exposed anywhere in the public API, either; your tests can happily pretend that they're doing asyncio, as long as they can run against SQLite.
- There is a new repository convenience function that automatically wires together multiple accessors and transaction discipline. Have a look at the docstring for a sense of how to use it.
- Several papercuts, like confusing error messages when messing up query result handling, and lack of proper handling of default arguments in access protocols, are now addressed.
It's A Good Time To Contribute!
If you've been looking for an open source project to try your hand at contributing to, DBXS might be a great opportunity, for a few reasons:
- The team is quite small (just me, right now!), so it's easy to get involved.
- It's quite generally useful, so there's a potential for an audience, but right now it doesn't really have any production users; there's still time to change things without a lot of ceremony.
- Unlike many other small starter projects, it's got a test suite with 100% coverage, so you can contribute with confidence that you're not breaking anything.
- There's not that much code (a bit over 2 thousand SLOC), so it's not hard to get your head around.
- There are a few obvious next steps for improvement, which I've filed as issues if you want to pick one up.
Share and enjoy, and please let me know if you do something fun with it.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics such as "How do I shot SQL?".
29 Mar 2024 10:19pm GMT