17 Feb 2026

Planet Python

PyCoder’s Weekly: Issue #722: Itertools, Circular Imports, Mock, and More (Feb. 17, 2026)

#722 - FEBRUARY 17, 2026
View in Browser »

The PyCoder’s Weekly Logo


5 Essential Itertools for Data Science

Learn 5 essential itertools methods to eliminate manual feature engineering waste. Replace nested loops with systematic functions for interactions, polynomial features, and categorical combinations.
CODECUT.AI • Shared by Khuyen Tran
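The combinations helper is the workhorse for interaction features of this kind. A minimal sketch (the feature names and values are invented here, not taken from the article):

```python
from itertools import combinations

# Hypothetical feature dict; the article's actual examples may differ.
features = {"age": 30, "income": 50_000, "tenure": 4}

# Pairwise interaction features, replacing a nested double loop:
interactions = {
    f"{a}*{b}": features[a] * features[b]
    for a, b in combinations(features, 2)
}
print(interactions)
# {'age*income': 1500000, 'age*tenure': 120, 'income*tenure': 200000}
```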

A Fun Python Puzzle With Circular Imports

A deep inspection of just what happens when you write from ... import ... and how that impacts circular import references in your code.
CHRIS SIEBENMANN
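The failure mode in question can be reproduced in a few lines. A self-contained sketch (module names invented) that writes two mutually importing modules to a temp directory and triggers the classic error:

```python
import sys
import tempfile
import textwrap
from pathlib import Path

pkg = Path(tempfile.mkdtemp())
(pkg / "mod_a.py").write_text(textwrap.dedent("""
    from mod_b import greet   # triggers the import of mod_b...
    def name():
        return "a"
"""))
(pkg / "mod_b.py").write_text(textwrap.dedent("""
    from mod_a import name    # ...which re-enters mod_a before `name` exists
    def greet():
        return "hello " + name()
"""))
sys.path.insert(0, str(pkg))
try:
    import mod_a
except ImportError as e:
    error = e
    print(error)  # cannot import name 'name' from partially initialized module 'mod_a' ...
```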

B2B MCP Auth Support


Your users are asking if they can connect their AI agent to your product, but you want to make sure they can do it safely and securely. PropelAuth makes that possible →
PROPELAUTH sponsor

Improving Your Tests With the Python Mock Object Library

Master Python testing with unittest.mock. Create mock objects to tame complex logic and unpredictable dependencies.
REAL PYTHON course
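As a taste of what the course covers, here is a minimal sketch of a Mock standing in for an unpredictable dependency (the exchange-rate fetcher is invented for illustration):

```python
from unittest.mock import Mock

# Replace a hypothetical external API client with a Mock that
# returns a fixed value, making the test deterministic.
fetch_rate = Mock(return_value=1.25)

def convert(amount, fetch=fetch_rate):
    return amount * fetch("USD", "EUR")

assert convert(100) == 125.0
# The mock also records how it was called:
fetch_rate.assert_called_once_with("USD", "EUR")
```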

Python 3.15.0 Alpha 6 Released

CPYTHON DEV BLOG

Python Jobs

Python + AI Content Specialist (Anywhere)

Real Python

More Python Jobs >>>

Articles & Tutorials

Introducing the PSF Community Partner Program

The Python Software Foundation has announced the new Community Partner Program, a way for the PSF to support Python events and initiatives with non-financial support such as promotion and branding.
PYTHON SOFTWARE FOUNDATION

Better Python Tests With inline-snapshot

inline-snapshot lets you quickly and easily write rigorous tests that automatically update themselves. It combines nicely with dirty-equals to handle dynamic data that's a pain to normalize.
PYDANTIC.DEV • Shared by Alex Hall

See Why Your CI Is Slow


Your GitHub Actions workflows are burning time and money, but you're flying blind. Depot's new Analytics shows exactly where your CI spends resources. Track trends, find bottlenecks, optimize across your org. Get visibility with Depot →
DEPOT sponsor

Django's Test Runner Is Underrated

Loopwerk never made the switch from unittest to pytest for their Django projects. And after years of building and maintaining Django applications, they still don't feel like they're missing out.
LOOPWERK

Webmentions With Batteries Included

A webmention is a W3 standard for one post to refer to another and interlink. This article introduces you to a Python library that helps you implement this feature on your site.
FABIO MANGANIELLO

Python 3.12 vs 3.13 vs 3.14

Compare Python 3.12, 3.13, and 3.14: free-threading, JIT, t-strings, performance, and library changes. Which version should you actually use in 2026?
MATHEUS

Django Steering Council 2025 Year in Review

Want to know what is happening in the world of the Django project? This post talks about all the things the Django Steering Council did in 2025.
FRANK WILES

What Exactly Is the Zen of Python?

The Zen of Python is a collection of 19 guiding principles for writing good Python code. Learn its history, meaning, and hidden jokes.
REAL PYTHON

Quiz: What Exactly Is the Zen of Python?

REAL PYTHON

Open Source AI We Use to Work on Wagtail

One of the core maintainers at Wagtail CMS shares which open source models have been working best for the project so far.
WAGTAIL.ORG • Shared by Meagen Voss

Need Switch-Case in Python? It's Not Match-Case!

Python's match-case is not a switch-case statement. If you need switch-case, you can often use a dictionary instead.
TREY HUNNER
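The dictionary idiom Trey describes looks roughly like this (handler names invented for illustration):

```python
# Dictionary dispatch in place of switch-case.
def handle_get():
    return "fetching"

def handle_post():
    return "creating"

handlers = {"GET": handle_get, "POST": handle_post}

def dispatch(method):
    # dict.get with a default plays the role of switch's `default:` arm
    return handlers.get(method, lambda: "unsupported")()

print(dispatch("GET"))     # fetching
print(dispatch("DELETE"))  # unsupported
```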

Python Time & Space Complexity Reference

Open-source reference documenting big-O time and space complexity for Python built-in and stdlib operations.
PYTHONCOMPLEXITY.COM • Shared by Heikki Toivonen
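As one example of the kind of fact such a reference documents: membership testing is O(n) for lists but O(1) on average for sets, which a quick timing makes visible:

```python
import timeit

data = list(range(100_000))
as_set = set(data)

# Worst case for the list: the element we look for is at the end.
list_time = timeit.timeit(lambda: 99_999 in data, number=100)
set_time = timeit.timeit(lambda: 99_999 in as_set, number=100)
print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```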

Projects & Code

pycaniuse: Query caniuse.com From the Terminal

GITHUB.COM/VISESHRP • Shared by Visesh Prasad

silkworm-rs: Free-Threaded Compatible Async Web Scraper

GITHUB.COM/BITINGSNAKES • Shared by Yehor Smoliakov

django-deadcode: Tracks URLs, Templates, and Django Views

GITHUB.COM/NANOREPUBLICA

oxyde: Type-Safe, Pydantic-Centric Async ORM

GITHUB.COM/MR-FATALYST

Skylos: Quiet Static Analysis + Optional Agent Mode

GITHUB.COM/DURIANTACO • Shared by aaron oh

Events

Weekly Real Python Office Hours Q&A (Virtual)

February 18, 2026
REALPYTHON.COM

PyData Bristol Meetup

February 19, 2026
MEETUP.COM

PyLadies Dublin

February 19, 2026
PYLADIES.COM

PyCon Namibia 2026

February 20 to February 27, 2026
PYCON.ORG

Chattanooga Python User Group

February 20 to February 21, 2026
MEETUP.COM

PyCon Mini Shizuoka 2026

February 21 to February 22, 2026
PYCON.JP


Happy Pythoning!
This was PyCoder's Weekly Issue #722.




17 Feb 2026 7:30pm GMT

Real Python: Write Python Docstrings Effectively

Writing clear, consistent docstrings in Python helps others understand your code's purpose, parameters, and outputs. In this video course, you'll learn about best practices, standard formats, and common pitfalls to avoid, ensuring your documentation is accessible to users and tools alike.

By the end of this video course, you'll understand that:



17 Feb 2026 2:00pm GMT

Python Software Foundation: Join the Python Security Response Team!

Thanks to the work of the Security Developer-in-Residence Seth Larson, the Python Security Response Team (PSRT) now has an approved public governance document (PEP 811). Following the new governance structure the PSRT now publishes a public list of members, has documented responsibilities for members and admins, and a defined process for onboarding and offboarding members to balance the needs of security and sustainability. The document also clarifies the relationship between the Python Steering Council and the PSRT.

And this new onboarding process is already working! The PSF Infrastructure Engineer, Jacob Coffee, has just joined the PSRT as the first new non-"Release Manager" member since Seth joined the PSRT in 2023. We expect new members to join, further bolstering the sustainability of security work for the Python programming language.

Thanks to Alpha-Omega for their support of Python ecosystem security by sponsoring Seth's work as the Security Developer-in-Residence at the Python Software Foundation.

What is the Python Security Response Team?

Security doesn't happen by accident: it's thanks to the work of volunteers and paid Python Software Foundation staff on the Python Security Response Team, who triage and coordinate vulnerability reports and remediations, keeping all Python users safe. Just last year the PSRT published 16 vulnerability advisories for CPython and pip, the most in a single year to date!

And the PSRT usually can't do this work alone: coordinators are encouraged to involve maintainers of, and experts on, the affected projects and submodules. Involving the experts directly in the remediation process ensures fixes adhere to existing API conventions and threat models, are maintainable long-term, and have minimal impact on existing use-cases.

Sometimes the PSRT even coordinates with other open source projects, to avoid catching the Python ecosystem off-guard when publishing a vulnerability advisory that affects multiple other projects. The most recent example of this is PyPI's ZIP archive differential attack mitigation.

This work deserves recognition and celebration just like contributions to source code and documentation. Seth and Jacob are developing further improvements to workflows involving GitHub Security Advisories to record the reporter, coordinator, and remediation developers and reviewers in CVE and OSV records, so that everyone involved in this otherwise private contribution to open source projects is properly thanked.

How can I join the Python Security Response Team?

Maybe you've read all this and are interested in directly helping the Python programming language be more secure! The process is similar to the Core Team nomination process: you need an existing PSRT member to nominate you, and your nomination must receive at least ⅔ positive votes from existing PSRT members.

You do not need to be a core developer, team member, or triager to be a member of the Python Security Response Team. Anyone who is known and highly trusted within the Python community, has security expertise, and has time to volunteer (or time donated through their employer) would make a good candidate for the PSRT. Please note that all PSRT team members have documented responsibilities and are expected to contribute meaningfully to the remediation of vulnerabilities.

Being a member of the PSRT is not required to be notified of vulnerabilities, and membership shouldn't be sought just to receive "early notification" of vulnerabilities affecting CPython and pip. The Python Software Foundation is a CVE Numbering Authority and publishes CVE and OSV records with up-to-date information about vulnerabilities affecting CPython and pip.


17 Feb 2026 2:30am GMT

16 Feb 2026


Chris Warrick: I Wrote YetAnotherBlogGenerator

Writing a static site generator is a developer rite of passage. For the past 13 years, this blog was generated using Nikola. This week, I finished implementing my own generator, the unoriginally named YetAnotherBlogGenerator.

Why would I do that? Why would I use C# for it? And how fast is it? Continue reading to find out.

OK, but why?

You might have noticed I'm not happy with the Python packaging ecosystem. But the language itself is no longer fun for me to code in either. It is especially not fun to maintain projects in. Elementary quality-of-life features get bogged down in months of discussions and design-by-committee. At the same time, there's a new release every year, full of removed and deprecated features. A lot of churn, without much benefit. I just don't feel like doing it anymore.

Python is praised for being fast to develop in. That's certainly true, but a good high-level statically-typed language can yield similar development speed with more correctness from day one. For example, I coded an entire table-of-contents-sidebar feature in one evening (and one more evening of CSS wrangling to make it look good). This feature extracts headers from either the Markdown AST or the HTML fragment. I could do it in Python, but I'd need to jump through hoops to get Python-Markdown to output headings with IDs. In C#, introspecting what a class can do is easier thanks to great IDE support and much less dynamic magic happening at runtime. There are also decompiler tools that make it easy to look under the hood and see what a library is doing.

Writing a static site generator is also a learning experience. A competent SSG needs to ingest content in various formats (as nobody wants to write blog posts in HTML by hand) and generate HTML (usually from templates) and XML (which you could, in theory, do from templates, but since XML parsers are not at all lenient, you don't want to). Image processing to generate thumbnails is needed too. And to generate correct RSS feeds, you need to parse HTML to rewrite links. The list of small-but-useful things goes on.
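For the link-rewriting step, the standard library already does the hard part: urllib.parse.urljoin resolves relative hrefs against a post's URL. A sketch with invented URLs; a real generator would apply this while walking the parsed HTML:

```python
from urllib.parse import urljoin

# Feed readers need absolute URLs, so relative hrefs are resolved
# against the post's canonical URL. Absolute hrefs pass through.
base = "https://example.com/posts/hello-world/"
for href in ["../about/", "img/cover.png", "https://other.site/page"]:
    print(urljoin(base, href))
# https://example.com/posts/about/
# https://example.com/posts/hello-world/img/cover.png
# https://other.site/page
```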

Is C#/.NET a viable technology stack for a static site generator?

C#/.NET is certainly not the most popular technology stack for static site generators. JamStack.org have gathered a list of 377 SSGs. Grouping by language, there are 154 generators written in JavaScript or TypeScript, 55 generators written in Python, and 28 written in PHP of all languages. C#/.NET is in sixth place with 13 (not including YABG; I'm probably not submitting it).

However, it is a pretty good choice. Language-level support for concurrency with async/await (based on a thread pool) and JIT compilation help to make things fast. But it is still a high-level, object-oriented language where you don't need to manually manage memory (hi Rustaceans!).

The library ecosystem is solid too. There are plenty of good libraries for working with data serialization formats: CsvHelper, YamlDotNet, Microsoft.Data.Sqlite, and the built-in System.Text.Json and System.Xml.Linq. Markdig handles turning Markdown into HTML. Fluid is an excellent templating library that implements the Liquid templating language. HtmlAgilityPack is solid for manipulating HTML, and Magick.NET wraps the ImageMagick library.

<PackageReference Include="CsvHelper" Version="33.1.0"/>
<PackageReference Include="Fluid.Core" Version="2.31.0"/>
<PackageReference Include="Fluid.ViewEngine" Version="2.31.0"/>
<PackageReference Include="HtmlAgilityPack" Version="1.12.4"/>
<PackageReference Include="Magick.NET-Q8-AnyCPU" Version="14.10.2"/>
<PackageReference Include="Markdig" Version="0.45.0"/>
<PackageReference Include="Microsoft.Data.Sqlite" Version="10.0.3"/>
<PackageReference Include="Microsoft.Extensions.FileProviders.Physical" Version="10.0.3"/>
<PackageReference Include="Microsoft.Extensions.Logging.Console" Version="10.0.3"/>
<PackageReference Include="YamlDotNet" Version="16.3.0"/>

There's one major thing missing from the above list: code highlighting. There are a few highlighting libraries on NuGet, but I decided to stick with Pygments. I still need the Pygments stylesheets around since I'm not converting old reStructuredText posts to Markdown (I'm copying them as HTML directly from Nikola's cache), so using Pygments for new content keeps things consistent. Staying with Pygments means I still maintain a bit of Python code, but much less: 230 LoC in pygments_better_html and 89 in yabg_pygments_adapter, with just one third-party dependency. Calling a subprocess while rendering listings is slow, but it's a price worth paying.
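Calling Pygments from another language boils down to a subprocess invocation of the pygmentize CLI. A guarded sketch of the idea, not YABG's actual adapter:

```python
import shutil
import subprocess

# pygmentize reads source on stdin and emits an HTML fragment;
# -l picks the lexer, -f the formatter. Guarded so the sketch
# degrades gracefully when Pygments isn't installed.
if shutil.which("pygmentize"):
    result = subprocess.run(
        ["pygmentize", "-l", "python", "-f", "html"],
        input="print('hi')\n",
        capture_output=True, text=True, check=True,
    )
    html = result.stdout  # an HTML fragment, e.g. <div class="highlight">...
    print(html)
else:
    html = None  # Pygments not available
```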

Paid libraries in the .NET ecosystem

All the above libraries are open source (MIT, Apache 2.0, BSD-2-Clause). However, one well-known issue of the .NET ecosystem is the number of packages that suddenly become commercial. This trend was started by ImageSharp, a popular 2D image manipulation library. I could probably use it, since it's licensed to open-source projects under Apache 2.0, but I'd rather not. I initially tried SkiaSharp, but it has terrible image scaling algorithms, so I settled on Magick.NET.

Open-source sustainability is hard, maybe impossible. But I don't think transitioning from open-source to pay-for-commercial-use is the answer. In practice, many businesses just use the last free version or switch to a different library. I'd rather support open-source projects developed by volunteers in their spare time. They might not be perfect or always do exactly what I want, but I'm happy to contribute fixes and improve things for everyone. I will avoid proprietary or dual-licensed libraries, even for code that never leaves my computer. Some people complain when Microsoft creates a library that competes with a third-party open-source library (e.g. Microsoft.AspNetCore.OpenApi, which was built to replace Swashbuckle.AspNetCore), but I am okay with that, since libraries built or backed by large corporations (like Microsoft) tend to be better maintained.

But at least sometimes trash libraries take themselves out.

Is it fast?

One of the things that sets Nikola apart from other Python static site generators is that it only rebuilds files that need to be rebuilt. This does make Nikola fast when rebuilding things, but it comes at a cost: Nikola needs to track all dependencies very closely. Also, some features that are present in other SSGs are not easy to achieve in Nikola, because they would cause many pages to be rebuilt.

YetAnotherBlogGenerator has almost no caching. The only thing currently cached is code listings, since they're rendered using Pygments in a subprocess. Additionally, the image scaling service checks the file modification date to skip regenerating thumbnails if the source image hasn't changed. And yet, even if it rewrites everything, YABG finishes faster than Nikola when the site is fully up-to-date (there is nothing to do).
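The mtime check for thumbnails can be as simple as comparing two timestamps. A hedged sketch of the idea (the function name is invented, not taken from YABG):

```python
import os

def needs_thumbnail(source: str, thumb: str) -> bool:
    """Regenerate only if the thumbnail is missing or stale."""
    if not os.path.exists(thumb):
        return True
    # Stale if the source image was modified after the thumbnail.
    return os.path.getmtime(source) > os.path.getmtime(thumb)
```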

I ran some quick benchmarks comparing the performance of rendering the final Nikola version of this blog against the first YABG version (before the Bootstrap 5 redesign).

Testing methodology

Here's the testing setup:

I ran three tests. Each test was run 11 times. The first attempt was discarded (as a warmup and to let me verify the log). The other ten attempts were averaged as the final result. I used PowerShell's Measure-Command cmdlet for measurements.

The tests were as follows:

  1. Clean build (no cache, no output)
    • Removing .doit.db, cache, and output from the Nikola site, so that everything has to be rebuilt from scratch.
    • Removing .yabg_cache.sqlite3 and output from the YABG site, so that everything has to be rebuilt from scratch; most notably, the Pygments code listings have to be regenerated via a subprocess.
  2. Build with cache, but no output
    • Removing output from the Nikola site, so that posts rendered to HTML by docutils/Python-Markdown are cached, but the final HTML still needs to be built.
    • Removing output from the YABG site, so that the code listings rendered to HTML by Pygments are cached, but everything else needs to be built.
  3. Rebuild (cache and output intact)
    • Not removing anything from the Nikola site, so that there is nothing to do.
    • Not removing anything from the YABG site. Things are still rebuilt, except for Pygments code listings and thumbnails.

For YetAnotherBlogGenerator, I tested two builds: one in Release mode (standard), and another in ReadyToRun mode, trading build time and executable size for faster execution.

All the scripts I used for setup and testing can be found in listings.

Test results

Platform   Build type                           Nikola   YABG (ReadyToRun)   YABG (Release)
Linux      Clean build (no cache, no output)     6.438               1.901            2.178
Linux      Build with cache, but no output       5.418               0.980            1.249
Linux      Rebuild (cache and output intact)     0.997               0.969            1.248
Windows    Clean build (no cache, no output)     9.103               2.666            2.941
Windows    Build with cache, but no output       7.758               1.051            1.333
Windows    Rebuild (cache and output intact)     1.562               1.020            1.297

Design details and highlights

Here are some fun tidbits from development.

Everything is an item

In Nikola, there are several different entities that can generate HTML files. Posts and Pages are both Post objects. Listings and galleries each have their own task generators. There's no Listing class, everything is handled within the listing plugin. Galleries can optionally have a Post object attached (though that Post is not picked up by the file scanner, and it is not part of the timeline). The listings and galleries task generators both have ways to build directory trees.

In YABG, all of the above are Items. Specifically, they start as SourceItems and become Items when rendered. For listings, the source is just the code and the rendered content is Pygments-generated HTML. For galleries, the source is a TSV file with a list of included gallery images (order, filenames, and descriptions), and the generated content comes from a meta field named galleryIntroHtml. Gallery objects have a GalleryData object attached to their Item object as RichItemData.

This simplifies the final rendering pipeline design. Only four classes (actual classes, not temporary structures in some plugin) can render to HTML: Item, ItemGroup (tags, categories, yearly archives, gallery indexes), DirectoryTreeGroup (listings), and LinkGroup (archive and tag indexes). Each has a corresponding template model. Nikola's sitemap generator recurses through the output directory to find files, but YABG can just use the lists of items and groups. The sitemap won't include HTML files from the files folder, but I don't need them there (though I could add them if needed).

Windows first, Linux in zero time

I developed YABG entirely on Windows. This forced me to think about paths and URLs as separate concepts. I couldn't use most System.IO.Path facilities for URLs, since they would produce backslashes. As a result, there are zero bugs where backslashes leak into output on Windows. Nikola has such bugs pop up occasionally; indeed, I fixed one yesterday.

But when YABG was nearly complete, I ran it on Linux. And it just worked. No code changes needed. No output differences. (I had to add SkiaSharp.NativeAssets.Linux and apt install libfontconfig1 since I was still using SkiaSharp at that point, but that's no longer needed with Magick.NET.)

Not everything is perfect, though. I added a --watch mode based on FileSystemWatcher, but it doesn't work on Linux. I don't need it there; I'd have to switch to polling to make it work.

Dependency injection everywhere

A good principle used in object-oriented development (though not very often in Python) is dependency injection. I have several grouping services, all implementing either IPostGrouper or IItemGrouper. They're registered in the DI container as implementations of those interfaces. The GroupEngine doesn't need to know about specific group types, it just gets them from the container and passes the post and item arrays.

.AddScoped<IPostGrouper, ArchiveGrouper>()
.AddScoped<IPostGrouper, GuideGrouper>()
.AddScoped<IPostGrouper, IndexGrouper>()
.AddScoped<IPostGrouper, NavigationGrouper>()
.AddScoped<IPostGrouper, TagCategoryGrouper>()
.AddScoped<IItemGrouper, GalleryIndexGrouper>()
.AddScoped<IItemGrouper, ListingIndexGrouper>()
.AddScoped<IItemGrouper, ProjectGrouper>()
internal class GroupEngine(
    IEnumerable<IItemGrouper> itemGroupers,
    IEnumerable<IPostGrouper> postGroupers)
    : IGroupEngine {
    public IEnumerable<IGroup> GenerateGroups(Item[] items) {
        var sortedItems = items
            .OrderByDescending(i => i.Published)
            .ThenBy(i => i.SourcePath)
            .ToArray();
        var sortedPosts = sortedItems
            .Where(item => item.Type == ItemType.Post)
            .ToArray();
        var itemGroups = itemGroupers.SelectMany(g => g.GroupItems(sortedItems));
        var postGroups = postGroupers.SelectMany(g => g.GroupPosts(sortedPosts));
        return itemGroups.Concat(postGroups);
    }
}

The ItemRenderEngine has a slightly different challenge: it needs to pick the correct renderer for the post (Gallery, HTML, Listing, Markdown). The renderers are registered as keyed services. The render engine does not need to know anything about the specific renderer types, it just gets the renderer name from the SourceItem's ScanPattern (so ultimately from the configuration file) and asks the DI container to provide it with the right implementation.

.AddKeyedScoped<IItemRenderer, GalleryItemRenderer>(GalleryItemRenderer.Name)
.AddKeyedScoped<IItemRenderer, HtmlItemRenderer>(HtmlItemRenderer.Name)
.AddKeyedScoped<IItemRenderer, ListingItemRenderer>(ListingItemRenderer.Name)
.AddKeyedScoped<IItemRenderer, MarkdownItemRenderer>(MarkdownItemRenderer.Name)
public async Task<IEnumerable<Item>> Render(IEnumerable<SourceItem> sourceItems) {
    var renderTasks = sourceItems
        .GroupBy(i => i.ScanPattern.RendererName)
        .Select(group => {
            var renderer = _keyedServiceProvider
                .GetRequiredKeyedService<IItemRenderer>(group.Key);
            return renderer switch {
                IBulkItemRenderer bulkRenderer => bulkRenderer.RenderItems(group),
                ISingleItemRenderer singleRenderer => Task.WhenAll(
                    group.Select(singleRenderer.RenderItem)),
                _ => throw new InvalidOperationException("Unexpected renderer type")
            };
        });
}

In total, there are 37 specific service implementations registered (plus system services like TimeProvider and logging). Beyond these two examples, the main benefit is testability. I can write unit tests without dependencies on unrelated services, and without monkey-patching random names. (In Python, unittest.mock does both monkey-patching and mocking.)

Okay, I haven't written very many tests, but I could easily ask an LLM to do it.

Immutable data structures and no global state

All classes are immutable. This helps in several ways. It's easier to reason about state when SourceItem becomes Item during rendering, compared to a single class with a nullable Content property. Immutability also makes concurrency safer. But the biggest win is how easy it was to develop the --watch mode. Every service has Scoped lifetime, and main logic lives in IMainEngine. I can just create a new scope, get the engine, and run it without state leaking between executions. No subprocess launching, no state resetting - everything disappears when the scope is disposed.
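For Python readers, a rough analogue of the SourceItem-to-Item transition (class names borrowed from the post, fields and rendering invented) using frozen dataclasses:

```python
from dataclasses import dataclass

# Two frozen classes instead of one class with a nullable Content
# field: "rendered" is encoded in the type, not in None checks.
@dataclass(frozen=True)
class SourceItem:
    source_path: str
    raw: str

@dataclass(frozen=True)
class Item:
    source_path: str
    content: str

def render(src: SourceItem) -> Item:
    # Stand-in for a real Markdown/HTML rendering step.
    return Item(src.source_path, f"<p>{src.raw}</p>")

item = render(SourceItem("hello.md", "Hi"))
print(item.content)  # <p>Hi</p>
```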

Can anyone use it?

On one hand, it's open source under the 3-clause BSD license and available on GitHub.

On the other hand, it's more of a source-available project. There are no docs, and it was designed specifically for this site (so some things are probably too hardcoded for your needs). In fact, this blog's configuration and templates were directly hardcoded in the codebase until the day before launch. But I'm happy to answer questions and review pull requests!

16 Feb 2026 9:15pm GMT

Anarcat: Keeping track of decisions using the ADR model

In the Tor Project system Administrator's team (colloquially known as TPA), we've recently changed how we take decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve your processes and documentation.

The new process

We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").

The ADR process is, for us, pretty simple. It consists of three things:

  1. a simpler template
  2. a simpler process
  3. communication guidelines separate from the decision record

The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:

The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.

An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping in a document all sorts of details like pricing or in-depth alternatives comparison, we record those in the discussion issue, keeping the document shorter.

The process

The whole process is simple enough that it's worth quoting in full as well:

Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.

A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision and the new template (and process) clarifies decision makers, for each decision.

Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".

The new process better identifies stakeholders:

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).

Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.

Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.

How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:

  1. the RFC process "doesn't include any sort of decision-making framework"
  2. "RFC processes tend to lead to endless discussion"
  3. the process "rewards people who can write to exhaustion"
  4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because not looping in the right stakeholders.

Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.

What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.

Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in hearing from people who are already using a similar process, or who will adopt one after reading this.

Note: this article was also published on the Tor Blog.

16 Feb 2026 8:21pm GMT

PyBites: We’re launching 60 Rust Exercises Designed for Python Devs

"Rust is too hard."

We hear it all the time from Python developers.

But after building 60 Rust exercises specifically designed for Pythonistas, we've come to a clear conclusion: Rust isn't harder than Python per se; it's just a different challenge.

And with the right bridges, you can learn it faster than you think.

Why We Built This

Most Rust learning resources start from zero. They assume you've never seen a programming language before, or they assume you're coming from C++.

Neither fits the Python developer who already knows how to think in code but needs to learn Rust's ownership model, type system, and borrow checker.

We took a different approach: you already know the pattern, here's how Rust does it.

Every exercise starts with the Python concept you're familiar with - list comprehensions, context managers, __str__, defaultdict - and shows you the Rust equivalent.

No starting from scratch. No wasted time on concepts you already understand.

What's Inside

60 exercises across 10 tracks:

[Image: the 10 exercise tracks]

Each exercise has a teaching description with Python comparisons, a starter template, and a full test suite that validates your solution.

The Python → Rust Map

Every exercise bridges a concept you already know:

You know this in Python       You'll learn this in Rust       Track
__str__ / __repr__            Display / Debug traits          Traits & Generics
defaultdict, Counter          HashMap entry API               Collections
list comprehensions           .map().filter().collect()       Iterators & Closures
try / except                  Result<T, E> + ? operator       Error Handling
with context managers         RAII + ownership                Ownership
lambda                        closures (|x| x + 1)            Iterators & Closures
Optional / None checks        Option<T> + combinators         Error Handling
import / from x import y      mod / use                       Modules

What the Bridges Look Like

Here's a taste. When teaching functions, we start with what you already know:

def area(width: int, height: int) -> int:
    return width * height

Then have you convert it into Rust:

fn area(width: i32, height: i32) -> i32 {
    width * height
}

def becomes fn. Type hints become required. And the last expression - without a semicolon - is the return value. No return needed.

Add a semicolon by accident? The compiler catches it instantly. That's your first lesson in how Rust turns runtime surprises into compile-time errors.

Or take branching. In Python, if is a statement - it does things. In Rust, if is an expression - it returns things:

Python:

if celsius >= 30:
    label = "Hot"
elif celsius >= 15:
    label = "Mild"
else:
    label = "Cold"

Rust:

let label = if celsius >= 30 {
    "Hot"
} else if celsius >= 15 {
    "Mild"
} else {
    "Cold"
};

Same logic, but now the result goes straight into label. No ternary operator needed - if itself returns a value.
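
For comparison, the closest Python gets to the expression form is a chained conditional expression (an illustrative aside, not part of the exercise):

```python
celsius = 22

# Python *can* express this as a single expression via chained
# conditionals, but it reads worse than Rust's if/else blocks above
label = "Hot" if celsius >= 30 else "Mild" if celsius >= 15 else "Cold"
print(label)  # Mild
```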

You'll learn the Rust language bit by bit, and we hope that by making it more relatable to your Python knowledge, it will stick faster.

Write, Test, Learn - All in the Browser

No local Rust installation needed. Each exercise gives you a split-screen editor: the teaching description with Python comparisons on the left, a code editor with your starter template on the right (switched to dark mode):

[Screenshot: the split-screen exercise editor]

Write your solution, hit Run Tests, and get instant feedback from the compiler and test suite:

[Screenshots: feedback from the compiler and test suite]

Errors show you exactly what went wrong. Iterate until all tests pass - then check the solution to see if there is anything you can do in a different or more idiomatic way.

Mirroring our Python coding platform, code persists automatically, so you can pick up where you left off. And as you solve exercises, you earn points and progress through ninja belts. 📈

[Screenshot: points and ninja-belt progress]

Why Learn Rust in 2026

Three reasons Python developers should care:

Career. Rust has been the most admired language for 8 years running in Stack Overflow surveys. AWS, Microsoft, Google, Discord, and Cloudflare are all investing heavily in Rust. The demand is real and growing.

Ecosystem. Python + Rust is becoming the standard stack for performance-critical Python. The tools you already use - pydantic, ruff, uv, cryptography - are Rust under the hood. Understanding Rust means understanding the layer beneath your Python.

Becoming a better developer. Learning Rust's ownership model changes how you think about code. You start reasoning about data flow, memory, and error handling more carefully - and that makes your Python better too. It's one of the best investments you can make in your craft.

Beyond Exercises: The Cohort

If you want to go deeper, our Rust Developer Cohort takes these concepts and applies them to a real project: building a JSON parser from scratch over 6 weeks. You'll go from tokenizing strings to recursive descent parsing, with PyO3 integration to call your Rust parser from Python.

The exercises are the foundation. The cohort is where you learn app development end-to-end, building something real.

How Developers Experience The Platform

"Who said learning Rust is gonna be difficult? Had tons of fun learning Rust by going through the exercises!" - Aris N

"As someone who is primarily a self taught developer, I learned the importance of learning by doing by completing so many of the 'Bites' challenges on the PyBites platform. Now, as someone learning Rust, I've come across the Rust platform and have used the exercises in the same way. Some things I will know and be able to solve quickly, while others require me to research and learn more about the language. The new concepts solidify and build over time. They are a great way to be hands on and learn by doing." - Jesse B

"The Rust Bites are a great way to start learning Rust hands-on. Whether you're just starting with Rust or already have some experience, they help build real skills and challenge you to understand all the basic data types and design patterns of Rust. Things that are tough to understand, like pattern matching, result handling, and ownership, will feel more understandable and natural after going through these exercises, and they'll help you be a better programmer in other languages too! Highly recommended!" - Dan D

Where to Start

New to Rust? Start with the Intro track - the first 10 exercises are free and cover the fundamentals: variables, types, control flow, enums, and pattern matching. They will get your feet wet.

Know the basics already? Jump straight to Ownership - that's where Rust gets genuinely different from Python, and where the Python bridges help most. Once ownership clicks, the rest of Rust falls into place.

Want a challenge? The Iterators & Closures and Error Handling tracks are where Python developers tend to have the most "aha" moments. We'll add more advanced concepts, like lifetimes, later.

Try It Yourself

Start with the exercises at Rust Platform - pick a track that matches where you are, and see how the Python bridges make Rust feel less foreign than you expected.

If you're ready to commit to the full journey, check out the Rust Developer Cohort - our 6-week guided program where you build a real project from the ground up.

Rust isn't the enemy. It's your next superpower.


We're not aware of any other platform that teaches Rust specifically through the lens of Python. If you're a Python developer curious about Rust, this is built for you.

16 Feb 2026 3:41pm GMT

Real Python: TinyDB: A Lightweight JSON Database for Small Projects

TinyDB is a Python implementation of a NoSQL, document-oriented database. Unlike a traditional relational database, which stores records across multiple linked tables, a document-oriented database stores its information as separate documents in a key-value structure. The keys are similar to the field headings, or attributes, in a relational database table, while the values are similar to the table's attribute values.

TinyDB uses the familiar Python dictionary for its document structure and stores its documents in a JSON file.

TinyDB is written in Python, making it easily extensible and customizable, with no external dependencies or server setup needed. Despite its small footprint, it still fully supports the familiar database CRUD features of creating, reading, updating, and deleting documents using an API that's logical to use.

The table below will help you decide whether TinyDB is a good fit for your use case:

Use Case | TinyDB | Possible Alternatives
Local, small dataset, single-process use (scripts, CLIs, prototypes) | ✓ | simpleJDB, Python's json module, SQLite
Local use that requires SQL, constraints, joins, or stronger durability | - | SQLite, PostgreSQL
Multi-user, multi-process, distributed, or production-scale systems | - | PostgreSQL, MySQL, MongoDB

Whether you're looking to use a small NoSQL database in one of your projects or you're just curious how a lightweight database like TinyDB works, this tutorial is for you. By the end, you'll have a clear sense of when TinyDB shines, and when it's better to reach for something else.

Get Your Code: Click here to download the free sample code you'll use in this tutorial to explore TinyDB.

Take the Quiz: Test your knowledge with our interactive "TinyDB: A Lightweight JSON Database for Small Projects" quiz. You'll receive a score upon completion to help you track your learning progress:


If you're looking for a JSON document-oriented database that requires no configuration for your Python project, TinyDB could be what you need.

Get Ready to Explore TinyDB

TinyDB is a standalone library, meaning it doesn't rely on any other libraries to work. You'll need to install it, though.

You'll also use the pprint module to format dictionary documents for easier reading, and Python's csv module to work with CSV files. You don't need to install either of these because they're included in Python's standard library.

So to follow along, you only need to install the TinyDB library in your environment. First, create and activate a virtual environment, then install the library using pip:

Shell
(venv) $ python -m pip install tinydb

Alternatively, you could set up a small pyproject.toml file and manage your dependencies using uv.

When you add documents to your database, you often do so manually by creating Python dictionaries. In this tutorial, you'll do this, and also learn how to work with documents already stored in a JSON file. You'll even learn how to add documents from data stored in a CSV file.

These files will be highlighted as needed and are available in this tutorial's downloads. You might want to download them to your program folder before you start to keep them handy:

Get Your Code: Click here to download the free sample code you'll use in this tutorial to explore TinyDB.

Regardless of the files you use or the documents you create manually, they all rely on the same world population data. Each document will contain up to six fields, which become the dictionary keys used when the associated values are added to your database:

Field | Description
continent | The continent the country belongs to
location | The country name
date | The date the population count was made
% of world | Percentage of the world's population
population | The population count
source | The source of the population figure
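
Put together, a single document is just a Python dictionary with those keys (the values below are illustrative, not the tutorial's data):

```python
# One world-population document as a plain dict (illustrative values)
document = {
    "continent": "Asia",
    "location": "Nepal",
    "date": "2026-02-16",
    "% of world": 0.37,
    "population": 30_000_000,
    "source": "UN estimate",
}
print(document["location"])  # Nepal
```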

As mentioned earlier, the four primary database operations are Create, Read, Update, and Delete, collectively known as the CRUD operations. In the next section, you'll learn how to perform each of them.

To begin with, you'll explore the C in CRUD. It's time to get creative.

Create Your Database and Documents

The first thing you'll do is create a new database and add some documents to it. To do this, you create a TinyDB() object that includes the name of a JSON file to store your data. Any documents you add to the database are then saved in that file.

Documents in TinyDB are stored in tables. Although it's not necessary to create a table manually, doing so can help you organize your documents, especially when working with multiple tables.

To start, you create a script named create_db.py that initializes your first database and adds documents in several different ways. The first part of your script looks like this:

Read the full article at https://realpython.com/tinydb-python/ »


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

16 Feb 2026 2:00pm GMT

Real Python: Quiz: TinyDB: A Lightweight JSON Database for Small Projects

In this quiz, you'll test your understanding of the TinyDB database library and what it has to offer, and you'll revisit many of the concepts from the TinyDB: A Lightweight JSON Database for Small Projects tutorial.

Remember that the official documentation is also a great reference.


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

16 Feb 2026 12:00pm GMT

Tryton News: End of Windows 32-bit Builds

The MSYS2 project has discontinued building cx-Freeze for the mingw32 platform. We depend on these packages to build our Windows client, and we currently do not have the resources to maintain the required packages for Windows 32-bit ourselves.

As a result, we will no longer publish Windows 32-bit builds for new releases of the supported series.

1 post - 1 participant

Read full topic

16 Feb 2026 7:00am GMT

Anarcat: Kernel-only network configuration on Linux

What if I told you there is a way to configure the network on any Linux server that:

  1. works across all distributions
  2. doesn't require any software installed apart from the kernel and a boot loader (no systemd-networkd, ifupdown, NetworkManager, nothing)
  3. is backwards compatible all the way back to Linux 2.0, in 1996

It has literally 8 different caveats on top of that, but is still totally worth your time.

Known options in Debian

People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least that is what the Debian wiki claims, namely:

At this point, I feel ifupdown is on its way out, possibly replaced by systemd-networkd. NetworkManager already manages most desktop configurations.

A "new" network configuration system

The method is this:

So by "new" I mean "new to me". This option is really old. The nfsroot.txt where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old already.

The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.

What are you doing.

The trick is to add an ip= parameter to the kernel's command-line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the useless ones:

We're ignoring the options:

Note that the Red Hat manual has a different opinion:

ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:<interface>:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although server-id is weird), and the autoconf variable has other settings, so that's a bit odd.

Examples

For example, this command-line setting:

ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.

A DHCP only configuration will look like this:

ip=::::::dhcp
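
As a sanity check on the field order, here is a small (purely illustrative) Python helper that assembles the parameter and reproduces both examples above:

```python
def kernel_ip_param(client="", server="", gw="", netmask="",
                    hostname="", device="", autoconf="off"):
    """Build an ip= kernel parameter from its first seven fields."""
    fields = [client, server, gw, netmask, hostname, device, autoconf]
    return "ip=" + ":".join(fields)

# Static address, as in the first example above
print(kernel_ip_param(client="192.0.2.42", gw="192.0.2.1",
                      netmask="255.255.255.0"))
# ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

# DHCP-only, as in the second example
print(kernel_ip_param(autoconf="dhcp"))
# ip=::::::dhcp
```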

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel command line, and that depends on your boot loader.

GRUB

With GRUB, you need to edit the file /etc/default/grub (ugh, on Debian) and find a line like:

GRUB_CMDLINE_LINUX=""

and change it to:

GRUB_CMDLINE_LINUX="ip=::::::dhcp"

systemd-boot and UKI setups

For systemd-boot UKI setups, it's simpler: just add the setting to the /etc/kernel/cmdline file. Don't forget to include anything that's non-default from /proc/cmdline.

This assumes Cmdline=@ is set in /etc/kernel/uki.conf. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.

Other systems

This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

It's interesting that /etc/default/grub is consistent across all the distributions above, while the systemd-boot setups are all over the place (except for the UKI case); I would have expected those to be more standard than GRUB.

dropbear-initramfs

If dropbear-initramfs is set up, it already requires you to have such a configuration, but it might not work out of the box.

This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).

To fix this, you need to disable that "feature":

IFDOWN="none"

This will keep dropbear-initramfs from disabling the configured interface.

Why?

Traditionally, I've always set up my servers with ifupdown and my laptops with NetworkManager, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack, feels like legacy, and is Debian-specific.

Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.

I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.

So in a sense, this is a "Don't Repeat Yourself" solution.

Caveats

Also known as: "wait, that works?" Yes, it does! That said...

  1. This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.

  2. This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.

  3. It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.

  4. It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.

  5. I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)

  6. It will not automatically reconfigure the interface on link changes, but ifupdown does not either.

  7. It will not write /etc/resolv.conf for you but the dns0-ip and dns1-ip do end up in /proc/net/pnp which has a compatible syntax, so a common configuration is:

    ln -s /proc/net/pnp /etc/resolv.conf
    
  8. I have not really tested this at scale: only a single, test server at home.

Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.

Cleanup

Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:

apt purge systemd-networkd ifupdown network-manager netplan.io

Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to cleanup, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind, I haven't tested this, no warranty.

Credits

This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!

16 Feb 2026 4:18am GMT

PyBites: How to Automate Python Performance Benchmarking in Your CI/CD Pipeline

The issue with traditional performance tracking is that it is often an afterthought. We treat performance as a debugging task (something we do after users complain) rather than a quality gate.

Worse, when we try to automate it, we run into the "Noisy Neighbour" problem. If you run a benchmark in a GitHub Action, and the container next to you is mining Bitcoin, your metrics will be rubbish.

To become a Senior Engineer, you need to start treating performance exactly like you treat test coverage.

The Solution: Continuous Performance Guardrails

If you want to stop shipping slow code, you need to shift your mindset on Python Performance Benchmarking in three specific ways:

  1. Eliminate the Variance (The "Noise" Problem): Standard benchmarking measures "wall clock" time. In a cloud CI environment, this is useless. Cloud providers over-provision hardware, meaning your test runner shares L3 caches with other users. To get a reliable signal, you need deterministic benchmarking. Instead of measuring time, you should measure instruction counts and simulated memory access. By simulating the CPU architecture (L1, L2, and L3 caches), you can reduce variance to less than 1%, making your benchmarks reproducible regardless of what the server "neighbours" are doing.
  2. Treat Performance Like Code Coverage: We all know the drill… if a PR drops code coverage below 90%, the build fails. Why don't we do this for latency? You need to integrate benchmarking into your PR workflow. If a developer introduces a change that makes a core endpoint 10% slower, the CI should flag it immediately before it merges. This allows you to catch silent killers, like accidental N+1 queries or inefficient loops, while the code is still fresh in your mind.
  3. The AI Code Guardrail: We are writing code faster than ever thanks to AI agents. But AI agents prioritise generation speed and syntax correctness, not runtime efficiency. An AI might solve a problem by generating a massive regex or a brute-force loop because it "looks" correct. As we lean more on AI coding assistants, automated performance guardrails become the only line of defence against a slowly degrading codebase.

We dug deep into this topic with Arthur Pastel, the creator of CodSpeed.

Arthur built a tool that solved this exact variance problem because he was tired of his robotics pipelines breaking due to silent performance regressions. He explained how Pydantic uses these exact techniques to keep their library lightning-fast for the rest of us.

Listen to the Episode

If you want to understand how to set up a deterministic benchmarking pipeline and stop performance regressions from reaching production, listen to the full breakdown using the links below, or the player at the top of the page.

16 Feb 2026 12:42am GMT

13 Feb 2026

feedPlanet Python

Python Morsels: Setting default dictionary values in Python

There are many ways to set default values for dictionary key lookups in Python. Which way you should use will depend on your use case.

Table of contents

  1. The get method: lookups with a default
  2. The setdefault method: setting a default
  3. The fromkeys method: initializing defaults
  4. Caution with mutable defaults
  5. A dictionary comprehension
  6. The collections.defaultdict class
  7. The collections.Counter class
  8. Start simple with your defaulting

The get method: lookups with a default

The get method is the classic way to look up a value for a dictionary key without raising an exception for missing keys.

>>> quantities = {"pink": 3, "green": 4}

Instead of this:

try:
    count = quantities[color]
except KeyError:
    count = 0

Or this:

if color in quantities:
    count = quantities[color]
else:
    count = 0

We can do this:

count = quantities.get(color, 0)

Here's what this would do for a key that's in the dictionary and one that isn't:

>>> quantities.get("pink", 0)
3
>>> quantities.get("blue", 0)
0

The get method accepts two arguments: the key to look up and the default value to use if that key isn't in the dictionary. The second argument defaults to None:

>>> quantities.get("pink")
3
>>> print(quantities.get("blue"))
None

The setdefault method: setting a default

The get method doesn't modify …
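
The article continues behind the link; the key contrast, in a nutshell (a quick sketch, not the article's own example): get leaves the dictionary alone, while setdefault inserts the default when the key is missing:

```python
quantities = {"pink": 3, "green": 4}

# get returns a default but never writes to the dictionary
print(quantities.get("blue", 0))   # 0
print("blue" in quantities)        # False

# setdefault returns the existing value, or inserts *and* returns the default
print(quantities.setdefault("pink", 0))  # 3 (existing key, unchanged)
print(quantities.setdefault("blue", 0))  # 0 (missing key, now inserted)
print(quantities["blue"])                # 0
```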

Read the full article: https://www.pythonmorsels.com/default-dictionary-values/

13 Feb 2026 4:45pm GMT

Real Python: The Real Python Podcast – Episode #284: Running Local LLMs With Ollama and Connecting With Python

Would you like to learn how to work with LLMs locally on your own computer? How do you integrate your Python projects with a local model? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.


[ Improve Your Python With 🐍 Python Tricks 💌 - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

13 Feb 2026 12:00pm GMT

Peter Hoffmann: Garmin Inreach Mini 2 Leaflet checkin map

We will be trekking the eastern part of the Great Himalaya Trail in Nepal in March/April. Details on the route and our plans can be found at https://greathimalayatrail.de. Our intent is to keep friends and family updated on our progress. Given that we'll be hiking in quite remote areas, a satellite phone/pager will be our sole means of communication.

After the Garmin inReach Mini 3 was released recently, the inReach Mini 2 went on heavy sale. The inReach Mini 2 has all the features I need: satellite messaging, check-ins, offline mode with navigation, and track recording.

Plans

I'm on the Garmin Essential plan for 18 euros per month. It includes 50 free text messages or weather requests each month, plus unlimited check-in messages. The smaller Enabled plan (10 euros) is missing the unlimited check-ins, while the Standard plan (34 euros) gives you 150 free messages and unlimited live tracking. More details are on the Garmin page.

Messaging

There are three different types of messages that you can send:

Check-In Messages: There are three preset messages. You can configure the recipients at explore.garmin.com. Depending on your Garmin subscription, sending check-in messages is free of charge. In the configuration section, you can enable the option to include your latitude/longitude and a link to the Garmin map in each SMS message. This information is always included for email recipients.

Quick Messages: You can create up to 20 predefined messages so you don't have to type them while you're on the trail. The number of free messages you get depends on your Garmin subscription; any additional messages are billed per use. You can create or edit these messages at explore.garmin.com.

Normal Messages: In the Garmin Messenger iPhone app, you can type any custom message and send it to both SMS and email recipients. These messages are billed the same way as quick messages.

You can configure the system to send all messages to any email/sms recipients. The great thing is that the unlimited check-in messages also include latitude/longitude information. Here is a sample message.

Arrived at Camp

View the location or send a reply to Peter Hoffmann:
https://inreachlink.com/<unique_code>

Peter Hoffmann sent this message from: Lat 48.996386 Lon 8.468849

Do not reply directly to this message.

This message was sent to you using the inReach two-way satellite communicator with GPS. To learn more, visit http://explore.garmin.com/inreach.

As we do not want to spam all our friends with daily check-ins, I have built a little leaflet check-in plugin and an IMAP scraper to pull and visualize the check-ins/messages.

Build your own Tracking with Check-In Messages

For battery life reasons, we are not interested in real-time live tracking.
Instead, I've created a small script that checks a dedicated IMAP email account for check-in messages and publishes them to a server, which then displays the location of our most recent check-in. Sending a check-in once a day, or during each break when we are in more remote areas, should give our friends enough information in case any problems arise.

A straightforward Python script connects to my IMAP server, retrieves all emails from the Garmin InReach service, parses the message, timestamp, and latitude/longitude, and then updates a positions.json file on my webserver.

Then a simple static html file with a leaflet map pulls the positions.json file and displays the messages/checkins on the map.

A demo of the map is available at:

https://hoffmann.github.io/garmin-inreach-checkin-map/html/map.html

and you can check out the code at:

https://github.com/hoffmann/garmin-inreach-checkin-map

#!/usr/bin/env python3
"""Poll IMAP inbox for Garmin inReach emails and extract positions into positions.json."""

import email
import email.utils
import imaplib
import json
import os
import re
import sys
from datetime import datetime, timezone

BOILERPLATE_PREFIXES = (
    "View the location",
    "Do not reply",
    "This message was sent",
)

POSITIONS_FILE = os.path.join(
    os.path.dirname(os.path.abspath(__file__)), "positions.json"
)


def connect(host, user, password):
    imap = imaplib.IMAP4_SSL(host)
    imap.login(user, password)
    return imap


def search_inreach_emails(imap):
    imap.select("INBOX")
    status, data = imap.search(
        None, '(OR FROM "no.reply.inreach@garmin.com" SUBJECT "inReach message")'
    )
    if status != "OK":
        return []
    msg_ids = data[0].split()
    return msg_ids


def get_text_body(msg):
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                charset = part.get_content_charset() or "utf-8"
                return part.get_payload(decode=True).decode(charset)
    else:
        charset = msg.get_content_charset() or "utf-8"
        return msg.get_payload(decode=True).decode(charset)
    return ""


def parse_timestamp(msg):
    date_str = msg.get("Date")
    if not date_str:
        return None
    dt = email.utils.parsedate_to_datetime(date_str)
    dt_utc = dt.astimezone(timezone.utc)
    return dt_utc.strftime("%Y-%m-%dT%H:%M:%SZ")


def parse_body(body):
    lines = body.strip().splitlines()

    # Extract message: first non-empty line
    message = ""
    for line in lines:
        stripped = line.strip()
        if stripped:
            message = stripped
            break

    # Check if the message is boilerplate
    if any(message.startswith(prefix) for prefix in BOILERPLATE_PREFIXES):
        message = ""

    # Extract lat/lon
    lat, lon = None, None
    m = re.search(r"Lat\s+([-\d.]+)\s+Lon\s+([-\d.]+)", body)
    if m:
        lat = float(m.group(1))
        lon = float(m.group(2))

    return message, lat, lon


def parse_email(msg_data):
    msg = email.message_from_bytes(msg_data)

    timestamp = parse_timestamp(msg)
    if not timestamp:
        return None

    body = get_text_body(msg)
    if not body:
        return None

    message, lat, lon = parse_body(body)
    if lat is None or lon is None:
        return None

    entry = {
        "timestamp": timestamp,
        "lat": lat,
        "lon": lon,
    }
    if message:
        entry["msg"] = message

    return entry


def load_positions():
    if os.path.exists(POSITIONS_FILE):
        with open(POSITIONS_FILE) as f:
            return json.load(f)
    return []


def save_positions(positions):
    with open(POSITIONS_FILE, "w") as f:
        json.dump(positions, f, indent=2)
        f.write("\n")


def main():
    host = os.environ.get("IMAP_HOST")
    user = os.environ.get("IMAP_USER")
    password = os.environ.get("IMAP_PASSWORD")

    if not all([host, user, password]):
        print("Error: Set IMAP_HOST, IMAP_USER, and IMAP_PASSWORD environment variables.")
        sys.exit(1)

    imap = connect(host, user, password)
    try:
        msg_ids = search_inreach_emails(imap)
        print(f"Found {len(msg_ids)} inReach email(s)")

        new_entries = []
        for msg_id in msg_ids:
            status, data = imap.fetch(msg_id, "(RFC822)")
            if status != "OK":
                continue
            entry = parse_email(data[0][1])
            if entry:
                new_entries.append(entry)
    finally:
        imap.logout()

    existing = load_positions()
    existing_timestamps = {p["timestamp"] for p in existing}

    added = 0
    for entry in new_entries:
        if entry["timestamp"] not in existing_timestamps:
            existing.append(entry)
            existing_timestamps.add(entry["timestamp"])
            added += 1

    existing.sort(key=lambda p: p["timestamp"])
    save_positions(existing)

    print(f"Added {added} new position(s) ({len(existing)} total)")


if __name__ == "__main__":
    main()

13 Feb 2026 12:00am GMT

Armin Ronacher: The Final Bottleneck

Historically, writing code was slower than reviewing code.

It might not have felt that way, because code reviews sat in queues until someone got around to picking them up. But if you compare the actual acts themselves, creation was usually the more expensive part. In teams where people both wrote and reviewed code, it never felt like "we should probably program slower."

So when more and more people tell me they no longer know what code is in their own codebase, I feel like something is very wrong here and it's time to reflect.

You Are Here

Software engineers often believe that if we make the bathtub bigger, overflow disappears. It doesn't. OpenClaw right now has north of 2,500 pull requests open. That's a big bathtub.

Anyone who has worked with queues knows this: if input grows faster than throughput, you have an accumulating failure. At that point, backpressure and load shedding are the only mechanisms that keep the system operable.
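The queue dynamic above can be sketched in a few lines with Python's standard queue module. This is a minimal illustration, not anything from the essay: a bounded buffer provides backpressure, and rejecting work once the buffer is full is load shedding.

```python
import queue

# A bounded queue models backpressure: once capacity is reached,
# producers can no longer add work and the pile-up stops growing.
q = queue.Queue(maxsize=3)

accepted, shed = 0, 0
for job in range(10):
    try:
        # put_nowait raises queue.Full at capacity -- that rejection
        # is load shedding: the new job is dropped outright.
        q.put_nowait(job)
        accepted += 1
    except queue.Full:
        shed += 1

print(accepted, shed)  # 3 accepted, 7 shed
```

Without the `maxsize` bound, all ten jobs would be accepted and the backlog would grow without limit, which is exactly the accumulating failure described above.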

If you have ever been in a Starbucks overwhelmed by mobile orders, you know the feeling. The in-store experience breaks down. You no longer know how many orders are ahead of you. There is no clear line, no reliable wait estimate, and often no real cancellation path unless you escalate and make noise.

That is what many AI-adjacent open source projects feel like right now. And increasingly, that is what a lot of internal company projects feel like in "AI-first" engineering teams, and that's not sustainable. You can't triage, you can't review, and past a certain point many of the PRs cannot be merged because they are too far out of date. And their creators might have lost the motivation to actually get them merged.

There is huge excitement about newfound delivery speed, but in private conversations, I keep hearing the same second sentence: people are also confused about how to keep up with the pace they themselves created.

We Have Been Here Before

Humanity has been here before. Many times over. We already talk about the Luddites a lot in the context of AI, but it's interesting to see what led up to it. Mark Cartwright wrote a great article about the textile industry in Britain during the Industrial Revolution. At its core was a simple idea: whenever a bottleneck was removed, innovation happened downstream of it. Weaving sped up? Yarn became the constraint. Faster spinning? Fibre had to improve to support the new speeds, until finally demand for cotton rose and its production had to be automated too. We saw the same thing in shipping, which led to modern automated ports and containerization.

As software engineers we have been here too. Assembly did not scale to larger engineering teams, and we had to invent higher-level languages. A lot of what programming languages and software development frameworks did was allow us to write code faster and to scale to larger codebases. What they did not do, up to this point, was take away the core skill of engineering.

While it's definitely easier to write C than assembly, many of the core problems are the same. Memory latency still matters, physics is still our ultimate bottleneck, and algorithmic complexity still makes or breaks software at scale.

Giving Up?

When one part of the pipeline becomes dramatically faster, you need to throttle input. Pi is a great example of this: PRs are auto-closed unless their authors are trusted, and it takes OSS vacations. That's one option: you simply throttle the inflow. You push back against your newfound powers until you can handle them.

Or Giving In

But what if the speed continues to increase? What, downstream of writing code, do we have to speed up? Sure, pull request review clearly turns into the bottleneck. But can it really not be automated? If the machine writes the code, the machine had better review the code at the same time. So what ultimately comes up for human review would already have passed the most critical possible review of the most capable machine. What else is in the way? If we continue with the fundamental belief that machines cannot be accountable, then humans need to be able to understand the output of the machine. And the machine will ship relentlessly. Customer support tickets will go straight to machines to implement improvements and fixes, for other machines to review, for humans to rubber-stamp in the morning.

A lot of this sounds both unappealing and reminiscent of the textile industry. The individual weaver no longer carried responsibility for a bad piece of cloth. If it was bad, it became the responsibility of the factory as a whole and it was just replaced outright. As we're entering the phase of single-use plastic software, we might be moving the whole layer of responsibility elsewhere.

I Am The Bottleneck

But to me it still feels different. Maybe that's because my lowly brain can't comprehend the change we are going through, and future generations will just laugh at our challenges. It feels different to me because what I see taking place in some open source projects, and in some companies and teams, feels deeply wrong and unsustainable. Even Steve Yegge himself now casts doubt on the sustainability of the ever-increasing pace of code creation.

So what if we need to give in? What if we need to pave the way for this new type of engineering to become the standard? What affordances will we have to create to make it work? I for one do not know. I'm looking at this with fascination and bewilderment and trying to make sense of it.

Because it is not the final bottleneck. We will find ways to take responsibility for what we ship, because society will demand it. Non-sentient machines will never be able to carry responsibility, and it looks like we will need to deal with this problem before machines achieve that status, regardless of how bizarrely they already appear to act.

I too am the bottleneck now. But you know what? Two years ago, I too was the bottleneck. I was the bottleneck all along. The machine did not really change that. And for as long as I carry responsibilities and am accountable, this will remain true. If we manage to push accountability upwards, it might change, but so far, how that would happen is not clear.

13 Feb 2026 12:00am GMT

12 Feb 2026

feedPlanet Python

Real Python: Quiz: Python's list Data Type: A Deep Dive With Examples

Get hands-on with Python lists in this quick quiz. You'll revisit indexing and slicing, update items in place, and compare list methods.

Along the way, you'll look at reversing elements, using the list() constructor and the len() function, and distinguishing between shallow and deep copies. For a refresher, see the Real Python guide to Python lists.
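The shallow-versus-deep distinction mentioned above is easy to demonstrate. Here's a quick standalone example (not part of the quiz itself) using the standard copy module: a shallow copy creates a new outer list but shares the inner lists, while a deep copy recursively duplicates everything.

```python
import copy

nested = [[1, 2], [3, 4]]

shallow = copy.copy(nested)   # new outer list, shared inner lists
deep = copy.deepcopy(nested)  # inner lists recursively copied

nested[0].append(99)          # mutate an inner list in place

print(shallow[0])  # [1, 2, 99] -- the inner list is shared
print(deep[0])     # [1, 2]     -- the deep copy is unaffected
```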


12 Feb 2026 12:00pm GMT