28 Mar 2026

Planet Python

EuroPython: Humans of EuroPython: Jodie Burchell

What does it take to run Europe's largest Python conference? 🐍 Not budgets or venues, but people.

EuroPython isn't powered by code alone, but by a vibrant network of volunteers who shape every session and welcome every attendee. From ensuring talks run seamlessly to curating world-class content, these are the unsung heroes building community, one contribution at a time.

We're shining a spotlight on the people behind the magic. Read our full conversation with Jodie Burchell, co-lead of the EuroPython 2025 Programme Team, and discover what drives those who give their time to grow our community.

Jodie Burchell, Co-Lead of the Programme Team at EuroPython 2025

EP: What first inspired you to volunteer for EuroPython?

I first attended EuroPython in 2023, and was asked by my friends Cheuk and Lais to help run the Humble Data workshop. I had so much fun, and really liked all the people I met, so I decided to help out with comms and other things in 2024, and ended up working on the Programme Team and helping run the Beginners' Day in 2025.

EP: What was your primary role as a volunteer, and what did a typical day look like for you?

I was one of two co-team leads of the Programme Team in 2025. This team tends to touch a lot of the conference, although the tasks vary from week to week. We actually started work all the way back in December, and worked up until the end of the conference! My team's role included running the CfP, selecting talks, assembling the schedule, finding keynote speakers, organising special events, coordinating the open spaces, and finding last-minute speakers when people cannot make it. It involved a lot of logistics, following up with other teams at EuroPython, and communicating with speakers.

I think one of my favourite things I organised at the conference was the international snack exchange. Seeing people sharing snacks from their home countries was so much fun, and really made us feel like a big international family.

EP: What's your favorite memory from volunteering at EuroPython?

I actually can't pick just one!

One of my favourites was seeing the programme team in person after so many months of working together, and sharing some international snacks together to celebrate.

Watching the excerpt of "Python: the Documentary" that CultRepo created for us, and seeing the reaction of the audience to the film and the panel was very moving.

And of course, running Humble Data at the Beginners' Day during the sprints. As someone with a non-traditional path into tech myself, I am really passionate about helping beginners and making them feel welcome. Seeing beginners start to learn Python and then speak with core developers of well-established projects was really special.

EP: Did you make any lasting friendships or professional connections through volunteering?

Many! The Python community is incredible, and I am lucky to have found some of my closest friends through the EuroPython, Humble Data and wider Python community. I look forward to EuroPython every year (in whatever capacity I attend) so I can see all of these amazing, special friends.

EP: What's one misconception about conference volunteering you'd like to clear up?

I think one of the biggest misconceptions that people have about community conferences like EuroPython is that they're run by professionals. While the EuroPython Society does have one (very talented) paid employee, most of the work you see at these conferences is done by members of the community, just like you and me. So if you feel inspired to contribute to EuroPython or another Python conference, reach out and find out how you can help! Although it can be a lot of work, it's also very meaningful to know you've shaped an event that means a lot to the Python community.

EP: Thank you for your work, Jodie!

28 Mar 2026 6:20pm GMT

27 Mar 2026

Django community aggregator: Community blog posts

Django News - Balancing the AI Flood in Django - Mar 27th 2026

News

Calling for research participants from Django, Laravel, Ruby on Rails, Next.js and Spring Boot communities

Former DSF President and researcher Anna Makarudze is seeking Django developers to share insights on dependency vulnerabilities and supply chain risks in open source.

pychronicles.com

Djangonaut Space News

Djangonaut Space Financial Report 2025

Djangonaut Space's 2025 report highlights a community-powered year of $2.2k in donations funding tools and conference access, while setting sights on sending contributors to even more events in 2026.

djangonaut.space

Djangonaut diaries, week 3 - Working on an ORM issue

A deep dive into Django's ManyToMany indexes reveals an unnecessary extra index, showing how databases already optimize with composite indexes and setting the stage for a cleaner ORM fix.

dev.to

Wagtail CMS News

Wagtail Routable Pages and Layout Configuration

Build flexible Wagtail routable pages that use StreamField layouts to dynamically control how Django model data renders on detail views.

djangotricks.com

Updates to Django

Today, "Updates to Django" is presented by Raffaella from Djangonaut Space! 🚀

Last week we had 18 pull requests merged into Django by 15 different contributors - including 4 first-time contributors! Congratulations to Juho Hautala, Huwaiza, (James) Kanin Kearpimy 🚀 and Praful Gulani for having their first commits merged into Django - welcome on board!

News in Django 6.1:

Django Newsletter

Django Fellow Reports

Fellow Report - Natalia

A significant portion of this week was dedicated to security work (yes, again). As usual, details here are intentionally kept at a high level, but the time went into triaging new reports, progressing in-flight likely confirmed issues, validating proposed fixes, and coordinating next steps with the team.

One additional challenge worth noting is the volume of near-duplicate reports; beyond triage, this often requires careful comparison across long submissions to identify what is actually new or meaningfully different.

djangoproject.com

Fellow Report - Jacob

Easy to miss in the release notes (as we only described the user-facing changes for edge cases), but last week we landed (with great joy) @charettes' defense-in-depth measure for the ORM that ensures user-provided aliases are always quoted.

In addition to the below, another steady week advancing pending security reports.

djangoproject.com

Sponsored Link 1

The deployment service for developers and teams.

appliku.com

Articles

Learning LLM Integration

A practical, from-scratch look at integrating LLMs into a Django app, highlighting why isolating the AI layer and writing precise prompts makes all the difference.

judkins.dev

Give Django your time and money, not your tokens

The Django community wants to collaborate with you, not a facade of you.

better-simple.com

Open Source Has a Bot Problem

The maintainer of awesome-mcp-servers came up with a solution, of sorts, to curating AI-generated PRs.

glama.ai

Why pylock.toml includes digital attestations

A Python project was compromised, with malicious releases uploaded directly to PyPI. I said on Mastodon that had the project used trusted publishing with digital attestations, people using a pylock.toml file would have noticed something odd was going on, thanks to the lock file including attestation data.

snarky.ca

Rewriting a 20-year-old Python library

A thoughtful deep dive into rewriting a 20-year-old Python library, covering async design, API ergonomics, and how to modernize without breaking users.

b-list.org

Playground embedding, packages and more

The nanodjango playground has several exciting new features that transform what you can achieve with it: you can now manage packages and secrets, share scripts from the command line, and embed live Django code in your own site.

nanodjango.dev

Human.json

A quick look at human.json, a lightweight protocol for sharing human-readable metadata, with a simple Django implementation and a healthy dose of skepticism about its long-term adoption.

screamingatmyscreen.com

Videos

PyCon US 2026 - Elaine Wong & Jon Banafato

A behind-the-scenes look at PyCon US 2026 with chair Elaine Wong and co-chair Jon Banafato, covering what's new, how to prepare, and tips to make the most of the biggest Python conference in North America.

djangotv.com

Django Job Board

Solutions Architect - Python (Client-facing) at JetBrains

Django Newsletter

Django Forum

Discouraging "the voice from nowhere" (~LLMs) in documentation

Forum discussion on maintaining a human (not LLM) voice in Django's documentation.

djangoproject.com

Projects

kujov/django-tw

Zero-config Tailwind CSS v4 for Django.

github.com

VojtechPetru/django-live-translations

In-browser translation editing for Django superusers.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

27 Mar 2026 3:00pm GMT

Planet Python

Real Python: The Real Python Podcast – Episode #289: Limitations in Human and Automated Code Review

With the mountains of Python code that it's possible to generate now, how's your code review going? What are the limitations of human review, and where does machine review excel? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.



27 Mar 2026 12:00pm GMT

Real Python: Quiz: Interacting With REST APIs and Python

In this quiz, you'll test your understanding of Interacting with REST APIs in Python.

This quiz reviews REST principles, HTTP methods, status codes, and Python tools like requests, Flask, FastAPI, and Django REST Framework.

Test your understanding of consuming and designing REST APIs, Pydantic validation, and endpoint design. For more practice, revisit the course page for guided lessons and examples.



27 Mar 2026 12:00pm GMT

25 Mar 2026

Django community aggregator: Community blog posts

LLMs for Open Source maintenance: a cautious case


When ChatGPT appeared on the scene I was very annoyed at all the hype surrounding it. Since I'm working in the fast moving and low margin business of communication and campaigning agencies I'm surrounded by people eager to jump on the hype train when a tool promises to lessen the workload and take stuff from everyone's plate.

These discussions, coupled with the fact that the training of these tools required unfathomable amounts of stealing, were the reason for my big reluctance to try them out. I'm using the word stealing here on purpose, since that's exactly the crime Aaron Swartz was accused of by the U.S. Attorney's Office for the District of Massachusetts. It's frustrating that some can get away with the same crime when it is so much bigger. For example, OpenAI and Anthropic downloaded much more data than Aaron ever did.

A somewhat related thing happened with the too-big-to-fail banks: There, the people at the top were even compensated with golden parachutes at the end. LLM companies seem to be above accountability too.

Despite all this, I have slowly started integrating these tools into my workflows. I don't remember the exact point in time, but sometime in 2025 my opinion of their utility started to change. At the beginning, I always removed the attribution and took great care to write and rewrite the code myself, only using the LLMs for inspiration and maybe to generate integration tests. More and more I have to admit that they are useful, especially in time-constrained projects with a clear focus and purpose.

Last month I fixed and/or closed all open issues in the django-tree-queries repository with the help of Claude Code. Is that a good thing? It could be argued I should have done the work myself. But I wouldn't have; I have other things I want to do with my time. I don't want to (always) work on Open Source software in the evening. I have definitely also leaned heavily on LLMs when working on django-prose-editor.

Is faster better?

We can produce more code, more features, and close tickets faster than before. In my experience the speed-up isn't as big as some people may want us to believe, but it's there. And contrary to what people in my LinkedIn feed say, that's not an obviously good thing. Is it a race to the bottom where we drown in LLM-generated slop in quantities impossible to maintain? It doesn't feel like that, but it's a race that could go both ways. Throwaway code can be thrown away, though, and well-tested code does what the tests say, which is good enough according to my rules for releasing open source software.

Speaking as someone who has put more into the training set than they've taken out so far, I don't feel all that bad using the tools. Coding agents can already be run locally with reasonable hardware requirements, at least during inference, which is where the ongoing cost sits. Maybe using them is still rationalization. But contribution and profit needing to stay in some rough balance feels like the right frame. Total abstinence isn't the only ethical choice we have.

Community tensions

What makes me less comfortable is how communities are reacting. There are real concerns within the Django world, and not just the practical one of overworked maintainers wading through hastily generated patches that don't actually fix anything. The deeper worry is about the communal nature of contribution: that working on Django is supposed to be a learning experience, a way into the community, and that using an LLM as a vehicle rather than a tool hollows out that process. Reviewers end up interacting with what is essentially a facade, unable to tell whether anyone actually understood the problem. That's a real concern and I don't want to dismiss it.

But it maps onto a different situation from what I've been describing. Using Claude Code to close issues in projects I maintain and understand is not the same as using it to paper over gaps in comprehension on a ticket in someone else's project. Whether LLM-assisted contributions to Django itself are appropriate is a difficult question; whether it's appropriate to use them when maintaining your own software less so.

There's also a harder tension around quality. Django's conservatism has real value: rigorous review, minimal magic, a coherent philosophy. The ORM and template system don't need to reinvent themselves; they work well and are still evolving while staying rock-solid for all my use cases. And reading the release notes always brings me joy. But it could be more exciting more often. Quality isn't a strictly positive thing. Everything has costs. It's not great if the price of that bar is that legitimate bugs sit open for years because nobody has a few evenings to spend on them. That's what happened with django-tree-queries before I went through it with Claude Code. I think the bar for contributing to Django is too high. I would value a little more motion and a little less stability, even as someone running dozens of Django websites and apps.

Then there's the pile-on dynamic that plays out on Mastodon and GitHub. When the Harfbuzz and chardet maintainers disclosed LLM usage, the reaction from some corners was something to behold: people expressing what amounted to personal grievance over tooling choices in projects they may not even use. There's a particular kind of entitlement in telling a maintainer, who is keeping software alive, possibly even in their spare time, that the way they choose to do that work is an affront. Open source is a gift, whether paid or not, and nobody has to accept it, but disclosing your tooling isn't an invitation for complaints. The ethical concerns about training data, resource use and other negative externalities are legitimate and worth raising. Performative outrage directed at individual maintainers is not the same thing.

I don't have an easy conclusion. The tools are useful, the ethics are murky, and communities are still figuring out how to respond. A cautious, honest use of them feels better to me than the alternatives.

25 Mar 2026 5:00pm GMT

Building modern Django apps with Alpine AJAX, revisited

About nine months ago I wrote an article about my quest to simplify my web development stack. How I went from SvelteKit on the frontend and Django on the backend, to an all-Django stack for a new project, using Alpine AJAX to enable partial page updates.

I've now been using this new stack for a while, and my approach, as well as my opinion, has changed significantly. Let's get into what works, what doesn't, and where I ended up.

A quick recap

Alpine AJAX is a lightweight alternative to htmx, which you can use to enhance server-side rendered HTML with a few attributes, turning <a> and <form> tags into AJAX-powered versions. No more full page refreshes when you submit a form.

The key mechanic: when a form has x-target="comments", Alpine AJAX submits the form via AJAX, finds the element with that ID in the response, and swaps it into the page. The server returns HTML, not JSON.

In the original article I used django-template-partials (since merged into Django itself) to mark sections of a template as named partials using {% partialdef %}. Combined with a custom AlpineTemplateResponse the view could automatically return just the targeted partial when the request came from Alpine AJAX.
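To make that mechanism concrete, here is a minimal sketch of what such a response class could look like. This is an approximation, not the original code: it assumes Alpine AJAX marks its requests with an X-Alpine-Request header and that the targeted element id arrives in an X-Alpine-Target header (both assumptions; check the Alpine AJAX docs and the original article), while the "template.html#partial" name syntax is the one django-template-partials provides.

from django.template.response import TemplateResponse


class AlpineTemplateResponse(TemplateResponse):
    def resolve_template(self, template):
        request = self._request
        target = request.headers.get("X-Alpine-Target")
        if request.headers.get("X-Alpine-Request") and target:
            # Render only the named partial, using the
            # "template_name#partial_name" syntax from
            # django-template-partials.
            return super().resolve_template(f"{self.template_name}#{target}")
        # Otherwise fall back to rendering the full template.
        return super().resolve_template(template)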

Where I began: template partials

Let's say you have an article page with the article body parsed from Markdown, a like button, and a comment section. The template looks something like this:

article.html

{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article_html|safe }}

    {% partialdef like_form inline %}
      <form method="post" id="like_form" x-target="like_form">
        {% csrf_token %}
        <button type="submit" name="toggle-like">
          {% if article.is_liked %}Unlike{% else %}Like{% endif %}
        </button>
      </form>
    {% endpartialdef %}

    {% partialdef comments inline %}
      <div id="comments">
        {% for comment in article.comments.all %}
          <div>{{ comment.user }}: {{ comment.text }}</div>
        {% endfor %}
        <form method="post" x-target="comments">
          {% csrf_token %}
          {{ comment_form }}
          <button type="submit" name="add-comment">Submit</button>
        </form>
      </div>
    {% endpartialdef %}
  </article>
{% endblock %}

Every form action POSTs to the same article view, which handles all the actions in one big post method:

views.py

class ArticleView(View):
    def get_context(self, request, pk):
        article = get_object_or_404(
            Article.objects.prefetch_related("comments")
            .annotate_is_liked(request.user),
            pk=pk,
        )
        return {
            "article": article,
            "article_html": markdown(article.body),
            "comment_form": CommentForm(),
        }

    def post(self, request, pk):
        context = self.get_context(request, pk)
        article = context["article"]

        if "toggle-like" in request.POST:
            if article.is_liked:
                article.unlike(request.user)
                article.is_liked = False
            else:
                article.like(request.user)
                article.is_liked = True
            return AlpineTemplateResponse(request, "article.html", context)

        if "add-comment" in request.POST:
            form = CommentForm(request.POST)
            if form.is_valid():
                Comment.objects.create(article=article, user=request.user, ...)
                return AlpineTemplateResponse(request, "article.html", context)

        return redirect(article)

    def get(self, request, pk):
        context = self.get_context(request, pk)
        return AlpineTemplateResponse(request, "article.html", context)

The AlpineTemplateResponse from the original article takes care of returning just the targeted partial when the request comes from Alpine AJAX. It works. I thought I was being smart to prevent template duplication this way, but there are two problems:

  1. The view does too much work. Every POST action calls get_context, which fetches everything: the article, the parsed Markdown body, the comments, the like state, the comment form. When the user clicks "Like", we do all this work we'll never use in the partial template. The template partial means the response is small, but the server-side work is exactly the same as rendering the full page.

  2. The template is a mess. Those {% partialdef %} blocks scattered throughout the template make it noisy and hard to read. In a small example it's fine, but in a real template with 200+ lines, it gets ugly fast.

When doubt set in: switching to Jinja2

To be honest though, the real killer of my motivation while working on this project has been the Django Template Language. I'm sorry, but I just hate it. I have since 2009, and I still do. The syntax is bad enough, but then you have to constantly fight its limitations. The fact that I can't simply call a function is so incredibly annoying, and causes way more boilerplate in the form of tons of custom template tags and filters.

So, switch to Jinja2, right? Except that template partials aren't supported in combination with Jinja2. No more {% partialdef %}. Which means returning full page responses for AJAX requests, which isn't exactly ideal.

I did it anyway. I ripped out all the {% partialdef %} tags, migrated my templates to Jinja2, and my views just returned the full template for AJAX requests. Alpine AJAX is smart enough to extract the elements it needs by their IDs, and throws away the rest.

This was simpler and I was much happier writing Jinja2 templates. But the wastefulness got worse. Before, the server at least returned a small response. Now it rendered the entire page and sent all of it over the wire, just for the browser to use a tiny piece of it.

It was at this moment that I seriously thought about throwing the entire frontend away and rebuilding it in SvelteKit, with Django REST Framework returning JSON responses. But that seemed like a pretty big waste of effort, so instead I took a deep breath and thought about what I wanted:

  1. Jinja2 templates. Non-negotiable.
  2. Small, fast AJAX responses. No rendering the full page for a like toggle.
  3. No template duplication between the full page and the AJAX response.
  4. Simple views that only do the work they need to do.

Template partials gave me #2 and #3, but not #1 or #4. Switching to Jinja2 and returning the full template for AJAX requests gave me #1 and #3, but not #2 or #4. I needed a different approach.

Where I ended up: separate views with template includes

The answer turned out to be straightforward, and the one I initially discarded as "too much boilerplate": instead of one monolithic view handling all POST actions, split each action into its own view with its own URL. And instead of {% partialdef %}, use plain {% include %} tags to extract reusable template fragments.

Let me show you. Here's the simplified article template:

article.html

{% extends "base.html" %}

{% block body %}
  <article>
    <h1>{{ article.title }}</h1>
    {{ article.body }}

    {% include "articles/_like_form.html" %}
    {% include "articles/_comments.html" %}
  </article>
{% endblock %}

Clean and readable. Each include is a self-contained fragment. And here's the like form:

_like_form.html

<form
  method="post"
  action="{{ url('toggle-like', args=[article.id]) }}"
  id="like_form"
  x-target="like_form"
>
  {{ csrf_input }}
  {% if article.is_liked %}
    <button type="submit">Unlike</button>
  {% else %}
    <button type="submit">Like</button>
  {% endif %}
</form>

And finally, the view:

views.py

class ToggleLikeView(LoginRequiredMixin, View):
    def post(self, request, pk):
        article = get_object_or_404(
            Article.objects.annotate_is_liked(request.user),
            pk=pk,
        )
        if article.is_liked:
            article.unlike(request.user)
            article.is_liked = False
            article.like_count -= 1
        else:
            article.like(request.user)
            article.is_liked = True
            article.like_count += 1

        if is_alpine(request):
            return TemplateResponse(
                request,
                "articles/_like_form.html",
                {"article": article},
            )

        # For non-Alpine requests, we just redirect back
        return redirect(article)

No comment queries. No form building. No Markdown parsing. Just the like state.

The is_alpine check provides a redirect fallback for non-JavaScript POST requests, keeping things progressive. And the ArticleView itself becomes GET-only. No more branching on POST keys. No get_context method that fetches everything for every action. Each view does one thing.
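For reference, the is_alpine helper can be a one-liner. This sketch again assumes Alpine AJAX identifies its requests with an X-Alpine-Request header (an assumption to verify against the library's docs):

def is_alpine(request) -> bool:
    # Assumption: Alpine AJAX marks its fetch requests with this header.
    return request.headers.get("X-Alpine-Request") == "true"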

The trade-offs

More templates. For the article page, I went from one template to several: the include fragments (_like_form.html, _comments.html) that are shared between the full page and the AJAX responses. When an action needs to update multiple elements on the page, you also end up with small response templates that combine the right includes. For example, if submitting a comment should update both the comment list and a comment count elsewhere on the page:

_add_comment_response.html

{% include "articles/_comments.html" %}
{% include "articles/_engagement_counts.html" %}

Trivial, but still a file you have to create and name.

More views and URL routes. Each action gets its own view class and its own path() entry. For a page with likes, comments, and subscriptions, that's three or four extra views.
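To give a feel for that extra routing, here is a sketch of what the URLconf for the article page might look like. The paths and names are illustrative, and AddCommentView is a hypothetical sibling of ToggleLikeView; only the 'toggle-like' route name is taken from the template above.

from django.urls import path

from . import views

urlpatterns = [
    path("articles/<int:pk>/", views.ArticleView.as_view(), name="article-detail"),
    path("articles/<int:pk>/like/", views.ToggleLikeView.as_view(), name="toggle-like"),
    path("articles/<int:pk>/comment/", views.AddCommentView.as_view(), name="add-comment"),
]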

But here's what I got in return:

Actual performance improvement. Not just smaller responses, but less work on the server. Each view only queries what it needs.

Jinja2. I'm using Jinja2 instead of the Django Template Language. I can call functions, I have proper expressions, and I don't need custom template tags for basic things. This alone was worth the switch.

Readable templates. The main article.html is short and shows the page structure at a glance. Each fragment is self-contained. No {% partialdef %} blocks scattered everywhere.

Simple views. Each view does exactly one thing. Easy to understand, easy to test, easy to optimize.

Conclusion

I went through three stages: template partials with Django Template Language, full-page responses with Jinja2, and finally separated views with template includes. Each step solved a real problem with the previous approach.

The pattern I've landed on requires more files and views than I'd like, but each is simple and does one thing.

My overall feelings on Django + Alpine AJAX have also changed. I still believe there are benefits to using a simplified tech stack and using hypermedia as the engine of state. Just return HTML instead of returning JSON to a JavaScript framework which then has to turn it into HTML. Conceptually it just makes sense to me.

But the dream was to build a plain old Django application using simple views and simple templates, using old-fashioned MPA server-rendered pages. Sprinkle in a few Alpine AJAX attributes and magically your site gets SPA-like usability. And it simply hasn't played out that way for me. Yes, you could do that, if you're fine with the wastefulness of returning full pages as a response to AJAX requests. But when you want to do it better than that, you end up with more boilerplate to make it possible to return small bits of HTML.

And this isn't really about Alpine AJAX specifically; htmx would lead to the exact same place. The fundamental tension is in the HTML-over-the-wire approach itself: the server has to know which fragments of HTML to return, and that means structuring your views and templates around it. You trade the complexity of a JavaScript frontend for a different kind of complexity on the server.

Progressive enhancement adds to that complexity. Every view needs an is_alpine check with a redirect fallback, every form needs to work both as a regular POST and as an AJAX submit. If I dropped progressive enhancement and just required JavaScript, those redirect fallbacks and the branching that comes with them would disappear. The views would be simpler. But I think progressive enhancement is important enough to keep in place.

Would I use Alpine AJAX (or htmx) again? Honestly: probably not. I have a lot more fun when building frontends with SvelteKit. Building composable and reusable UI components is so much more natural there, and the performance is simply better (once the initial JS bundle has been downloaded and parsed). But am I going to throw away my current project's code and redo it all? No, I am not. Django with Alpine AJAX is a nice change of scenery, it's a nice playground I don't usually get to play in. I think I ended up with a good compromise, and hey: I still don't have to build and maintain a separate API, API docs, and frontend.

25 Mar 2026 3:16pm GMT

16 Mar 2026

Planet Twisted

Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control

I have been using macOS Voice Control for about three years. At first it was a way to reduce pain from excessive computer use. It has been a real struggle: decades of computer-use habits with typing and the mouse are hard to overcome!

Text selection and manipulation commands work quite well in macOS native apps, like apps written in Swift, or Safari with an accessibly tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: "select foo", where foo is a word in the text box, may not work at all, and moving the cursor or extending the selection suffers from off-by-one errors.

I only recently expanded my repertoire with the "start drag" and "drop" commands, previously having used "Click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text selection method! This is really going to improve my speed.

In the long run, I believe computer voice control in general is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing and the number of times it misses or misunderstands commands still really hold it back. I've been learning the macOS Voice Control-specific command set for years now and I still reach for the keyboard and mouse way too often.

16 Mar 2026 11:04am GMT

04 Mar 2026

Planet Twisted

Glyph Lefkowitz: What Is Code Review For?

Humans Are Bad At Perceiving

Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.

We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.

Each of these has implications for the fundamental limitations of code review as an engineering practice:

Never Send A Human To Do A Machine's Job

When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate that type of error and, thanks to our old friend "alert fatigue" above, ideally also to remedy it. These tools will relieve the need for a human to make the same repetitive checks over and over. None of them are perfect, but:

Don't blame reviewers for missing these things.

Code review should not be how you catch bugs.

What Is Code Review For, Then?

Code review is for three things.

First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.
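As a hypothetical example of what that looks like in practice, suppose reviewers keep catching naive datetime.utcnow() calls; a small test in CI can then catch that class of error deterministically (the banned pattern and the "src" path are illustrative):

import pathlib
import re

# The bug class we kept catching in review, now enforced by CI.
BANNED = re.compile(r"\bdatetime\.utcnow\(\)")


def test_no_naive_utcnow():
    offenders = [
        str(path)
        for path in pathlib.Path("src").rglob("*.py")
        if BANNED.search(path.read_text(encoding="utf-8"))
    ]
    assert not offenders, f"Use timezone-aware datetimes in: {offenders}"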

Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.

You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.

Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".

Oops, Surprise, This Post Is Actually About LLMs Again

Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. An important corollary of the understanding that code review is a social activity is that LLMs are not social actors, so you cannot rely on code review to inspect their output.

My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.

When you relate to a human colleague, you will expect that:

  1. you can decide what problems to focus on based on their level of experience and areas of expertise: from a late-career colleague you might be looking for bad habits held over from legacy programming languages; from an earlier-career colleague you might be focused more on logical test-coverage gaps,
  2. and, they will learn from repeated interactions, so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it.

With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.

You will still find supposedly extremely sophisticated LLMs making extremely common mistakes, specifically because they are common, and thus appear frequently in the training data.

The LLM also can't really learn. An intuitive response to this problem is to simply continue adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem, "context rot", is somewhat fundamental to the nature of the technology.

Thus, code generators must be treated more adversarially than you would treat a human code review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that evaluates the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window in the way that a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.

To Sum Up

Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.

If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure that its agentic loop will fail on its own, immediately, the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.

But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!

04 Mar 2026 5:24am GMT

19 Feb 2026

Planet Twisted

Donovan Preston: Wello Horld.

Onovanday Restonpay is going to logbay here again. It's time to take back the rss-source-rss-reader web of links

19 Feb 2026 2:36am GMT

22 Jan 2026

Planet Plone - Where Developers And Integrators Write

Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, and Docker, and, as a game changer, uv, which makes the installation of Python packages much faster.

With cookieplone you get a monorepo, with folders for backend, frontend, and devops. devops contains scripts to set up the server and deploy to it. Our sysadmins already had some other scripts, so we needed to integrate that.

First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.

Maik Derstappen showed me copier, yet another templating language. Our idea: create a cookieplone project, and then use copier to modify it.

What about the deployment? We are on GitLab. We host our runners. We use the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This activates a pipeline to check, test, and build. When it is merged, we bump the version using release-it.

Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.

For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process to see if they work properly. In the future we can think about automating deployment. For now we just ssh to the server and perform some commands there with docker.

Future improvements:

  • Start the docker containers and curl/wget the /ok endpoint.
  • lock files for the backend, with pip/uv.

22 Jan 2026 9:43am GMT

Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.

There are several challenges when doing Plone migrations:

  • Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
  • Complex data structures. For example a Folder with a Link as default page, which pointed to some other content that had meanwhile been moved.
  • Migrating Classic UI to Volto
  • Also, you might be migrating from a completely different CMS to Plone.

How do we do migrations in Plone in general?

  • In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
  • Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.

Let's look at export/import, which has three parts:

  • Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
  • Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
  • Load: Transmogrifier, collective.exportimport, plone.exportimport.

Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.

collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.

Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.

Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.

collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.

22 Jan 2026 9:43am GMT

Maurits van Rees: Fred van Dijk: Behind the screens: the state and direction of Plone community IT

This is a talk I did not want to give.

I am team lead of the Plone Admin team, and work at kitconcept.

The current state: see the keynotes, lots happening on the frontend. Good.

The current state of our IT: very troubling and daunting.

This is not a 'blame game'. But focusing on resources and people should be a first priority at this conference. We are a real volunteer organisation; nobody is pushing anybody around. That is a strength, but also a weakness. We also see that in the Admin team.

The Admin team is 4 senior Plonistas as all-round admins, 2 release managers, and 2 CI/CD experts; 3 are former board members, and everyone is overburdened with work. We had all kinds of plans for this year, but we have mostly been putting out fires.

We are a volunteer organisation and don't have a big company behind us that can throw money at the problems. Strength and weakness. Across society, the number of volunteers is decreasing; that is a problem.

Root causes:

  • We failed to scale down in time in our IT landscape and usage.
  • We have no clean role descriptions, team descriptions, we can't ask a minimum effort per week or month.
  • The trend is more communication channels, platforms to join and promote yourself, apps to use.

Overview of what we have to keep running as the Admin team:

  • Support main development process: github, CI/CD, Jenkins main and runners, dist.plone.org.
  • Main communication, documentation: plone.org, docs.plone.org, training.plone.org, conf and country sites, Matomo.
  • Community office automation: Google Docs, Workspace, Quaive, Signal, Slack
  • Broader: Discourse and Discord

The first two are really needed; the second two we already have some problems with.

Some services are self hosted, but also a lot of SAAS services/platforms. In all, it is quite a bit.

The Admin team does not officially support all of these, but it does provide fallback support. It is too much for the current team.

There are plans for what we can improve in the short term. Thank you to a lot of people that I have already talked to about this. 3 areas: GitHub setup and config, Google Workspace, user management.

On GitHub we have a sponsored OSS plan. So we have extra features for free, but it is not nearly enough. User management: hard to get people out. You can't contact your members directly. E-mail addresses have been removed, for privacy. Features get added on GitHub, with no complete changelog.

Challenge on GitHub: we have public repositories, but we also have our deployments in there. The only really secure option would be private repositories; otherwise the danger is that credentials or secrets could get stolen. Every developer with access becomes an attack vector. Auditing is available for only 6 months. A simple question like "who has been active for the last 2 years?" No, can't do.

Some actionable items on GitHub:

  • We will separate the contributor agreement check from the organisation membership. We create a hidden team for those who signed, and use that in the check.
  • Cleanup users, use Contributors team, Developers
  • Active members: check who has contributed the last years.
  • There have been security incidents. Someone accidentally removed a few repositories. Someone's account got hacked, luckily discovered within a few hours, and some actions had already been taken.
  • More fine grained teams to control repository access.
  • Use of GitHub Discussions for some central communication of changes.
  • Use project management better.
  • The elephant in the room that we have practice on this year, and ongoing: the Collective organisation. This was free for all, very nice, but the development world is not a nice and safe place anymore. So we already needed to lock down some things there.
  • Keep deployments and the secrets all out of GitHub, so no secrets can be stolen.

Google Workspace:

  • We are dependent on this.
  • No user management. Admins have had access because they were on the board, but they kept access after leaving the board. So remove most inactive users.
  • Spam and moderation issues
  • We could move to Google Docs for all kinds of things and use Google Workspace shared drives for all of it. But the Drive UI is a mess, so docs can end up in your personal account without you realizing it.

User management:

  • We need separate standalone user management, but implementation is not clear.
  • We cannot contact our members one on one.

Oh yes, Plone websites:

  • upgrade plone.org
  • self preservation: I know what needs to be done, and can do it, but have no time, focusing on the previous points instead.

22 Jan 2026 9:43am GMT