15 May 2026
Planet Python
PyCharm: Pyrefly LSP Integration with Type Engine in PyCharm 2026.1.2
In PyCharm 2026.1.2, you can enable Pyrefly as an external type provider, dramatically increasing the speed of the IDE's code insight features.
What is the Pyrefly LSP?
"LSP" stands for the Language Server Protocol - a standardized protocol that allows code editors and IDEs to communicate with language servers. The LSP enables language servers to provide code intelligence features, such as:
- Code completion
- Information on hover (for example, quick documentation)
- Go to definition and other actions
- Error checking and type-related diagnostics
The key benefit of the LSP is that it allows a single language server to be used across multiple tools. This means that language-specific intelligence does not have to be implemented separately in every editor, IDE, or CI pipeline.
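To make this concrete, here is a minimal sketch of what one such request looks like on the wire (not tied to Pyrefly or PyCharm; the file path and cursor position are invented). LSP messages are JSON-RPC payloads prefixed with a Content-Length header:

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC payload with the Content-Length header the LSP uses."""
    body = json.dumps(payload).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(body) + body

# A hover request: "what is under the cursor at line 10, column 4?"
hover_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/hover",
    "params": {
        "textDocument": {"uri": "file:///project/app.py"},
        "position": {"line": 10, "character": 4},
    },
}

message = frame_lsp_message(hover_request)
```

Any editor that can produce messages like this can talk to any language server that understands them, which is exactly why the protocol decouples code intelligence from the IDE.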
Pyrefly is Meta's next-generation Python type checker, engineered from the ground up in Rust to replace its predecessor, Pyre (written in OCaml). With the move to Rust, Pyrefly achieves significantly faster performance and improved cross-platform portability. More than just a rewrite, it is designed to be more capable and robust, offering an efficient toolset for maintaining large-scale Python codebases with high precision and minimal overhead.
Pyrefly provides the following benefits:
- Higher performance and efficiency - Thanks to its Rust-based architecture, Pyrefly achieves significantly faster speeds and improves cross-platform portability.
- Enhanced code intelligence - As an external type provider, Pyrefly powers essential code insight features in the IDE, including type inference, type-related diagnostics, quick documentation, and inlay hints.
- Scalability - Pyrefly is designed to handle large-scale Python codebases with high precision and minimal overhead.
Pyrefly is highly beneficial for projects and developers dealing with large, complex Python codebases that prioritize performance and robust typing. Integrating Pyrefly via the LSP is part of our ongoing work to enhance code insight performance in PyCharm.
Using Pyrefly in PyCharm
Once enabled, Pyrefly powers all code insight functionality in PyCharm, including type inference and type-related diagnostics, quick documentation, and inlay hints. Delegating analysis to this faster engine delivers significantly improved performance.
To start using Pyrefly in your PyCharm project, go to the Type widget at the bottom of the window. By default, the IDE uses the built-in type engine. Click on the widget and select the option to use Pyrefly. If you do not have Pyrefly installed yet, PyCharm will install it automatically.

Once you've switched to the Pyrefly type engine, you will see a Pyrefly icon at the bottom, which you can hover over to check the version being used.

Please note that the integration currently works for local interpreter configurations. Support for Docker, Docker Compose, WSL, SSH, and multi-module projects is planned for future releases.
Pyrefly vs. the built-in type engine
Now let's look at how Pyrefly and the built-in type engine behave in a complex Python project. In this FastAPI example, multiple files are typed, but in this file, the variable ref is incorrectly typed, causing four errors. When using the built-in type engine, the IDE identifies that something is wrong, but it suggests running further analysis to fix the problem, which requires an extra step.

Using Pyrefly as the type engine, the IDE reports errors immediately and highlights where they originate. However, it is worth noting that, in our example, there are four errors, but Pyrefly picks up only three of them. It misses the one in self._storage[ref].

Download the latest version of PyCharm and try it out
Ready to experience a dramatic leap in Python development performance? The Pyrefly type engine in PyCharm 2026.1.2 delivers the next generation of type checking. Engineered in Rust for unparalleled speed, it resolves files in as little as 0.5-1 seconds, significantly faster than the built-in engine. If you maintain large, complex Python codebases and prioritize robust typing, this feature is essential, as it allows you to delegate analysis to a faster engine and receive immediate type-related diagnostics. Download the latest version of PyCharm (2026.1.2) to unlock superior efficiency, scalability, and code insight.
15 May 2026 3:31pm GMT
Django community aggregator: Community blog posts
Issue 337: Django Developers Survey 2026
Will and Jeff are at PyCon US in Long Beach, California this week. Drop by the Django Software Foundation booth or the JetBrains booth and say hello.
News
Django Developers Survey 2026
The Django Software Foundation is once again partnering with JetBrains to run the 2026 Django Developers Survey! Help us better understand how Django is being used around the world and guide future technical and community decisions.
DSF member of the month - Bhuvnesh Sharma
Bhuvnesh has been a Django contributor since 2022 and was a Google Summer of Code (GSoC) participant for Django in 2023. He is now a mentor and admin organizer for GSoC for the Django organization, as well as the founder of Django Events Foundation India (DEFI) and the DjangoDay India conference.
Announcing the Google Summer of Code 2026 contributors for Django
Google Summer of Code 2026 contributors have been announced for Django, listing the developers who will be working on projects as part of the program. If you are following Django's next wave of community work, this is the roll-up of who's joining and what to watch for.
Releases
Python 3.14.5 is out!
Python 3.14.5 is now available, bringing the latest point release in the Python 3.14 line. If you maintain Django apps, use the update as your prompt to verify dependencies and run your test suite against 3.14.5 before rolling forward.
Updates to Django
Today, "Updates to Django" is presented by Johanan Oppong Amoateng from Djangonaut Space!
Last week we had 22 pull requests merged into Django by 13 different contributors - including 4 first-time contributors! Congratulations to Denny Biasiolli, Milad Zarour, MANAS MADESHIYA and Héctor Castillo for having their first commits merged into Django - welcome on board!
This week's Django highlights:
- Allowed max redirect URL length to be set on HttpResponseRedirect. (#36767)
- Added support for object-based form media stylesheet assets. (#37085)
- Deprecated SHA-1 default for salted_hmac() and base64_hmac() algorithm. (#37078)
Python Software Foundation
Python Software Foundation News: Announcing PSF Community Service Award Recipients!
Python Software Foundation has announced the recipients of its PSF Community Service Award. The update highlights people recognized for their contributions to the Python community.
Python Software Foundation News: Strategic Planning at the PSF
Python Software Foundation News covers the PSF's strategic planning efforts and the direction they are working toward. Expect a focus on how the foundation plans its priorities and activities moving forward.
Wagtail CMS News
Results of the 2026 Wagtail DX with AI survey
The 2026 Wagtail DX survey reports where teams are applying AI and what they want next from the platform. Use the findings to align your own Wagtail and AI experimentation with the issues practitioners are actually raising.
Our four contributors for Google Summer of Code 2026
Google Summer of Code 2026 is welcoming four contributors, highlighting the people behind the upcoming work. If you're tracking Django ecosystem activity, this is a quick way to see who's starting and what to watch for next.
Sponsored Link
Middleware, but for AI agents
Django middleware composes request handlers. Harnesses do the same for AI agents - Claude Code, Codex, Gemini in one coordinated system. Learn what a harness actually is, why it's a new primitive, and how to engineer one that holds in production. Apache 2.0, open source.

Articles
How to have a great first PyCon (updated for 2026)
Timeless advice from Trey Hunner on how to make the most out of PyCon US this week or any other technical conference.
Using Django Tasks in production Β· Better Simple
Production-ready Django task setups: what to change, what to watch, and how to keep background jobs reliable once you leave local dev. Useful guidance for deploying and operating task workers with fewer surprises.
Dealing with Dead Links (404s): 2026 Edition | Will Vincent
A practical guide to handling dead links in Django, focusing on what to do when a URL no longer exists and how to respond with clean, user-friendly 404 behavior. Expect guidance on keeping routing and error handling tidy as your site evolves.
Podcasts
Django Chat #203: Deploy on Day One - Calvin Hendryx-Parker
Calvin is the co-founder and CTO of the consultancy SixFeetUp. We discuss developer experience from day one, Kubernetes as a feature, real-world usage of AI and agentic tooling, typing in Python, the junior developer pipeline problem, and more. Also available in video format on YouTube.
Django Job Board
Founding Engineer at MyDataValue
Junior Software Developer (Apprentice) at UCS Assist
PyPI Sustainability Engineer at Python Software Foundation
Projects
abu-rayhan-alif/djangoSecurityHunter
A security and performance inspector for Django & DRF. Features static analysis, config checks, N+1 query detection, and SARIF support for GitHub Code Scanning.
janraasch/dsd-vps-kamal
A Django Simple Deploy plugin for configuring & automating deployments of your Django project to any VPS using Kamal.
15 May 2026 2:00pm GMT
Planet Python
Real Python: The Real Python Podcast β Episode #295: Agentic Architecture: Why Files Aren't Always Enough
What are the limitations of using a file-based agent workflow? Why do massive context windows tend to collapse? This week on the show, Mikiko Bazeley from MongoDB joins us to discuss agentic architecture and context engineering.
[ Improve Your Python With Python Tricks - Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
15 May 2026 12:00pm GMT
Real Python: Quiz: Python's Array: Working With Numeric Data Efficiently
In this quiz, you'll test your understanding of Python's Array: Working With Numeric Data Efficiently.
By working through this quiz, you'll revisit the differences between Python's array module and the built-in list, the meaning of type codes, how to create and manipulate arrays as mutable sequences, and the performance trade-offs of using a low-level numeric container.
15 May 2026 12:00pm GMT
13 May 2026
Django community aggregator: Community blog posts
Deploy on Day One - Calvin Hendryx-Parker
Links
- SixFeetUp Careers
- getscaf, copier, tilt
- A CTOs Guide to AI Coding Assistants
- kind, nix, spec-kit
- Figma make
Projects
Books
- London Review of Books
- Big Panda & Tiny Dragon by James Norbury
- Universal Principles of Typography by Elliot Jay Stocks
YouTube
Sponsor
This episode is brought to you by Six Feet Up, the Python, Django, and AI experts who solve hard software problems. Whether it's scaling an application, deriving insights from data, or getting results from AI, Six Feet Up helps you move forward faster.
See what's possible at https://sixfeetup.com/.
13 May 2026 3:00pm GMT
11 May 2026
Django community aggregator: Community blog posts
Improving First Byte and Contentful Paint on a Django Website

Recently I have been experimenting with HTTP streaming and realized how it can improve page performance. If you come from the PHP world, you might know the flush() function. It immediately sends the visitor whatever has been echoed to the output buffer instead of waiting for the full page to be rendered on the server side. That allows the browser to start rendering the website before the whole document has been rendered on the server and transferred. The usual Django HttpResponse, on the other hand, renders the whole HTML document on the server first and only then sends it to the visitor, so the initial HTML document rendering is always the bottleneck for the full page load. Here comes StreamingHttpResponse, which can be used to mimic what flush() does in PHP.
HttpResponse vs. StreamingHttpResponse in Action
When using a normal HttpResponse, the HTML document is first rendered on the server side, then sent to the browser, then static files are downloaded in parallel if possible, and lastly rendering in the browser happens.

When you use StreamingHttpResponse, you can send the <head> and the content above the fold as the first part of the document, so that static files can be located and start downloading while the rest of the HTML document is being sent in parts. The first paint of the document would happen just after the CSS file is downloaded, and the rest of the HTML document would be drawn at a later point.
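Setting Django aside for a moment, the buffering difference can be sketched with plain generators; the section names and delays below are invented stand-ins for template rendering and database work:

```python
import time

def render_section(name, delay):
    time.sleep(delay)  # stand-in for template rendering / DB queries
    return f"<section>{name}</section>"

def buffered_response(sections):
    # HttpResponse-style: render everything, then send one big payload
    return "".join(render_section(name, delay) for name, delay in sections)

def streaming_response(sections):
    # StreamingHttpResponse-style: yield each part as soon as it is ready
    for name, delay in sections:
        yield render_section(name, delay)

sections = [("head", 0.0), ("hero", 0.05), ("footer", 0.05)]
stream = streaming_response(sections)
first_chunk = next(stream)  # the head is available before the slow sections render
```

With the buffered version, the browser sees nothing until all sections are done; with the generator, the head goes out immediately and the browser can start fetching CSS while the rest renders.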
Generic HTML Streaming View
Here is a generic HTMLStreamingView that expects a list of template files, get_document_context_data() for the global context, and get_template_context_data() for the template-specific context:
from django.http.response import StreamingHttpResponse
from django.conf import settings
from django.template.loader import render_to_string
from django.views.generic.base import View


class HTMLStreamingView(View):
    # templates for different parts of the document
    template_names = []
    extra_context = None

    def get(self, request, *args, **kwargs):
        # Capture the nonce before StreamingHttpResponse is returned.
        # CSP middleware writes the nonce into the response header during
        # process_response, then replaces request.csp_nonce with
        # an error-raising lazy object. generate() restores the plain value
        # so templates can access it during streaming.
        self._csp_nonce = (
            str(request.csp_nonce)
            if hasattr(request, "csp_nonce")
            else None
        )
        context = self.get_document_context_data(**kwargs)
        return StreamingHttpResponse(
            self.generate(context),
            content_type="text/html",
        )

    def generate(self, context):
        if self._csp_nonce is not None:
            self.request.csp_nonce = self._csp_nonce
        for template_name in self.template_names:
            template_context = {
                **context,
                **self.get_template_context_data(template_name),
            }
            yield render_to_string(
                template_name,
                template_context,
                request=self.request,
            )

    def get_document_context_data(self, **kwargs):
        kwargs.setdefault("view", self)
        if self.extra_context is not None:
            kwargs.update(self.extra_context)
        return kwargs

    def get_template_context_data(self, template_name, **kwargs):
        return {}
Use Case with the Strategic Prioritizer "1st things 1st"
The start page of the decision support system and strategic prioritizer 1st things 1st has been implemented as a multi-section landing page. The cookie consent widget only showed up after the whole page had rendered, resulting in a delay of a few seconds.
This is how I used HTMLStreamingView to reorganize the page into parts:
class StartPageView(HTMLStreamingView):
    template_names = [
        "startpage_index_top.html",
        "startpage/includes/description.html",
        "startpage/includes/tutorial.html",
        "startpage/includes/benefits.html",
        "startpage/includes/social_proof.html",
        "startpage/includes/testimonials.html",
        "startpage/includes/about_us.html",
        "startpage/includes/questions_and_answers.html",
        "startpage/includes/pricing.html",
        "startpage/includes/cause.html",
        "startpage/includes/call_to_action.html",
        "startpage/includes/footer.html",
        "startpage_index_bottom.html",
    ]

    def get_template_context_data(self, template_name, *args, **kwargs):
        if template_name == "startpage_index_top.html":
            return {
                "structured_data": settings.JSON_LD_STRUCTURED_DATA,
            }
        if template_name == "startpage/includes/social_proof.html":
            from django.contrib.auth import get_user_model

            User = get_user_model()
            return {
                "active_user_count": User.objects.filter(is_active=True).count(),
            }
        ...
        return super().get_template_context_data(template_name, **kwargs)
To transform a normal Django view into an HTTP streaming view, I cut the base.html template into two pieces:
- everything before {% block content %} as base_top.html - the head and content above the fold.
- everything after {% endblock content %} as base_bottom.html - the closing HTML tags and the footer.
For example, here's base_top.html:
<!DOCTYPE html>
{% load static %}
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>{% block title %}1st things 1st{% endblock %}</title>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="stylesheet" href="{% static 'css/styles.css' %}" />
    {% block extra_head %}{% endblock %}
</head>
<body>
{% block top_navigation %}
<nav>
    <a href="/">Logo</a>
</nav>
{% endblock %}
<main id="main_content">
{% block content %}{% endblock %}
{% include "startpage/includes/extra_js.html" %}
And here is base_bottom.html:
{% block content %}{% endblock %}
</main>
<footer>
    ...
</footer>
</body>
</html>
I moved the JS from base_bottom.html to the body section of base_top.html, where it will start downloading immediately after the content above the fold is shown. I did that to reduce the delay for the cookie consent widget.
Then I prepared the templates for all parts of the start page:
- startpage_index_top.html extends base_top.html.
- content templates provide the HTML directly without extending anything.
- startpage_index_bottom.html extends base_bottom.html.
The Optimization Results
I used the Lighthouse plugin to measure performance for the start page on an emulated slow mobile network, before and after applying StreamingHttpResponse.

In the updated version, the content above the fold and the static files needed to render it are retrieved earlier. These include the static file requirements for the cookie consent widget, which can now be loaded from the initial part of the stream, so the widget appears sooner.

Final Words
HTTP streaming is a relatively simple technique that can make a noticeable difference in perceived page performance, particularly when it comes to metrics like First Byte and Contentful Paint. By sending the top of the document early, the browser can begin fetching static assets and rendering above-the-fold content while the server is still working on the rest of the page.
A faster Time To First Byte (TTFB) is also worth considering for LLM crawlers such as GPTBot or ClaudeBot. These bots often work with short timeouts, and if your server doesn't respond quickly enough, they may abandon the request before reading your content. HTTP streaming helps here too, since it gets the most important parts of your HTML out early - right at the top of the document where crawlers are most likely to see them.
That said, it does require splitting your templates into parts and thinking more carefully about which context data is needed where. If your page is lightweight and fast to render, the added complexity probably isn't worth it. The technique really shines on heavier pages that involve bigger database queries or external API calls - those are exactly the cases where server-side delay is most significant, and where streaming can therefore have the greatest impact.
It is also worth noting that HTTP streaming works with both WSGI and ASGI, so it fits into most standard Django deployment setups without requiring any major infrastructure changes.
Thanks to Famitsay Tamayo for the cover photo!
11 May 2026 5:00pm GMT
04 Apr 2026
Planet Twisted
Donovan Preston: Using osascript with terminal agents on macOS
Here is a useful trick that is unreasonably effective for simple computer-use goals with modern terminal agents. On macOS, the osascript terminal command has existed since the original release of Mac OS X. All you have to do is suggest that your agent use it, and it can perform any application-control action available in any AppleScript dictionary for any Mac app. No MCP setup or tools are required at all.
Agents are much more adept at using raw terminal commands, especially ones that haven't changed in 30 years. Having a computer-control interface that has been stable for 30 years, with extensive examples in the Internet corpus, means modern models understand how to use these tools basically effortlessly.
macOS locks down these permissions pretty heavily nowadays, though, so you will have to grant the application-control permission to your terminal. But once you have done that, the range of possibilities for commanding applications using natural language is quite extensive.
For both Safari and Chrome on the Mac, you are also going to want to turn on the JavaScript-over-AppleScript permission. This basically allows Claude or another agent to debug your web applications live for you as you are using them. In Chrome, go to the View menu, Developer submenu, and choose "Allow JavaScript from Apple Events". In Safari, it's under the Safari menu, Settings, Developer, "Allow JavaScript from Apple Events".
Then you can say something like "Hey Claude, would you please use osascript to navigate the front Chrome tab to Hacker News". Once you suggest using osascript in a session, the agent will figure out pretty quickly what it can do with it. Of course you can ask it to do casual things like open your Mail app, and then work out what else works, like "please click around my web app" or "check the JavaScript console for errors".
Another very important tip for using modern agents: try to practice using speech-to-text. I think speaking might be something like five times faster than typing. It takes a lot of time to get used to, especially after a lifetime of programming by typing, but it's a very interesting and different experience, and once you have a lot of practice it starts to feel effortless.
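If you want to trigger the same kind of action yourself rather than through an agent, osascript is one subprocess call away from Python. The wrapper below is a hypothetical convenience helper; the AppleScript line follows Chrome's scripting dictionary, but verify it against your setup, and note that it only runs on macOS with the relevant Automation permission granted to your terminal:

```python
import subprocess

def osascript(script: str) -> str:
    """Run an AppleScript snippet via the macOS osascript CLI (hypothetical helper)."""
    result = subprocess.run(
        ["osascript", "-e", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Point the frontmost Chrome tab at Hacker News.
CHROME_NAV = (
    'tell application "Google Chrome" to set URL of '
    'active tab of front window to "https://news.ycombinator.com"'
)
# osascript(CHROME_NAV)  # uncomment on a Mac with Chrome running
```

The same wrapper works for any app with an AppleScript dictionary, e.g. telling Mail to open or Safari to run a JavaScript snippet.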
04 Apr 2026 1:31pm GMT
16 Mar 2026
Planet Twisted
Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control
I have been using macOS Voice Control for about three years. At first it was a way to reduce pain from excessive computer use. It has been a real struggle: decades of computer-use habits built around typing and the mouse are hard to overcome!
Text selection and manipulation commands work quite well in macOS-native apps, like apps written in Swift, or in Safari with an accessibility-tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: "select foo", where foo is a word in the text box, may not work at all, and moving the cursor or extending the selection can suffer from off-by-one errors.
I only recently expanded my repertoire with the "start drag" and "drop" commands, having previously used "click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text-selection method! This is really going to improve my speed.
In the long run, I believe voice control of computers in general is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing, and the number of times commands are missed or misunderstood, still really hold it back. I've been learning the macOS Voice Control command set for years now, and I still reach for the keyboard and mouse way too often.
16 Mar 2026 11:04am GMT
04 Mar 2026
Planet Twisted
Glyph Lefkowitz: What Is Code Review For?
Humans Are Bad At Perceiving
Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.
We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.
Each of these has implications for the fundamental limitations of code review as an engineering practice:
-
Inattentional Blindness: you won't be able to reliably find bugs that you're not looking for.
-
Repetition Blindness: you won't be able to reliably find bugs that you are looking for, if they keep occurring.
-
Vigilance Fatigue: you won't be able to reliably find either kind of bugs, if you have to keep being alert to the presence of bugs all the time.
-
and, of course, the distinct but related Alert Fatigue: you won't even be able to reliably evaluate reports of possible bugs, if there are too many false positives.
Never Send A Human To Do A Machine's Job
When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate - and, thanks to our old friend "alert fatigue" above - ideally, to also remedy that type of error. These tools will relieve the need for a human to make the same repetitive checks over and over. None of them are perfect, but:
- to catch logical errors, use automated tests.
- to catch formatting errors, use autoformatters.
- to catch common mistakes, use linters.
- to catch common security problems, use a security scanner.
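As a concrete instance of the first bullet, consider a function with an edge case that a tired reviewer could easily skim past; the function and checks below are invented for illustration, but the deterministic test catches the mistake on every run, forever:

```python
def last_n_lines(text: str, n: int) -> list[str]:
    """Return the last n lines of text."""
    lines = text.splitlines()
    # Without this guard, lines[-0:] would return ALL lines for n == 0:
    # exactly the kind of slice edge case human reviewers skim past.
    return lines[-n:] if n > 0 else []

# Deterministic checks that run on every commit, unlike a tired reviewer:
assert last_n_lines("a\nb\nc", 2) == ["b", "c"]
assert last_n_lines("a\nb\nc", 0) == []
assert last_n_lines("single", 5) == ["single"]
```

Once a check like this is in CI, no reviewer ever has to stay vigilant for that slice bug again.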
Don't blame reviewers for missing these things.
Code review should not be how you catch bugs.
What Is Code Review For, Then?
Code review is for three things.
First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.
Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.
You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.
Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".
Oops, Surprise, This Post Is Actually About LLMs Again
Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. An important corollary of code review being a social activity is that LLMs are not social actors, so you cannot rely on code review to inspect their output.
My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.
When you relate to a human colleague, you will expect that:
- you can decide what to focus on based on their level of experience and areas of expertise; from a late-career colleague you might be looking for bad habits held over from legacy programming languages, while from an earlier-career colleague you might focus more on logical test-coverage gaps,
- and they will learn from repeated interactions, so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it.
With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.
You will still find supposedly extremely sophisticated LLMs making extremely common mistakes, specifically because they are common, and thus appear frequently in the training data.
The LLM also can't really learn. An intuitive response to this problem is to simply keep adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem, "context rot", is somewhat fundamental to the nature of the technology.
Thus, code generators must be treated more adversarially than you would a human code review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that will evaluate the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window in the way that a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.
To Sum Up
Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.
If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure that its agentic loop fails on its own, immediately, the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.
But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!
04 Mar 2026 5:24am GMT