16 Apr 2026
Django community aggregator: Community blog posts
Djangocon EU: digitising historical caving data with Python and Django - Andrew Northall
(One of my summaries of the 2026 Djangocon EU in Athens).
Andrew's hobby is caving: exploring wild cave systems. It is a niche hobby, but he likes it a lot. One of the nicest things is that you can be the first human being to stand in a specific spot on earth.
Cavers achieve high standards, without formal organisation. Sounds like Django :-) With Django, we have exposure and visibility and bug trackers: we're transparent. So our high standards are verifiable. Would the same be possible for caving? The safety record is really good, but the public image is bad. Can the safety record be made more visible?
There's an (American) organisation that has historical incident reports. But... mostly as printed text. Text recognition was hard. And something like "fall 1985" isn't really a Postgres date format. Several volunteers put a lot of work into it by manually entering incidents in a Django website that Andrew built. It was slow going.
Nowadays they have an LLM pipeline for it that is actually really good. Extraction: docling. Splitting into separate incidents: LLM. Formatting/structuring/checking: LLM. Normalisation: mostly with the help of Django.
Docling (https://github.com/docling-project/docling) is a great project for extracting usable text data out of various sources. Including detecting paragraphs that start on one page and end on the next.
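As a toy illustration of the cross-page problem docling solves (this is emphatically not docling's actual algorithm, just a naive hand-rolled sketch): a chunk of text at the bottom of a page that doesn't end in sentence-final punctuation probably continues on the next page.

```python
def stitch_pages(pages):
    """Join page texts, merging a paragraph that is split across a
    page break. Toy heuristic, not docling's actual algorithm."""
    paragraphs = []
    carry = ""  # unfinished paragraph carried over from the previous page
    for page in pages:
        parts = [p.strip() for p in page.split("\n\n") if p.strip()]
        for i, part in enumerate(parts):
            if carry:
                part = carry + " " + part
                carry = ""
            # If the last chunk on a page doesn't end a sentence,
            # assume it continues on the next page.
            if i == len(parts) - 1 and not part.endswith((".", "!", "?")):
                carry = part
            else:
                paragraphs.append(part)
    if carry:
        paragraphs.append(carry)
    return paragraphs

pages = [
    "First paragraph.\n\nA caver entered the",
    "system at noon.\n\nSecond incident report.",
]
print(stitch_pages(pages))
# → ['First paragraph.', 'A caver entered the system at noon.', 'Second incident report.']
```

Real documents have hyphenation, headers and footers in the way, which is why a dedicated project like docling is worth using instead of a heuristic like this.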
Normalisation was a problem. Locations are stored as a tree structure using Django Treebeard, which gives you several precision levels: if you know the town, reference that; if you only know the state, do that. Handy with data that's not always that precise. Incomplete or seasonal dates ("spring 1972") are handled with a custom Django model field.
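Andrew's actual field wasn't shown in detail, so here's a hypothetical sketch of the kind of parsing such a field might do, in plain Python (a real custom Django field would wrap logic like this in `get_prep_value()`/`from_db_value()` to map it to something sortable in the database):

```python
# Map seasons to a representative month so fuzzy dates sort sensibly.
SEASONS = {"winter": 1, "spring": 4, "summer": 7, "fall": 10, "autumn": 10}

def parse_fuzzy_date(text):
    """Parse incomplete dates like 'spring 1972' or '1985' into a
    (year, month, precision) triple. Illustrative sketch only."""
    words = text.lower().split()
    if len(words) == 2 and words[0] in SEASONS:
        return (int(words[1]), SEASONS[words[0]], "season")
    if len(words) == 1 and words[0].isdigit():
        return (int(words[0]), 1, "year")
    raise ValueError(f"unrecognised date: {text!r}")

print(parse_fuzzy_date("spring 1972"))  # → (1972, 4, 'season')
print(parse_fuzzy_date("1985"))         # → (1985, 1, 'year')
```

Keeping the precision alongside the value is the important bit: you can sort and filter on the (year, month) part while still rendering "spring 1972" to the user.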
He generated the frontend with Claude. He was surprised that, on a page showing a single incident, Claude added a section for the AI summary of the incident...
Nice: the dataset is already being used for actual research on how to reduce incidents.
A note about the volunteer work that went into the original manual work: that was used as testcases and for verification for the LLM work.
Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. Disused railway station of Gillenfeld on the former Daun-Wittlich line.
16 Apr 2026 4:00am GMT
Djangocon EU: beyond print() debugging: observability for Django apps - Laís Carvalho
(One of my summaries of the 2026 Djangocon EU in Athens).
OpenTelemetry (or OTel) is an open source vendor-neutral way of observing your app. There are lots of vendors, open source projects, language support, integrations.
What is observability? Understanding the inner workings of a system from the outside. So: "why is this happening?". For observability you need proper instrumentation. It is all about "signals" (or symptoms) and "causes".
There are three pillars of observability:
- Logs. A log message is normally a datetime, a level and a message. If you have a logfile, it can be hard to group the individual log messages that belong together: everything is in one long undifferentiated list.
- Traces are more elaborate. Log items now have start/end datetimes and they can be nested.
- Metrics. A collection of datapoints at intervals. When stored with timestamps, it becomes a timeseries. For instance a timeseries with the duration of all the web requests.
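A toy illustration of the difference between a flat log and a trace (hand-rolled Python, not the OpenTelemetry API): spans are essentially log entries that gain start/end times and parent/child links, so related work groups together instead of interleaving in one long list.

```python
import time

class Span:
    """Minimal span: a named operation with start/end times and children."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent is not None:
            parent.children.append(self)
        self.start = time.monotonic()
        self.end = None

    def finish(self):
        self.end = time.monotonic()

    def duration_ms(self):
        return (self.end - self.start) * 1000

# A web request with a nested database query, as a trace would record it.
request = Span("GET /incidents/42/")
query = Span("SELECT ... FROM incidents", parent=request)
query.finish()
request.finish()
assert query in request.children  # the query span nests under the request
```

In real OpenTelemetry you'd get this nesting from context propagation rather than passing parents by hand, but the data model is the same idea.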
Monitoring and observability according to her definition: monitoring tells you what is happening, observability helps you understand why it happened.
Signals and causes. A signal might be "my website is showing lots of Error 500's", a cause might be "my database is down".
When in doubt about your system health, measure these four:
- Latency: time to serve requests. (ms)
- Traffic: load on the Django app. (req/sec)
- Errors: rate of failed requests 4xx/5xx.
- Saturation: how full the system is.
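A sketch of how those four numbers might be computed from raw request records (hypothetical helper, not an OTel API; `capacity_rps` is an assumed stand-in for whatever saturation baseline your system has):

```python
def golden_signals(requests, window_seconds, capacity_rps):
    """Compute the four golden signals from (duration_ms, status_code)
    request records observed over a time window. Toy sketch."""
    durations = [d for d, _ in requests]
    errors = sum(1 for _, status in requests if status >= 400)
    traffic = len(requests) / window_seconds  # req/sec
    return {
        "latency_ms_avg": sum(durations) / len(durations),
        "traffic_rps": traffic,
        "error_rate": errors / len(requests),
        "saturation": traffic / capacity_rps,  # fraction of capacity in use
    }

signals = golden_signals(
    [(120, 200), (80, 200), (300, 500), (100, 404)],
    window_seconds=2,
    capacity_rps=10,
)
print(signals)
# → {'latency_ms_avg': 150.0, 'traffic_rps': 2.0, 'error_rate': 0.5, 'saturation': 0.2}
```

In practice you'd report percentiles rather than an average latency, and a metrics backend would do this aggregation for you; the point is just that all four signals fall out of very simple per-request data.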
You can also monitor your AI usage. Requests that you make, the time it takes, whether you get errors, etc. There's OpenLLMetry: opentelemetry instrumentation for LLM providers. "Evals" are apparently important, there's info on hamel.dev about that.
Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. The elaborate castle ruins of the "Niederburg" in Manderscheid.
Djangocon EU: a practical guide to agentic coding for Django developers - Marlene Mhangami
(One of my summaries of the 2026 Djangocon EU in Athens).
You could already get code completion in your IDE for a while. For about two years now, you can generate code with an LLM by describing what you want and copy/pasting the results into your program. Nowadays you can do agentic coding: you give an AI agent access to your environment so that it can actually create the files (including a file with tests) on its own.
A definition: an AI agent is an LLM (large language model) that calls tools in a loop to achieve a goal.
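That definition fits in a few lines of code. A minimal sketch with a stubbed-out "LLM" (no real model or tools involved; the `llm` callable and its tuple protocol are invented for illustration):

```python
def run_agent(llm, tools, prompt, max_steps=10):
    """An LLM calling tools in a loop to achieve a goal. `llm` is any
    callable that, given the context so far, returns (tool_name, arg)
    or ('done', answer). A stub stands in for a real model here."""
    context = [prompt]
    for _ in range(max_steps):
        tool_name, arg = llm(context)
        if tool_name == "done":
            return arg
        result = tools[tool_name](arg)
        context.append(result)  # feed the tool result back as context
    raise RuntimeError("agent did not finish within max_steps")

# Stub model: first read a file, then report its length.
def stub_llm(context):
    if len(context) == 1:
        return ("read_file", "models.py")
    return ("done", f"file has {len(context[-1])} characters")

tools = {"read_file": lambda name: "class Incident: pass"}
print(run_agent(stub_llm, tools, "how long is models.py?"))
# → file has 20 characters
```

Everything else in the talk (context engineering, MCP, skills, verification) is about making each part of this loop better: what goes into `context`, what's in `tools`, and how you check the final answer.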
You start with a "prompt", the agent starts its loop and after a while it is done. In the mean time you can interrupt/steer or give more "context". Within the loop, three things happen:
- Gather context. A "context" is something the LLM can use to gather information. You can attach a github issue, for instance. Or attach files and tell it "follow the conventions in this file". Instruction files also help: copilot-instructions.md, agents.md. There's something called "context engineering": if your AI agent accumulates too much context, it becomes less effective, so cleaning up old parts of the context might help.
- Take action. MCP, the model context protocol, is an open protocol for giving agents access to tools. You can write your own, btw. Related: you can also add "skills". A skill is a markdown file describing to your agent how to do something, like "run this python script to generate an email" and "send it using this method".
- Verify results. The amount of code submitted to GitHub is increasing a lot: they expect 14 billion commits this year, up from 1 billion in 2025. Much of the increase comes from AI-assisted programming. What about the quality? That's where verifying the results, especially automatically, comes in.
Clean code amplifies AI gains, according to a study (link is in her slides). If your project is kept neat and tidy and tested, the results are better. Your agent is kept in check, that way. Unchecked AIs result in a mess.
Test driven development can help. But watch out: an agent often generates its own tests and they're not always right or complete or honest.
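One mitigation is to write the key assertions yourself, before the agent implements anything, so the tests pin the behaviour you want rather than the behaviour the agent happened to produce. A tiny hypothetical example (the `slugify` function is invented for illustration):

```python
# Human-written spec, committed *before* the agent implements the function.
# The assertions encode what we want, independent of any implementation.
def test_slugify():
    assert slugify("Fall 1985 incident") == "fall-1985-incident"
    assert slugify("  spaced  out  ") == "spaced-out"

# Agent-produced implementation, verified against the human-written spec:
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # raises AssertionError if the implementation is wrong
```

If instead the agent writes both the implementation and the tests, a bug can end up enshrined on both sides and everything passes; keeping the spec in human hands breaks that loop.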
She demoed https://playwright.dev, a website testing tool that an agent can steer.
Some git/github tips:
- Commit often.
- Run experiments in branches.
- Watch out when opening pull requests. Many projects get a lot of pull requests, swamping the maintainers. So double-check the code you're submitting.
Unrelated photo explanation: a trip in November to the Mosel+Eifel region in Germany. The monastery church of Maria Laach (next to a lake that's all that's left of a volcano that exploded ten thousand years ago).