08 Apr 2026
Django community aggregator: Community blog posts
Switching all of my Python packages to PyPI trusted publishing
As I have teased on Mastodon, I'm switching all of my packages to PyPI trusted publishing. I have used it to release django-debug-toolbar a few times but have never set it up myself. The process seemed tedious.
The malicious releases uploaded to PyPI two weeks ago and the blog post about digital attestations in pylock.toml finally pushed me to make the switch. All of my PyPI tokens have been revoked so there is no quick shortcut.
Note
I'm also looking at other code hosting platforms. I was using git before GitHub existed and I'll probably still be using git when GitHub has completed its enshittification. For now, the cost/benefit ratio of staying on GitHub is still positive for me. Trusted publishing isn't available everywhere, so for now it is GitHub anyway.
In the end, switching an existing project was easier than expected. I have completed the process for django-prose-editor and feincms3-cookiecontrol.
For my future benefit, here are the step-by-step instructions I have to follow:
- Have a package which is buildable using e.g. `uvx build`.
- On PyPI, add a trusted publisher in the project's publishing settings:
  - Owner: `matthiask`, `feincms`, `feinheit`, whatever the user's or organization's name is.
  - Repository: `django-prose-editor`
  - Workflow name: `publish.yml`
  - Environment: `release`
- In the GitHub repository, create a `release` environment in Settings / Environments. Add myself and potentially other releasers as required reviewers. I allow self-review and disallow administrators bypassing the protection rules.
- Run `git tag x.y.z` and `git push`; no more `uvx twine` or `hatch publish`.
- Approve the release in the Actions tab of the repository.
- Either enjoy, or swear and repeat the steps.
I'm happy with testing the release process in production. The older I get, the less I care if people think I'm stupid. That's also why feincms3-cookiecontrol 1.7.0 doesn't exist, only 1.7.1: the process failed and I had to bump the patch version and try again. Copy the publish.yml from a known good place, for example from the django-prose-editor repository. I have added an `if: github.repository == 'feincms/django-prose-editor'` statement which ensures that the workflow only runs in the main repository, but that's optional if you don't care about failing workflows in forks.
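For reference, here is a minimal sketch of what such a publish.yml can look like, using the official pypa/gh-action-pypi-publish action. The workflow, environment, and repository names follow the steps above; the build invocation and trigger are assumptions, so treat the real file in the django-prose-editor repository as the authoritative version:

```yaml
name: Publish to PyPI

on:
  push:
    tags: ["*"]  # assumption: release on any pushed tag

jobs:
  publish:
    # only run in the main repository, not in forks
    if: github.repository == 'feincms/django-prose-editor'
    runs-on: ubuntu-latest
    environment: release  # must match the PyPI trusted publisher config
    permissions:
      # required for trusted publishing (OIDC token exchange with PyPI)
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - name: Build the package
        run: pipx run build
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```

The `environment: release` line is what makes the required-reviewer protection kick in, so the upload waits until someone approves it in the Actions tab.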
08 Apr 2026 5:00pm GMT
New Package: Django Dependency Map
I have recently been reading Swizec Teller's new book Scaling Fast, in which he mentions architectural complexity. That reminded me of my long-standing desire for a tool that combines database dependencies between Django apps with import dependencies between Django apps. To date I have used other tools: graph_models from django-extensions, import-linter (the most recent one), and pyreverse from Pylint. They all do bits of the job, but require manual stitching to get a cohesive graph with everything overlaid the right way. So over the last couple of days I've built a new package that combines all of this into a live view which updates as you build your app, a management command, and a panel for Debug Toolbar.
Why the Django app level, you ask? Model-level graphs are good, but they get complicated quickly: with a few too many nodes and edges, the noise drowns out the signal and it becomes hard to see the logical relationships between the components of a system. Graphing imports at the module level within an app has the same problem. I like to think that Django apps naturally represent the logical parts of a project or system. A whole project is too coarse (unless you're dealing with multiple projects), but within a single Django project, having one app deal with one thing is a good level of granularity. I know you can structure Django projects and apps in many ways, so it'd be interesting to see this tool used on project structures that don't map one app to a single logical component.
So without further ado, here is Django Dependency Map, which combines output from django-extensions' graph_models and grimp (the library used by import-linter) to dynamically map the dependencies between your apps and third-party apps. Initially it was a management command that outputs an HTML file, which still exists. I then added a live view, and there's an integration with Django Debug Toolbar.
The live map page has the following features:
- hide nodes and see how the dependencies change
- force-graph and hierarchical graph representations
- detailed information on a single app and its relationships
- import cycle detection
- import violations from import-linter
- a Debug Toolbar panel
- export of the graph to Mermaid and DOT formats
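Under the hood, import cycle detection boils down to finding cycles in a directed graph. A minimal plain-Python sketch with hypothetical app names (the real package leans on grimp for the actual import graph):

```python
from collections import defaultdict

def find_cycles(edges):
    """Return import cycles in a directed dependency graph via DFS.

    edges: dict mapping an app to the list of apps it imports from.
    """
    graph = defaultdict(list, edges)
    cycles = []
    visited = set()

    def dfs(node, path):
        if node in path:
            # we re-entered a node already on the current path: that's a cycle
            cycles.append(path[path.index(node):] + [node])
            return
        if node in visited:
            return
        visited.add(node)
        for dep in graph[node]:
            dfs(dep, path + [node])

    for node in list(graph):
        dfs(node, [])
    return cycles

# hypothetical app-level import dependencies
deps = {
    "accounts": ["billing"],
    "billing": ["accounts"],  # cycle: accounts -> billing -> accounts
    "catalog": ["accounts"],
}
print(find_cycles(deps))  # [['accounts', 'billing', 'accounts']]
```

This is the kind of check that's cheap to run continuously, which is why it fits a live view that updates as you build.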
My hope is twofold. One, it might reveal things about your projects that you didn't know about in terms of how fit how interlinked things are. And secondly, I hope it may change the way you build your Django apps. I'm hoping to have it open as another tab and just to watch as I'm building things to make sure out as I'm and maybe as an agent's building things see use it as a sense check of if it's doing things right or as I expect it to in terms of overall architecture rather than at the code level.
The PyPI package is coming very soon, but you can visit the repo here: https://github.com/softwarecrafts/django-dependency-map
08 Apr 2026 5:00am GMT
How I configured OpenClaw's multi-model setup (so you don't have to)
A heads up before we start: over 95% of this blog post was written by my OpenClaw bot running GLM-5. I reviewed, edited, and approved everything, but credit where it's due: Tepui ⛰️ (yes, I named my AI) did most of the heavy lifting.
I need to vent, but in a good way this time.
Last week I vented about Anthropic pushing away paying customers. After that third-party ban hit, I had to rip out Claude Opus 4.6 from my OpenClaw setup and find alternatives. So I rebuilt the whole thing from scratch.
This time, I did it right.
What I wanted
I use OpenClaw as my personal AI assistant. It connects to my Telegram, manages my calendar, runs cron jobs, helps with research, and generally makes my life easier. Before the ban, it was running Claude Opus 4.6. After the ban, I needed alternatives.
My requirements were simple:
- Free or cheap (Lazer's LiteLLM proxy gives us free access to certain models)
- A text model for daily use (fast, capable reasoning)
- A vision model for images and PDFs (I send screenshots, receipts, documents)
- Image generation (sometimes I need to create images)
- A fallback if something breaks
What I got was so much more.
The journey
The whole process took about two hours. I started with a simple question: "What's the best model to use with OpenClaw?"
First thing I did was pull the model catalog from models.dev. If you're not familiar, it's a JSON file maintained by the OpenAI-compatible API community that lists every model from every provider with their specs: context window, token limits, pricing, capabilities, everything. I pulled it to /tmp/models.dev.json and started digging.
curl -s https://models.dev/api.json > /tmp/models.dev.json
Then I checked the Lazer proxy to see what models were actually available. Lazer Technologies (where I work) gives employees free access to a curated set of models through their LiteLLM proxy. The API is OpenAI-compatible, so you just query /v1/models:
curl -s https://llm.lazertechnologies.com/v1/models \
-H "Authorization: Bearer $LAZER_API_KEY"
The big ones available through Lazer:
- GLM-5 : Open source, 200K context, reasoning-enabled, competitive with frontier models
- GLM-4.6V : Vision model (text + images), also reasoning-enabled
- GPT-OSS-120b-Turbo : Fast, cheap, reasoning model
- Kimi-K2.5-Turbo : Multimodal (text + image + video)
These are all free for Lazer employees. If you don't have that luxury, the same models are available through DeepInfra or OpenRouter at reasonable prices.
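Picking a model from a catalog like this is easy to automate. As a sketch, here's how you might filter for the cheapest vision-capable model; note that the catalog structure below is a simplified assumption, not the exact models.dev schema:

```python
# Simplified, assumed shape of a models.dev-style catalog;
# the real api.json is nested per provider and richer than this.
catalog = [
    {"id": "zai-org/GLM-5", "vision": False,
     "context": 200_000, "cost_in": 0.80, "cost_out": 2.56},
    {"id": "zai-org/GLM-4.6V", "vision": True,
     "context": 128_000, "cost_in": 0.30, "cost_out": 0.90},
    {"id": "moonshotai/Kimi-K2.5-Turbo", "vision": True,
     "context": 256_000, "cost_in": 0.60, "cost_out": 3.00},
]

def cheapest_vision_model(models):
    """Return the id of the vision-capable model with the lowest output price."""
    candidates = [m for m in models if m["vision"]]
    if not candidates:
        return None
    return min(candidates, key=lambda m: m["cost_out"])["id"]

print(cheapest_vision_model(catalog))  # zai-org/GLM-4.6V wins on output price
```

The same filter works on the full catalog once you've pulled /tmp/models.dev.json and normalized it into a flat list.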
The problem with my existing setup
My OpenClaw config was bare. I had:
- A primary model: MiniMax M2.7 through OpenRouter
- No fallback model configured
- No image model configured
- No image generation model
- No PDF model
And the MiniMax model was timing out on my cron jobs. The Montevideo Events Report job was failing because MiniMax M2.7 was too slow for complex reasoning tasks. I needed something faster, and free through Lazer.
The realization about model slots
This is where I learned something new. OpenClaw doesn't just have one "model" config. It has six:
- `agents.defaults.model`: primary text model (plus fallbacks)
- `agents.defaults.imageModel`: for image input (when the primary can't accept images)
- `agents.defaults.pdfModel`: for PDF parsing (falls back to imageModel)
- `agents.defaults.imageGenerationModel`: for creating images (not just viewing them)
- `agents.defaults.musicGenerationModel`: for music generation
- `agents.defaults.videoGenerationModel`: for video generation
I was only using slot #1. No wonder images weren't working right.
What I configured
After some back-and-forth with Tepui, here's what we landed on:
| Role | Model | Provider | Cost (per 1M tokens) |
|---|---|---|---|
| Primary text | GLM-5 | Lazer | $0.80 / $2.56 |
| Fallback | GLM-5 | OpenRouter | $0.80 / $2.56 |
| Image/PDF input | GLM-4.6V | Lazer | $0.30 / $0.90 |
| Image generation | Gemini 3.1 Flash Image | OpenRouter | $0.50 / $3.00 |
| Quick tasks (reserve) | GPT-OSS-120b-Turbo | Lazer | $0.15 / $0.60 |
| Video-capable (reserve) | Kimi-K2.5-Turbo | Lazer | $0.60 / $3.00 |
I'm putting actual prices in because Lazer's proxy is free for me, but I want to track costs as if I were paying. That way I know the real value of what I'm using.
Why these choices?
GLM-5 for text. It's the best open-source reasoning model available. 200K context window, MIT licensed, competitive with GPT-4 on agentic tasks. I tested it with quick prompts and it's snappy.
GLM-5 via OpenRouter as fallback. Same model, different provider. If the Lazer proxy goes down, OpenClaw keeps working through OpenRouter with the exact same model. No quality drop, just a different route. I also kept MiniMax M2.7 in the allowlist so I can switch to it manually if I ever need to.
GLM-4.6V for images and PDFs. This was the key insight. GLM-5 is text-only. For images and PDFs, I needed a vision model. GLM-4.6V handles both, and it's on the same Lazer proxy. This means my cron jobs can parse images (like parking receipts) without hitting paid APIs.
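Since the proxy is OpenAI-compatible, the vision requests OpenClaw ends up sending have the standard chat-completions shape. A sketch of building such a payload by hand (the model id follows the setup above; `vision_payload` is a hypothetical helper, not part of OpenClaw):

```python
import base64
import json

def vision_payload(image_bytes, question, model="deepinfra/zai-org/GLM-4.6V"):
    """Build an OpenAI-compatible chat-completions payload with an inline image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                # text part and image part travel in the same user message
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

payload = vision_payload(b"\x89PNG...", "What does this receipt say?")
print(json.dumps(payload)[:80])
```

POST that to the proxy's /v1/chat/completions with your bearer token and the vision model does the parsing.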
Fun fact: I actually added the GLM-4.6V model to my config from my car while waiting for my girlfriend to finish her driving classes. I was using my OpenCode server running at home, connected through WireGuard on my phone. Pulled the model specs from models.dev, updated the config, tested it with a screenshot. All from the car. That's the beauty of having your tools always running and always accessible.
Gemini 3.1 Flash Image for generation. I didn't have any image generation set up. Tepui suggested Flux.2 Pro (free on OpenRouter) but I wanted something more capable. Gemini 3.1 Flash Image generates high-quality images for about $3 per million output tokens. Worth it for occasional use.
The config changes
Here's what I actually changed in ~/.openclaw/openclaw.json:
// Primary model with fallback
agents.defaults.model: {
primary: "lazer/deepinfra/zai-org/GLM-5",
fallbacks: ["openrouter/zai-org/GLM-5"]
}
// Vision for images and PDFs
agents.defaults.imageModel: {
primary: "lazer/deepinfra/zai-org/GLM-4.6V"
}
agents.defaults.pdfModel: {
primary: "lazer/deepinfra/zai-org/GLM-4.6V"
}
// Image generation
agents.defaults.imageGenerationModel: {
primary: "openrouter/google/gemini-3.1-flash-image-preview"
}
I also added all the models to the allowlist with aliases so I can switch easily:
agents.defaults.models: {
"openrouter/minimax/minimax-m2.7": { alias: "MiniMax" },
"openrouter/zai-org/GLM-5": { alias: "GLM-5-OR" },
"lazer/deepinfra/zai-org/GLM-5": { alias: "GLM-5" },
"lazer/deepinfra/zai-org/GLM-4.6V": { alias: "GLM-4.6V" },
"lazer/deepinfra/openai/gpt-oss-120b-Turbo": { alias: "GPT-OSS" },
"lazer/deepinfra/moonshotai/Kimi-K2.5-Turbo": { alias: "Kimi" },
"openrouter/google/gemini-3.1-flash-image-preview": { alias: "Gemini-Image" }
}
The aliases make it easy to switch with /model commands in chat.
Testing it
I tested everything:
- Sent a picture of my car's steering wheel. Correctly identified the Mitsubishi logo.
- Sent a parking receipt from the airport. Correctly parsed it (and Tepui correctly identified it was my grandma's flight, not my girlfriend's, by checking my calendar).
- Sent a screenshot of my Spotify. Correctly identified the band (High Fade playing "Gossip").
The vision model works. The text model works. The fallback is there if something breaks.
What I learned
GLM-5 doesn't support images. The model name sounds like it should be the successor to GLM-4.6V, but it's text-only. For vision, you need GLM-4.6V specifically.
Model config fields are strict. OpenClaw's schema only accepts certain fields: id, name, input, contextWindow, maxTokens, reasoning, cost, api. Things like tool_call and temperature get rejected.
models.dev is the source of truth. Don't rely on memory or provider docs. Pull the JSON and check the specs yourself.
OpenClaw model slots matter. If you're only configuring one model, you're missing out on image parsing, PDF reading, and image generation. Set up all six slots.
Pricing matters even when free. I have free access through Lazer, but I still track prices. It helps me understand the cost of what I'm doing and compare alternatives.
The result
My OpenClaw setup is now:
- Free through the Lazer proxy for everyday use
- Fast with GLM-5 for reasoning tasks
- Vision-capable with GLM-4.6V for images and PDFs
- Image-generating with Gemini for when I need to create visuals
- Resilient with GLM-5 fallback through OpenRouter (same model, different provider)
And the cron jobs that were timing out? They're running fine now. The Montevideo Events Report takes 23 seconds instead of timing out at 75 seconds.
Not bad for two hours of work.
What's next
I kept GPT-OSS-120b-Turbo and Kimi-K2.5-Turbo in reserve. GPT-OSS is cheaper than GLM-5 for quick tasks, so I might use it as a second fallback. Kimi has video support, which could be useful if I ever need to analyze video frames.
But for now, this setup covers everything I need. Text, images, PDFs, generation, fallbacks. All configured properly with the right models in the right slots.
If you're running OpenClaw (or any AI assistant), do yourself a favor: check your model config. Make sure you're using the right slots. Pull the specs from models.dev. Track your actual costs. And test everything with real inputs.
It's worth the two hours.
See you in the next one!
08 Apr 2026 5:00am GMT