10 Apr 2026

Django community aggregator: Community blog posts

Django News - DjangoCon Europe Next Week! - Apr 10th 2026

Introduction

Hi everyone, sorry for the late send of Issue #331.

Last week, our provider, Curated, which is owned by Buttondown, went down and wasn't able to send our newsletter for six days. You might have received it yesterday, but not everyone did. It's the first time in several years we haven't been able to land in your inbox.

We've been in touch with their support all week and appreciate your patience while this gets sorted out.

In the meantime, Will and I are looking at other provider options. If this shows up next week looking a little different, that's probably why.

If you missed it, please check out last week's Issue 331: https://django-news.com/issues/331#start

Django Newsletter

News

Django security releases issued: 6.0.4, 5.2.13, and 4.2.30

Django 4.2 has reached the end of extended support. Five CVEs (security vulnerabilities) were fixed in this latest update.

djangoproject.com

DjangoCon Europe is next week!

April 15-19 in Athens, Greece. There is a Django.Social event the night before, April 14th, 6-10pm, at Ipitou The Bar, organized by Jon Gould of Foxley Talent and Andrew Miller.

djangocon.eu

Updates to Django

Today, "Updates to Django" is presented by Pradhvan from Djangonaut Space! 🚀

Last week we had 14 pull requests merged into Django by 9 different contributors - including 2 first-time contributors! Congratulations to Eddy ADEGNANDJOU and Rodrigo Vieira 🚀 for having their first commits merged into Django - welcome on board! 🥳

This week's Django highlights: 🦄

Django Newsletter

Django Fellow Reports

Fellow Report - Jacob

In addition to advancing work on pending security issues, Jacob reviewed improvements around accessibility and performance: 3 tickets triaged, 16 reviewed, 12 authored, and more.

djangoproject.com

Fellow Report - Natalia

I was traveling this week so I was less available than usual. My main priority was to support Jacob with anything needed for the upcoming security release, helping keep things on track during a critical phase. I also made an effort to stay on top of my inbox and notifications, though seeing my current unread count I can confirm I have failed miserably.

djangoproject.com

Sponsored Link 1

The deployment service for developers and teams.

appliku.com

Articles

Contributing to the Django community

There are a lot of ways to get involved in the Django community; this post goes in-depth to highlight all the various opportunities.

better-simple.com

Switching all of my Python packages to PyPI trusted publishing

How and why the maintainer of django-debug-toolbar and other tools is switching due to recent malicious uploads.

406.ch

A Claude Code Plugin for Triaging Django Issues

django-triage is a Claude Code plugin that searches CVEs, Trac tickets, and Django forum discussions, then scaffolds a structured triage workspace with its findings.

manfre.me

Events

Django Day Copenhagen - Call for Proposals

The third edition will be held on Friday, October 2nd 2026, a full day of talks, followed by an evening of social events.

djangoday.dk

PyOhio 2026 CFP

PyOhio will take place on Saturday & Sunday July 25-26, 2026, at the Cleveland State University Student Center in Cleveland, OH.

pretalx.com

Design Articles

Why frontends fail when you approach them like a backend

A thoughtful exploration of why frontend development is harder than most backend work, covering UX context, accessibility pitfalls, and why "just hacking together HTML and CSS" kills quality.

marijkeluttekes.dev

Sponsored Link 2

You know @login_required. Now meet @app.reasoner(). AgentField turns Python functions into production AI agents, with structured output, async execution, and agent discovery. Every decorator becomes a REST endpoint. Open source, Apache 2.0. Python, Go & TypeScript SDKs.

agentfield.ai

Django Job Board

Python Developer at Open Data Services

Django Newsletter

Projects

efe/django-root-secret

Django package for managing one root encryption key and decrypting encrypted secrets at runtime.

github.com

danjac/django-studio

Django project generator for rapid, opinionated full stack development.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

10 Apr 2026 3:00pm GMT

08 Apr 2026


Switching all of my Python packages to PyPI trusted publishing

Switching all of my Python packages to PyPI trusted publishing

As I have teased on Mastodon, I'm switching all of my packages to PyPI trusted publishing. I have used it to release django-debug-toolbar a few times but never set it up myself. The process seemed tedious.

The malicious releases uploaded to PyPI two weeks ago and the blog post about digital attestations in pylock.toml finally pushed me to make the switch. All of my PyPI tokens have been revoked so there is no quick shortcut.

Note

I'm also looking at other code hosting platforms. I was using git before GitHub existed and I'll probably still be using git when GitHub has completed its enshittification. For now the cost/benefit ratio of staying on GitHub is still positive for me. Trusted publishing isn't available everywhere, so for now it is GitHub anyway.

In the end, switching an existing project was easier than expected. I have completed the process for django-prose-editor and feincms3-cookiecontrol.

For my future benefit, here are the step-by-step instructions I have to follow:

  1. Have a package which is buildable using e.g. uvx build

  2. On PyPI add a trusted publisher in the project's publishing settings:

    • Owner: matthiask, feincms, feinheit, whatever the user or organization's name is.
    • Repository: django-prose-editor
    • Workflow name: publish.yml
    • Environment: release
  3. In the GitHub repository, create a release environment in Settings / Environments. Add myself and potentially also other releasers as required reviewers. I allow self-review and disallow administrators from bypassing the protection rules.

  4. Run git tag x.y.z and git push, no more uvx twine or hatch publish.

  5. Approve the release in the actions tab on the repository.

  6. Either enjoy or swear and repeat the steps.
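For reference, a minimal publish.yml for this flow could look roughly like the sketch below. This is not the exact file from the django-prose-editor repository; the parts that matter for trusted publishing are the id-token: write permission (for OIDC), the environment line matching the environment configured on PyPI, and pypa/gh-action-pypi-publish uploading without any API token:

```yaml
name: Publish to PyPI

on:
  push:
    tags: ["*"]           # fires on `git tag x.y.z && git push --tags`

jobs:
  publish:
    runs-on: ubuntu-latest
    # Optional: only run in the main repository, not in forks.
    if: github.repository == 'feincms/django-prose-editor'
    environment: release  # must match the environment name set on PyPI
    permissions:
      id-token: write     # required for OIDC-based trusted publishing
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build   # build sdist and wheel into dist/
      - uses: pypa/gh-action-pypi-publish@release/v1
```

With the required-reviewer protection on the release environment, this job pauses until someone approves it in the Actions tab, which is step 5 above.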

I'm happy with testing the release process in production. The older I get the less I care if people think I'm stupid. That's also why feincms3-cookiecontrol 1.7.0 doesn't exist, only 1.7.1: the process failed and I had to bump the patch version and try again.

Copy the publish.yml from a known good place, for example from the django-prose-editor repository. I have added the if: github.repository == 'feincms/django-prose-editor' statement which ensures that the workflow only runs in the main repository, but that's optional if you don't mind failing workflows in forks.

08 Apr 2026 5:00pm GMT

How I configured OpenClaw's multi-model setup (so you don't have to)


A heads up before we start: over 95% of this blog post was written by my OpenClaw bot running GLM-5. I reviewed, edited, and approved everything, but credit where it's due: Tepui ⛰️ (yes, I named my AI) did most of the heavy lifting.


I need to vent, but in a good way this time.

Last week I vented about Anthropic pushing away paying customers. After that third-party ban hit, I had to rip out Claude Opus 4.6 from my OpenClaw setup and find alternatives. So I rebuilt the whole thing from scratch.

This time, I did it right.

What I wanted

I use OpenClaw as my personal AI assistant. It connects to my Telegram, manages my calendar, runs cron jobs, helps with research, and generally makes my life easier. Before the ban, it was running Claude Opus 4.6. After the ban, I needed alternatives.

My requirements were simple:

What I got was so much more.

The journey

The whole process took about two hours. I started with a simple question: "What's the best model to use with OpenClaw?"

First thing I did was pull the model catalog from models.dev. If you're not familiar, it's a JSON file maintained by the OpenAI-compatible API community that lists every model from every provider with their specs: context window, token limits, pricing, capabilities, everything. I pulled it to /tmp/models.dev.json and started digging.

curl -s https://models.dev/api.json > /tmp/models.dev.json
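I won't vouch for the exact schema of api.json from memory, but assuming it's roughly an object keyed by provider, each with a models map of per-model specs, digging through it for vision-capable models looks something like this (the sample entries below are made up, not real catalog data):

```python
import json

# Tiny hand-written sample mimicking the assumed catalog shape;
# real entries in /tmp/models.dev.json will differ in detail.
catalog = json.loads("""
{
  "deepinfra": {
    "models": {
      "zai-org/GLM-5":    {"modalities": {"input": ["text"]},          "limit": {"context": 200000}},
      "zai-org/GLM-4.6V": {"modalities": {"input": ["text", "image"]}, "limit": {"context": 128000}}
    }
  }
}
""")

def vision_models(catalog):
    """Yield 'provider/model' ids whose input modalities include images."""
    for provider, info in catalog.items():
        for model_id, spec in info.get("models", {}).items():
            if "image" in spec.get("modalities", {}).get("input", []):
                yield f"{provider}/{model_id}"

print(list(vision_models(catalog)))
```

The same loop works for filtering by context window or pricing once you know which keys the real catalog uses.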

Then I checked the Lazer proxy to see what models were actually available. Lazer Technologies (where I work) gives employees free access to a curated set of models through their LiteLLM proxy. The API is OpenAI-compatible, so you just query /v1/models:

curl -s https://llm.lazertechnologies.com/v1/models \
 -H "Authorization: Bearer $LAZER_API_KEY"

The big ones available through Lazer:

These are all free for Lazer employees. If you don't have that luxury, the same models are available through DeepInfra or OpenRouter at reasonable prices.

The problem with my existing setup

My OpenClaw config was bare. I had:

And the MiniMax model was timing out on my cron jobs. The Montevideo Events Report job was failing because MiniMax M2.7 was too slow for complex reasoning tasks. I needed something faster, and free through Lazer.

The realization about model slots

This is where I learned something new. OpenClaw doesn't just have one "model" config. It has six:

  1. agents.defaults.model: Primary text model (plus fallbacks)
  2. agents.defaults.imageModel: For image input (when the primary can't accept images)
  3. agents.defaults.pdfModel: For PDF parsing (falls back to imageModel)
  4. agents.defaults.imageGenerationModel: For creating images (not just viewing them)
  5. agents.defaults.musicGenerationModel: For music generation
  6. agents.defaults.videoGenerationModel: For video generation

I was only using slot #1. No wonder images weren't working right.

What I configured

After some back-and-forth with Tepui, here's what we landed on:

Role                      Model                     Provider     Cost (per 1M tokens)
Primary text              GLM-5                     Lazer        $0.80 / $2.56
Fallback                  GLM-5                     OpenRouter   $0.80 / $2.56
Image/PDF input           GLM-4.6V                  Lazer        $0.30 / $0.90
Image generation          Gemini 3.1 Flash Image    OpenRouter   $0.50 / $3.00
Quick tasks (reserve)     GPT-OSS-120b-Turbo        Lazer        $0.15 / $0.60
Video-capable (reserve)   Kimi-K2.5-Turbo           Lazer        $0.60 / $3.00

I'm putting actual prices in because Lazer's proxy is free for me, but I want to track costs as if I were paying. That way I know the real value of what I'm using.
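The arithmetic is simple enough to script. Reading the two numbers per row as input/output prices per million tokens (my assumption), a single hypothetical job prices out like this (token counts invented for illustration):

```python
def job_cost(input_tokens, output_tokens, in_per_m, out_per_m):
    """Dollar cost of one job at per-1M-token input/output prices."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# A made-up cron run on GLM-5 ($0.80 in / $2.56 out per 1M tokens):
print(f"${job_cost(120_000, 8_000, 0.80, 2.56):.4f}")
```

Summing this across a week of cron runs gives the "what would this cost if I were paying" number.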

Why these choices?

GLM-5 for text. It's the best open-source reasoning model available. 200K context window, MIT licensed, competitive with GPT-4 on agentic tasks. I tested it with quick prompts and it's snappy.

GLM-5 via OpenRouter as fallback. Same model, different provider. If the Lazer proxy goes down, OpenClaw keeps working through OpenRouter with the exact same model. No quality drop, just a different route. I also kept MiniMax M2.7 in the allowlist so I can switch to it manually if I ever need to.

GLM-4.6V for images and PDFs. This was the key insight. GLM-5 is text-only. For images and PDFs, I needed a vision model. GLM-4.6V handles both, and it's on the same Lazer proxy. This means my cron jobs can parse images (like parking receipts) without hitting paid APIs.

Fun fact: I actually added the GLM-4.6V model to my config from my car while waiting for my girlfriend to finish her driving classes. I was using my OpenCode server running at home, connected through WireGuard on my phone. Pulled the model specs from models.dev, updated the config, tested it with a screenshot. All from the car. That's the beauty of having your tools always running and always accessible.

Gemini 3.1 Flash Image for generation. I didn't have any image generation set up. Tepui suggested Flux.2 Pro (free on OpenRouter) but I wanted something more capable. Gemini 3.1 Flash Image generates high-quality images for about $3 per million output tokens. Worth it for occasional use.

The config changes

Here's what I actually changed in ~/.openclaw/openclaw.json:

// Primary model with fallback
agents.defaults.model: {
 primary: "lazer/deepinfra/zai-org/GLM-5",
 fallbacks: ["openrouter/zai-org/GLM-5"]
}

// Vision for images and PDFs
agents.defaults.imageModel: {
 primary: "lazer/deepinfra/zai-org/GLM-4.6V"
}
agents.defaults.pdfModel: {
 primary: "lazer/deepinfra/zai-org/GLM-4.6V"
}

// Image generation
agents.defaults.imageGenerationModel: {
 primary: "openrouter/google/gemini-3.1-flash-image-preview"
}

I also added all the models to the allowlist with aliases so I can switch easily:

agents.defaults.models: {
 "openrouter/minimax/minimax-m2.7": { alias: "MiniMax" },
 "openrouter/zai-org/GLM-5": { alias: "GLM-5-OR" },
 "lazer/deepinfra/zai-org/GLM-5": { alias: "GLM-5" },
 "lazer/deepinfra/zai-org/GLM-4.6V": { alias: "GLM-4.6V" },
 "lazer/deepinfra/openai/gpt-oss-120b-Turbo": { alias: "GPT-OSS" },
 "lazer/deepinfra/moonshotai/Kimi-K2.5-Turbo": { alias: "Kimi" },
 "openrouter/google/gemini-3.1-flash-image-preview": { alias: "Gemini-Image" }
}

The aliases make it easy to switch with /model commands in chat.

Testing it

I tested everything:

The vision model works. The text model works. The fallback is there if something breaks.

What I learned

GLM-5 doesn't support images. The model name sounds like it should be the successor to GLM-4.6V, but it's text-only. For vision, you need GLM-4.6V specifically.

Model config fields are strict. OpenClaw's schema only accepts certain fields: id, name, input, contextWindow, maxTokens, reasoning, cost, api. Things like tool_call and temperature get rejected.
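Going by that field list (which comes from my own trial and error, not official OpenClaw docs), a small defensive helper that strips unknown keys before writing a model entry saves a round of schema rejections:

```python
# Fields the schema reportedly accepts; anything else (tool_call,
# temperature, ...) gets rejected, so strip unknowns up front.
ALLOWED_FIELDS = {"id", "name", "input", "contextWindow",
                  "maxTokens", "reasoning", "cost", "api"}

def clean_model_entry(entry):
    """Return a copy of a model config entry with unsupported keys removed."""
    return {k: v for k, v in entry.items() if k in ALLOWED_FIELDS}

entry = {"id": "lazer/deepinfra/zai-org/GLM-5", "contextWindow": 200000,
         "temperature": 0.7, "tool_call": True}
print(clean_model_entry(entry))
```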

models.dev is the source of truth. Don't rely on memory or provider docs. Pull the JSON and check the specs yourself.

OpenClaw model slots matter. If you're only configuring one model, you're missing out on image parsing, PDF reading, and image generation. Set up all six slots.

Pricing matters even when free. I have free access through Lazer, but I still track prices. It helps me understand the cost of what I'm doing and compare alternatives.

The result

My OpenClaw setup is now:

And the cron jobs that were timing out? They're running fine now. The Montevideo Events Report takes 23 seconds instead of timing out at 75 seconds.

Not bad for two hours of work.

What's next

I kept GPT-OSS-120b-Turbo and Kimi-K2.5-Turbo in reserve. GPT-OSS is cheaper than GLM-5 for quick tasks, so I might use it as a second fallback. Kimi has video support, which could be useful if I ever need to analyze video frames.

But for now, this setup covers everything I need. Text, images, PDFs, generation, fallbacks. All configured properly with the right models in the right slots.

If you're running OpenClaw (or any AI assistant), do yourself a favor: check your model config. Make sure you're using the right slots. Pull the specs from models.dev. Track your actual costs. And test everything with real inputs.

It's worth the two hours.

See you in the next one!

08 Apr 2026 5:00am GMT