16 Mar 2026

Django community aggregator: Community blog posts

Built with Django — Weekly Roundup (Mar 09–Mar 16, 2026)

Hey, Happy Monday!

Why are you getting this: You signed up to receive this newsletter on Built with Django. I promised to send you the latest projects and jobs on the site, as well as any other interesting Django content I encountered during the month. If you don't want to receive this newsletter, feel free to unsubscribe anytime.

Sponsor

This issue is sponsored by TuxSEO - your AI content team on auto-pilot.

Projects

Jobs

From the Community

Support

You can support this project by using one of the affiliate links below. These are always going to be projects I use and love! No "Bluehost" crap here!

16 Mar 2026 6:00pm GMT

15 Mar 2026


How I deploy my projects to a single VPS with Gitea, NGINX and Docker

Hello everyone 👋

A few weeks ago, the team behind Jmail (a Gmail-styled interface for browsing the publicly released Epstein files) shared that they had racked up a $46,485 bill on Vercel. The site had gone viral with ~450 million pageviews, and Vercel's pricing structure turned that into a five-figure invoice. Vercel's CEO ended up covering the bill personally, which is nice, but not exactly a scalable solution 😅

When I saw that story, my first thought was: this is an efficiency problem. Jmail is essentially a search interface on top of mostly static content. An SRE on Hacker News mentioned they handle 200x Jmail's request load on just two Hetzner servers. The whole thing could have been served from a moderately sized VPS for a fraction of the cost.

That got me thinking about my own setup. I run everything on a single VPS: my blog, my side projects, my git server, analytics, a wiki, a forum, a secret sharing tool, and more. The whole thing is held together by NGINX, Gitea, some bash scripts, and Docker. No Kubernetes, no Terraform, no CI/CD platform with a $500/month bill. Just a cheap VPS, some config files, and a deployment flow that's simple enough that I can fix it from my phone at the beach (I've written about that before).

I get asked about my deployment setup more often than I expected, so I figured I'd write it all down. Let me walk you through the whole thing.

The VPS

I'm running a Hetzner Cloud CPX21 in Nuremberg, Germany. Here are the specs:

Spec    Value
vCPUs   3
RAM     4 GB
Disk    80 GB SSD
OS      Ubuntu
Price   ~€7-8/month

The CPX21 is one of Hetzner's shared vCPU instances. It's cheap, reliable, and more than enough for what I need. I'm usually sitting at around ~10% CPU and ~2GB RAM, so there's plenty of headroom.

I set up the VPS manually. No Ansible, no configuration management, just plain old SSH and installing things by hand. I know, I know, "infrastructure as code" and all that. But for a single server that I manage myself, the overhead of automating the setup isn't worth it. If the server dies, I can set it up again in a couple of hours and restore from backups.

What's running on it

Here's everything running on this single VPS:

Bare metal (directly on the server)

Service                     Purpose
Gitea                       Self-hosted git server
NGINX                       Web server / reverse proxy
Certbot                     SSL/TLS certificates
PHP-FPM                     For WordPress sites
DokuWiki                    Personal wiki
fail2ban                    Brute force protection
UFW                         Firewall
A couple WordPress sites    Various projects

Docker

Service                 Purpose
ntfy                    Push notifications
shhh                    Secret sharing
SearXNG                 Privacy-respecting search engine
WireGuard               VPN
phpBB                   YAMS community forum
Umami                   Privacy-respecting analytics
Gitea Actions runner    CI/CD runner
Watchtower              Automatic Docker image updates

Static sites (Hugo, served by NGINX)

Site                     Purpose
rogs.me                  This blog!
montevideo.restaurant    Restaurant directory
yams.media               YAMS documentation site

That's a lot of stuff for a 4GB VPS. But static sites are basically free in terms of resources, and the Docker services are all lightweight. The heaviest things are probably Gitea and the WordPress sites, and even those barely register.

The web server: NGINX

Every site and service gets its own NGINX config file in /etc/nginx/conf.d/. One file per site, nice and clean. No sites-available / sites-enabled symlink dance.

Here's what a typical config looks like for one of my Hugo sites:

server {
    root /var/www/rogs.me;
    index index.html;

    server_name rogs.me;

    location / {
        try_files $uri $uri/ =404;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/rogs.me/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rogs.me/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = rogs.me) {
        return 301 https://$host$request_uri;
    }

    server_name rogs.me;
    listen 80;
    return 404;
}

Nothing fancy. Serve files from /var/www/rogs.me, redirect HTTP to HTTPS, done. The SSL bits are all managed by Certbot (more on that later).

For Docker services, the config looks slightly different because NGINX acts as a reverse proxy:

server {
    server_name analytics.rogs.me;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen 443 ssl; # managed by Certbot
    # ... SSL config same as above
}

Same pattern: one file per service. NGINX handles SSL termination and proxies to whatever port the Docker container exposes on localhost.

SSL/TLS with Let's Encrypt

All certificates come from Let's Encrypt via Certbot. I installed it with apt and used the NGINX plugin:

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d rogs.me

Certbot modifies the NGINX config automatically to add the SSL directives (that's why you see those # managed by Certbot comments).

A cron job attempts certificate renewal daily at 3 AM:

0 3 * * * certbot renew -q

The -q flag keeps it quiet: no output unless something goes wrong. Certbot is smart enough to only renew certificates that are close to expiring, so running it daily is fine.
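If you want to confirm the renewal path actually works before trusting the cron job, Certbot ships a dry-run mode that exercises the full renewal flow against Let's Encrypt's staging environment:

```shell
# Simulates renewal for every installed certificate without replacing
# the real ones; exits non-zero if anything in the chain is broken.
sudo certbot renew --dry-run
```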

Self-hosted git with Gitea

I use Gitea as my primary git server. It runs bare metal on the VPS (not in Docker) and lives at git.rogs.me.

Why Gitea instead of just using GitHub? I want to own my git infrastructure. GitHub is great for collaboration, but I like having control over where my code lives. If GitHub goes down or decides to change their terms, my repos are safe on my own server.

That said, I mirror everything to both GitHub and GitLab so other people can collaborate, open issues, and submit PRs. Best of both worlds: I own the primary, and the mirrors handle the social coding side.
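The post doesn't detail how the mirroring is wired up. Gitea has built-in push mirrors in each repository's settings, but a manual sketch with plain git looks like this (the remote URLs are placeholders, not the author's actual repos):

```shell
# Add the mirror remotes once...
git remote add github git@github.com:example/repo.git
git remote add gitlab git@gitlab.com:example/repo.git

# ...then push all refs (branches and tags) to each one.
git push --mirror github
git push --mirror gitlab
```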

Gitea Actions

Gitea has a built-in CI/CD system called Gitea Actions that's compatible with GitHub Actions workflows. The runner is the official gitea/act_runner Docker image, running on the same VPS. Pretty vanilla setup, no custom configuration.

This is the core of my deployment pipeline. Every time I push to master, Gitea Actions picks up the workflow and deploys the site.

Deploying Hugo sites

This is where it all comes together. All three of my Hugo sites follow the exact same deployment pattern. Here's the flow:

Local machine
    │ push
    ▼
Gitea (git.rogs.me)
    │ Gitea Actions
    ▼
Runner (Docker)
    │ SSH into the same VPS
    ▼
VPS: git pull + ./build.sh
    │ Hugo builds to /var/www/domain/
    ▼
NGINX serves the site

Yes, the Gitea Actions runner SSHes into the same server it's running on. I know that's a bit redundant, but I designed it this way on purpose: if I ever move my hosting somewhere else (or switch back to GitHub Actions), the workflow doesn't need to change. The SSH target is just a secret, so I swap an IP address and everything keeps working.

The Gitea Actions workflow

Here's the workflow file that lives in .gitea/workflows/deploy.yml in each repo:

name: deploy

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy via SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: ${{ secrets.SSH_PORT }}
          script: |
            cd repo && git stash && git pull --force origin master && ./build.sh

It's beautifully simple:

  1. Push to master triggers the workflow
  2. The runner uses appleboy/ssh-action to SSH into the server
  3. On the server: stash any local changes, pull the latest code, and run the build script

The git stash is there as a safety net. The WebP conversion in the build script modifies tracked files (more on that in a second), so without the stash, git pull would complain about a dirty working tree.

All four secrets (SSH_HOST, SSH_USER, SSH_PRIVATE_KEY, SSH_PORT) are configured in Gitea's repository settings. The SSH key has access to the server but is locked down to only what the deployment needs.
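The post doesn't show how the key is locked down. One common approach (an assumption here, not the author's confirmed setup) is an OpenSSH forced command in authorized_keys, so the key can run the deploy steps and nothing else:

```shell
# ~/.ssh/authorized_keys entry on the VPS (illustrative; "repo" matches the
# workflow's deploy script). Whatever command the client sends, sshd runs
# this one instead, and the key can't forward ports or agents.
command="cd repo && git stash && git pull --force origin master && ./build.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA...example deploy-key
```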

The build script

Every Hugo site has a build.sh in the repo root. Here's the one for this blog:

#!/bin/bash

# Convert all images to WebP for better performance
for file in $(git ls-files --others --cached --exclude-standard \
 | grep -v '.git' \
 | grep -E '\.(png|jpg|jpeg)$'); do
 cwebp -lossless "$file" -o "${file%.*}.webp"
done

# Update all references from png/jpg/jpeg to webp
for tracked_file in $(git ls-files --others --cached --exclude-standard \
 | grep -v '.git'); do
 sed -i 's/\.png/.webp/g' "$tracked_file"
 sed -i 's/\.jpg/.webp/g' "$tracked_file"
 sed -i 's/\.jpeg/.webp/g' "$tracked_file"
done

# Build the site
hugo -s . -d /var/www/rogs.me/ --minify --cacheDir $PWD/hugo-cache

Three things happen here:

  1. Image optimization: Every PNG, JPG, and JPEG gets converted to WebP using cwebp (lossless mode, so no quality loss). WebP files are significantly smaller than their originals.
  2. Reference rewriting: All file references get updated from .png / .jpg / .jpeg to .webp. This is why we need git stash in the workflow; this step modifies tracked files.
  3. Hugo build: Generates the static site with minification enabled and outputs it directly to /var/www/rogs.me/. NGINX is already configured to serve from that directory, so the site is live immediately.
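The extension swap in step 1 relies on bash's `${file%.*}` parameter expansion, which strips the shortest trailing ".something" before ".webp" is appended:

```shell
# ${file%.*} removes the shortest suffix matching ".*" (the extension).
file="content/posts/cover.jpeg"
echo "${file%.*}.webp"   # prints content/posts/cover.webp
```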

The --cacheDir flag keeps Hugo's build cache in the repo directory, which speeds up subsequent builds.

Each site's build.sh is essentially identical, just with a different output path (montevideo.restaurant, yams.media, etc.).

Variations across sites

While the pattern is the same, there are small differences:

Docker services and Watchtower

Most of my non-static services run in Docker with docker-compose. Each service has its own directory in /opt/:

/opt/
├── analytics.rogs.me/    # Umami
│   └── docker-compose.yml
├── ntfy/
│   └── docker-compose.yml
├── shhh/
│   └── docker-compose.yml
├── searx/
│   └── docker-compose.yml
└── ...

For updates, I use Watchtower. It runs as a Docker container itself and periodically checks if there are newer images available for my running containers. If there are, it pulls the new image, stops the old container, and starts a new one with the same configuration.

version: "3"
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped

Is this a bit risky? Sure. An automatic update could break something. But in practice, it hasn't failed me once, and the services I'm running are stable enough that breaking changes in Docker images are rare. For a personal setup, the convenience of never having to manually update containers is worth the small risk.
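If a particular container is too fragile for automatic updates, Watchtower can be told to skip it via a label it checks on each container (pinning the image tag helps too). A hypothetical example:

```shell
# Opt one container out of Watchtower updates; image name is a placeholder.
docker run -d \
  --name fragile-service \
  --label com.centurylinklabs.watchtower.enable=false \
  example/fragile-service:1.2.3
```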

Security

I'm not running a bank here, but I do take basic security seriously:

# Quick UFW setup
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 'Nginx Full'
sudo ufw allow ssh
sudo ufw enable
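fail2ban's packaged defaults already cover sshd on most distros, but it's worth knowing where the knobs live. A minimal jail.local sketch (the values are assumptions, not the author's config):

```shell
# Write a local override so package upgrades don't clobber it,
# then restart fail2ban to pick it up.
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
EOF
sudo systemctl restart fail2ban
```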

DNS

All my domains use Cloudflare for DNS. But only DNS for most of them. I'm not using Cloudflare's CDN or proxy features on my main sites. The DNS records point directly to my VPS IP with the proxy toggle set to "DNS only" (the grey cloud, not the orange one).

Why Cloudflare for DNS? Two reasons. First, it's free, fast, and the dashboard is easy to use. Second, and more importantly: if something goes wrong, I can switch to using Cloudflare's full proxy and DDoS protection with the flick of a button. Just toggle the grey cloud to orange and you're behind Cloudflare's network instantly.

I've already had to do this once. forum.yams.media (the YAMS community forum) was getting DDoSed and swarmed by bots constantly. Flipping that toggle to orange solved the problem immediately. The rest of my sites run without Cloudflare's proxy because they don't need it, but knowing I can turn it on in seconds gives me peace of mind.

Backups

This is the part that most people skip. Don't be most people.

My backup strategy has two stages:

VPS (services)
    │ 11 PM cron: tar + GPG
    ▼
/home/backups/ (encrypted .gpg)
    │ midnight cron: SSH pull
    ▼
Home server (NAS, then S3)

Stage 1: Backup on the VPS (11 PM)

Every night at 11 PM, a series of cron jobs run backup scripts for each service. Each script follows the same pattern:

#!/bin/bash

BACKUP_DIR="/home/backups/servicename"
TARGET_DIR="/path/to/service"
DATE=$(date +%Y-%m-%d-%s)
BACKUP_FILE="$BACKUP_DIR/backup-servicename-$DATE.tar.zst"
ENCRYPTED_FILE="$BACKUP_FILE.gpg"
LOG_FILE="/var/log/backup_servicename.log"
GPG_RECIPIENT="your-email@example.com"

log_message() {
 echo "$(date +'%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}

log_message "=== Starting backup ==="

mkdir -p "$BACKUP_DIR"

# For Docker services: stop containers first
docker compose stop

# Create compressed archive
tar -caf "$BACKUP_FILE" -C "$TARGET_DIR" .

# Encrypt with GPG
gpg --encrypt --armor -r "$GPG_RECIPIENT" -o "$ENCRYPTED_FILE" "$BACKUP_FILE"
rm -f "$BACKUP_FILE" # Remove unencrypted version

# For Docker services: restart containers
docker compose up -d

log_message "=== Backup completed ==="
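The post doesn't include the restore side. Given how the backups are produced above, restoring one would look roughly like this (filenames and paths are placeholders, and the matching GPG private key must be available):

```shell
# Decrypt the armored archive, then unpack the zstd tarball in place.
gpg --decrypt backup-servicename-2026-03-15.tar.zst.gpg > backup.tar.zst
mkdir -p /path/to/service
tar -xaf backup.tar.zst -C /path/to/service
```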

Key points:

Stage 2: Pull to home server (midnight)

At midnight, my home server SSHes into the VPS and pulls all the encrypted backup files to my local NAS. From there, they also get pushed to an S3 bucket.

This gives me the classic 3-2-1 backup strategy: 3 copies of the data (VPS, NAS, S3), on 2 different media types, with 1 offsite copy. If Hetzner's datacenter burns down, I have everything locally. If my house burns down, I have everything in S3.
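The pull itself isn't shown in the post; a sketch of what the home server's midnight job might look like (hostnames, paths, and the bucket name are assumptions):

```shell
# Pull the encrypted archives from the VPS to the NAS...
rsync -avz vps.example.com:/home/backups/ /mnt/nas/backups/vps/

# ...then push the NAS copy offsite to S3 with the AWS CLI.
aws s3 sync /mnt/nas/backups/vps/ s3://example-backup-bucket/vps/
```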

Monitoring

I run Uptime Kuma on my home server to monitor all my services. It checks every site and service periodically and sends me a notification (via ntfy, naturally) if something goes down.

It's not fancy, but it works. I've caught a few issues before anyone else noticed them, which is the whole point.

The big picture

Here's what the whole setup looks like:

Hetzner CPX21
├── Gitea Actions runner (Docker) ──SSH──▶ git pull + Hugo build
├── NGINX
│   ├── Static sites (/var/www/)
│   └── Reverse proxy to Docker services
├── Git repos ──build──▶ Hugo sites (/var/www/)
├── Docker services
├── Gitea (bare metal)
├── Certbot (SSL)
└── fail2ban + UFW

Conclusion

The whole philosophy here is simplicity. There's no orchestration tool, no container registry, no deployment platform. It's just:

  1. Push code to Gitea
  2. A workflow SSHes into the server
  3. Git pull + bash script builds the site
  4. NGINX serves it

Could I make this more sophisticated? Sure. Could I use Ansible to manage the server config, or Kubernetes to orchestrate the containers, or a proper CI/CD platform with build artifacts and rollbacks? Absolutely. But for a personal setup that hosts a blog, some side projects, and a handful of services, this is more than enough.

The setup has been running for years with minimal maintenance. The most time I spend on it is writing backup scripts for new services and adding NGINX configs when I deploy something new. Everything else is automated: deployments, SSL renewals, Docker updates, backups.

If you're thinking about self-hosting your projects, my advice is: start simple. A VPS, NGINX, and a bash script can take you surprisingly far. You can always add complexity later if you need it, but in my experience, you probably won't.

If you have questions about any part of this setup, feel free to reach out on the Contact page. I'm always happy to help people get started with self-hosting.

See you in the next one!

15 Mar 2026 5:00am GMT

14 Mar 2026


10 Years of Jazzband

Jazzband is sunsetting. Before moving on, here's a look at what 10 years of cooperative coding actually looked like.

By the numbers

Five years in, we had about 1,350 members and 55 projects. Here's where things stand now:

Members

Projects

Activity

Releases

Project teams

How Jazzband was actually used

The numbers above only tell part of the story. Here's what's more interesting.

Not everyone used the release pipeline

20 active projects never shipped a single release through it. Projects like Watson (2,515 stars), django-rest-knox (1,255), and django-admin2 (1,187) used Jazzband as a collaborative home - for shared access, triage, and maintenance - not for releases. The pipeline was useful for the projects that used it, but it wasn't what made Jazzband work for most people.

Old projects stayed alive

django-avatar's repo was created in 2008 and shipped its most recent Jazzband release in January 2026 - a 17-year-old repo still getting releases. django-axes (2009), sorl-thumbnail (2010), django-constance (2010), and 18 other projects created before 2015 were all still getting releases in 2025 or 2026. Jazzband kept old projects alive long after their original authors moved on. That was the whole point.

Release cadence varied wildly

django-axes had the most active release cadence: 253 release files across 127 versions, peaking at 28 versions in 2019 - roughly one every 13 days. pip-tools was second at 138 releases / 69 versions.

Meanwhile, 7 active projects have no team members at all - django-permission, django-mongonaut, and five others. Nobody was actively working on them, but they had a home and stayed installable.

pip-tools was its own community

With 69 team members it dwarfed every other project (the next largest, djangorestframework-simplejwt, had 24). It was basically a sub-organization within Jazzband. And two projects joined as recently as 2024 (django-tagging, django-summernote) with single-digit stars and zero releases - people were still finding value in the model right up to the end.

The open access model was genuinely controversial

When django-newsletter transferred in, its author @dokterbob worried that giving 800 members write access would "dissolve the responsibility so much that it might actually reduce participation." I wrote a long reply defending the open model.

An earlier project, Collectfast, actually left Jazzband after a member pushed directly to master without review - merging commits the author had been holding off on. That incident led to real discussions about code review processes, branch protection, and what "open access" should actually mean. The tension between openness and control was never fully resolved.

Moderation was another solo job

Over the years I had to block 10 accounts from the GitHub organization - first crypto spammers who joined just to be in the org, then community conflicts that needed real moderation decisions, and finally the AI-driven spam that made the open model untenable. None of that is unusual for an organization this size, but it all went through one person.

The onboarding bottleneck

Every transferred project got an onboarding checklist - a webhook automatically opened an "Implement Jazzband guidelines" issue with TODOs like fixing links, adding badges, setting up CI, adding jazzband to PyPI, deciding on a project lead. 41 projects got one of these. 28 completed it. 13 are still open.

The pattern in those 13 is telling: contributors would do every item they could, then get stuck on things that required admin access - configuring webhooks, fixing CI checks, setting up the release pipeline - and wait for me. Sometimes for months.

django-user-sessions' original author pinged me five times over two months about broken CI checks only an admin could fix. Watson's lead asked twice to remove legacy CI tools blocking PR merges. The checklist was good. The bottleneck was me.

Projects that moved on

One of the earliest and most visible Jazzband projects was django-debug-toolbar, transferred in back in 2016. It grew to over 8,000 stars under Jazzband before it moved to Django Commons in 2024.

django-simple-history, django-oauth-toolkit, PrettyTable, and tablib all moved on too, for similar reasons - they needed more autonomy than Jazzband's structure could provide.

Downloads

For context on how widely these projects are used, here are some numbers from PyPI. All projects that were ever part of Jazzband account for over 150 million downloads a month. Current projects alone are around 95 million.

Top 15 by monthly downloads:

Project                        Downloads/month    Note
prettytable                    42.4M              left Jazzband
pip-tools                      23.3M
contextlib2                    10.7M
django-redis                   9.6M
django-debug-toolbar           7.3M               left, now Django Commons
djangorestframework-simplejwt  6.1M
dj-database-url                5.5M
pathlib2                       4.9M
django-model-utils             4.8M
geojson                        4.6M
tablib                         4.1M
django-oauth-toolkit           3.7M               left
django-simple-history          3.1M               left, now Django Commons
django-silk                    2.7M
django-formtools               2.1M

One thing that surprised me: prettytable alone accounts for 42 million downloads a month, and it isn't even a Django package. contextlib2, pathlib2, and geojson aren't either. Jazzband ended up being broader than the Django ecosystem it started in.

django-debug-toolbar ranked in the top three most used third-party packages in the Django Developers Survey and is featured in the official Django tutorial. It spent 8 years under Jazzband before moving to Django Commons.

If you've come across Jazzband projects before, it was probably through the Django News newsletter, Python Weekly, or Opensource.com's 2020 piece on how Jazzband worked.

Top 10 projects by stars

Project                        Stars
pip-tools                      7,997
django-silk                    4,939
tablib                         4,752
djangorestframework-simplejwt  4,310
django-taggit                  3,429
django-redis                   3,059
django-model-utils             2,759
Watson                         2,515
django-push-notifications     2,384
django-widget-tweaks           2,165

14 Mar 2026 4:02pm GMT