05 Apr 2026

Planet Python

EuroPython: Humans of EuroPython: George Zisopoulos

Behind every flawless talk, engaging workshop, and perfectly timed coffee break at EuroPython is a crew of unsung heroes: our volunteers! 🌟 Not just organizers, but dream enablers: printer ninjas, registration magicians, social butterflies, and even salsa instructors (yeah, that happened!).

We're the quiet force turning chaos into community, one sprint at a time. 💻✨

Curious who really makes the magic happen? Today we'd like to introduce George Zisopoulos, member of the Operations team at EuroPython 2025.

[Image: George Zisopoulos, member of the Operations Team at EuroPython 2025]

EP: What first inspired you to volunteer for EuroPython? And which edition of the conference was it?

I was inspired because I gave a presentation in 2020, and after that I wanted to experience the conference from the other side, as part of the volunteers. It was amazing to see how much work all these people had done for us as attendees, and I wanted to be a part of that.

So I applied and became an online volunteer in 2022 in Dublin, and the following year I joined EuroPython 2023 as an on-site volunteer. Once you start, you can't stop doing it.

EP: Have you learned new skills while contributing to EuroPython? If so, which ones?

It's less about learning new skills and more about discovering the ones you already have. With guidance and a supportive team, you feel confident using them and even pushing a bit past your comfort zone.

EP: What's your favorite memory from volunteering at the conference?

My favorite part is walking into the conference and unexpectedly running into someone you met at previous years' editions. It's like a little déjà vu. They hug you like you just saw them yesterday, even if it's been a whole year.

EP: Did you make any lasting friendships or professional connections through volunteering?

Yes, I've made a few lasting friendships. We stay in touch all year, even though we live in different cities or countries. We visit each other, and often end up meeting in other countries while traveling.

EP: Any unexpected or funny experiences during the conference which you'd like to share?

I love coffee, so during the conference I'm usually wandering around with a cup in hand. Two years ago, thanks to some playful bumps from friends, I ended up ruining three t-shirts with coffee during the conference! Now every year they wonder… how many shirts will I sacrifice this time?

EP: Would you volunteer again, and why?

I would say what I said last year: summer without EuroPython just doesn't really feel like summer 😉 See you all there!

EP: Thank you for your contribution, George!

05 Apr 2026 2:18pm GMT

04 Apr 2026

Planet Twisted

Donovan Preston: Using osascript with terminal agents on macOS

Here is a useful trick that is unreasonably effective for simple computer-use goals with modern terminal agents. On macOS, the osascript terminal command has existed since the original release of Mac OS X. All you have to do is suggest that your agent use it, and it can perform any application-control action available in any AppleScript dictionary for any Mac app. No MCP setup or extra tools required at all.

Agents are much more adept at using raw terminal commands, especially ones that haven't changed in 30 years. Having a computer-control interface that has been stable for three decades, with extensive examples in the Internet corpus, means modern models understand how to use these tools basically effortlessly. macOS locks down these permissions pretty heavily nowadays, though, so you will have to grant the application-control permission to your terminal. But once you have done that, the range of possibilities for commanding applications using natural language is quite extensive.

Also, for both Safari and Chrome on Mac, you are going to want to turn on the JavaScript-over-AppleScript permission. This basically allows Claude or another agent to debug your web applications live for you as you are using them. In Chrome, go to the View menu, Developer submenu, and choose "Allow JavaScript from Apple Events". In Safari, it's under the Safari menu, Settings, Developer, "Allow JavaScript from Apple Events".

Then you can say something like "Hey Claude, would you please use osascript to navigate the front Chrome tab to Hacker News". Once you suggest osascript in a session, the agent figures out pretty quickly what it can do with it. Of course you can ask it to do casual things like opening your Mail app, and from there work out what else works: "please click around my web app" or "check the JavaScript console for errors".

Another important tip for using modern agents is to practice using speech-to-text. I think speaking might be something like five times faster than typing. It takes a while to get used to, especially after a lifetime of programming by typing, but it's a very interesting and different experience, and once you have a lot of practice it starts to feel effortless.
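For instance, the kinds of one-liners an agent ends up running look like this. This is a macOS-only sketch: the Chrome commands assume the "Allow JavaScript from Apple Events" setting described above, and the particular URLs and apps are just examples.

```shell
# Navigate the frontmost Chrome tab (plain application control).
osascript -e 'tell application "Google Chrome" to set URL of active tab of front window to "https://news.ycombinator.com"'

# Read the front tab's title via JavaScript over Apple Events.
osascript -e 'tell application "Google Chrome" to execute active tab of front window javascript "document.title"'

# Ordinary app control works too, e.g. bringing Mail to the front.
osascript -e 'tell application "Mail" to activate'
```

Once an agent has seen one of these succeed, it generalizes quickly to whatever the target app's AppleScript dictionary exposes.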

04 Apr 2026 1:31pm GMT

Planet Python

Marcos Dione: Correcting OpenStreetMap wrong tag values

As a hobbyist consumer of OSM data to render maps, I find wrong tags annoying. Bad values mean that the resulting map is wrong or incomplete, and so less useful. I decided to attack the most egregious ones, which include typos, street names instead of types, and some other errors. The idea is to attack the long tail first, so I don't get blocked when the next batch of errors (objects with exactly the same error) looks too big (yes, OCD).

So I hacked a small python script to help me find and edit them:

#! /usr/bin/env python3

import os

import psycopg2


def main():
    # Assumes a local osm2pgsql rendering database called 'europe'.
    db = psycopg2.connect(dbname='europe')
    cursor = db.cursor()

    # Count objects per highway value, rarest values first: the long tail.
    cursor.execute('''
        SELECT
            count(*) AS count,
            highway
        FROM planet_osm_line
        WHERE
            highway IS NOT NULL
        GROUP BY highway
        ORDER BY count ASC
    ''')
    data = cursor.fetchall()

    for count, highway in data:
        print(f"next {count}: {highway}")

        # Fetch every object carrying this highway value...
        cursor.execute('''
            SELECT osm_id
            FROM planet_osm_line
            WHERE
                highway = %s
        ''', (highway, ))

        # ... and open each one in the online editor, one browser tab at a time.
        for (osm_id, ) in cursor.fetchall():
            if osm_id < 0:
                # in rendering DBs, a negative id means this is a relation
                os.system(f"librewolf -P default 'https://www.openstreetmap.org/edit?relation={-osm_id}'")
            else:
                os.system(f"librewolf -P default 'https://www.openstreetmap.org/edit?way={osm_id}'")


if __name__ == '__main__':
    main()

It is quite inefficient, but what I want is to edit the errors, not to write a script :) This requires a rendering database, which I already have locally :)

From here the workflow is:

On my machine, finding the long tail and finding each set of errors takes about a minute each, so I was launching two at a time. One thing to note: if the object you try to edit no longer exists, you get an edit view of the whole planet.

04 Apr 2026 10:16am GMT

Django community aggregator: Community blog posts

Anthropic is pushing away its paying customers

I need to vent.

I want to start by saying this is my opinion and doesn't reflect the views of my employers or anyone else.

I've been paying $100/month for Claude Max because Claude is, without question, the best model for programming. I've built my entire AI workflow around it. I've written blog posts about it. I've recommended it to colleagues, friends, and strangers on the internet. I've been a loyal, paying customer.

And Anthropic keeps making it harder to stay.

The third-party ban

On the night of April 3, 2026, Anthropic sent an email to subscribers announcing that third-party harnesses like OpenClaw can no longer use Claude Max subscription limits, starting April 4 at 12pm PT. That's less than 24 hours of notice.

Ouch.

Let that sink in. Less than 24 hours to rip out and replace the model powering my personal AI assistant, my Emacs tooling, and potentially other parts of my workflow.

My OpenClaw setup was running Opus 4.6 for personal tasks: managing my calendar, maintaining my open source projects, doing research, all through Telegram. It was perfect. Now if I want to keep using Claude with OpenClaw, I need to pay extra on top of my $100/month subscription through their new "extra usage" pay-as-you-go option.

This also killed CLIProxyAPI, which I wrote about two months ago. That tool let me use my Max subscription with Emacs packages like forge-llm and magit-gptcommit. I wrote an entire blog post about it, shared my config, helped people set it up. Dead now. Two months.

And it's not just OpenClaw and CLIProxyAPI. GSD 2, the next generation of the tool I use for all my heavy development work, is built on the Pi SDK, the same foundation OpenClaw uses. I'm over 90% sure it's also affected. That's the tool I've been watching closely and testing on weekends for my personal projects. If GSD 2 can't use my subscription, that's yet another thing Anthropic broke.

Their email said these tools "put an outsized strain on our systems" and that they need to "prioritize customers using core products". I'm paying $100/month. I am a customer. But apparently I'm not using the product the "right way."

The notice was insulting

We'd been hearing rumblings for a while. Rumors that Anthropic didn't like users accessing Claude through third-party tools. Reports on Reddit of people getting banned for using OpenClaw too aggressively. But nothing official.

Then, with less than 24 hours of notice, they made it policy.

Yes, they offered a one-time credit equal to your monthly subscription price. Yes, they're offering discounts on pre-purchased usage bundles. Yes, they're offering refunds. But none of that changes the fact that they gave paying customers less than a day to restructure their workflows.

A consumer-forward company would have given weeks of notice, not hours. A consumer-forward company would have opened a dialogue with the community before dropping the hammer. Instead, we got an email at night and a deadline the next morning.

The usage limits are a mess

This isn't even the first time Anthropic has frustrated me recently. The usage limits on Claude Code have been a disaster since late March.

Sessions that used to last hours started burning through in under 90 minutes. I'd start in the morning and hit the limit in about 45 minutes doing the same kind of work that used to last all morning. This week, I hit 50% of my weekly usage by Tuesday. My usage resets on Friday. That's terrifying when you depend on the tool for your daily work.

Anthropic acknowledged the issue. An engineer confirmed on X that limits drain faster during peak hours to "manage growing demand." A GitHub issue has been accumulating reports. Reddit threads are flooded with complaints. Someone reverse-engineered the Claude Code binary and found bugs that break prompt caching, silently inflating costs by 10-20x.

And through all of this, Anthropic has been mostly silent. I see tweets from employees saying they're working on it, but I don't see results. Meanwhile, their leadership seems more focused on shipping new features than making sure what they already have actually works. They keep shipping and shipping and not fixing what's broken.

For comparison, I've been using OpenAI's models through OpenCode as my fallback, and I have yet to hit a 5-hour usage limit. Not once. The experience is night and day.

What I did about it

I moved everything to Lazer's LiteLLM proxy (a perk we have as employees at Lazer Technologies). OpenClaw now runs GLM-5, which is a legitimately great model: open source, MIT licensed, and competitive with frontier models on agentic tasks. My Emacs tools (forge-llm, magit-gptcommit) also moved to the Lazer proxy with GLM-5 and Qwen3 Coder 480B Turbo respectively. If you don't have access to a company proxy, OpenRouter is a solid alternative, or you can use your own API keys directly.

The migration wasn't hard. It took a couple of hours. But that's not the point. The point is that I shouldn't have had to do it. I was paying for a service and they changed what I was paying for.

Where I stand

I'm very close to canceling my subscription and moving back to ChatGPT. I've been using OpenAI's models for programming through OpenCode, and they're getting really good. A little too verbose, and not quite at Opus level, but more than good enough for my workflow. And crucially, OpenAI isn't pulling the rug out from under me every other week.

Claude is still the best model for coding. I'm not going to pretend otherwise. But the best model doesn't matter if you can't use it reliably, if the limits drain in 45 minutes, and if the company keeps changing the terms on paying customers without adequate notice.

Here's where I am right now:

The decisions coming out of Anthropic lately feel like corporate decisions that shaft users, not decisions made by a company that cares about its customers. And that's frustrating, because the engineering team clearly builds incredible stuff. It's the business side that's letting them down.

I updated my AI Toolbox page with all the changes. If you want to see my current setup (post-Anthropic-rug-pull), that's the place to look.

See you in the next one. Hopefully less angry.

04 Apr 2026 5:00am GMT

Planet Python

Armin Ronacher: Absurd In Production

About five months ago I wrote about Absurd, a durable execution system we built for our own use at Earendil, sitting entirely on top of Postgres and Postgres alone. The pitch was simple: you don't need a separate service, a compiler plugin, or an entire runtime to get durable workflows. You need a SQL file and a thin SDK.

Since then we've been running it in production, and I figured it's worth sharing what the experience has been like. The short version: the design held up, the system has been a pleasure to work with, and other people seem to agree.

A Quick Refresher

Absurd is a durable execution system that lives entirely inside Postgres. The core is a single SQL file (absurd.sql) that defines stored procedures for task management, checkpoint storage, event handling, and claim-based scheduling. On top of that sit thin SDKs (currently TypeScript, Python and an experimental Go one) that make the system ergonomic in your language of choice.

The model is straightforward: you register tasks, decompose them into steps, and each step acts as a checkpoint. If anything fails, the task retries from the last completed step. Tasks can sleep, wait for external events, and suspend for days or weeks. All state lives in Postgres.

If you want the full introduction, the original blog post covers the fundamentals. What follows here is what we've learned since.

What Changed

The project got multiple releases over the last five months. Most of the changes are things you'd expect from a system that people actually started depending on: hardened claim handling, watchdogs that terminate broken workers, deadlock prevention, proper lease management, event race conditions, and all the edge cases that only show up when you're running real workloads.

A few things worth calling out specifically.

Decomposed steps. The original design only had ctx.step(), where you pass in a function and get back its checkpointed result. That works well for many cases but not all. Sometimes you need to know whether a step already ran before deciding what to do next. So we added beginStep() / completeStep(), which give you a handle you can inspect before committing the result. This turned out to be very useful for modeling intentional failures and conditional logic. This in particular is necessary when working with "before call" and "after call" type hook APIs.
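As a rough illustration of the difference between the two styles (this is a toy, not the real Absurd SDK; the method names follow the post, but the exact signatures and the Postgres-backed storage are elided):

```python
# Toy sketch of checkpointed steps. A real context persists checkpoints
# in Postgres; here an in-memory dict stands in.

class StepHandle:
    def __init__(self, ctx, name):
        self.ctx, self.name = ctx, name
        self.already_ran = name in ctx.checkpoints  # inspectable before committing

    @property
    def result(self):
        return self.ctx.checkpoints[self.name]

    def complete(self, result):
        self.ctx.checkpoints[self.name] = result


class Ctx:
    def __init__(self, checkpoints=None):
        self.checkpoints = checkpoints if checkpoints is not None else {}

    def step(self, name, fn):
        # Classic style: run once, return the stored result on every replay.
        if name not in self.checkpoints:
            self.checkpoints[name] = fn()
        return self.checkpoints[name]

    def begin_step(self, name):
        # Decomposed style: get a handle you can inspect first.
        return StepHandle(self, name)


ctx = Ctx()
charge = ctx.begin_step("charge-card")
if charge.already_ran:
    amount = charge.result   # on replay: don't charge the card again
else:
    amount = 42              # ...call the payment API here...
    charge.complete(amount)
```

The point of the handle is exactly the branch above: you can observe whether a step already ran and choose different behavior, which a single run-this-function call can't express.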

Task results. You can now spawn a task, go do other things, and later come back to fetch or await its result. This sounds obvious in hindsight, but the original system was purely fire-and-forget. Having proper result inspection made it possible to use Absurd for things like spawning child tasks from within a parent workflow and waiting for them to finish. This is particularly useful for debugging with agents too.

absurdctl. We built this out as a proper CLI tool. You can initialize schemas, run migrations, create queues, spawn tasks, emit events, retry failures from the command line. It's installable via uvx or as a standalone binary. This has been invaluable for debugging production issues. When something is stuck, being able to just absurdctl dump-task --task-id=<id> and see exactly where it stopped is a very different experience from digging through logs.

Habitat. A small Go application that serves up a web dashboard for monitoring tasks, runs, checkpoints, and events. It connects directly to Postgres and gives you a live view of what's happening. It's simple, but it's the kind of thing that makes the system more enjoyable for humans.

Agent integration. Since Absurd was originally built for agent workloads, we added a bundled skill that coding agents can discover and use to debug workflow state via absurdctl. There's also a documented pattern for making pi agent turns durable by logging each message as a checkpoint.

What Held Up

The thing I'm most pleased about is that the core design didn't need to change all that much. The fundamental model of tasks, steps, checkpoints, events, and suspending is still exactly what it was initially. We added features around it, but nothing forced us to rethink the basic abstractions.

Putting the complexity in SQL and keeping the SDKs thin turned out to be a genuinely good call. The TypeScript SDK is about 1,400 lines. The Python SDK is about 1,900, but most of that comes from the complexity of supporting colored functions. Compare that to Temporal's Python SDK at around 170,000 lines. It means the SDKs are easy to understand, easy to debug, and easy to port. When something goes wrong, you can read the entire SDK in an afternoon and understand what it does.

The checkpoint-based replay model also aged well. Unlike systems that require deterministic replay of your entire workflow function, Absurd just loads the cached step results and skips over completed work. That means your code doesn't need to be deterministic outside of steps. You can call Math.random() or datetime.now() in between steps and things still work, because only the step boundaries matter. In practice, this makes it much easier to reason about what's safe and what isn't.
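A minimal sketch of why this works (illustrative only, not the SDK): only the step boundaries consult the checkpoint store, so anything non-deterministic between them is harmless on replay.

```python
import random

# Checkpoint-replay sketch: completed steps are skipped by loading their
# stored results; code *between* steps re-executes freely on every run.
def run_workflow(checkpoints):
    def step(name, fn):
        if name not in checkpoints:        # first run: execute and store
            checkpoints[name] = fn()
        return checkpoints[name]           # replay: load, skip the work

    a = step("fetch", lambda: [1, 2, 3])
    noise = random.random()                # non-deterministic, but outside a
                                           # step boundary, so replay doesn't care
    b = step("total", lambda: sum(a))
    return b

store = {}
first = run_workflow(store)    # pretend the process crashed right after this
second = run_workflow(store)   # "restart": replays entirely from checkpoints
assert first == second == 6
```

Contrast this with deterministic-replay systems, where the `random.random()` call between steps would already be illegal.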

Pull-based scheduling was the right choice too. Workers pull tasks from Postgres as they have capacity. There's no coordinator, no push mechanism, no HTTP callbacks. That makes it trivially self-hostable and means you don't have to think about load management at the infrastructure level.

What Might Not Be Optimal

I had some discussions with folks about whether the right abstraction would have been a durable promise. It's a very appealing idea, and in theory a more powerful one, but it turns out to be much more complex to implement in practice. I made some attempts to see what Absurd would look like if it were based on durable promises, but so far I haven't gotten anywhere with it. Still, it's an experiment I think would be fun to try!

What We Use It For

The primary use case is still agent workflows. An agent is essentially a loop that calls an LLM, processes tool results, and repeats until it decides it's done. Each iteration becomes a step, and each step's result is checkpointed. If the process dies on iteration 7, it restarts and replays iterations 1 through 6 from the store, then continues from 7.

But we've found it useful for a lot of other things too. All our crons just dispatch distributed workflows with a pre-generated deduplication key from the invocation. We can have two cron processes running and they will only trigger one absurd task invocation. We also use it for background processing that needs to survive deploys. Basically anything where you'd otherwise build your own retry-and-resume logic on top of a queue.
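The cron trick can be sketched like this (illustrative; in the real setup a unique key on the tasks table in Postgres does the deduplication rather than an in-memory set, and the task/key names here are made up):

```python
from datetime import datetime, timezone

# Two cron processes derive the same deduplication key from the invocation
# window, so only the first spawn actually enqueues a task.
spawned = set()  # stands in for a UNIQUE constraint in Postgres

def spawn_once(task_name, when):
    key = f"{task_name}:{when:%Y-%m-%d-%H}"   # hourly invocation window
    if key in spawned:
        return False   # duplicate: the other cron process got here first
    spawned.add(key)
    return True        # actually enqueue the task here

now = datetime(2026, 4, 4, 2, 0, tzinfo=timezone.utc)
assert spawn_once("nightly-report", now) is True    # first cron process wins
assert spawn_once("nightly-report", now) is False   # second one is deduped
```

Because the key is derived from the schedule rather than from wall-clock arrival time, it doesn't matter if the two processes fire seconds apart.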

What's Still Missing

Absurd is deliberately minimal, but there are things I'd like to see.

There's no built-in scheduler. If you want cron-like behavior, you run your own scheduler loop and use idempotency keys to deduplicate. That works, and we have a documented pattern for it, but it would be nice to have something more integrated.

There's no push model. Everything is pull. If you need an HTTP endpoint to receive webhooks and wake up tasks, you build that yourself. I think that's the right default, as push systems are harder to operate and easier to overwhelm, but there are cases where it would be convenient. In particular, there are quite a few agentic systems where it would be super nice to have webhooks natively integrated (wake on incoming POST request). I definitely don't want this in the core, but it sounds like the kind of problem that could make a nice adjacent library built on top of Absurd.

The biggest omission is that it does not support partitioning yet. That's unfortunate because it makes cleaning up data more expensive than it has to be. In theory supporting partitions would be pretty simple. You could have weekly partitions and then detach and delete them when they expire. The only thing that really stands in the way of that is that Postgres does not have a convenient way of actually doing that.

The hard part is not partitioning itself, it's partition lifecycle management under real workloads. If a worker inserts a row whose expires_at lands in a month without a partition, the insert fails and the workflow crashes. So you need a separate maintenance loop that always creates future partitions far enough ahead for sleeps/retries, and does that for every queue.
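Such a maintenance loop would generate DDL along these lines. This is a hypothetical sketch: it assumes a `tasks` table range-partitioned on an `expires_at` column by week, and all table and column names are made up for illustration.

```python
from datetime import date, timedelta

# Pre-create weekly partitions far enough ahead that long sleeps/retries
# never land in a missing partition (which would fail the insert).
def weekly_partition_ddl(start: date, weeks_ahead: int) -> list[str]:
    monday = start - timedelta(days=start.weekday())  # align to week start
    ddl = []
    for i in range(weeks_ahead):
        lo = monday + timedelta(weeks=i)
        hi = lo + timedelta(weeks=1)
        ddl.append(
            f"CREATE TABLE IF NOT EXISTS tasks_{lo:%Y_%m_%d} "
            f"PARTITION OF tasks "
            f"FOR VALUES FROM ('{lo}') TO ('{hi}');"
        )
    return ddl

for stmt in weekly_partition_ddl(date(2026, 4, 4), weeks_ahead=2):
    print(stmt)
```

`IF NOT EXISTS` makes the loop idempotent, so it can run on every queue on a schedule without coordination; the hard part the text describes (detaching and dropping expired partitions safely) is not shown here.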

On the delete side, the safe approach is DETACH PARTITION CONCURRENTLY, but getting that to run from pg_cron doesn't work because it cannot be run within a transaction, but pg_cron runs everything in one.

I don't think it's an unsolvable problem, but it's one I have not found a good solution for and I would love to get input on.

Does Open Source Still Matter?

This brings me to a meta point about the whole thing: what is the point of Open Source libraries in the age of agentic engineering? Durable execution is now something that plenty of startups will sell you. On the other hand, it's also something an agent would build for you, and people might not even look for existing solutions any more. It's kind of … weird?

I don't think a durable execution library can support a company, I really don't. On the other hand, I think it's just complex enough a problem that it could be a good Open Source project free of commercial interests. You do need a bit of an ecosystem around it, particularly for UI and good DX for debugging, and that's hard to get from a throwaway implementation.

I don't think we have squared this yet, but it's already much better to use than a few months ago.

If you're using Absurd, thinking about it, or building adjacent ideas, I'd love your feedback. Bug reports, rough edges, design critiques, and contributions are all very welcome; this project has gotten better every time someone poked at it from a different angle.

04 Apr 2026 12:00am GMT

03 Apr 2026

Django community aggregator: Community blog posts

Django News - Supply Chain Wake-Up Call - Apr 3rd 2026

News

Incident Report: LiteLLM/Telnyx supply-chain attacks, with guidance

A recent supply chain attack on popular PyPI packages exposed how quickly malware can spread through unpinned dependencies, and why practices like dependency locking and cooldowns are now essential for Python developers.

pypi.org

The PyCon US 2026 schedule is live 🌴🐍 plus security updates, community programs & more

PyCon US 2026 heads to Long Beach with its schedule now live, alongside major Python ecosystem updates spanning security improvements, new community programs, and ongoing PSF initiatives.

mailchi.mp

Django Software Foundation

DSF Board Meeting Minutes, March 12, 2026

DSF approved trademark renewal plans, advanced a long-awaited Code of Conduct update, and continued shaping community governance and outreach efforts.

django.github.io

Wagtail CMS News

How to Generate SEO Descriptions for Your Entire Wagtail Site at Once ⚡

Use Wagtail AI's built-in LLM pipeline to bulk-generate SEO meta descriptions across your entire site in minutes with a simple Django management command.

timonweb.com

How to Show a Waitlist Until Your Wagtail Site Is Ready

A clever Django and Wagtail pattern for launching with a waitlist while selectively granting preview access using secure cookies and a simple passphrase gate.

djangotricks.com

Build Dynamic Campaign Landing Pages in Wagtail

Use a single Wagtail page with dynamic routing, built-in A/B testing, and campaign slug tracking to replace dozens of duplicate landing pages with one flexible, data-driven solution.

wagtail.org

Updates to Django

Today, "Updates to Django" is presented by Hwayoung from Djangonaut Space! 🚀

Last week we had 11 pull requests merged into Django by 9 different contributors - including 4 first-time contributors! Congratulations to Georgios Verigakis, David Ansa, Vinay Datta and Sebastian Skonieczny for having their first commits merged into Django - welcome on board!

Documentation was added to clarify how database routers handle related-object access. It explains that Django uses instance._state.db by default for related lookups and provides guidance on using the instance hint in db_for_read() to maintain routing consistency in multi-database configurations. (#29762)

Django Newsletter

Sponsored Link 1

The deployment service for developers and teams.

appliku.com

Articles

The Story of Python's Lazy Imports: Why It Took Three Years and Two Attempts

From PEP 690's rejection to PEP 810's unanimous acceptance - how Python finally got explicit lazy imports after three years of real-world production evidence and a fundamental design inversion

techlife.blog

Tombi, pre-commit, prek and uv.lock

A subtle tooling mismatch reveals how a recent update made uv.lock suddenly count as TOML, causing pre-commit to reformat it unexpectedly across environments.

vanrees.org

Claude Pitfalls: Database Indexes

A smart migration tweak reveals how AI code reviews can both catch real production risks and miss critical context, proving that combining multiple agents leads to better Django performance decisions.

lincolnloop.com

Loopwerk: Building modern Django apps with Alpine AJAX, revisited

After ditching template partials and full-page AJAX hacks, this deep dive shows how splitting Django views and using template includes leads to simpler code, better performance, and a more maintainable Alpine-powered stack.

loopwerk.io

Djangonaut diaries, week 4: Eliminating a Redundant Index in Django's ORM

A deep dive into a subtle Django ORM inefficiency shows how removing a redundant many-to-many index improves database performance and highlights the real-world journey from bug report to merged PR.

dev.to

SHA Pinning Is Not Enough

SHA pinning isn't a silver bullet; this deep dive shows how attackers can still slip malicious code into GitHub Actions by pointing to trusted-looking but rogue commits.

rosesecurity.dev

A primer on Django project structure ¤ 101% objective - always!


overtag.dk

When AI Writes the World's Software, Who Verifies It?

AI is rapidly rewriting the world's software, but without scalable verification like formal proofs, we risk shipping faster code that no one truly understands or can trust.

leodemoura.github.io

So OpenAI is acquiring Astral

OpenAI's acquisition of Astral raises real concerns about the future of uv, but for now, it's still one of the fastest and most practical Python tooling choices worth sticking with.

mostlypython.com

Events

DjangoCon Europe is soon!

April 15-19 in Athens, Greece. Get a ticket if you're able to attend. Keynote speakers, workshops, and all talks available online.

djangocon.eu

PyCon US May 13-19 in Long Beach, CA

Tickets are available for this annual event now in beautiful Long Beach, California.

pycon.org

DjangoCon US Early Bird Tickets Now Available

Don't hesitate! If you can, join for five days of talks, workshops, and sprints once again in Chicago this August 24-28.

djangocon.us

Videos

Boost Your GitHub DX

A lively chat with Adam Johnson on leveling up your GitHub workflow, from practical DX tips to cutting-edge Python tooling like ICU bindings.

djangotv.com

Django Job Board

Two fresh Python roles this week: one focused on open data impact, the other on client-facing architecture with a leading developer tools company.

Python Developer at Open Data Services 🆕

Solutions Architect - Python (Client-facing) at JetBrains

Django Newsletter

Django Forum

Django sprint at Pycon DE? - Events

A call is out for someone to lead a Django sprint at PyCon DE 2026, with contributors already eager to join and help onboard newcomers.

djangoproject.com

Projects

freelawproject/django-s3-express-cache

A high-speed, low latency cache that uses S3 Express to store many objects cheaply and efficiently

github.com

kjnez/django-rclone

Django database and media backup management commands, powered by rclone.

github.com


This RSS feed is published on https://django-news.com/. You can also subscribe via email.

03 Apr 2026 3:00pm GMT

02 Apr 2026

Django community aggregator: Community blog posts

OpenCode as a server: AI agents that work while I sleep

My main machine is a beast. Ryzen 9 9950X3D, 64 GB of RAM, RX 9060 XT, three monitors, the works. It barely ever shuts off. So at some point I started thinking: why isn't this thing working for me when I'm not sitting in front of it?

The answer is now: it does. I'm running OpenCode as a persistent server on this machine, accessible from anywhere through my WireGuard VPN. I can spin up coding sessions from my MacBook Air, my phone, wherever. And the best part? I have scheduled jobs that run overnight: adding tests, updating documentation, enforcing code conventions. I wake up to PRs waiting for my review.

Here's the full setup.

The architecture

┌────────────────────────────────────────────────────────────┐
│                       roger-beast                          │
│                 (Ryzen 9 9950X3D / 64GB)                   │
│                                                            │
│  ┌──────────────────┐       ┌──────────────────────┐       │
│  │  opencode serve  │◀──────│ systemd user service │       │
│  │  :4096 (web UI)  │       │ (auto-start/restart) │       │
│  └────────┬─────────┘       └──────────────────────┘       │
│           │                                                │
│           │       ┌──────────────────────┐                 │
│           │       │  opencode-scheduler  │                 │
│           │       │   (systemd timers)   │                 │
│           │       │ ┌──────────────────┐ │                 │
│           │       │ │ 2am: add tests   │ │                 │
│           │       │ │ 3am: update docs │ │                 │
│           │       │ │ 4am: conventions │ │                 │
│           │       │ └──────────────────┘ │                 │
│           │       └──────────────────────┘                 │
│           │                                                │
└───────────┼────────────────────────────────────────────────┘
            │
   ┌────────┴───────────────┐
   │  Nginx Proxy Manager   │
   │ (opencode.example.com) │
   └────────┬───────────────┘
            │
   ┌────────┴────────┐
   │  WireGuard VPN  │
   │ / Local Network │
   └────────┬────────┘
            │
     ┌──────┴──────┐
     │             │
 ┌───┴───┐   ┌─────┴────┐
 │  💻   │   │    📱    │
 │  MBA  │   │  Phone   │
 └───────┘   └──────────┘

The idea is simple: OpenCode runs as a systemd user service, Nginx Proxy Manager gives it a nice domain, and WireGuard makes sure only my devices can reach it. From any browser on any device, I just go to opencode.example.com and I'm in.

Phase 1: OpenCode server with systemd

OpenCode has a serve command that starts a web UI you can access from a browser. The trick is making it persistent so it survives reboots and restarts itself if it crashes.

First, create a systemd user service. This means it runs as your user, not as root, which is important because it needs access to your home directory, your API keys, your OpenCode config, everything.

Create the file at ~/.config/systemd/user/opencode.service:

[Unit]
Description=OpenCode headless server
After=network.target

[Service]
ExecStart=/home/roger/.opencode/bin/opencode serve --hostname 0.0.0.0 --port 4096
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target

A few things to note: --hostname 0.0.0.0 binds the server on all interfaces so the reverse proxy can reach it over the network; Restart=on-failure with RestartSec=5 brings it back up automatically if it crashes; and WantedBy=default.target ties it to your user session's startup.

Enable and start it:

systemctl --user daemon-reload
systemctl --user enable opencode.service
systemctl --user start opencode.service

Verify it's running:

systemctl --user status opencode.service

You should see it active and running. If you want the service to keep running even when you're not logged in (which you probably do, since the whole point is that it runs when you're away), you need to enable lingering:

sudo loginctl enable-linger roger

Replace roger with your username. This tells systemd to keep your user services running even after you log out. Without this, systemd kills your user services when your last session closes, which defeats the entire purpose.

At this point, you should be able to open http://localhost:4096 on the machine and see the OpenCode web UI.

Sweet, sweet OpenCode
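If you want a scriptable version of that check, say from another box on the network, a tiny Python helper works too. The is_up name and the example URL are my own illustration, not part of OpenCode:

```python
# Hypothetical reachability check for the OpenCode web UI.
from urllib.error import URLError
from urllib.request import urlopen

def is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers at `url` with a non-5xx status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError, ValueError):
        return False

# Once the service is running, is_up("http://localhost:4096") should hold.
```

Handy in a cron job or a shell prompt segment if you want to know at a glance that the server is alive.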

Phase 2: Nginx Proxy Manager + WireGuard

I use Nginx Proxy Manager as my reverse proxy. It's a Docker-based GUI for managing Nginx configs, SSL certificates, and proxy hosts. If you prefer raw Nginx configs, you can absolutely do that instead; the concept is the same: point a domain at the OpenCode port.

In Nginx Proxy Manager, I created a new proxy host that forwards opencode.example.com to the OpenCode port (4096) on this machine.

For the access part, I don't need to worry too much about authentication because the domain is only accessible from two places:

  1. My local network: If I'm at home, my devices are already on the same network as the machine.
  2. My WireGuard VPN: If I'm remote, I connect to my WireGuard VPN first, which puts me on the same network. My WireGuard setup is the same one I described in my Claude Code from the beach post.

The DNS for opencode.example.com points to the internal IP of the machine running Nginx Proxy Manager. This means the domain simply doesn't resolve from the public internet. You'd have to be on my network (or VPN) for it to go anywhere.
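You can sanity-check that split-horizon behaviour from code, too. This is a hypothetical helper (the function name and logic are mine, not from the post): it returns False when the name doesn't resolve at all, which is what the public internet sees, and True only when every resolved address is private.

```python
import ipaddress
import socket

def resolves_privately(hostname: str) -> bool:
    """True if hostname resolves, and only to private/loopback addresses."""
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.gaierror:
        return False  # doesn't resolve at all -- the public-internet view
    return all(ipaddress.ip_address(addr).is_private for addr in addrs)

# On my LAN/VPN: resolves_privately("opencode.example.com") would be True.
# From outside, the name shouldn't resolve at all.
```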

Phase 3: Accessing from anywhere

This is the satisfying part. Once the server is running and the proxy is configured, the workflow from any device is:

  1. Connect to WireGuard (if I'm not already home)
  2. Open a browser
  3. Go to opencode.example.com
  4. Done. Full OpenCode web UI, all my agents, all my MCP servers, everything.

From my MacBook Air at a coffee shop, from my phone on the couch, doesn't matter. The web UI is the same everywhere. I can start a task on my MacBook, close the laptop, pick it up on my phone later, and everything is still there because the server is running on the beast at home.

This pairs really nicely with my Claude Code from the beach setup, but it's way friendlier. That setup uses mosh + tmux + SSH bridges through Termux to get a terminal on a remote machine. It works great for Claude Code (which is a TUI), but it's a lot of moving parts: you need Termux, SSH keys on your phone, a jump box, mosh installed everywhere. If something breaks in the chain, you're debugging SSH configs from a phone keyboard. I wrote a whole blog post about that setup and I'm proud of it, but let's be real: the fact that I needed an entire blog post to explain how to use Claude Code from my phone is kind of the problem.

With OpenCode, I just open a browser. That's it. Any browser, on any device. No Termux, no SSH keys, no jump box, no terminal emulator. My phone's regular browser works perfectly. My MacBook's browser works perfectly. If I ever get a tablet, that'll work too. The barrier to entry went from "install Termux, configure SSH, set up mosh, create fish aliases" to "open Firefox."

Hey Anthropic, if you're reading this: please give Claude Code a web UI. I love your tool, I pay $100/month for it, but the fact that OpenCode can do this out of the box and Claude Code can't is… not great. I shouldn't need a 600-word phase-by-phase guide to use my coding agent from my phone. Just saying. πŸ™ƒ

I still use the Claude Code + mosh + tmux setup for Claude Code specifically (since it's terminal-only), but for OpenCode work, the web UI is a massive quality-of-life upgrade for mobile coding.

Phase 4: The overnight crew

This is my favorite part. The server runs 24/7, so why not put it to work while I sleep?

I use the opencode-scheduler plugin, which lets you schedule recurring jobs using your OS's native scheduler (systemd timers on Linux, launchd on Mac). It's an OpenCode plugin, so you set it up directly from the OpenCode UI.

First, add the plugin to your opencode.json:

{
  "plugin": ["opencode-scheduler"]
}

Then, from the OpenCode UI, you just tell it what you want in natural language:

Schedule a job that runs every weekday at 2am and runs the test-gap-pr-cronjob skill

The plugin takes care of creating the systemd timer and service under ~/.config/systemd/user/. You can verify it's installed with:

systemctl --user list-timers | grep opencode
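I haven't shown the units the plugin actually generates, but for reference, a weekday 2 AM systemd timer looks roughly like this (the unit name here is illustrative; the plugin picks its own):

```ini
# ~/.config/systemd/user/opencode-test-gap.timer (name illustrative)
[Unit]
Description=Nightly OpenCode test-gap job

[Timer]
OnCalendar=Mon..Fri 02:00
Persistent=true

[Install]
WantedBy=timers.target
```

Persistent=true tells systemd to fire a missed run at the next opportunity, which is handy if the machine happened to be off at 2 AM.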

What my overnight jobs do

I have three scheduled jobs that run between 1 AM and 6 AM while I'm sleeping: one adds tests, one updates documentation, and one enforces code conventions. Each job uses a custom OpenCode skill (similar to the planning/execution/review agents I described on my AI Toolbox page) that knows the project's conventions, testing patterns, and documentation style. The skills are the same kind of custom agents I build for my regular OpenCode workflow, just triggered on a schedule instead of manually.

The morning routine

When I log in in the morning, I usually have 1-3 PRs waiting for me. Most of them are good to go with minor tweaks. Some need more work. Either way, the tedious stuff (writing tests for edge cases, updating docstrings, fixing inconsistent naming) is already done, and I just need to review it.

It's like having a junior developer who works the night shift. They're not perfect, but they're reliable, they don't complain, and they're surprisingly good at the boring stuff.

You can check the logs for any job at any time:

# From the OpenCode UI
Show logs for test-gap-pr-cronjob

# Or directly on disk
cat ~/.config/opencode/logs/test-gap-pr-cronjob.log

The specs

For anyone curious about the machine running all of this:

OS: Manjaro Linux 26.0.4
Host: B850M Pro-A WiFi
Kernel: 6.12.77-1-MANJARO
CPU: AMD Ryzen 9 9950X3D (32) @ 5.752GHz
GPU: AMD ATI Radeon RX 9060 XT GAMING OC 16G
Memory: 64 GB DDR5
Network: WiFi 6
Uptime: usually measured in days, not hours

The machine is wildly overpowered for this. OpenCode's server uses barely any resources when idle, and even during active sessions or scheduled jobs, it doesn't break a sweat. If you have a less powerful machine that stays on, this setup will work fine for you too.

Conclusion

The whole setup took maybe 30 minutes. A systemd service, a proxy host, and a scheduler plugin. That's it.

What I love about this is that it extends my AI Toolbox in a way I didn't expect. I went from "I use OpenCode when I'm at my desk" to "OpenCode is always running and I can use it from anywhere, and it also does work for me while I sleep." The scheduled jobs alone have saved me hours of tedious work every week.

If you have a machine that stays on (even a modest home server or an old laptop), you can do this. You don't need a Ryzen 9 or 64 GB of RAM. You need a machine that doesn't turn off, a way to reach it remotely, and the willingness to let AI handle the boring stuff while you're asleep.

All my configs are public in my dotfiles: git.rogs.me/rogs/dotfiles

If you have questions, hit me up. And if you set this up and wake up to PRs you didn't write, let me know. That first morning is a great feeling.

See you in the next one!

02 Apr 2026 5:00am GMT

16 Mar 2026

feedPlanet Twisted

Donovan Preston: "Start Drag" and "Drop" to select text with macOS Voice Control

I have been using macOS Voice Control for about three years. It started as a way to reduce pain from excessive computer use, and it has been a real struggle: decades of typing and mouse habits are hard to overcome! Text selection and manipulation commands work quite well in macOS-native apps, such as apps written in Swift, or Safari on an accessibility-tagged webpage. However, many webpages and Electron apps (Visual Studio Code) have serious problems manipulating the selection: "select foo" (where foo is a word in the text box) may not work at all, and moving the cursor or extending the selection can be off by one.

I only recently expanded my repertoire with the "start drag" and "drop" commands, having previously used "Click and hold mouse", "move cursor to x", and "release mouse". Well, now I have discovered that using "start drag x" and "drop x" makes a fantastic text selection method! This is really going to improve my speed.

In the long run, I believe computer voice control in general is going to end up being faster than WIMP, but for now the awkwardly rigid command phrasing and the number of times it misses or misunderstands commands still really hold it back. I've been learning the macOS Voice Control specific command set for years now and I still reach for the keyboard and mouse way too often.

16 Mar 2026 11:04am GMT

04 Mar 2026

feedPlanet Twisted

Glyph Lefkowitz: What Is Code Review For?

Humans Are Bad At Perceiving

Humans are not particularly good at catching bugs. For one thing, we get tired easily. There is some science on this, indicating that humans can't even maintain enough concentration to review more than about 400 lines of code at a time.

We have existing terms of art, in various fields, for the ways in which the human perceptual system fails to register stimuli. Perception fails when humans are distracted, tired, overloaded, or merely improperly engaged.

Each of these has implications for the fundamental limitations of code review as an engineering practice.

Never Send A Human To Do A Machine's Job

When you need to catch a category of error in your code reliably, you will need a deterministic tool to evaluate it - and, thanks to our old friend "alert fatigue" above, ideally a tool that can also remedy that type of error. These tools relieve humans of making the same repetitive checks over and over. None of them are perfect, but none of them get tired either.
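To make that concrete, here is a toy example of such a deterministic check, written in Python: it flags bare except: clauses. This is purely illustrative; in practice a linter like flake8 or ruff already catches this class of error (E722), and that is exactly the kind of tool to deploy instead of asking reviewers to stay vigilant for it.

```python
# Toy deterministic check: find bare "except:" clauses, a class of error
# humans routinely skim past in review but a machine never misses.
import ast

def bare_excepts(source: str) -> list[int]:
    """Return the line numbers of bare `except:` clauses in `source`."""
    return [
        node.lineno
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
```

Run as a CI step that fails the build, this check never gets distracted, tired, or overloaded.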

Don't blame reviewers for missing these things.

Code review should not be how you catch bugs.

What Is Code Review For, Then?

Code review is for three things.

First, code review is for catching process failures. If a reviewer has noticed a few bugs of the same type in code review, that's a sign that that type of bug is probably getting through review more often than it's getting caught. Which means it's time to figure out a way to deploy a tool or a test into CI that will reliably prevent that class of error, without requiring reviewers to be vigilant to it any more.

Second - and this is actually its more important purpose - code review is a tool for acculturation. Even if you already have good tools, good processes, and good documentation, new members of the team won't necessarily know about those things. Code review is an opportunity for older members of the team to introduce newer ones to existing tools, patterns, or areas of responsibility. If you're building an observer pattern, you might not realize that the codebase you're working in already has an existing idiom for doing that, so you wouldn't even think to search for it, but someone else who has worked more with the code might know about it and help you avoid repetition.

You will notice that I carefully avoided saying "junior" or "senior" in that paragraph. Sometimes the newer team member is actually more senior. But also, the acculturation goes both ways. This is the third thing that code review is for: disrupting your team's culture and avoiding stagnation. If you have new talent, a fresh perspective can also be an extremely valuable tool for building a healthy culture. If you're new to a team and trying to build something with an observer pattern, and this codebase has no tools for that, but your last job did, and it used one from an open source library, that is a good thing to point out in a review as well. It's an opportunity to spot areas for improvement to culture, as much as it is to spot areas for improvement to process.

Thus, code review should be as hierarchically flat as possible. If the goal of code review were to spot bugs, it would make sense to reserve the ability to review code to only the most senior, detail-oriented, rigorous engineers in the organization. But most teams already know that that's a recipe for brittleness, stagnation and bottlenecks. Thus, even though we know that not everyone on the team will be equally good at spotting bugs, it is very common in most teams to allow anyone past some fairly low minimum seniority bar to do reviews, often as low as "everyone on the team who has finished onboarding".

Oops, Surprise, This Post Is Actually About LLMs Again

Sigh. I'm as disappointed as you are, but there are no two ways about it: LLM code generators are everywhere now, and we need to talk about how to deal with them. An important corollary of the understanding that code review is a social activity is that LLMs are not social actors, and thus you cannot rely on code review to inspect their output.

My own personal preference would be to eschew their use entirely, but in the spirit of harm reduction, if you're going to use LLMs to generate code, you need to remember the ways in which LLMs are not like human beings.

When you relate to a human colleague, you will expect that:

  1. you can decide what to focus on based on their level of experience and areas of expertise: from a late-career colleague you might be looking for bad habits held over from legacy programming languages; from an earlier-career colleague you might focus more on logical test-coverage gaps,
  2. and, they will learn from repeated interactions so that you can gradually focus less on a specific type of problem once you have seen that they've learned how to address it,

With an LLM, by contrast, while errors can certainly be biased a bit by the prompt from the engineer and pre-prompts that might exist in the repository, the types of errors that the LLM will make are somewhat more uniformly distributed across the experience range.

You will still find supposedly extremely sophisticated LLMs making extremely common mistakes, specifically because they are common, and thus appear frequently in the training data.

The LLM also can't really learn. An intuitive response to this problem is to simply continue adding more and more instructions to its pre-prompt, treating that text file as its "memory", but that just doesn't work, and probably never will. The problem, "context rot", is somewhat fundamental to the nature of the technology.

Thus, code-generators must be treated more adversarially than you would treat a human code review partner. When you notice one making errors, you always have to add tests to a mechanical, deterministic harness that evaluates the code, because the LLM cannot meaningfully learn from its mistakes outside a very small context window the way a human would, so giving it feedback is unhelpful. Asking it to just generate the code again still requires you to review it all again, and as we have previously learned, you, a human, cannot review more than 400 lines at once.

To Sum Up

Code review is a social process, and you should treat it as such. When you're reviewing code from humans, share knowledge and encouragement as much as you share bugs or unmet technical requirements.

If you must review code from an LLM, strengthen your automated code-quality verification tooling and make sure that its agentic loop fails on its own, immediately, the next time those quality checks fail. Do not fall into the trap of appealing to its feelings, knowledge, or experience, because it doesn't have any of those things.

But for both humans and LLMs, do not fall into the trap of thinking that your code review process is catching your bugs. That's not its job.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor!

04 Mar 2026 5:24am GMT

22 Jan 2026

feedPlanet Plone - Where Developers And Integrators Write

Maurits van Rees: Mikel Larreategi: How we deploy cookieplone based projects.

We saw that cookieplone was coming up, and Docker, and, as a game changer, uv, which makes the installation of Python packages much faster.

With cookieplone you get a monorepo, with folders for backend, frontend, and devops. devops contains scripts to setup the server and deploy to it. Our sysadmins already had some other scripts. So we needed to integrate that.

First idea: let's fork it. Create our own copy of cookieplone. I explained this in my World Plone Day talk earlier this year. But cookieplone was changing a lot, so it was hard to keep our copy updated.

Maik Derstappen showed me copier, yet another templating language. Our idea: create a cookieplone project, and then use copier to modify it.

What about the deployment? We are on GitLab and host our own runners, using the docker-in-docker service. We develop on a branch and create a merge request (pull request in GitHub terms). This activates a pipeline to check, test, and build. When it is merged, we bump the version using release-it.

Then we create deploy keys and tokens. We give these access to private GitLab repositories. We need some changes to SSH key management in pipelines, according to our sysadmins.

For deployment on the server: we do not yet have automatic deployments. We did not want to go too fast. We are testing the current pipelines and process, see if they work properly. In the future we can think about automating deployment. We just ssh to the server, and perform some commands there with docker.

Future improvements:

  • Start the docker containers and curl/wget the /ok endpoint.
  • lock files for the backend, with pip/uv.

22 Jan 2026 9:43am GMT

Maurits van Rees: Jakob Kahl and Erico Andrei: Flying from one Plone version to another

This is a talk about migrating from Plone 4 to 6 with the newest toolset.

There are several challenges when doing Plone migrations:

  • Highly customized source instances: custom workflow, add-ons, not all of them with versions that worked on Plone 6.
  • Complex data structures. For example a Folder with a Link as default page, with pointed to some other content which meanwhile had been moved.
  • Migrating Classic UI to Volto
  • Also, you might be migrating from a completely different CMS to Plone.

How do we do migrations in Plone in general?

  • In place migrations. Run migration steps on the source instance itself. Use the standard upgrade steps from Plone. Suitable for smaller sites with not so much complexity. Especially suitable if you do only a small Plone version update.
  • Export - import migrations. You extract data from the source, transform it, and load the structure in the new site. You transform the data outside of the source instance. Suitable for all kinds of migrations. Very safe approach: only once you are sure everything is fine, do you switch over to the newly migrated site. Can be more time consuming.

Let's look at export/import, which has three parts:

  • Extraction: you had collective.jsonify, transmogrifier, and now collective.exportimport and plone.exportimport.
  • Transformation: transmogrifier, collective.exportimport, and new: collective.transmute.
  • Load: Transmogrifier, collective.exportimport, plone.exportimport.

Transmogrifier is old, we won't talk about it now. collective.exportimport: written by Philip Bauer mostly. There is an @@export_all view, and then @@import_all to import it.

collective.transmute is a new tool. This is made to transform data from collective.exportimport to the plone.exportimport format. Potentially it can be used for other migrations as well. Highly customizable and extensible. Tested by pytest. It is standalone software with a nice CLI. No dependency on Plone packages.

Another tool: collective.html2blocks. This is a lightweight Python replacement for the JavaScript Blocks conversion tool. This is extensible and tested.

Lastly plone.exportimport. This is a stripped down version of collective.exportimport. This focuses on extract and load. No transforms. So this is best suited for importing to a Plone site with the same version.

collective.transmute is in alpha, probably a 1.0.0 release in the next weeks. Still missing quite some documentation. Test coverage needs some improvements. You can contribute with PRs, issues, docs.

22 Jan 2026 9:43am GMT

Maurits van Rees: Fred van Dijk: Behind the screens: the state and direction of Plone community IT

This is a talk I did not want to give.

I am team lead of the Plone Admin team, and work at kitconcept.

The current state: see the keynotes, lots happening on the frontend. Good.

The current state of our IT: very troubling and daunting.

This is not a 'blame game'. But focusing on resources and people should be a first priority at this conference. We are a real volunteer organisation; nobody is pushing anybody around. That is a strength, but also a weakness. We also see that in the Admin team.

The Admin team is 4 senior Plonistas as all-round admins, 2 release managers, 2 CI/CD experts, 3 former board members; everyone is overburdened with work. We had all kinds of plans for this year, but we have mostly been putting out fires.

We are a volunteer organisation and don't have a big company behind us that can throw money at the problems. Strength and weakness. Across society as a whole, it is a problem that volunteer numbers are decreasing.

Root causes:

  • We failed to scale down our IT landscape and usage in time.
  • We have no clear role descriptions or team descriptions, and we can't ask for a minimum effort per week or month.
  • The trend is more communication channels, platforms to join and promote yourself, apps to use.

Overview of what we have to keep running as the admin team:

  • Support the main development process: GitHub, CI/CD, Jenkins main and runners, dist.plone.org.
  • Main communication and documentation: plone.org, docs.plone.org, training.plone.org, conference and country sites, Matomo.
  • Community office automation: Google Docs, Workspace, Quaive, Signal, Slack
  • Broader: Discourse and Discord

The first two are really needed; with the second we already have some problems.

Some services are self-hosted, but there are also a lot of SaaS services/platforms. All in all, it is quite a bit.

The Admin team does not officially support all of these, but it does provide fallback support. It is too much for the current team.

There are plans for what we can improve in the short term. Thank you to a lot of people that I have already talked to about this. 3 areas: GitHub setup and config, Google Workspace, user management.

On GitHub we have a sponsored OSS plan, so we get extra features for free, but it is not nearly enough. User management: hard to get people out. You can't contact your members directly; e-mail addresses have been removed, for privacy. Features get added on GitHub, and there is no complete changelog.

Challenge on GitHub: we have public repositories, but we also have our deployments in there. The only really secure option would be private repositories; otherwise the danger is that credentials or secrets could get stolen. Every developer with access becomes an attack vector. Auditing is available for only 6 months. A simple question like: who has been active for the last 2 years? No, can't do.

Some actionable items on GitHub:

  • We will separate the contributor agreement check from the organisation membership. We create a hidden team for those who signed, and use that in the check.
  • Cleanup users, use Contributors team, Developers
  • Active members: check who has contributed the last years.
  • There have been security incidents. Someone accidentally removed a few repositories. Someone's account got hacked, luckily discovered within a few hours, and some actions had already been taken.
  • More fine grained teams to control repository access.
  • Use of GitHub Discussions for some central communication of changes.
  • Use project management better.
  • The elephant in the room that we have practice on this year, and ongoing: the Collective organisation. This was free for all, very nice, but the development world is not a nice and safe place anymore. So we already needed to lock down some things there.
  • Keep deployments and the secrets all out of GitHub, so no secrets can be stolen.

Google Workspace:

  • We are dependent on this.
  • No user management. Admins have had access because they were on the board, but they kept access after leaving the board. So remove most inactive users.
  • Spam and moderation issues
  • We could move to Google Docs for all kinds of things, and use Google Workspace shared drives for everything. But the Drive UI is a mess, so docs can end up in your personal account without you realizing it.

User management:

  • We need separate standalone user management, but implementation is not clear.
  • We cannot contact our members one on one.

Oh yes, Plone websites:

  • upgrade plone.org
  • self preservation: I know what needs to be done, and can do it, but have no time, focusing on the previous points instead.

22 Jan 2026 9:43am GMT