12 Dec 2025
Planet Grep
Lionel Dricot: The autocompletion of our intentions

The autocompletion of our intentions
When I started using my first smartphone, in 2012, I naturally used the default keyboard it came with, which offered autocompletion.
It only took me a few days to be deeply shocked. The autocompletion suggested words that fit perfectly, but that were not the ones I had in mind. By accepting a completion out of a desire to save a few finger presses, I ended up with a sentence different from what I had originally intended. I was altering the course of my thought to adapt to the algorithm!
It was shocking!
I had switched to Bépo a few years earlier and discovered the power of touch typing for refining my ideas, so I could not imagine letting a machine dictate my thoughts, even for a text as mundane as an SMS. I therefore went looking for a keyboard optimized for use on a tiny touchscreen, but without autocompletion. I found MessagEase, which I used for years before switching to ThumbKey, a free-software version of the former.
- Le bépo sur le bout des doigts (ploum.net)
- Writing on a smartphone: review of 8pen and MessagEase (ploum.net)
- dessalines/thumb-key: A privacy-conscious Android keyboard made for your thumbs (github.com)
The shock was even more violent when suggested replies to emails appeared in the Gmail interface. My first experience with that system was being offered several ways to reply in the affirmative to a work email that I wanted to answer negatively. With horror, I noticed in myself a vague instinct to click, just to get rid of that chore of an email faster.
That experience inspired my short story « Les imposteurs », which can be read in the collection « Stagiaire au spatioport Omega 3000 et autres joyeusetés que nous réserve le futur » (which happens to be available at 50% off until 15 December, or at the normal price but without shipping costs from your local bookshop).
Autocompletion manipulates our intention; there is no doubt about it. And if there is one thing I want to preserve in myself, it is my brain and my ideas. As a footballer protects his legs, as a pianist protects his hands, I cherish and protect my brain and my free will. To the point of never drinking alcohol and never taking the slightest drug: I do not want to alter my perceptions but, on the contrary, to sharpen them.
My brain is the most precious thing I have; even the most basic autocompletion is a direct attack on my free will.
But with chatbots, a genuine economy of intention is now taking shape. Because even if the next versions of ChatGPT are no better at answering your questions, they will be better at predicting them.
Not through any power of divination or telepathy, but because they will have influenced you, steering you in the direction they have chosen, namely the most profitable one.
Part of the disproportionate interest that politicians and CEOs take in chatbots clearly comes from their incompetence, or even their stupidity. Since their job is to say whatever the audience wants to hear, even when it makes no sense, they are sincerely astonished to see a machine capable of replacing them. And they are usually unable to perceive that not everyone is like them, that not everyone spends all day pretending to understand, that not everyone is Julius.
But among the most cunning and most intelligent of them, part of this interest can also be explained by the potential for manipulating crowds. Where Facebook and TikTok have occasionally swayed major elections through virtual crowd movements, the ubiquity of ChatGPT and its kind allows total control over the most intimate thoughts of every user.
After all, in my neighbourhood grocery store I did overhear a woman boasting to a friend about using ChatGPT as an adviser for her romantic relationships. From there, it is trivial to modify the code to make women more docile, more inclined to sacrifice their personal aspirations for their partner's, to bear more children and to raise them according to ChatGPTesque precepts.
Unlike solving "hallucinations", an intractable problem because chatbots have no notion of epistemological truth, introducing biases is trivial. In fact, it has been demonstrated several times that these biases already exist. We just naively assumed they were unintentional, mechanical.
Whereas they are in fact a formidable product to sell to every would-be dictator. A product that is certainly profitable, and not very far removed from the advertising targeting that Facebook and Google already sell.
A product that appears perfectly ethical, appropriate and even beneficial to humanity. At least if we trust what ChatGPT tells us. Which will, by the way, back up its claims by pointing to several scientific papers. Written with its help.
I am Ploum and I have just published Bikepunk, an eco-cyclist fable typed entirely on a mechanical typewriter. To support me, buy my books (if possible from your local bookshop)!
Receive my writings in French and English directly by email. Your address will never be shared. You can also use my French-language RSS feed or the full RSS feed.
12 Dec 2025 10:04am GMT
Frederic Descamps: Deploying on OCI with the starter kit – part 4 (connecting to the database)
Let's now see how we can connect to our MySQL HeatWave DB System, which was deployed with the OCI Hackathon Starter Kit in part 1. We have multiple possibilities to connect to the DB System, and we will use three of them: MySQL Shell in the command line MySQL Shell is already installed on the […]
12 Dec 2025 10:04am GMT
Dries Buytaert: Can AI clean up its own mess?
In The Big Blind, investor Fred Wilson highlights how AI is revolutionizing geothermal energy discovery. This sentence stood out for me:
It is wonderfully ironic that the technology that is creating an energy crisis is also a potential solve for that energy crisis.
AI consumes massive amounts of electricity, but also helps to discover new sources of clean energy. The source of demand might become the source of supply. I emphasized the word "might" because history is not on our side.
But energy scarcity is a "discovery problem", and AI excels at discovery. Geothermal energy was always there. The bottleneck was our ability to find it. AI can analyze seismic data, geological surveys and satellite imagery at a scale no human can match.
The quote stood out because technology rarely cleans up its own mess. More cars don't fix traffic and more coal doesn't fix pollution. Usually it takes a different technology to clean up the mess, if it can be cleaned up at all. For example, the internet created information overload, and search engines were invented to help manage it.
But here, for AI and energy, the system using the resource might also be the one capable of discovering more of it.
I see a similar pattern in open source.
Most open source projects depend on a small group of maintainers who review code, maintain infrastructure and keep everything running. They shoulder a disproportionate share of the work.
AI risks adding to that burden. It makes it easier for people to generate code and submit pull requests, but reviewing those code contributions still falls on the same few maintainers. When contributions scale up, review capacity has to keep pace.
And just like with energy discovery, AI might also be the solution. There already exist AI-powered review tools that can scan pull requests, enforce project standards and surface issues before a human even looks at them. If you believe AI-generated code is here to stay (I do), AI-assisted review might not be optional.
I'm no Fred Wilson, but as an occasional angel investor, such review tools look like a good way to go long on vibe coding. And as Drupal's Project Lead, I'd love to collaborate with the providers of these tools. If we can make open source maintenance more scalable and sustainable, everyone benefits.
So yes, the technology making a situation worse might also be capable of helping to solve it. That is rare enough to pay attention to.
12 Dec 2025 10:04am GMT
11 Dec 2025
Planet Debian
Dirk Eddelbuettel: #056: Running r-ci with R-devel

Welcome to post 56 in the R4 series.
The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the 'matrix' of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. So when running CI and relying on r2u for the 'fast, easy, reliable: pick all three!' provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup CI runs tend to be reliably faster.
This situation is still evolving. I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml:
strategy:
  matrix:
    include:
      - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
      - { name: r-devel, os: ubuntu-latest, container: rocker/drd }
      - { name: macos, os: macos-latest }
      - { name: ubuntu, os: ubuntu-latest }
runs-on: ${{ matrix.os }}
container: ${{ matrix.container }}

This runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry) along with the (usually commented-out) optional macOS setup (third entry). And the second entry brings the drd container from Rocker. The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with alias RD) and uses it when present. And that is all there is to it: no other change on the user side; tests now run under R-devel. You can see some of the initial runs at the rcppint64 repo actions log. Another example is now also at Jeff's mmap repo.
It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under 'normal' circumstances it is not needed.
Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
11 Dec 2025 6:29pm GMT
Planet Lisp
Scott L. Burson: FSet v2.1.0 released: Seq improvements
I have just released FSet v2.1.0 (also on GitHub).
This release is mostly to add some performance and functionality improvements for seqs. Briefly:
- Access to and updating of elements at the beginning or end of a long seq is now faster.
- I have finally gotten around to implementing search and mismatch on seqs. NOTE: this may require changes to your package definitions; see below.
- Seqs containing only characters are now treated specially, making them a viable replacement for CL strings in many cases.
- In an FSet 2 context, the seq constructor macros now permit specification of a default.
- There are changes to some convert methods.
- There are a couple more FSet 2 API changes, involving image.
See the above links for the full release notes.
UPDATE: there's already a v2.1.1; I had forgotten to export the new function char-seq?.
11 Dec 2025 4:01am GMT
09 Dec 2025
FOSDEM 2026
/dev/random and lightning talks
The room formerly known as "Lightning Talks" is now known as /dev/random. After 25 years, we say goodbye to the old Lightning Talks format. In its place, we have two new things! /dev/random: 15 minute talks on a random, interesting, FOSS-related subject, just like the older Lightning Talks. New Lightning Talks: a highly condensed batch of 5 minute quick talks in the main auditorium on various FOSS-related subjects! Last year we experimented with running a more spontaneous lightning talk format, with a submission deadline closer to the event and strict short time limits (under five minutes) for each speaker. The experiment…
09 Dec 2025 11:00pm GMT
08 Dec 2025
Planet Debian
Thorsten Alteholz: My Debian Activities in November 2025
Debian LTS/ELTS
This was my hundred-thirty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian and my eighty-eighth ELTS month. As the LTS- and ELTS-teams have been merged now, there is only one paragraph left for both activities.
During my allocated time I uploaded or worked on:
- [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
- [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
- [DLA 4380-1] cups-filters security update to fix three CVEs related to out of bounds read or writes or a heap buffer overflow.
- [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out of bounds read or writes or a heap buffer overflow.
- [libcupsfilters] upload to unstable to fix two CVEs
- [cups-filters] upload to unstable to fix three CVEs
- [cups] upload to unstable to fix two CVEs
- [rlottie] upload to unstable to finally fix three CVEs
- [rplay] upload to unstable to finally fix one CVE
- [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
- [#1121391] trixie-pu bug for cups-filter to fix three CVEs in Trixie.
- [#1121392] bookworm-pu bug for cups-filter to fix three CVEs in Bookworm.
- [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
- [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.
I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in the ssh_config does not work. Rather annoying, but already fixed in the newest version, which only needs to find its way to my old VM.
Debian Printing
This month I uploaded a new upstream version or a bugfix version of:
- … lprng to unstable.
- … cpdb-backend-cups to unstable.
- … cpdb-libs to unstable.
- … ippsample to unstable.
- … cups-filters to unstable.
I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.
This work is generously funded by Freexian!
Debian Astro
This month I uploaded a new upstream version or a bugfix version of:
- … siril to unstable (sponsored upload).
- … supernovas to unstable (sponsored upload).
Debian IoT
This month I uploaded a new upstream version or a bugfix version of:
- … openzwave-controlpanel to unstable.
- … pywws to unstable.
Debian Mobcom
This month I uploaded a new upstream version or a bugfix version of:
- … osmo-tetra to unstable.
- … libgsm to unstable.
- … osmo-tetra to unstable.
misc
This month I uploaded a new upstream version or a bugfix version of:
- … cpptest to unstable.
- … npd6 to unstable.
- … ptunnel to unstable.
- … ptunnel-ng to unstable.
- … dateutils to unstable.
- … apcupsd to unstable.
- … puppet-modules-cirrax-gitolite to unstable.
- … visam to unstable.
- … apcupsd to unstable.
On my fight against outdated RFPs, I closed 30 of them in November.
I started with about 3500 open RFP bugs, and after working six months on this project, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.
Though I view this as a successful project, I also have to admit that it is a bit boring to work on this daily. Therefore I close this diary again and will add the closed RFP bugs to my bug logbook now. I also try to close some of these bugs by really uploading some software, probably one package per month.
FTP master
This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.
08 Dec 2025 3:20pm GMT
François Marier: Learning a new programming language with an LLM
I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.
Searching more efficiently
The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.
I was however skeptical from the beginning, since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").
Autocomplete is too distracting
A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.
I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).
Asking about idiomatic code
One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.
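As a hypothetical illustration of the kind of feedback that prompt surfaces (this example is mine, not from the post): both loops below work, but a reviewer, human or otherwise, would point to the second as the idiomatic Go form.

```go
package main

import "fmt"

func main() {
	words := []string{"go", "is", "fun"}

	// Works, but reads like C: manual index management.
	for i := 0; i < len(words); i++ {
		fmt.Println(words[i])
	}

	// Idiomatic Go: range over the slice directly.
	for _, w := range words {
		fmt.Println(w)
	}
}
```

Neither version is wrong, which is exactly why a beginner benefits from being told which one the community expects.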
It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read or decode, it's probably not a good idea to follow it.
Reviews
One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.
If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:
- Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
- Feed that prompt to multiple models. They each have different answers and will detect different problems.
- Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.
The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.
Similarly for security reviews:
- A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
- Some of it may highlight areas for improvement that you hadn't considered.
- Occasionally, they will point out real vulnerabilities.
But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.
An unexpected benefit
One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (it being a personal project written in my own time) than I might have otherwise.
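The scaffolding in question is often Go's table-driven test pattern, where the repetitive part is the table itself. A minimal sketch (the Abs function here is a hypothetical stand-in, not code from the post) shows why trimming or adding cases is cheap once the structure exists:

```go
package main

import "fmt"

// Abs is a hypothetical function under test.
func Abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

func main() {
	// Table-driven scaffolding: adding or removing a case is one line.
	cases := []struct {
		name     string
		in, want int
	}{
		{"negative", -3, 3},
		{"zero", 0, 0},
		{"positive", 7, 7},
	}
	for _, c := range cases {
		if got := Abs(c.in); got != c.want {
			fmt.Printf("%s: Abs(%d) = %d, want %d\n", c.name, c.in, got, c.want)
		}
	}
}
```

In a real project the same table would live in a `_test.go` file and report through `t.Errorf` instead of printing.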
Learning
In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.
So my experience this year tells me that LLMs can supplement traditional, time-tested learning techniques, but I don't believe they make those techniques obsolete.
P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.
08 Dec 2025 12:15am GMT
04 Dec 2025
Planet Lisp
Tim Bradshaw: Literals and constants in Common Lisp
Or, constantp is not enough.
Because I do a lot of things with Štar, and for other reasons, I spend a fair amount of time writing various compile-time optimizers for things which have the semantics of function calls. You can think of iterator optimizers in Štar as being a bit like compiler macros: the aim is to take a function call form and to turn it, in good cases, into something quicker1. One important way of doing this is to be able to detect things which are known at compile-time: constants and literals, for instance.
One of the things this has made clear to me is that, like John Peel, constantp is not enough. Here's an example.
(in-row-major-array a :simple t :element-type 'fixnum) is a function call whose values Štar can use to tell it how to iterate (via row-major-aref) over an array. When used in a for form, its optimizer would like to be able to expand into something involving (declare (type (simple-array fixnum *) ...)), so that the details of the array are known to the compiler, which can then generate fast code for row-major-aref. This makes a great deal of difference to performance: array access to simple arrays of known element types is usually much faster than to general arrays.
In order to do this it needs to know two things:
- that the values of the simple and element-type keyword arguments are compile-time constants;
- what their values are.
You might say, well, that's what constantp is for2. It's not: constantp tells you only the first of these, and you need both.
Consider this code, in a file to be compiled:
(defconstant et 'fixnum)

(defun ... ...
  (for ((e (in-array a :element-type et)))
    ...)
  ...)
Now, constantp will tell you that et is indeed a compile-time constant. But it won't tell you its value, and in particular nothing says it needs to be bound at compile-time at all: (symbol-value 'et) may well be an error at compile-time.
constantp is not enough3! instead you need a function that tells you 'yes, this thing is a compile-time constant, and its value is …'. This is what literal does4: it conservatively answers the question, and tells you the value if so. In particular, an expression like (literal '(quote fixnum)) will return fixnum, the value, and t to say yes, it is a compile-time constant. It can't do this for things defined with defconstant, and it may miss other cases, but when it says something is a compile-time constant, it is. In particular it works for actual literals (hence its name), and for forms whose macroexpansion is a literal.
That is enough in practice.
-
Štar's iterator optimizers are not compiler macros, because the code they write is inserted in various places in the iteration construct, but they're doing a similar job: turning a construct involving many function calls into one requiring fewer or no function calls. ↩
-
And you may ask yourself, "How do I work this?" / And you may ask yourself, "Where is that large automobile?" / And you may tell yourself, "This is not my beautiful house" / And you may tell yourself, "This is not my beautiful wife" ↩
-
Here's something that started as a mail message which tries to explain this in some more detail. In the case of variables defconstant is required to tell constantp that a variable is a constant at compile-time but is not required (and should not be required) to evaluate the initform, let alone actually establish a binding at that time. In SBCL it does both (SBCL doesn't really have a compilation environment). In LW, say, it at least does not establish a binding, because LW does have a compilation environment. That means that in LW compiling a file has fewer compile-time side-effects than it does in SBCL. Outside of variables, it's easily possible that a compiler might be smart enough to know that, given (defun c (n) (+ n 15)), then (constantp '(c 1) <compilation environment>) is true. But you can't evaluate (c 1) at compile-time at all. constantp tells you that you don't need to bind variables to prevent multiple evaluation; it doesn't, and can't, tell you what their values will be. ↩
-
Part of the org.tfeb.star/utilities package. ↩
04 Dec 2025 4:23pm GMT
01 Dec 2025
Planet Lisp
Joe Marshall: Advent of Code 2025
The Advent of Code will begin in a couple of hours. I've prepared a Common Lisp project to hold the code. You can clone it from https://github.com/jrm-code-project/Advent2025.git. It contains an .asd file for the system, a package.lisp file to define the package structure, 12 subdirectories, one for each day's challenge (only 12 problems in this year's calendar), and a file each for common macros and common functions.
As per the Advent of Code rules, I won't use AI tools to solve the puzzles or write the code. However, since AI is now part of my normal workflow these days, I may use it for enhanced web search or for autocompletion.
As per the Advent of Code rules, I won't include the puzzle text or the puzzle input data. You will need to get those from the Advent of Code website (https://adventofcode.com/2025).
01 Dec 2025 12:42am GMT
15 Nov 2025
FOSDEM 2026
FOSDEM 2026 Accepted Stands
With great pleasure we can announce that the following projects will have a stand at FOSDEM 2026! ASF Community BSD + FreeBSD Project Checkmk CiviCRM Cloud Native Computing Foundation + OpenInfra & the Linux Foundation: Building the Open Source Infrastructure Ecosystem Codeberg and Forgejo Computer networks with BIRD, KNOT and Turris Debian Delta Chat (Sunday) Digital Public Goods Dolibar ERP CRM + Odoo Community Association (OCA) Dronecode Foundation + The Zephyr Project Eclipse Foundation F-Droid and /e/OS + OW2 FOSS community / Murena degooglized phones and suite Fedora Project Firefly Zero Foreman FOSS United + fundingjson (and FLOSS/fund) FOSSASIA Framework…
15 Nov 2025 11:00pm GMT
13 Nov 2025
FOSDEM 2026
FOSDEM 2026 Main Track Deadline Reminder
Submit your proposal for the FOSDEM main track before it's too late! The deadline for main track submissions is earlier than it usually is (16th November, that's in a couple of days!), so don't be caught out. For full details on submission information, look at the original call for participation.
13 Nov 2025 11:00pm GMT