21 Jan 2026
Planet Grep
Lionel Dricot: Giving University Exams in the Age of Chatbots

Giving University Exams in the Age of Chatbots
What I like most about teaching "Open Source Strategies" at École Polytechnique de Louvain is how much I learn from my students, especially during the exam.
I dislike exams. I still have nightmares about exams. That's why I try to subvert this stressful moment and make it a learning opportunity. I know that adrenaline increases memorization dramatically. I make sure to explain to each student what I expect from them and to be helpful.
Here are the rules:
1. You can have all the resources you want (including a laptop connected to the Internet)
2. There's no formal time limit (but if you stay too long, it's a symptom of a deeper problem)
3. I allow students to discuss among themselves if it is on topic. (In reality, they never do it spontaneously until I force two students with a similar problem to discuss together.)
4. You can prepare and bring your own exam question if you want (something done by fewer than 10% of the students)
5. Come dressed for the exam you dream of taking!
This last rule is awesome. Over the years, I have had a lot of fun with traditional folkloric clothing from different countries, students in pajamas, a banana, and this year's champion, my Studentausorus Rex!
An inflatable Tyrannosaurus Rex taking my exam in 2026
My all-time favourite is still a fully clothed Minnie Mouse, who did an awesome exam with full face make-up, big ears, big shoes, and huge gloves. I still regret not taking a picture, but she was the very first student to take at face value what was meant as a joke, and she started a tradition that has lasted over the years.
Giving Students the Chatbot Choice
Rule N°1 implies having all the resources you want. But what about chatbots? I didn't want to test how ChatGPT would answer my questions; I wanted to help my students better understand what Open Source means.
Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.
The questionnaire contained the following:
# Use of Chatbots
Tell the professor if you usually use chatbots (ChatGPT/LLM/whatever) when doing research and investigating a subject. You have the choice to use them or not during the exam, but you must decide in advance and inform the professor.
Option A: I will not use any chatbot, only traditional web searches. Any use of them will be considered cheating.
Option B: I may use a chatbot as it's part of my toolbox. I will then respect the following rules:
1) I will inform the professor each time information comes from a chatbot
2) When explaining my answers, I will share the prompts I've used so the professor understands how I use the tool
3) I will identify mistakes in answers from the chatbot and explain why those are mistakes
Not following those rules will be considered cheating. Mistakes made by chatbots will be considered more serious than honest human mistakes, resulting in the loss of more points. If you use chatbots, you will be held accountable for the output.
I thought this was fair. You can use chatbots, but you will be held accountable for it.
Most Students Don't Want to Use Chatbots
This January, I saw 60 students. I interacted with each of them for a mean time of 26 minutes. This is a tiring but really rewarding process.
Of the 60 students, 57 decided not to use any chatbot. I managed to ask 30 of them to explain their choice; for the others, I unfortunately did not have the time. After the exam, I grouped those justifications into four different clusters. I did it without looking at their grades.
The first group is the "personal preference" group. They prefer not to use chatbots. They use them only as a last resort, in very special cases or for very specific subjects. Some even made it a matter of personal pride. Two students told me explicitly "For this course, I want to be proud of myself." Another also explained: "If I need to verify what an LLM said, it will take more time!"
The second group was the "never use" one. They don't use LLMs at all. Some are even very angry at them, not for philosophical reasons, but mainly because they hate the interactions. One student told me: "Can I summarize this for you? No, shut up! I can read it by myself you stupid bot."
The third group was the "pragmatic" group. They reasoned that this was the kind of exam where it would not be needed.
The fourth and last group was the "heavy user" group. They told me they heavily use chatbots but, in this case, were afraid of the constraints. They were afraid of having to justify a chatbot's output or of missing a mistake.
After doing that clustering, I wrote each student's grade next to their cluster, and I was shocked by how coherent it was. Note: grades are between 0 and 20, with 10 being the minimum grade to pass the class.
The "personal preference" students were all between 15 and 19, which makes them very good students, without exception! The "proud" students were all above 17!
The "never use" was composed of middle-ground students around 13 with one outlier below 10.
The pragmatics were in the same vein but a bit better: they were all between 12 and 16 without exceptions.
The heavy users were, by far, the worst. All students were between 8 and 11, with only one exception at 16.
This is, of course, not an unbiased scientific experiment. I didn't expect anything, and I will not draw any conclusions. I only share the observation.
But Some Do
Of 60 students, only 3 decided to use chatbots. This is not very representative, but I still learned a lot because part of the constraints was to show me how they used chatbots. I hoped to learn more about their process.
The first chatbot student forgot to use it. He did the whole exam and then, at the end, told me he hadn't thought about using chatbots. I guess this put him in the "pragmatic" group.
The second chatbot student asked only a couple of short questions to make sure he clearly understood some concepts. This was a smart and minimal use of LLMs. The resulting exam was good. I'm sure he could have done it without a chatbot. The questions he asked were mostly a matter of improving his confidence in his own reasoning.
This reminded me of a previous-year student who told me he used chatbots to study. When I asked how, he told me he would tell the chatbot to act as the professor and ask exam questions. As a student, this allowed him to know whether he understood enough. I found the idea smart but not groundbreaking (my generation simply used previous years' questions).
The third chatbot-using student had a very complex setup where he would use one LLM, then ask another unrelated LLM for confirmation. He had walls of text that were barely readable. When glancing at his screen, I immediately spotted a mistake (a chatbot explaining that "Sepia Search is a compass for the whole Fediverse"). I asked if he understood the problem with that specific sentence. He did not. Then I asked him questions for which I had seen the solution printed in his LLM output. He could not answer even though he had the answer on his screen.
But once we began a chatbot-less discussion, I discovered that his understanding of the whole matter was okay-ish. So, in this case, chatbots did him a heavy disservice. He was totally lost in his own setup. He had LLMs generate walls of text he could not read. Instead of trying to think for himself, he tried to have chatbots pass the exam for him, which was doomed to fail because I was asking him, not the chatbots. He passed but would probably have fared better without chatbots.
Can chatbots help? Yes, if you know how to use them. But if you do, chances are you don't need chatbots.
A Generational Fear of Cheating
One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all.
One obvious bias is that students want to please the teacher, and I guess they know where I am on this spectrum. One even told me: "I think you do not like chatbots very much so I will pass the exam without them" (very pragmatic of him).
But I had also underestimated one important generational bias: the fear of cheating. When I was a student, being caught cheating meant a clear zero for the exam. You could, in theory, be expelled from university for aggravated cheating, whatever "aggravated" might mean.
During the exam, a good number of students called me over in a panic because Google was forcing auto-generated answers on them and they could not disable them. They were very worried I would consider this cheating.
First, I realized that, like GitHub, Google has effectively a 100% market share among my students, to the point that they don't even consider using something else a possibility. I should work on that next year.
Second, I learned that cheating, however lightly, is now considered a major crime. It might result in the student being banned from any university in the country for three years. Discussing an exam with someone who has yet to take it might be considered cheating. Students enforce very strict rules about this on their Discord.
I was completely flabbergasted because, to me, discussing "What questions did you have?" was always part of the collaboration between students. I remember one specific exam where we gathered in an empty room and helped each other before taking it. When one of us finished her exam, she would come back to the room and tell all the remaining students what questions she had and how she solved them. We never considered that "cheating", and, as a professor, I always design my exams hoping that the good ones (who usually choose to take the exam early) will help the remaining crowd. Every learning opportunity is good to take!
I realized that my students are so afraid of cheating that they mostly don't collaborate before their exams! At least not as much as we did.
In retrospect, my instructions were probably too harsh and discouraged some students from using chatbots.
Stream of Consciousness
Another innovation I introduced in the 2026 exam was the stream of consciousness. I asked them to open an empty text file and keep a stream of consciousness during the exam. The rules were the following:
In this file, please write all your questions and all your answers as a "stream of consciousness." This means the following rules:
1. Don't delete anything.
2. Don't correct anything.
3. Never go backward to retouch anything.
4. Write as thoughts come.
5. No copy/pasting allowed (only exception: URLs)
6. Rule 5 implies no chatbot for this exercise. This is your own stream of consciousness.
Don't worry, you won't be judged on that file. This is a tool to help you during the exam. You can swear, you can write wrong things. Just keep writing without deleting. If you are lost, write why you are lost. Be honest with yourself.
This file will only be used to try to get you more points, but only if it is clear that the rules have been followed.
I asked them to send me the file within 24h after the exam. Out of 60 students, I received 55 files (the remaining 5 were not penalized). There was also a bonus point if you sent it to the exam git repository using git-send-email, something 24 managed to do correctly.
The results were incredible. I did not read them all, but this tool allowed me a glimpse inside the minds of the students. One said: "I should have used AI, this is the kind of question perfect for AI" (he did very well without it). For others, I realized how much stress they had been hiding. I was touched by one stream of consciousness starting with "I'm stressed, this doesn't make any sense. Why can't we correct what we write in this file" and then, 15 lines later, "this is funny how writing the questions with my own words made the problem much clearer and how the stress start to fade away".
And yes, I read the files of all the students who failed and managed to save a bunch of them when it was clear that they, in fact, understood the matter but could not articulate it well in front of me because of the stress. Unfortunately, not everybody could be saved.
Conclusion
My main takeaway is that I will keep this method next year. I believe it confronts students with their own use of chatbots, and it shows me how they use them. I'm delighted to read their thought processes through the stream of consciousness.
Like every generation of students, there are good students, bad students, and very brilliant students. It will always be the case; people evolve (I was, myself, not a very good student). Chatbots don't change anything regarding that. Like every new technology, smart young people are very critical and, by definition, smart about how they use it.
The problem is not the young generation. The problem is the older generation destroying critical infrastructure out of fear of missing out on the new shiny thing from big corp's marketing department.
Most of my students don't like email. An awful lot of them learned only with me that Git is not the GitHub command-line tool. It turns out that by imposing Outlook with mandatory subscription to useless academic emails, we make sure that students hate email (Microsoft is on a mission to destroy email with the worst possible user experience).
I will never forgive the people who decided to migrate university mail servers to Outlook. This was both incompetence and malice on a terrifying level, because there were enough warnings and opposition from very competent people at the time. Yet they decided to destroy one of the university's core infrastructures and historical foundations (UCLouvain is listed by Peter Salus as the very first European university to have a mail server, and there were famous pioneers in the department).
By using Outlook, they continue to destroy the email experience. Out of 55 streams of consciousness, 15 ended up in my spam folder. All had their links destroyed by Outlook. And the university keeps sending so many useless emails to everyone. One of my students told me that they refer to their university email as "La boîte à spams du recteur" (Chancellor's spam inbox). And I dare to ask why they use Discord?
Another student asked me why it took four years of computer engineering studies before a teacher explained to them that Git was not GitHub and that GitHub was part of Microsoft. He had a distressed look: "How could I have known? GitHub was imposed on us for so many exercises!"
Each year, I tell my students the following:
It took me 20 years after university to learn what I know today about computers. And I have only one reason to be here in front of you: to make sure you are faster than me. Be sure that you do it better and deeper than I did. If you don't manage to outsmart me, I will have failed.
Because that's what progress is about. Progress is each generation going further than the previous one while learning from the mistakes of its elders. I'm here to tell you about my own mistakes and the mistakes of my generation.
I know that most of you are only here to get a diploma while doing the minimal required effort. Fair enough, that's part of the game. Challenge accepted. I will try to make you think even if you don't intend to.
In earnest, I have a lot of fun teaching, even during the exam. For my students, the mileage may vary. But for the second time in my life, a student gave me the best possible compliment:
- You know, you are the only course for which I wake up at 8AM.
To which I responded:
- The feeling is mutual. I hate waking up early, except to teach in front of you.
About the author
I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
21 Jan 2026 7:58pm GMT
Dries Buytaert: Software as clay on the wheel

A few weeks ago, Simon Willison started a coding agent, went to decorate a Christmas tree with his family, watched a movie, and came back to a working HTML5 parser.
It sounds like a party trick. But it worked because the results were easy to check. The unit tests either pass or they don't. The type checker either accepts the code or it doesn't. In that kind of environment, the work can keep moving without much supervision.
Geoffrey Huntley's Ralph Wiggum loop is probably the cleanest expression of this idea I've seen, and it's quickly gaining popularity. In his demonstration video, he describes creating specifications through conversation with an AI agent and letting the loop run. Each iteration starts fresh: the agent reads the specification, picks the most important remaining task, implements it, and runs the tests. If they pass, it commits to Git and exits. The next iteration begins with empty context, reads the current state from disk, and picks up where the previous run left off.
If you think about it, that's what human prompting already looks like: prompt, wait, review, prompt again. You're shaping the code or text the way a potter shapes clay: push a little, spin the wheel, look, push again. The Ralph loop just automates the spinning, which makes much more ambitious tasks practical.
The key difference is how state is handled. When you work this way by hand, the whole conversation comes along for the ride. In the Ralph loop, each iteration starts clean.
Why? Because carrying everything with you all the time is a great way to stop getting anywhere. If you're going to work on a problem for hundreds of iterations, things start to pile up. As tokens accumulate, the signal can get lost in noise. By flushing context between iterations and storing state in files, each run can start clean.
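To make the shape concrete, here is a minimal sketch of such a driver, written in Common Lisp purely for illustration (Huntley's own loop is usually shown as little more than a shell while-loop around an agent CLI). Every name below, the agent command, the SPEC.md file, and the DONE marker, is a placeholder rather than a real interface:

;; A sketch only: `agent`, SPEC.md, and DONE are hypothetical placeholders.
;; uiop:run-program ships with ASDF, so no extra libraries are needed.
(require :asdf)

(defun ralph-loop (&key (spec "SPEC.md") (max-runs 500))
  "Repeatedly spawn a fresh agent process until the work is marked done."
  (loop repeat max-runs
        ;; Each run starts with empty context: a brand-new process reads the
        ;; spec, does one task, runs the tests, and (as described above)
        ;; commits to Git itself before exiting.
        do (uiop:run-program (list "agent" "--spec" spec)
                             :ignore-error-status t
                             :output :interactive
                             :error-output :interactive)
        ;; Durable state lives on disk, so the driver only checks whether
        ;; the agent has declared the spec satisfied.
        until (probe-file "DONE")))

The driver itself carries no memory between iterations; the repository and the files on disk are the only shared state.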
Simon Willison's port of an HTML5 library from Python to JavaScript showed the principle at larger scale. Using GPT-5.2 through Codex CLI with the --yolo flag for uninterrupted execution, he gave a handful of prompts and let it run while he decorated a Christmas tree with his family and watched a movie.
Four and a half hours later, the agent had produced a working HTML5 parser. It passed over 9,200 tests from the official html5lib-tests suite.
HTML5 parsing is notoriously complex, but the specification precisely defines how even malformed markup should be handled, with thousands of edge cases accumulated over years. The tests gave the AI agent constant grounding: each test run pulled it back to reality before errors could compound.
As Simon put it: "If you can reduce a problem to a robust test suite you can set a coding agent loop loose on it with a high degree of confidence that it will eventually succeed". Ralph loops and Willison's approach differ in structure, but both depend on tests as the source of truth.
Cursor's research on scaling agents confirms this is starting to work at enterprise scale. Their team explored what happens when hundreds of agents work concurrently on a single codebase for weeks. In one experiment, they built a web browser from scratch. Over a million lines of code across a thousand files, generated in a week. And the browser worked.
That doesn't mean it's secure, fast, or something you'd ship. It just means it met the criteria they gave it. If you decide to check for security or performance, it will work toward that as well. But the pattern is what matters: clear tests, constant verification, and agents that know when they're done.
From solo loops to hundreds of agents running in parallel, the same pattern keeps emerging. It feels like something fundamental is crystallizing: autonomous AI is starting to work well when you can accurately define success upfront.
Willison's success criteria were "simple": all 9,200 tests needed to pass. That is a lot of tests, but the agent got there. Clear success criteria made autonomy possible.
As I argued in AI flattens interfaces and deepens foundations, this changes where humans add value:
Humans are moving to where they set direction at the start and refine results at the end. AI handles everything in between.
The title of this post comes from Geoffrey Huntley. He describes software as clay on the pottery wheel, and once you've worked this way, it's hard to think about it any other way. As Huntley wrote: "If something isn't right, you throw it back on the wheel and keep going". That is exactly how it felt when I built my first Ralph Wiggum loop. Throw it back, refine it, spin again until it's right.
Of course, the Ralph Wiggum loop has limits. It works well when verification is unambiguous. A unit test returns pass or fail. But not all problems come with clear tests. And writing tests can be a lot of work.
For example, I've been thinking about how such loops could work for Drupal, where non-technical users build pages. "Make this page more on-brand" isn't a test you can run.
Or maybe it is? An AI agent could evaluate a page against brand guidelines and return pass or fail. It could check reading level and even do some basic accessibility tests. The verifier doesn't have to be a traditional test suite. It just has to provide clear feedback.
All of this just exposes something we already intuitively understand: defining success is hard. Really hard. When people build pages manually, they often iterate until it "feels right". They know what they want when they see it, but can't always articulate it upfront. Or they hire experts who carry that judgment from years of experience. This is the part of the work that is hardest to automate. The craft is moving upstream, from implementation to specification and validation.
The question for any task is becoming: can you tell, reliably, whether the result is getting better or worse? Where you can, the loop takes over. Where you can't, your judgment still matters.
The boundary keeps moving fast. A year ago, I was wrestling with local LLMs to generate good alt-text for my photos. Today, AI agents build working HTML5 parsers while you watch a movie. It's hard not to find that a little absurd. And hard not to be excited.
21 Jan 2026 7:58pm GMT
Dries Buytaert: Funding Open Source for Digital Sovereignty

As global tensions rise, governments are waking up to the fact that they've lost digital sovereignty. They depend on foreign companies that can change terms, cut off access, or be weaponized against them. A decision in Washington can disable services in Brussels overnight.
Last year, the International Criminal Court ditched Microsoft 365 after a dispute over access to the chief prosecutor's email. Denmark's Ministry of Digitalisation is moving to LibreOffice. And Germany's state of Schleswig-Holstein is migrating 30,000 workstations off Microsoft.
Reclaiming digital sovereignty doesn't require building the European equivalent of Microsoft or Google. That approach hasn't worked in the past, and there is no time to make it work now. Fortunately, Europe has something else: some of the world's strongest Open Source communities, regulatory reach, and public sector scale.
Open Source is the most credible path to digital sovereignty. It's the only software you can run without permission. You can audit, host, modify, and migrate it yourself. No vendor, no government, and no sanctions regime can ever take it away.
But there is a catch. When governments buy Open Source services, the money rarely reaches the people who actually build and maintain it. Procurement rules favor large system integrators, not the maintainers of the software itself. As a result, public money flows to companies that package and resell Open Source, not to the ones who do the hard work of writing and sustaining it.
I've watched this pattern repeat for over two decades in Drupal, the Open Source project I started and that is now widely used across European governments. A small web agency spends months building a new feature. They design it, implement it, and shepherd it through review until it's merged.
Then the government puts out a tender for a new website, and that feature is a critical requirement. A much larger company, with no involvement in Drupal, submits a polished proposal. They have the references, the sales team, and the compliance certifications. They win the contract. The feature exists because the small agency built it. But apart from new maintenance obligations, the original authors get nothing in return.
Public money flows around Open Source instead of into it.
Multiply that by every Open Source project in Europe's software stack, and you start to see both the scale of the problem and the scale of the opportunity.
This is the pattern we need to break. Governments should be contracting with maintainers, not middlemen.
Skipping the maintainers is not just unfair, it is bad governance. Vendors who do not contribute upstream can still deliver projects, but they are much less effective at fixing problems at the source or shaping the software's future. You end up spending public money on short-term integration, while underinvesting in the long-term quality, security, and resilience of the software you depend on.
If Europe wants digital sovereignty and real innovation, procurement must invest in upstream maintainers where security, resilience, and new capabilities are actually built. Open Source is public infrastructure. It's time we funded it that way.
The fix is straightforward: make contribution count in procurement scoring. When evaluating vendors, ask what they put back into the Open Source projects they are selling. Code, documentation, security fixes, funding.
Of course, all vendors will claim they contribute. I've seen companies claim credit for work they barely touched, or count contributions from employees who left years ago.
So how does a procurement officer tell who is real? By letting Open Source projects vouch for contributors directly. Projects know who does the work. We built Drupal's credit system to solve for exactly this. It's not perfect, but it's transparent. And transparency is hard to fake.
We use the credit system to maintain a public directory of companies that provide Drupal services, ranked by their contributions to the project. It shows, at a glance, which companies actually help build and maintain Drupal. If a vendor isn't on that list, they're most likely not contributing in a meaningful way. For a procurement officer, this turns a hard governance problem into a simple check: you can literally see which service providers help build Drupal. This is what contribution-based procurement looks like when it's made practical.
Fortunately, the momentum is building. APELL, an association of European Open Source companies, has proposed making contribution a procurement criterion. EuroStack, a coalition of 260+ companies, is lobbying for a "Buy Open Source Act". The European Commission has embraced an Open Source roadmap with procurement recommendations.
Europe does not need to build the next hyperscaler. It needs to shift procurement toward Open Source builders and maintainers. If Europe gets this right, it will mean better software, stronger local vendors, and public money that actually builds public code. Not to mention the autonomy that comes with it.
I submitted this post as feedback to the European Commission's call for evidence on Towards European Open Digital Ecosystems. If you work in Open Source, consider adding your voice. The feedback period ends February 3, 2026.
Special thanks to Taco Potze, Sachiko Muto, and Gábor Hojtsy for their review and contributions to this blog post.
21 Jan 2026 7:58pm GMT
20 Jan 2026
Planet Lisp
Joe Marshall: Filter
One of the core ideas in functional programming is to filter a set of items by some criterion. It may be somewhat surprising to learn that Lisp does not have a built-in function named "filter", "select", or "keep" that performs this operation. Instead, Common Lisp provides the "remove", "remove-if", and "remove-if-not" functions, which perform the complementary operation of removing items that satisfy or do not satisfy a given predicate.
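If you want the positive spelling anyway, a one-line wrapper over remove-if-not does the job. A minimal sketch (the name filter is not part of the standard, and this wrapper is not from the original post):

> (defun filter (predicate sequence)
    "Return the elements of SEQUENCE that satisfy PREDICATE."
    (remove-if-not predicate sequence))
FILTER
> (filter #'evenp '(1 2 3 4 5 6))
(2 4 6)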
The remove function, like similar sequence functions, takes an optional :test-not keyword argument specifying a test that must fail for an item to be removed; in other words, items that satisfy the :test-not predicate are kept. Thus, if you invert your logic for inclusion, you can use remove as a "filter" by supplying the predicate via :test-not.
> (defvar *nums* (map 'list (λ (n) (format nil "~r" n)) (iota 10)))
*NUMS*
;; Keep *nums* with four letters
> (remove 4 *nums* :key #'length :test-not #'=)
("zero" "four" "five" "nine")
;; Keep *nums* starting with the letter "t"
> (remove #\t *nums* :key (partial-apply-right #'elt 0) :test-not #'eql)
("two" "three")20 Jan 2026 11:46am GMT
Planet Debian
Sahil Dhiman: Conferences, why?
Back in December, I was helping organize several different conferences. One has already happened; the rest are still works in progress. That's when the thought struck me: why so many conferences, and why do I work on them?
I have been fairly active in the scene since 2020. For most conferences, I arrive in the city late the previous day and leave on the day the conference closes. Conferences, for me, are the place to meet friends and new folks and hear about them, their work, new developments, and what's happening in their interest zones. I feel naturally happy talking to folks, and talking to folks inspires me to work. Nothing can replace a passionate technical and social discussion, one that stretches way into dinner parties and later.
For most conferences now, I just show up without a set role (DebConf is probably an exception). It usually involves talking to folks, suggesting what needs to be done, doing a bit of it myself, and finishing some last-minute stuff during the event itself.
Having more of these conferences and helping make them happen naturally gives everyone more places to come together, meet distant friends, talk, and work on something.
No doubt, one reason for all these conferences is evangelism for, say, Free Software, OpenStreetMap, Debian, etc., which is good and needed for the pipeline. But for me, the primary reason will always be meeting folks.
20 Jan 2026 2:27am GMT
19 Jan 2026
Planet Debian
Dirk Eddelbuettel: RApiDatetime 0.0.11 on CRAN: Micro-Maintenance

A new (micro) maintenance release of our RApiDatetime package is now on CRAN, coming only a good week after the 0.0.10 release, which itself followed a two-year gap since its predecessor.
RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representation, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages. Which this package aims to change.
This release adds a single PROTECT (and UNPROTECT) around one variable, as the rchk container and service by Tomas now flagged this. Which is … somewhat peculiar, as this is old code also 'borrowed' from R itself, but there is no point arguing, so I just added it.
Details of the release follow based on the NEWS file.
Changes in RApiDatetime version 0.0.11 (2026-01-19)
- Add PROTECT (and UNPROTECT) to appease rchk
Courtesy of my CRANberries, there is also a diffstat report for this release.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
19 Jan 2026 11:21pm GMT
Isoken Ibizugbe: Mid-Point Project Progress
Halfway There
Hurray!
I have officially reached the 6-week mark, the halfway point of my Outreachy internship. The time has flown by incredibly fast, yet it feels short because there is still so much exciting work to do.
I remember starting this journey feeling overwhelmed, trying to gain momentum. Today, I feel much more confident. I began with the apps_startstop task during the contribution period, writing manual test steps and creating preparation Perl scripts for the desktop environments. Since then, I've transitioned into full automation and taken a liking to reading openQA upstream documentation when I have issues or for reference.
In all of this, I've committed over 30 hours a week to the project. This dedicated time has allowed me to take an in-depth look at the Debian ecosystem and automated quality assurance.
The Original Roadmap vs. Reality
Reviewing my 12-week goal, which included extending automated tests for "live image testing," "installer testing," and "documentation," I am happy to report that I am right on track. My work on desktop apps tests has directly improved the quality of both the Live Images and the netinst (network installer) ISOs.
Accomplishments
I have successfully extended the apps_startstop tests for two Desktop Environments (DEs): Cinnamon and LXQt. These tests ensure that common and DE-specific apps launch and close correctly across different environments.
- Merged Milestone: My Cinnamon tests have been officially merged into the upstream repository! [MR !84]
- LXQt & Adaptability: I am in the final stages of the LXQt tests. Interestingly, I had to update these tests mid-way through because of a version update in the DE. This required me to update the needles (image references) to match the new UI, a great lesson in software maintenance.
Solving for "Synergy"
One of my favorite challenges was suggested by my mentor, Roland: synergizing the tests to reduce redundancy. I observed that some applications (like Firefox and LibreOffice) behave identically across different desktops. Instead of duplicating Perl scripts/code for every single DE, I used symbolic links. This allows the use of the same Perl script and possibly the same needles, making the test suite lighter and much easier to maintain.

The Contributor Guide
During the contribution phase, I noticed how rigid the documentation and coding style requirements are. While this ensures high standards and uniformity, it can be intimidating for newcomers and time-consuming for reviewers.
To help, I created a contributor guide [MR !97]. This guide addresses the project's writing style. My goal is to reduce the back-and-forth during reviews, making the process more efficient for everyone and helping new contributors.
Looking Forward
For the second half of the internship, I plan to:
- Assist others: Help new contributors extend apps start-stop tests to even more desktop environments.
- Explore new coverage: Move beyond start-stop tests into deeper functional testing.
This journey has been an amazing experience of learning and connecting with the wider open-source community, especially Debian Women and the Linux QA team.
I am deeply grateful to my mentors, Tassia Camoes Araujo, Roland Clobus, and Philip Hands, for their constant guidance and for believing in my ability to take on this project.
Here's to the next 6 weeks 
19 Jan 2026 9:15pm GMT
16 Jan 2026
Planet Lisp
Scott L. Burson: FSet v2.2.0: JSON parsing/printing using Jzon
FSet v2.2.0, which is the version included in the recent Quicklisp release, has a new Quicklisp-loadable system, FSet/Jzon. It extends the Jzon JSON parser/printer to construct FSet collections when reading, and to be able to print them.
On parsing, JSON arrays produce FSet seqs; JSON objects produce FSet replay maps by default, but the parser can also be configured to produce ordinary maps or FSet tuples. For printing, any of these can be handled, as well as the standard Jzon types. The tuple representation provides a way to control the printing of `nil`, depending on the type of the corresponding key.
For details, see the GitLab MR.
NOTE: unfortunately, the v2.1.0 release had some bugs in the new seq code, and I didn't notice them until after v2.2.0 was in Quicklisp. If you're using seqs, I strongly recommend you pick up v2.2.2 or newer from GitLab or GitHub.
16 Jan 2026 8:05am GMT
Paolo Amoroso: An Interlisp file viewer in Common Lisp
I wrote ILsee, an Interlisp source file viewer. It is the first of the ILtools collection of tools for viewing and accessing Interlisp data.
I developed ILsee in Common Lisp on Linux with SBCL and the McCLIM implementation of the CLIM GUI toolkit. SLY for Emacs completed my Lisp tooling and, as for infrastructure, ILtools is the first new project I host at Codeberg.
This is ILsee showing the code of an Interlisp file:
Motivation
The concepts and features of CLIM, such as stream-oriented I/O and presentation types, blend well with Lisp and feel natural to me. McCLIM has come a long way since I last used it a couple of decades ago and I have been meaning to play with it again for some time.
I wanted to do a McCLIM project related to Medley Interlisp, as well as try out SLY and Codeberg. A suite of tools for visualising and processing Interlisp data seemed the perfect fit.
The Interlisp file viewer ILsee is the first such tool.
Interlisp source files
Why an Interlisp file viewer instead of less or an editor?
In the managed residential environment of Medley Interlisp you don't edit text files of Lisp code. You edit the code in the running image and the system keeps track of and saves the code to "symbolic files", i.e. databases that contain code and metadata.
Medley maintains symbolic files automatically and you aren't supposed to edit them. These databases have a textual format with control codes that change the text style.
When displaying the code of a symbolic file with, say, the SEdit structure editor, Medley interprets the control codes to perform syntax highlighting of the Lisp code. For example, the names of functions in definitions are in large bold text, some function names and symbols are in bold, and the system also performs a few character substitutions like rendering the underscore _ as the left arrow ← and the caret ^ as the up arrow ↑.
This is what the same Interlisp code of the above screenshot looks like in the TEdit WYSIWYG editor on Medley:
Medley comes with the shell script lsee, an Interlisp file viewer for Unix systems. The script interprets the control codes to appropriately render text styles as colors in a terminal. lsee shows the above code like this:
The file viewer
ILsee is like lsee but displays files in a GUI instead of a terminal.
The GUI comprises a main pane that displays the current Interlisp file, a label with the file name, a command line processor that executes commands (also available as items of the menu bar), and the standard CLIM pointer documentation pane.
There are two commands, See File to display an Interlisp file and Quit to terminate the program.
Since ILsee is a CLIM application it supports the usual facilities of the toolkit such as input completion and presentation types. This means that, in the command processor pane, the presentations of commands and file names become mouse sensitive in input contexts in which a command can be executed or a file name is requested as an argument.
The ILtools repository provides basic instructions for installing and using the application.
Application design and GUI
I initially used McCLIM a couple of decades ago but mostly left it after that and, when I picked it back up for ILtools, I was a bit rusty.
The McCLIM documentation, the CLIM specification, and the research literature are more than enough to get started and put together simple applications. The code of the many McCLIM example programs helps me fill in the details and understand features I'm not familiar with. Still, I would have appreciated more examples in the CLIM specification; their near absence makes the many concepts and features harder to grasp.
The design of ILsee mirrors the typical structure of CLIM programs such as the definitions of application frames and commands. The slots of the application frame hold application specific data: the name of the currently displayed file and a list of text lines read from the file.
The function display-file does most of the work and displays the code of a file in the application pane.
It processes the text lines one by one, character by character, dispatching on the control codes to activate the relevant text attributes or perform character substitutions. display-file uses incremental redisplay to reduce flicker when repainting the pane, for example after it is scrolled or obscured.
The code has some minor and easy to isolate SBCL dependencies.
Next steps
I'm pleased at how ILsee turned out. The program serves as a useful tool and writing it was a good learning experience. I'm also pleased at CLIM and its nearly complete implementation McCLIM. It takes little CLIM code to provide a lot of advanced functionality.
But I have some more work to do and ideas for ILsee and ILtools. Aside from small fixes, a few additional features can make the program more practical and flexible.
The pane layout may need tweaking to better adapt to different window sizes and shapes. Typing file names becomes tedious quickly, so I may add a simple browser pane with a list of clickable files and directories to display the code or navigate the file system.
And, of course, I will write more tools for the ILtools collection.
#ILtools #CommonLisp #Interlisp #Lisp
16 Jan 2026 7:19am GMT
12 Jan 2026
FOSDEM 2026
Birds of a Feather/Unconference rooms
As in previous years, some small rooms will be available for Unconference style "Birds of a Feather sessions". The concept is simple: Any project or community can reserve a timeslot (1 hour) during which they have the room just to themselves. These rooms are intended for ad-hoc discussions, meet-ups or brainstorming sessions. They are not a replacement for a developer room and they are certainly not intended for talks. To apply for a BOF session, enter your proposal at https://fosdem.org/submit. Select the BOF/Unconference track and mention in the Submission Notes your preferred timeslots and any times you are unavailable. Also…
12 Jan 2026 11:00pm GMT
10 Jan 2026
FOSDEM 2026
Travel and transportation advisories
Attendees should be aware of potential transportation disruptions in the days leading up to FOSDEM. Rail travel Railway unions have announced a strike notice from Sunday January 25th, 22:00 until Friday January 30th, 22:00. This may affect travel to Brussels for FOSDEM and related fringe events. While there will be a guaranteed minimum service in place, train frequency may be significantly reduced. Also note that international connections might be affected as well. Road travel From Saturday January 31st (evening) until Sunday February 1st (noon), the E40 highway between Leuven and Brussels will be fully closed. Traffic will be diverted via…
10 Jan 2026 11:00pm GMT
09 Jan 2026
FOSDEM 2026
FOSDEM Junior Registration
We are pleased to announce the schedule for FOSDEM Junior. Registration for the individual workshops is required. Links to the registration page can be found on the page of each activity. The full schedule can be viewed on the junior track schedule page.
09 Jan 2026 11:00pm GMT



