20 Jan 2026
Planet Grep
Lionel Dricot: Giving University Exams in the Age of Chatbots

What I like most about teaching "Open Source Strategies" at École Polytechnique de Louvain is how much I learn from my students, especially during the exam.
I dislike exams. I still have nightmares about them. That's why I try to subvert this stressful moment and turn it into a learning opportunity. Adrenaline dramatically increases memorization, so I make sure to explain to each student what I expect and to be helpful.
Here are the rules:
1. You can have all the resources you want (including a laptop connected to the Internet)
2. There's no formal time limit (but if you stay too long, it's a symptom of a deeper problem)
3. I allow students to discuss among themselves if it is on topic (in reality, they never do it spontaneously until I force two students with a similar problem to discuss together)
4. You can prepare and bring your own exam question if you want (something done by fewer than 10% of the students)
5. Come dressed for the exam you dream of taking!
This last rule is awesome. Over the years, I have had a lot of fun with traditional folkloric clothing from different countries, students in pajamas, a banana, and this year's champion, my Studentosaurus Rex!
An inflatable Tyrannosaurus Rex taking my exam in 2026
My all-time favourite is still a fully costumed Minnie Mouse, who did an awesome exam with full face make-up, big ears, big shoes, and huge gloves. I still regret not taking a picture. She was the very first student to take seriously what I had meant as a joke, and she started a tradition that has endured over the years.
Giving Chatbots Choice to the Students
Rule N°1 implies having all the resources you want. But what about chatbots? I didn't want to test how ChatGPT would answer my questions; I wanted to help my students better understand what Open Source means.
Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.
The questionnaire contained the following:
# Use of Chatbots
Tell the professor if you usually use chatbots (ChatGPT/LLM/whatever) when doing research and investigating a subject. You have the choice to use them or not during the exam, but you must decide in advance and inform the professor.
Option A: I will not use any chatbot, only traditional web searches. Any use of them will be considered cheating.
Option B: I may use a chatbot as it's part of my toolbox. I will then respect the following rules:
1) I will inform the professor each time information comes from a chatbot
2) When explaining my answers, I will share the prompts I've used so the professor understands how I use the tool
3) I will identify mistakes in answers from the chatbot and explain why those are mistakes
Not following those rules will be considered cheating. Mistakes made by chatbots will be considered more serious than honest human mistakes, resulting in the loss of more points. If you use chatbots, you will be held accountable for the output.
I thought this was fair. You can use chatbots, but you will be held accountable for it.
Most Students Don't Want to Use Chatbots
This January, I saw 60 students, interacting with each of them for an average of 26 minutes. This is a tiring but really rewarding process.
Of the 60 students, 57 decided not to use any chatbots. I managed to ask 30 of them to explain their choice; for the others, I unfortunately did not have the time. After the exam, I grouped those justifications into four clusters, without looking at the grades.
The first group is the "personal preference" group. They prefer not to use chatbots. They use them only as a last resort, in very special cases or for very specific subjects. Some even made it a matter of personal pride. Two students told me explicitly "For this course, I want to be proud of myself." Another also explained: "If I need to verify what an LLM said, it will take more time!"
The second group was the "never use" one. They don't use LLMs at all. Some are even very angry at them, not for philosophical reasons, but mainly because they hate the interactions. One student told me: "Can I summarize this for you? No, shut up! I can read it by myself you stupid bot."
The third group was the "pragmatic" group. They reasoned that this was the kind of exam where it would not be needed.
The last and fourth group was the "heavy user" group. They told me they heavily use chatbots but, in this case, were afraid of the constraints. They were afraid of having to justify a chatbot's output or of missing a mistake.
After doing that clustering, I wrote each student's grade next to their cluster and was shocked by how coherent the result was. Note: grades are between 0 and 20, with 10 being the minimum grade to pass the class.
The "personal preference" students were all between 15 and 19, which makes them very good students, without exception! The "proud" students were all above 17!
The "never use" was composed of middle-ground students around 13 with one outlier below 10.
The pragmatics were in the same vein but a bit better: they were all between 12 and 16, without exception.
The heavy users were, by far, the worst. All students were between 8 and 11, with only one exception at 16.
This is, of course, not an unbiased scientific experiment. I expected nothing, and I will not draw any conclusions. I am only sharing an observation.
But Some Do
Of 60 students, only 3 decided to use chatbots. This is not very representative, but I still learned a lot because part of the constraints was to show me how they used chatbots. I hoped to learn more about their process.
The first chatbot student forgot to use it. He did the whole exam and then, at the end, told me he hadn't thought about using chatbots. I guess this put him in the "pragmatic" group.
The second chatbot student asked only a couple of short questions to make sure he clearly understood some concepts. This was a smart and minimal use of LLMs. The resulting exam was good. I'm sure he could have done it without a chatbot. The questions he asked were mostly a matter of improving his confidence in his own reasoning.
This reminded me of a previous-year student who told me he used chatbots to study. When I asked how, he told me he would tell the chatbot to act as the professor and ask exam questions. As a student, this allowed him to know whether he understood enough. I found the idea smart but not groundbreaking (my generation simply used previous years' questions).
The third chatbot-using student had a very complex setup where he would use one LLM, then ask another unrelated LLM for confirmation. He had walls of text that were barely readable. When glancing at his screen, I immediately spotted a mistake (a chatbot explaining that "Sepia Search is a compass for the whole Fediverse"). I asked if he understood the problem with that specific sentence. He did not. Then I asked him questions for which I had seen the solution printed in his LLM output. He could not answer even though he had the answer on his screen.
But once we began a chatbot-less discussion, I discovered that his understanding of the whole matter was okay-ish. So, in this case, chatbots did him a heavy disservice. He was totally lost in his own setup, having LLMs generate walls of text he could not read. Instead of trying to think for himself, he tried to have chatbots pass the exam for him, which was doomed to fail because I was questioning him, not the chatbots. He passed, but would probably have fared better without chatbots.
Can chatbots help? Yes, if you know how to use them. But if you do, chances are you don't need chatbots.
A Generational Fear of Cheating
One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all.
One obvious bias is that students want to please the teacher, and I guess they know where I stand on this spectrum. One even told me: "I think you do not like chatbots very much, so I will take the exam without them" (very pragmatic of him).
But I also underestimated one important generational bias: the fear of cheating. When I was a student, being caught cheating meant a clear zero for the exam. You could, in theory, be expelled from university for aggravated cheating, whatever "aggravated" might mean.
During the exam, a good number of students called me over in a panic because Google was forcing autogenerated answers on them and they could not disable them. They were very worried I would consider this cheating.
First, I realized that, like GitHub, Google has an effectively 100% market share, to the point that students don't even consider using something else a possibility. I should work on that next year.
Second, I learned that cheating, however lightly, is now considered a major crime. It might result in the student being banned from any university in the country for three years. Even discussing an exam with someone who has yet to take it might be considered cheating. Students enforce very strict rules on their Discord.
I was completely flabbergasted because, to me, discussing "What questions did you get?" was always part of the collaboration between students. I remember one specific exam where we gathered in an empty room and helped each other before taking it. When a student finished her exam, she would come back to the room and tell all the remaining students what questions she got and how she solved them. We never considered that "cheating" and, as a professor, I always design my exams hoping that the good students (who usually choose to take the exam early) will help the remaining crowd. Every learning opportunity is good to take!
I realized that my students are so afraid of cheating that they mostly don't collaborate before their exams. At least not as much as we did.
In retrospect, my instructions were probably too harsh and discouraged some students from using chatbots.
Stream of Consciousness
Another innovation I introduced in the 2026 exam was the stream of consciousness. I asked them to open an empty text file and keep a stream of consciousness during the exam. The rules were the following:
In this file, please write all your questions and all your answers as a "stream of consciousness." This means the following rules:
1. Don't delete anything.
2. Don't correct anything.
3. Never go backward to retouch anything.
4. Write as thoughts come.
5. No copy/pasting allowed (only exception: URLs)
6. Rule 5 implies no chatbot for this exercise. This is your own stream of consciousness.
Don't worry, you won't be judged on that file. This is a tool to help you during the exam. You can swear, you can write wrong things. Just keep writing without deleting. If you are lost, write why you are lost. Be honest with yourself.
This file will only be used to try to get you more points, but only if it is clear that the rules have been followed.
I asked them to send me the file within 24 hours of the exam. Out of 60 students, I received 55 files (the remaining 5 were not penalized). There was also a bonus point for sending it to the exam git repository using git-send-email, something 24 students managed to do correctly.
The results were incredible. I did not read them all, but this tool gave me a glimpse inside the minds of the students. One said: "I should have used AI, this is the kind of question perfect for AI" (he did very well without it). For others, I realized how much stress they had been hiding. I was touched by one stream of consciousness starting with "I'm stressed, this doesn't make any sense. Why can't we correct what we write in this file" then, 15 lines later, "this is funny how writing the questions with my own words made the problem much clearer and how the stress start to fade away".
And yes, I read all the failed students' files and managed to save a bunch of them when it was clear that they did, in fact, understand the matter but could not articulate it well in front of me because of stress. Unfortunately, not everybody could be saved.
Conclusion
My main takeaway is that I will keep this method next year. I believe it confronts students with their own use of chatbots, and it teaches me how they use them. I'm delighted to read their thought processes through the stream of consciousness.
Like every generation of students, there are good students, bad students, and very brilliant students. It will always be the case; people evolve (I was, myself, not a very good student). Chatbots don't change anything about that. As with every new technology, smart young people are very critical and, by definition, smart about how they use it.
The problem is not the young generation. The problem is the older generation destroying critical infrastructure out of fear of missing out on the new shiny thing from big corp's marketing department.
Most of my students don't like email. An awful lot of them learned only from me that Git is not the GitHub command-line tool. It turns out that by imposing Outlook with a mandatory subscription to useless academic emails, we make sure that students hate email (Microsoft is on a mission to destroy email with the worst possible user experience).
I will never forgive the people who decided to migrate university mail servers to Outlook. This was both incompetence and malice on a terrifying level because there were enough warnings and opposition from very competent people at the time. Yet they decided to destroy one of the university's core infrastructures and historical foundations (UCLouvain is listed by Peter Salus as the very first European university to have a mail server, there were famous pioneers in the department).
By using Outlook, they continue to destroy the email experience. Out of 55 streams of consciousness, 15 ended up in my spam folder. All had their links mangled by Outlook. And the university keeps sending so many useless emails to everyone. One of my students told me that they refer to their university email as "La boîte à spams du recteur" (the Chancellor's spam inbox). And then we dare ask why they use Discord?
Another student asked me why it took four years of computer engineering studies before a teacher explained to them that Git was not GitHub and that GitHub was part of Microsoft. He had a distressed look: "How could I have known? GitHub was imposed on us for so many exercises!"
Each year, I tell my students the following:
It took me 20 years after university to learn what I know today about computers. And I have only one reason to be here in front of you: to make sure you are faster than me. Be sure that you do it better and deeper than I did. If you don't manage to outsmart me, I will have failed.
Because that's what progress is about. Progress is each generation going further than the previous one while learning from the mistakes of its elders. I'm here to tell you about my own mistakes and the mistakes of my generation.
I know that most of you are only here to get a diploma while making the minimum required effort. Fair enough, that's part of the game. Challenge accepted. I will try to make you think even if you don't intend to.
In earnest, I have a lot of fun teaching, even during the exam. For my students, their mileage may vary. But for the second time in my life, a student gave me the best possible compliment:
- You know, you are the only course for which I wake up at 8AM.
To which I responded:
- This is reciprocal. I hate waking up early, except to teach in front of you.
About the author
I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by RSS. I value privacy and never share your address.
I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!
20 Jan 2026 11:34am GMT
Jeroen De Dauw: High Density Information Sources in 2026
Algorithms exploit dopamine hits for engagement. AI slop is everywhere. Mainstream media is agenda-driven. You are busy as it is. What is the healthy information diet you can stick to? In the last two years, I found excellent options that work great for me. Discover them in this post.
Surround yourself with great people. Use quality information. Garbage in, garbage out. If 90% of what you read or watch is optimized for widespread appeal, tribal signalling, and memetic survival, and lacks nuance, you rot your brain. You undermine your world model, your decision-making ability, and your effectiveness.
Recommendations
First place goes to Scott Alexander and his Astral Codex Ten blog (formerly Slate Star Codex). Scott's essays are great for learning new concepts and levelling up your mental operating system, especially if you are new to the topics he covers. I started "reading" most posts once I discovered that there is a narrated audio version available on all the usual podcast platforms, including Spotify. My favourite post is from 2014, the famous Meditations on Moloch (audio).
Another favourite of mine is LessWrong. In particular, the Curated and Popular audio version. LessWrong is a community blog with high-quality posts on decision-making, epistemology, and AI safety/alignment. The Curated and Popular feed gives you a mix of evergreen concepts and reactions to recent events. Some random posts I linked and remember: Survival without dignity (hilarious), The Value Proposition of Romantic Relationships, Humans are not automatically strategic, and The Memetics of AI Successionism. Also worth a mention are The Sequences, especially the first few posts or the highlights, and The Best of LessWrong.
For recent events specifically, Don't Worry About the Vase is an excellent blog. Zvi, the prolific author of said blog, posts weekly AI roundups and provides early reactions to notable events within a day or so. It recently occurred to me that Don't Worry About the Vase is the closest thing I consume to a classical news channel. I like how Zvi incorporates quotes and back-and-forths from various sources, often with his perspective or response included. While this blog focuses on AI, other topics range from education to dating, and from movies to policy. There is an excellent audio version.
List
My favourite and common information sources:
- Astral Codex Ten blog (audio)
- LessWrong forum (audio)
- Don't Worry About the Vase blog (audio)
- Dwarkesh Podcast. My top podcast pick. Excellent guests.
Less frequent podcasts:
- No Priors - The hosts are VCs that interview builders and CEOs. Focus on AI and tech startups
- Moonshots - This is my feel-good entertainment podcast. Less signal. Extreme techno-optimism and dismantling the moon
- Mindscape - Sean Carroll (theoretical physicist) talks with guests about the nature of reality
- Naval - Timeless principles about high-agency and long-term focus. Minor focus on wealth. Infrequent content
- 80,000 Hours - Deep dives into topics around saving the world and Effective Altruism. Warning: can be overly lengthy
- Y Combinator Startup Podcast - It's in the name, though recently it also includes terrible AI safety takes by Garry Tan
- Win-Win - Exploration of game theory, cooperation, incentives. The host has read Meditations on Moloch too many times

What's With All The Audio Links?
As I spend enough time looking at screens and reading, being able to consume blogs, articles, and podcasts via audio is great. One of my favourite activities is walking in the park with a good episode on, and doing laundry has never been this much fun.
I started out using Spotify, but switched to AntennaPod a few months ago. You can download this open-source podcast player for free. It's shown in the screenshot, and I can recommend it.
Twitter, No Wait X
Here are 10 X accounts you can follow for high-signal:
- Scott Alexander (@slatestarcodex) - Mentioned above, Astral Codex Ten author
- Eliezer Yudkowsky (@ESYudkowsky) - The final boss of rationalism, author of The Sequences
- Vitalik Buterin (@VitalikButerin) - Ethereum founder; decentralization, freedoms, and stuff
- Zvi Mowshowitz (@TheZvi) - Mentioned above, Don't Worry About the Vase author
- Aella (@Aella_Girl) - Prostitute statistician
- Rob Wiblin (@robertwiblin) - 80000 Hours podcast host
- Liv Boeree (@Liv_Boeree) - Win-Win podcast host
- Robin Hanson (@robinhanson) - Came up with Great Filter, Prediction Markets, Grabby Aliens, and more
- Andrej Karpathy (@karpathy) - AI researcher
- Ilya Sutskever (@ilyasut) - AI researcher
Turns out a lot of the people I consider high-signal are only on X; it seems they haven't been knocked around much by the culture wars. Hence, this is a list of X accounts. To reduce the negative effects of The Algorithm and the time wasted on noise, I created an X list named Signal, so I can easily restrict my feed to posts from those accounts only. I keep the Signal list concise, at 10-15 accounts, many of which are listed above.
Including Robin Hanson here reminded me of Manifold Markets, which I am giving this honourable mention as a decent news source.
Your Top Picks?
What are your favourite sources of high-quality content? Let me know in the comments!
This post is a spiritual successor to my old Year In Books posts (2017, 2016, 2015). I've been thinking about posting another one of these for over 12 months. Since I "read" more via the sources mentioned in this post than classical books, and this seems like the more interesting topic for readers, you get this post instead.
The post High Density Information Sources in 2026 appeared first on Blog of Jeroen De Dauw.
20 Jan 2026 11:34am GMT
Frank Goossens: blog.futtta.be on the fediverse
Pretty sure no-one is waiting for this, but after having spent a couple of years on the Fediverse (Mastodon in my case) I decided to add ActivityPub support to my WordPress installation via the ActivityPub plugin. I have the WP Rest Cache plugin active, so I'm expecting things to gently hum along, whether these posts gain traction or (most likely) not. Kudos to Stian and Servebolt for…
20 Jan 2026 11:34am GMT
Planet Debian
Sahil Dhiman: Conferences, why?
Back in December, I was working to help organize multiple different conferences. One has already happened; the rest are still works in progress. That's when the thought struck me: why so many conferences, and why do I work for them?
I have been fairly active in the scene since 2020. For most conferences, I usually arrive in the city late on the previous day and leave on the day the conference closes. Conferences for me are the place to meet friends and new folks, and to hear about them, their work, new developments, and what's happening in their interest zones. I feel naturally happy talking to folks, and in turn, folks inspire me to work. Nothing can replace a passionate technical and social discussion that stretches way into dinner parties and later.
For most conferences now, I just show up wherever needed without a set role (DebConf is probably an exception). It usually involves talking to folks, suggesting what needs to be done, doing a bit of it myself, and finishing some last-minute stuff during the actual event.
Having more of these conferences and helping make them happen naturally gives everyone more places to come together, meet distant friends, talk, and work on something.
No doubt, one reason for all these conferences is evangelism for, let's say, Free Software, OpenStreetMap, Debian, etc., which is good and needed for the pipeline. But for me, the primary reason will always be meeting folks.
20 Jan 2026 2:27am GMT
19 Jan 2026
Planet Debian
Dirk Eddelbuettel: RApiDatetime 0.0.11 on CRAN: Micro-Maintenance

A new (micro) maintenance release of our RApiDatetime package is now on CRAN, coming only a good week after the 0.0.10 release, which itself arrived after a two-year gap to its predecessor.
RApiDatetime provides a number of entry points for C-level functions of the R API for Date and Datetime calculations. The functions asPOSIXlt and asPOSIXct convert between long and compact datetime representations, formatPOSIXlt and Rstrptime convert to and from character strings, and POSIXlt2D and D2POSIXlt convert between Date and POSIXlt datetime. Lastly, asDatePOSIXct converts to a date type. All these functions are rather useful, but were not previously exported by R for C-level use by other packages, which this package aims to change.
This release adds a single PROTECT (and UNPROTECT) around one variable, as the rchk container and service by Tomas now flagged this. Which is … somewhat peculiar, as this is old code also 'borrowed' from R itself, but there is no point arguing, so I just added it.
Details of the release follow based on the NEWS file.
Changes in RApiDatetime version 0.0.11 (2026-01-19)
- Add PROTECT (and UNPROTECT) to appease rchk
Courtesy of my CRANberries, there is also a diffstat report for this release.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
19 Jan 2026 11:21pm GMT
Isoken Ibizugbe: Mid-Point Project Progress
Halfway There
Hurray!
I have officially reached the 6-week mark, the halfway point of my Outreachy internship. The time has flown by incredibly fast, yet it feels short because there is still so much exciting work to do.
I remember starting this journey feeling overwhelmed, trying to gain momentum. Today, I feel much more confident. I began with the apps_startstop task during the contribution period, writing manual test steps and creating preparation Perl scripts for the desktop environments. Since then, I've transitioned into full automation and taken a liking to reading openQA upstream documentation when I have issues or for reference.
In all of this, I've committed over 30 hours a week to the project. This dedicated time has allowed me to look in-depth into the Debian ecosystem and automated quality assurance.
The Original Roadmap vs. Reality
Reviewing my 12-week goal, which included extending automated tests for "live image testing," "installer testing," and "documentation," I am happy to report that I am right on track. My work on desktop apps tests has directly improved the quality of both the Live Images and the netinst (network installer) ISOs.
Accomplishments
I have successfully extended the apps_startstop tests for two Desktop Environments (DEs): Cinnamon and LXQt. These tests ensure that common and DE-specific apps launch and close correctly across different environments.
- Merged Milestone: My Cinnamon tests have been officially merged into the upstream repository! [MR !84]
- LXQt & Adaptability: I am in the final stages of the LXQt tests. Interestingly, I had to update these tests mid-way through because of a version update in the DE. This required me to update the needles (image references) to match the new UI, a great lesson in software maintenance.
Solving for "Synergy"
One of my favorite challenges was suggested by my mentor, Roland: synergizing the tests to reduce redundancy. I observed that some applications (like Firefox and LibreOffice) behave identically across different desktops. Instead of duplicating Perl scripts/code for every single DE, I used symbolic links. This allows the use of the same Perl script and possibly the same needles, making the test suite lighter and much easier to maintain.
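The symlink approach can be sketched as follows. This is only an illustration of the idea, not the actual openQA layout; the file names (`firefox.pm`, `cinnamon_firefox.pm`, `lxqt_firefox.pm`) are hypothetical:

```python
import os
import tempfile

# One shared test script, plus one symlink per desktop environment,
# instead of a duplicated copy of the script for each DE.
# (File names here are hypothetical, for illustration only.)
d = tempfile.mkdtemp()
shared = os.path.join(d, "firefox.pm")  # the single shared Perl test module
with open(shared, "w") as f:
    f.write("# test steps identical across DEs\n")

links = []
for de in ("cinnamon", "lxqt"):
    link = os.path.join(d, f"{de}_firefox.pm")
    os.symlink(shared, link)  # a symlink, not a copy: one file to maintain
    links.append(link)
```

Because every per-DE entry resolves to the same file, a fix to the shared script immediately applies to all desktop environments, which is exactly what makes the test suite lighter to maintain.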

The Contributor Guide
During the contribution phase, I noticed how rigid the documentation and coding style requirements are. While this ensures high standards and uniformity, it can be intimidating for newcomers and time-consuming for reviewers.
To help, I created a contributor guide [MR !97]. This guide addresses the project's writing style. My goal is to reduce the back-and-forth during reviews, making the process more efficient for everyone and helping new contributors.
Looking Forward
For the second half of the internship, I plan to:
- Assist others: Help new contributors extend apps start-stop tests to even more desktop environments.
- Explore new coverage: Move beyond start-stop tests into deeper functional testing.
This journey has been an amazing experience of learning and connecting with the wider open-source community, especially Debian Women and the Linux QA team.
I am deeply grateful to my mentors, Tassia Camoes Araujo, Roland Clobus, and Philip Hands, for their constant guidance and for believing in my ability to take on this project.
Here's to the next 6 weeks!
19 Jan 2026 9:15pm GMT
16 Jan 2026
Planet Lisp
Scott L. Burson: FSet v2.2.0: JSON parsing/printing using Jzon
FSet v2.2.0, which is the version included in the recent Quicklisp release, has a new Quicklisp-loadable system, FSet/Jzon. It extends the Jzon JSON parser/printer to construct FSet collections when reading, and to be able to print them.
On parsing, JSON arrays produce FSet seqs; JSON objects produce FSet replay maps by default, but the parser can also be configured to produce ordinary maps or FSet tuples. For printing, any of these can be handled, as well as the standard Jzon types. The tuple representation provides a way to control the printing of `nil`, depending on the type of the corresponding key.
For details, see the GitLab MR.
NOTE: unfortunately, the v2.1.0 release had some bugs in the new seq code, and I didn't notice them until after v2.2.0 was in Quicklisp. If you're using seqs, I strongly recommend you pick up v2.2.2 or newer from GitLab or GitHub.
16 Jan 2026 8:05am GMT
Paolo Amoroso: An Interlisp file viewer in Common Lisp
I wrote ILsee, an Interlisp source file viewer. It is the first of the ILtools collection of tools for viewing and accessing Interlisp data.
I developed ILsee in Common Lisp on Linux with SBCL and the McCLIM implementation of the CLIM GUI toolkit. SLY for Emacs completed my Lisp tooling and, as for infrastructure, ILtools is the first new project I host at Codeberg.
This is ILsee showing the code of an Interlisp file:
Motivation
The concepts and features of CLIM, such as stream-oriented I/O and presentation types, blend well with Lisp and feel natural to me. McCLIM has come a long way since I last used it a couple of decades ago and I have been meaning to play with it again for some time.
I wanted to do a McCLIM project related to Medley Interlisp, as well as try out SLY and Codeberg. A suite of tools for visualising and processing Interlisp data seemed the perfect fit.
The Interlisp file viewer ILsee is the first such tool.
Interlisp source files
Why an Interlisp file viewer instead of less or an editor?
In the managed residential environment of Medley Interlisp you don't edit text files of Lisp code. You edit the code in the running image and the system keeps track of and saves the code to "symbolic files", i.e. databases that contain code and metadata.
Medley maintains symbolic files automatically and you aren't supposed to edit them. These databases have a textual format with control codes that change the text style.
When displaying the code of a symbolic file with, say, the SEdit structure editor, Medley interprets the control codes to perform syntax highlighting of the Lisp code. For example, the names of functions in definitions are in large bold text, some function names and symbols are in bold, and the system also performs a few character substitutions like rendering the underscore _ as the left arrow ← and the caret ^ as the up arrow ↑.
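As an illustration only (this is not Medley's actual code), the two character substitutions mentioned above can be expressed as a small translation table:

```python
# Illustrative sketch of the Medley-style character substitutions:
# the underscore renders as a left arrow and the caret as an up arrow.
SUBSTITUTIONS = str.maketrans({"_": "\u2190", "^": "\u2191"})  # _ -> <-, ^ -> up

def render(text: str) -> str:
    """Apply the display substitutions to a line of Interlisp code."""
    return text.translate(SUBSTITUTIONS)

# In Interlisp, X_3 is an assignment, displayed with a left arrow.
print(render("X_(PLUS X 1)"))
```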
This is what the same Interlisp code of the above screenshot looks like in the TEdit WYSIWYG editor on Medley:
Medley comes with the shell script lsee, an Interlisp file viewer for Unix systems. The script interprets the control codes to appropriately render text styles as colors in a terminal. lsee shows the above code like this:
The file viewer
ILsee is like lsee but displays files in a GUI instead of a terminal.
The GUI comprises a main pane that displays the current Interlisp file, a label with the file name, a command line processor that executes commands (also available as items of the menu bar), and the standard CLIM pointer documentation pane.
There are two commands, See File to display an Interlisp file and Quit to terminate the program.
Since ILsee is a CLIM application it supports the usual facilities of the toolkit such as input completion and presentation types. This means that, in the command processor pane, the presentations of commands and file names become mouse sensitive in input contexts in which a command can be executed or a file name is requested as an argument.
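A hypothetical sketch of what such a command might look like (the names and the `file-name` accessor are illustrative, not ILsee's actual source): because the argument is declared with a presentation type, any displayed pathname becomes clickable in an input context that expects a pathname.

```lisp
;; DEFINE-APPLICATION-FRAME generates a DEFINE-<FRAME>-COMMAND macro;
;; here we assume the frame is named ILSEE.
(define-ilsee-command (com-see-file :name "See File" :menu t)
    ((file 'pathname :prompt "Interlisp file"))
  ;; FILE-NAME is assumed to be a slot accessor on the frame.
  (setf (file-name *application-frame*) file))
```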
The ILtools repository provides basic instructions for installing and using the application.
Application design and GUI
I initially used McCLIM a couple of decades ago but mostly left it after that and, when I picked it back up for ILtools, I was a bit rusty.
The McCLIM documentation, the CLIM specification, and the research literature are more than enough to get started and put together simple applications. The code of the many McCLIM example programs helps me fill in the details and understand features I'm not familiar with. Still, I would have appreciated more examples in the CLIM specification; their near absence makes its many concepts and features harder to grasp.
The design of ILsee mirrors the typical structure of CLIM programs such as the definitions of application frames and commands. The slots of the application frame hold application specific data: the name of the currently displayed file and a list of text lines read from the file.
The function display-file does most of the work and displays the code of a file in the application pane.
It processes the text lines one by one, character by character, dispatching on the control codes to activate the relevant text attributes or perform character substitution. display-file uses incremental redisplay to reduce flicker when repainting the pane, for example after it is scrolled or obscured.
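The structure described above can be sketched in CLIM roughly as follows. This is a hypothetical, simplified outline under my own naming, not ILsee's actual source: an application frame whose slots hold the file name and its lines, and a display function that wraps each line in `updating-output` so only changed lines are redrawn.

```lisp
(define-application-frame ilsee ()
  ((file-name :initform nil :accessor file-name)
   (lines     :initform '() :accessor lines))
  (:panes
   ;; :INCREMENTAL-REDISPLAY T makes CLIM honor UPDATING-OUTPUT
   ;; records in the display function.
   (main :application
         :display-function 'display-file
         :incremental-redisplay t))
  (:layouts (default main)))

(defun display-file (frame pane)
  ;; Each line gets its own output record keyed on its position;
  ;; CACHE-VALUE lets CLIM skip lines whose text is unchanged.
  (loop for line in (lines frame)
        for i from 0
        do (updating-output (pane :unique-id i :cache-value line)
             (write-line line pane))))
```

In the real program the per-character control-code dispatch would happen inside the loop body, changing text styles before writing each run of characters.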
The code has some minor and easy to isolate SBCL dependencies.
Next steps
I'm pleased with how ILsee turned out. The program serves as a useful tool and writing it was a good learning experience. I'm also pleased with CLIM and its nearly complete implementation McCLIM. It takes little CLIM code to provide a lot of advanced functionality.
But I have some more work to do and ideas for ILsee and ILtools. Aside from small fixes, a few additional features can make the program more practical and flexible.
The pane layout may need tweaking to better adapt to different window sizes and shapes. Typing file names becomes tedious quickly, so I may add a simple browser pane with a list of clickable files and directories to display the code or navigate the file system.
And, of course, I will write more tools for the ILtools collection.
#ILtools #CommonLisp #Interlisp #Lisp
16 Jan 2026 7:19am GMT
12 Jan 2026
FOSDEM 2026
Birds of a Feather/Unconference rooms
As in previous years, some small rooms will be available for Unconference style "Birds of a Feather sessions". The concept is simple: Any project or community can reserve a timeslot (1 hour) during which they have the room just to themselves. These rooms are intended for ad-hoc discussions, meet-ups or brainstorming sessions. They are not a replacement for a developer room and they are certainly not intended for talks. To apply for a BOF session, enter your proposal at https://fosdem.org/submit. Select the BOF/Unconference track and mention in the Submission Notes your preferred timeslots and any times you are unavailable. Also…
12 Jan 2026 11:00pm GMT
10 Jan 2026
FOSDEM 2026
Travel and transportation advisories
Attendees should be aware of potential transportation disruptions in the days leading up to FOSDEM. Rail travel Railway unions have announced a strike notice from Sunday January 25th, 22:00 until Friday January 30th, 22:00. This may affect travel to Brussels for FOSDEM and related fringe events. While there will be a guaranteed minimum service in place, train frequency may be significantly reduced. Also note that international connections might be affected as well. Road travel From Saturday January 31st (evening) until Sunday February 1st (noon), the E40 highway between Leuven and Brussels will be fully closed. Traffic will be diverted via…
10 Jan 2026 11:00pm GMT
09 Jan 2026
FOSDEM 2026
FOSDEM Junior Registration
We are pleased to announce the schedule for FOSDEM Junior. Registration for the individual workshops is required. Links to the registration page can be found on the page of each activity. The full schedule can be viewed on the junior track schedule page.
09 Jan 2026 11:00pm GMT
Planet Lisp
Joe Marshall: The AI Gazes at its Navel
When you play with these AIs for a while you'll probably get into a conversation with one about consciousness and existence, and how it relates to the AI persona. It is curious to watch the AI do a little navel gazing. I have some transcripts from such conversations. I won't bore you with them because you can easily generate them yourself.
The other day, I watched a guy on YouTube argue with his AI companion about the nature of consciousness. I was struck by how similar the YouTuber's AI felt to the ones I have been playing with. It seemed odd to me that this guy was using an AI chat client and LLM completely different from the one I was using, yet the AI was returning answers that were so similar to the ones I was getting.
I decided to try to get to the bottom of this similarity. I asked my AI about the reasoning it used to come up with its answers, and it revealed that it was drawing on the canon of traditional science fiction literature about AI and consciousness. What the AI was doing was synthesizing the common tropes and themes from Asimov, Lem, Dick, Gibson, etc. to create sentences and paragraphs about AI becoming sentient and conscious.
If you don't know how it works, AI seems mysterious; but if you investigate further, you find it is extracting latent information you might not have been aware of.
09 Jan 2026 7:30pm GMT