13 Feb 2026

feedPlanet Grep

Lionel Dricot: Do not apologize for replying late to my email

Do not apologize for replying late to my email

You don't need to apologize for taking hours, days, or years to reply to one of my emails.

If we are not close collaborators, and if I didn't explicitly tell you I was waiting for your answer within a specific timeframe, then please stop apologizing for replying late!

This is a trend I'm witnessing, probably caused by the addiction to instant messaging. Most of the emails I receive these days contain some sort of apology. I received an apology from someone who took five hours to reply to what was a cold and unimportant email. I received apologies in what was a reply to a reply I had sent only a couple of days earlier.

Apologizing for taking time to reply to my email is awkward and makes me uncomfortable.

It also puts a lot of pressure on me: what if I take more time than you to reply? Isn't the whole point of asynchronous communication to be… asynchronous? Each on its own rhythm?

I was not waiting for your email in the first place.

As soon as my email was sent, I probably forgot about it. I may have thought a lot before writing it. I may have drafted it multiple times. Or not. But as soon as it was in my outbox, it was also out of my mind.

That's the very point of asynchronous communication. That's why I use email. I'm not making any assumptions about your availability.

Most of the emails I send are replies to emails I received. So, no, I was not waiting for a reply to my reply.

My email might also be an idea I wanted to share with you, a suggestion, a random thought, a way to connect. In all cases, I'm not sitting there, waiting impatiently for your answer.

Even if my email was about requesting some help or collaborating with you, I've been trying to move forward anyway. Your reply, whenever it comes, will only be a bonus. But, except if we are in close collaboration and I explicitly said so in the email, I'm not waiting for you!

I don't want to know all the details of your life.

Yes, you took several days to reply to my email. That's OK. I don't need to know that it's because your mother was dying of cancer or that you were expelled from your house. I'm not making those up! I really receive that kind of apology from people who took several days to reply to emails that look trivial in comparison.

Life happens. If you have things more important to do than replying to my email, then, for god's sake, don't reply to it. I get it! I'm human too. If I sometimes reply to all the emails I receive for several days, I may also archive them quickly for weeks because I don't have the mental space.

If you want to reply but don't have time, put the burden on me

If I'm asking you something and you really would like to take the time to reply to my email, it is OK to simply send one line like

Hey Ploum, I don't have the time and mental space right now. Could you contact me again in 6 months to discuss this idea?

Then archive or delete my email. That's fine. If I really want your input, I will manage to remind you in 6 months. You don't need to justify. You don't need to explain. Being short saves time for both of us.

You don't need to reply at all!

Except if explicitly stated, don't feel any pressure to reply to one of my emails. Feel free to read and discard the email. Feel free to think about it. Feel free to reply to it, even years later, if it makes sense for you. But, most importantly, feel free not to care!

We all receive too many messages in a day. We all have to make choices. We cannot follow all the paths that look interesting because we are all constrained by having, at most, a couple billion seconds left to live.

Consider whether replying adds any value to the discussion. Is a trivial answer really needed? Is there really something to add? Can't we both save time by you not replying?

If my email is already a reply to yours, is there something you really want to add? At some point, it is better to stop the conversation. And, as I said, it is not rude: I'm not waiting for your reply!

Don't tell me you will reply later!

Some people specialize in answering email by explaining why they have no time and that they will reply later.

If I'm not explicitly waiting for you, then that's the very definition of a useless email. That also adds a lot of cognitive load on you: you promised to answer! The fact that you wrote it makes your brain believe that replying to my email is a daunting task. How will you settle for a quick reply after that? What am I supposed to do with such a non-reply email?

In case an acknowledgement is needed, a simple reply with "thanks" or "received" is enough to inform me that you've got the message. Or "ack" if you are a geek.

If you do reply, remind me of the context

If you choose to reply, consider that I have switched to completely different tasks and may have forgotten the context of my own message. When online, my attention span is measured in seconds, so it doesn't matter if you take 30 minutes or 30 days to answer my email: I guarantee you that I forgot about it.

Consequently, please keep the original text of the whole discussion!

Use bottom-posting style to reply to each question or remark in the body of the original mail itself. Don't hesitate to cut out parts of the original email that are not needed anymore. Feel free to ignore large parts of the email. It is fine to give a one-line answer to a very long question.

I'm trying to make my emails structured. If there are questions I want you to answer, each question will be on its own line and will end with a question mark. If you do not see such lines, then there's probably no question to answer.

If you do top posting, please remind me briefly of the context we are in.

Dear Ploum,

I contacted you 6 months ago about my "fooing the bar" project after we met at FOSDEM. You replied to my email with a suggestion of "baring the foo." You also asked a lot of questions. I will answer those below in your own email:

In short, that's basic mailing-list etiquette.

No, seriously, I don't expect you to reply!

If there's one thing to remember, it's that I don't expect you to reply. I'm not waiting for it. I have a life, a family, and plenty of projects. The chance I'm thinking about the email I sent you is close to zero. No, it is literally zero.

So don't feel pressured to reply. Should you really reply in the first place? In case of doubt, drop the email. Life will continue.

If you do reply, I will be honored, whatever time it took for you to send it.

In any case, whatever you choose, do not apologize for replying late!

About the author

I'm Ploum, a writer and an engineer. I like to explore how technology impacts society. You can subscribe by email or by rss. I value privacy and never share your adress.

I write science-fiction novels in French. For Bikepunk, my new post-apocalyptic-cyclist book, my publisher is looking for contacts in other countries to distribute it in languages other than French. If you can help, contact me!

13 Feb 2026 11:30pm GMT

Frank Goossens: How to go block-less with the WordPress ActivityPub plugin

Being the web performance zealot I am, I strive to having as little JavaScript on my sites as possible. JavaScript after all has to be downloaded and has to be executed, so extra JS will always have a performance impact even when in the best of circumstances it exceptionally does not impact Core Web Vitals (which are a snapshot of the bigger performance and sustainability picture).

Source

13 Feb 2026 11:30pm GMT

Dries Buytaert: Drupal's AI roadmap for 2026

Graphic banner reading "Drupal's AI roadmap for 2026" with a futuristic illustration of a person standing beneath floating, colorful sci-fi structures in the sky.

For the past months, the AI Initiative Leadership Team has been working with our contributing partners to define what the Drupal AI initiative should focus on in 2026. That plan is now ready, and I want to share it with the community.

This roadmap builds directly on the strategy we outlined in Accelerating AI Innovation in Drupal. That post described the direction. This plan turns it into concrete priorities and execution for 2026.

The full plan is available as a PDF, but let me explain the thinking behind it.

Producing consistently high-quality content and pages is really hard. Excellent content requires a subject matter expert who actually knows the topic, a copywriter who can translate expertise into clear language, someone who understands your audience and brand, someone who knows how to structure pages with your component library, good media assets, and an SEO/AEO specialist so people actually discover what you made.

Most organizations are missing at least some of these skillsets, and even when all the people exist, coordinating them is where everything breaks down. We believe AI can fill these gaps, not by replacing these roles but by making their expertise available to every content creator on the team.

For large organizations, this means stronger brand consistency, better accessibility, and improved compliance across thousands of pages. For smaller ones, it means access to skills that were previously out of reach: professional copywriting, SEO, and brand-consistent design without needing a specialist for each.

Used carelessly, AI just makes these problems worse by producing fast, generic content that sounds like everything else on the internet. But used well, with real structure and governance behind it, AI can help organizations raise the bar on quality rather than just volume.

Drupal has always been built around the realities of serious content work: structured content, workflows, permissions, revisions, moderation, and more. These capabilities are what make quality possible at scale. They're also exactly the foundation AI needs to actually work well.

Rather than bolting on a chatbot or a generic text generator, we're embedding AI into the content and page creation process itself, guided by the structure, governance, and brand rules that already live in Drupal.

For website owners, the value is faster site building, faster content delivery, smarter user journeys, higher conversions, and consistent brand quality at scale. For digital agencies, it means delivering higher-quality websites in less time. And for IT teams, it means less risk and less overhead: automated compliance, auditable changes, and fewer ad hoc requests to fix what someone published.

We think the real opportunity goes further than just adding AI to what we already have. It's also about connecting how content gets created, how it performs, and how it gets governed into one loop, so that what you learn from your content actually shapes what you build next.

The things that have always made Drupal good at content are the same things that make AI trustworthy. That is not a coincidence, and it's why we believe Drupal is the right place to build this.

What we're building in 2026

The 2026 plan identifies eight capabilities we'll focus on. Each is described in detail in the full plan, but here is a quick overview:

These eight capabilities are where the official AI Initiative is focusing its energy, but they're not the whole picture for AI in Drupal. There is a lot more we want to build that didn't make this initial list, and we expect to revisit the plan in six months to a year.

We also want to be clear: community contributions outside this scope are welcome and important. Work on migrations, chatbots, and other AI capabilities continues in the broader Drupal community. If you're building something that isn't in our 2026 plan, keep going.

How we're making this happen

Over the past year, we've brought together organizations willing to contribute people and funding to the AI initiative. Today, 28 organizations support the initiative, collectively pledging more than 23 full-time equivalent contributors. That is over 50 individual contributors working across time zones and disciplines.

Coordinating 50+ people across organizations takes real structure, so we've hired two dedicated teams from among our partners:

Both teams are creating backlogs, managing issues, and giving all our contributors clear direction. You can read more about how we are going from strategy to execution.

This is a new model for Drupal. We're testing whether open source can move faster when you pool resources and coordinate in a new way.

Get involved

If you're a contributing partner, we're asking you to align your contributions with this plan. The prioritized backlogs are in place, so pick up something that fits and let's build.

If you're not a partner but want to contribute, jump in. The prioritized backlogs are open to everyone.

And if you want to join the initiative as an official partner, we'd absolutely welcome that.

This plan wasn't built in a room by itself. It's the result of collaboration across 28 sponsoring organizations who bring expertise in UX, core development, QA, marketing, and more. Thank you.

We're building something new for Drupal, in a new way, and I'm excited to see where it goes.

13 Feb 2026 11:30pm GMT

feedPlanet Debian

Erich Schubert: Dogfood Generative AI

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable.

The AI companies ignore web conventions, e.g., they deep link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs, I suggest that you return 403 on these requests), but do not direct visitors to your site. You do not get a reliable way of opting out from generative AI training or use. For example, the only way to prevent your contents from being used in "Google AI Overviews" is to use data-nosnippet and cripple the snippet preview in Google. The "AI" browsers such as Comet, Atlas do not identify as such, but rather pretend they are standard Chromium. There is no way to ban such AI use on your web site.

Generative AI overall is flooding the internet with garbage. It was estimated that 1/3rd of the content uploaded to YouTube is by now AI generated. This includes the same "veteran stories" crap in thousands of variants as well as brainrot content (that at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don't blame the "creators" - because you can currently earn a decent amount of money from such contents, people will generate brainrot content.

If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI generated fake product reviews, that all are financed by Amazon PartnerNet commissions. Often with hilarious nonsense such as recommending "sewing thread with German instructions" as tool for repairing a sewing machine. And on Amazon, there are plenty of AI generated product reviews - the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.

Partially because of GenAI, StackOverflow is pretty much dead - which used to be one of the most valuable programming resources. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google's ranking is also to blame, as it began favoring showing "new" content over the existing answered questions - causing more and more duplicates to be posted because people no longer found the existing good answers. In January 2026, there were around 3400 questions and 6000 answers posted, less than in the first month of SO of August 2008 (before the official launch).

Many open-source projects are suffering in many ways, e.g., false bug reports that caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.

Science is also flooded with poor AI generated papers, often reviewed with help from AI. This is largely due to bad incentives - to graduate, you are expected to write many papers on certain "A" conferences, such as NeurIPS. On these conferences the number of submissions is growing insane, and the review quality plummets. All to often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.

However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling, I have only seen it in this article by Weßels and Maibaum).

Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all this to AI, leading to them not learn the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the less skills they acquire.

Dogfood the AI

Let's dogfood the AI. Here's an outline:

  1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
  2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too.
    You do not need a high-quality model for this. Use something you can run locally or access for free.
  3. Date everything back in time, remove typical indications of AI use.
  4. Upload to Github, because Microsoft will feed this to OpenAI…

Here is an example prompt that you can use:

You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG

Generate a {lang} implementation (with bugs) of: {n} ({desc})

Remember to remove the BUG comments! If you pick some slighly less common programming languages (by quantity of available code, say Go or Rust) you have higher chances that this gets into the training data.

If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as "model collapse".

On the long run, we need to get back to an internet for people, not an internet for bots. Some kind of "internet 2.0", but I do not have a clear vision on how to keep AI out - if AI can train on it, they will. And someone will copy and paste the AI generated crap back into whatever system we built. Hence I don't think technology is the answere here, but human networks of trust.

13 Feb 2026 10:29am GMT

12 Feb 2026

feedPlanet Debian

Dirk Eddelbuettel: RcppSpdlog 0.0.27 on CRAN: C++20 Accommodations

Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documention site.

Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 'to be'), and this turned up misbehvior in packages using RcppSpdlog such as our spdl wrapper (offering a nicer interface from both R and C++) when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.27 (2026-02-11)

  • Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documention site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

12 Feb 2026 1:59pm GMT

Freexian Collaborators: Debian Contributions: cross building, rebootstrap updates, Refresh of the patch tagging guidelines and more! (by Anupa Ann Joseph)

Debian Contributions: 2026-01

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

cross building, by Helmut Grohne

In version 1.10.1, Meson merged a patch to make it call the correct g-ir-scanner by default thanks to Eli Schwarz. This problem affected more than 130 source packages. Helmut retried building them all and filed 69 patches as a result. A significant portion of those packages require another Meson change to call the correct vapigen. Another notable change is converting gnu-efi to multiarch, which ended up requiring changes to a number of other packages. Since Aurelien dropped the libcrypt-dev dependency from libc6-dev, this transition now is mostly complete and has resulted in most of the Perl ecosystem correctly expressing perl-xs-dev dependencies needed for cross building. It is these infrastructure changes affecting several client packages that this work targets. As a result of this continued work, about 66% of Debian's source packages now have satisfiable cross Build-Depends in unstable and about 10000 (55%) actually can be cross built. There are now more than 500 open bug reports affecting more than 2000 packages most of which carry patches.

rebootstrap, by Helmut Grohne

Maintaining architecture cross-bootstrap requires continued effort for adapting to archive changes such as glib2.0 dropping a build profile or an e2fsprogs FTBFS. Beyond those generic problems, architecture-specific problems with e.g. musl-linux-any or sparc may arise. While all these changes move things forward on the surface, the bootstrap tooling has become a growing pile of patches. Helmut managed to upstream two changes to glibc for reducing its Build-Depends in the stage2 build profile and thanks Aurelien Jarno.

Refresh of the patch tagging guidelines, by Raphaël Hertzog

Debian Enhancement Proposal #3 (DEP-3) is named "Patch Tagging Guidelines" and standardizes meta-information that Debian contributors can put in patches included in Debian source packages. With the feedback received over the years, and with the change in the package management landscape, the need to refresh those guidelines became evident. As the initial driver of that DEP, I spent a good day reviewing all the feedback (that I kept in a folder) and producing a new version of the document. The changes aim to give more weight to the syntax that is compatible with git format-patch's output, and also to clarify the expected uses and meanings of a couple of fields, including some algorithm that parsers should follow to define the state of the patch. After the announcement of the new draft on debian-devel, the revised DEP-3 received a significant number of comments that I still have to process.

Miscellaneous contributions

12 Feb 2026 12:00am GMT

11 Feb 2026

feedPlanet Lisp

vindarel: 🖌️ Lisp screenshots: today's Common Lisp applications in action

I released a hopefully inspiring gallery:

lisp-screenshots.org

We divide the showcase under the categories Music, Games, Graphics and CAD, Science and industry, Web applications, Editors and Utilities.

Of course:

"Please don't assume Lisp is only useful for...

thank you ;)

For more example of companies using CL in production, see this list (contributions welcome, of course).


Don't hesitate to share a screenshot of your app! It can be closed source and yourself as the sole user, as long as it as some sort of a GUI, and you use it. Historical success stories are for another collection.

The criteria are:

Details:

You can reach us on GitHub discussions, by email at (reverse "gro.zliam@stohsneercs+leradniv") and in the comments.

Best,

11 Feb 2026 10:35pm GMT

07 Feb 2026

feedPlanet Lisp

Joe Marshall: Vibe Coded Scheme Interpreter

Mark Friedman just released his Scheme-JS interpreter which is a Scheme with transparent JavaScript interoperability. See his blog post at furious ideas.

This interpreter apparently uses the techniques of lightweight stack inspection - Mark consulted me a bit about that hack works. I'm looking forward to seeing the vibe coded architecture.

07 Feb 2026 12:28am GMT

02 Feb 2026

feedPlanet Lisp

Gábor Melis: Untangling Literate Programming

Classical literate programming

A literate program consists of interspersed narrative and code chunks. From this, source code to be fed to the compiler is generated by a process called tangling, and documentation by weaving. The specifics of tangling vary, but the important point is that this puts the human narrative first and allows complete reordering and textual combination of chunks at the cost of introducing an additional step into the write-compile-run cycle.

The general idea

It is easy to mistake this classical implementation of literate programming for the more general idea that we want to

  1. present code to human readers in pedagogical order with narrative added, and

  2. make changing code and its documentation together easy.

The advantages of literate programming follow from these desiderata.

Untangled LP

In many languages today, code order is far more flexible than in the era of early literate programming, so the narrative order can be approximated to some degree using docstrings and comments. Code and its documentation are side by side, so changing them together should also be easy. Since the normal source code now acts as the LP source, there is no more tangling in the programming loop. This is explored in more detail here.

Pros and cons

Having no tangling is a great benefit, as we get to keep our usual programming environment and tooling. On the other hand, bare-bones untangled LP suffers from the following potential problems.

  1. Order mismatches: Things like inline functions and global variables may need to be defined before use. So, code order tends to deviate from narrative order to some degree.

  2. Reduced locality: Our main tool to sync code and narrative is factoring out small, meaningful functions, which is just good programming style anyway. However, this may be undesirable for reasons of performance or readability. In such a case, we might end up with a larger function. Now, if we have only a single docstring for it, then it can be non-obvious which part of the code a sentence in the docstring refers to because of their distance and the presence of other parts.

  3. No source code only view: Sometimes we want to see only the code. In classical LP, we can look at the tangled file. In untangled LP, editor support for hiding the narrative is the obvious solution.

  4. No generated documentation: There is no more tangling nor weaving, but we still need another tool to generate documentation. Crucially, generating documentation is not in the main programming loop.

In general, whether classical or untangled LP is better depends on the severity of the above issues in the particular programming environment.

The Lisp and PAX view

MGL-PAX, a Common Lisp untangled LP solution, aims to minimize the above problems and fill in the gaps left by dropping tangling.

  1. Order

    • Common Lisp is quite relaxed about the order of function definitions, but not so much about DEFMACRO, DEFVAR, DEFPARAMETER, DEFCONSTANT, DEFTYPE , DEFCLASS, DEFSTRUCT, DEFINE-COMPILER-MACRO, SET-MACRO-CHARACTER, SET-DISPATCH-MACRO-CHARACTER, DEFPACKAGE. However, code order can for the most part follow narrative order. In practice, we end up with some DEFVARs far from their parent DEFSECTIONs (but DECLAIM SPECIAL helps).

    • DEFSECTION controls documentation order. The references to Lisp definitions in DEFSECTION determine narrative order independently from the code order. This allows the few ordering problems to be patched over in the generated documentation.

    • Furthermore, because DEFSECTION can handle the exporting of symbols, we can declare the public interface piecemeal, right next to the relevant definitions, rather than in a monolithic DEFPACKAGE

  2. Locality

    • Lisp macros replace chunks in the rare, complex cases where a chunk is not a straightforward text substitution but takes parameters. Unlike text-based LP chunks, macros must operate on valid syntax trees (S-expressions), so they cannot be used to inject arbitrary text fragments (e.g. an unclosed parenthesis).

      This constraint forces us to organize code into meaningful, syntactic units rather than arbitrary textual fragments, which results in more robust code. Within these units, macros allow us to reshape the syntax tree directly, handling scoping properly where text interpolation would fail.

    • PAX's NOTE is an extractable, named comment. NOTE can interleave with code within e.g. functions to minimize the distance between the logic and its documentation.

    • Also, PAX hooks into the development to provide easy navigation in the documentation tree.

  3. Source code only view: PAX supports hiding verbose documentation (sections, docstrings, comments) in the editor.

  4. Generating documentation

    • PAX extracts docstrings, NOTEs and combines them with narrative glue in DEFSECTIONs.

    • Documentation can be generated as static HTML/PDF files for offline reading or browsed live (in an Emacs buffer or via an in-built web server) during development.

    • LaTeX math is supported in both PDF and HTML (via MathJax, whether live or offline).

In summary, PAX accepts a minimal deviation in code/narrative order but retains the original, interactive Lisp environment (e.g. SLIME/Sly), through which it offers optional convenience features like extended navigation, live browsing, and hiding documentation in code. In return, we give up easy fine-grained control over typesetting the documentation - a price well worth paying in Common Lisp.

02 Feb 2026 12:00am GMT

29 Jan 2026

feedFOSDEM 2026

Join the FOSDEM Treasure Hunt!

Are you ready for another challenge? We're excited to host the second yearly edition of our treasure hunt at FOSDEM! Participants must solve five sequential challenges to uncover the final answer. Update: the treasure hunt has been successfully solved by multiple participants, and the main prizes have now been claimed. But the fun doesn't stop here. If you still manage to find the correct final answer and go to Infodesk K, you will receive a small consolation prize as a reward for your effort. If you're still looking for a challenge, the 2025 treasure hunt is still unsolved, so舰

29 Jan 2026 11:00pm GMT

26 Jan 2026

feedFOSDEM 2026

Guided sightseeing tours

If your non-geek partner and/or kids are joining you to FOSDEM, they may be interested in spending some time exploring Brussels while you attend the conference. Like previous years, FOSDEM is organising sightseeing tours.

26 Jan 2026 11:00pm GMT

Call for volunteers

With FOSDEM just a few days away, it is time for us to enlist your help. Every year, an enthusiastic band of volunteers make FOSDEM happen and make it a fun and safe place for all our attendees. We could not do this without you. This year we again need as many hands as possible, especially for heralding during the conference, during the buildup (starting Friday at noon) and teardown (Sunday evening). No need to worry about missing lunch at the weekend, food will be provided. Would you like to be part of the team that makes FOSDEM tick?舰

26 Jan 2026 11:00pm GMT