26 Jan 2026

Planet GNOME

Asman Malika: Mid-Point Project Progress: What I’ve Learned So Far

Screenshots: the manual signature implementation (dark mode), and the view when there is no added signature (light mode).

Reaching the midpoint of this project feels like a good moment to pause, not because the work is slowing down, but because I finally have enough context to see the bigger picture.

At the start, everything felt new: the codebase, the community, the workflow, and even the way problems are framed in open source. Now, halfway through, things are starting to connect.

Where I Started

When I began working on Papers, my main focus was understanding the codebase and how contributions actually happen in a real open-source project. Reading unfamiliar code, following discussions, and figuring out where my work fit into the larger system was challenging.

Early on, progress felt slow. Tasks that seemed small took longer than expected, mostly because I was learning how the project works, not just what to code. But that foundation has been critical.

Photo: Build failure I encountered during development

What I've Accomplished So Far

At this midpoint, I'm much more comfortable navigating the codebase and understanding the project's architecture. I've worked on the manual signature feature and related fixes, which required carefully reading existing implementations, asking questions, and iterating based on feedback. I'm now working on the digital signature implementation, which is one of the most complex parts of the project and builds directly on the foundation laid by the earlier work.

Beyond the technical work, I've learned how collaboration really functions in open source:

These skills have been just as important as writing code.

Challenges Along the Way

One of the biggest challenges has been balancing confidence and humility, knowing when to try things independently and when to ask for help. I've also learned that progress in open source isn't always linear. Some days are spent coding, others reading, debugging, or revisiting decisions.

Another challenge has been shifting my mindset from "just making it work" to thinking about maintainability, users, and future contributors. That shift takes time, but it's starting to stick.

What's Changed Since the Beginning

The biggest change is how I approach problems.

I now think more about who will use the feature, who might read this code later, and how my changes fit into the overall project. Thinking about the audience, both users of Papers and fellow contributors, has influenced how I write code, documentation, and even this blog.

I'm also more confident participating in discussions and expressing uncertainty when I don't fully understand something. That confidence comes from realizing that learning in public is part of the process.

Looking Ahead

The second half of this project feels more focused. With the groundwork laid, I can move faster and contribute more meaningfully. My goal is to continue improving the quality of my contributions, take on more complex tasks, and deepen my understanding of the project.

Most importantly, I want to keep learning about open source, about collaboration, and about myself as a developer.

Final Thoughts

This midpoint has reminded me that growth isn't always visible day to day, but it becomes clear when you stop and reflect. I'm grateful for the support, feedback, and patience from the GNOME community, especially my mentor Lucas Baudin. And I'm so excited to see how the rest of the project unfolds.

26 Jan 2026 1:42pm GMT

24 Jan 2026

Planet GNOME

Sam Thursfield: AI predictions for 2026

It's a crazy time to be part of the tech world. I'm happy to be sat on the fringes here but I want to try and capture a bit of the madness, so in a few years we can look back on this blogpost and think "Oh yes, shit was wild in 2026".

(insert some AI slop image here of a raccoon driving a racing car or something)

I have read the blog of Geoffrey Huntley for about 5 years since he famously right-clicked all the NFTs. Smart & interesting guy. I've also known the name Steve Yegge for a while, he has done enough notable things to get the honour of an entry in Wikipedia. Recently they've both written a lot about generating code with LLMs. I mean, I hope in 2026 we've all had some fun feeding freeform text and code into LLMs and playing with the results, they are a fascinating tool. But these two dudes are going into what looks like a sort of AI psychosis, where you feed so many LLMs into each other that you can see into the future, and in the process give most of your money to Anthropic.

It's worth reading some of their articles if you haven't, there are interesting ideas in there, but I always pick up some bad energy. They're big on the hook that, if you don't study their techniques now, you'll be out of a job by summer 2026. (Mark Zuckerborg promised this would happen by summer 2025, but somehow I still have to show up for work five days every week). The more I hear this, the more it feels like a sort of alpha-male flex, except online and in the context of the software industry. The alpha tech-bro is here, and he will Vibe Code the fuck out of you. The strong will reign, and the weak will wither. Is that how these guys see the world? Is that the only thing they think we can do with these here computers, is compete with each other in Silicon Valley's Hunger Games?

I felt a bit dizzy when I saw Geoffrey's recent post about how he was now funded by cryptocurrency gamblers ("two AI researchers are now funded by Solana") who are betting on his project and gifting him the fees. I didn't manage to understand what the gamblers would win. It seemed for a second like an interesting way to fund open research, although "Patreon but it's also a casino" is definitely a turn for the weird. Steve Yegge jumped on the bandwagon the same week ("BAGS and the Creator Economy") and, without breaking any laws, gave us the faintest hint that something big is happening over there.

Well…

You'll be surprised to know that both of them bailed on it within a week. I'm not sure why - I suspect maybe the gamblers got too annoying to deal with - but it seems some people lost some money. Although that's really the only possible outcome from gambling. I'm sure the casino owners did OK out of it. Maybe it's still wise to be wary of people who message you out of the blue wanting to sell you cryptocurrency.

The excellent David Gerard had a write up immediately on Pivot To AI: "Steve Yegge's Gas Town: Vibe coding goes crypto scam". (David is not a crypto scammer and has a good old fashioned Patreon where you can support his journalism). He talks about addiction to AI, which I'm sure you know is a real thing.

Addictive software was perfected back in the 2010s by social media giants. The same people who had been iterating on gambling machines for decades moved to California and gifted us infinite scroll. OpenAI and Anthropic are based in San Francisco. There's something inherently addictive about a machine that takes your input, waits a second or two, and gives you back something that's either interesting or not. Next time you use ChatGPT, look at how the interface leans into that!

(Pivot To AI also have a great writeup of this: "Generative AI runs on gambling addiction - just one more prompt, bro!")

So, here we are in January 2026. There's something very special about this post "Stevey's Birthday Blog". Happy birthday, Steve, and I'm glad you're having fun. That said, I do wonder if we'll look back in years to come on this post as something of an inflection point in the AI bubble.

From the birthday post: "All through December I had weird sleeping patterns while I was building Gas Town. I'd work late at night, and then have to take deep naps in the middle of the day. I'd just be working along and boom, I'd drop. I have a pillow and blanket on the floor next to my workstation. I'll just dive in and be knocked out for 90 minutes, once or often twice a day. At lunch, they surprised me by telling me that vibe coding at scale has messed up their sleep. They get blasted by the nap-strike almost daily, and are looking into installing nap pods in their shared workspace."

Being addicted to something such that it fucks with your sleeping patterns isn't a new invention. Ask around the punks in your local area. Humans can do amazing things. That story starts way before computers were invented. Scientists in the 16th century were absolute nutters who would like… drink mercury in the name of discovery. Isaac Newton came up with his theory of optics by skewering himself in the eye. (If you like science history, have a read of Neal Stephenson's Baroque Cycle 🙂) Coding is fun and making computers do cool stuff can be very addictive. That story starts long before 2026 as well. Have you heard of the demoscene?

Part of what makes Geoffrey Huntley and Steve Yegge's writing compelling is they are telling very interesting stories. They are leaning on existing cultural work to do that, of course. Every time I think about Geoffrey's 5-line bash loop that feeds an LLM's output back into its input, the name reminds me of my favourite TV show when I was 12.

Ralph Wiggum with his head glued to his shoulder. "Miss Hoover? I glued my head to my shoulder."

Which is certainly better than the "human centipede" metaphor I might have gone with. I wasn't built for this stuff.

The Gas Town blog posts are similarly filled with steampunk metaphors and Steve Yegge's blog posts are interspersed with generated images that, at first glance, look really cool. "Gas Town" looks like a point and click adventure. In fact it's a CLI that gives kooky names to otherwise dry concepts… but look at the pictures! You can imagine gold coins spewing out of a factory into its moat while you use it.

All the AI images in his posts look really cool at first glance. The beauty of real art is often in the details, so let's take a look.

What is that tower on the right? There's an owl wearing goggles about to land on a tower… which is also wearing goggles?

What's that tiny train on the left that has indistinct creatures about the size of a fox's fist? I don't know who on earth is on that bridge on the right, some horrific chimera of weasel and badger. The panda is stoically ignoring the horrors of his creation like a good industrialist.

What is the time on the clock tower? Where is the other half of the fox? Is the clock powered by… oh no.

Gas Town here is a huge factory with 37 chimneys all emitting good old sulphur and carbon dioxide, as God intended. But one question: if you had a factory that could produce large quantities of gold nuggets, would you store them on the outside?

Good engineering involves knowing when to look into the details, and when not to. Translating English to code with an LLM is fun and you can get some interesting results. But if you never look at the details, somewhere in your code is a horrific weasel badger chimera, a clock with crooked hands telling a time that doesn't exist, and half a fox. Your program could make money… or it could spew gold coins all around town where everyone can grab them.

So… my AI predictions for 2026. Let's not worry too much about code. People and communities and friendships are the thing.

The human world is 8 billion people. Many of us make a modest living growing and selling vegetables or fixing cars or teaching children to read and write. The tech industry is a big bubble that's about to burst. Computers aren't going anywhere, and our open source communities and foundations aren't going anywhere. People and communities and friendships are the main thing. Helping out in small ways with some of the bad shit going on in the world. You don't have to solve everything. Just one small step to help someone is more than many people do.

Pay attention to what you're doing. Take care of the details. Do your best to get a good night's sleep.

AI in 2026 is going to go about like this:

24 Jan 2026 8:32pm GMT

23 Jan 2026

Planet GNOME

Allan Day: GNOME Foundation Update, 2026-01-23

It's Friday so it's time for another GNOME Foundation update. Much of this week has been a continuation of items from last week's update, so I'm going to keep it fairly short and sweet.

With FOSDEM happening next week (31st January to 1st February), preparation for the conference was the main standout item this week. There's a lot happening around the conference for GNOME, including:

We've created a pad to keep track of everything. Feel free to edit it if anything is missing or incorrect.

Other activities this week included:

That's it for this update; I hope you found it interesting! Next week I will be busy at FOSDEM so there won't be a regular weekly update, but hopefully the following week will contain a trip report from Brussels!

23 Jan 2026 5:07pm GMT

planet.freedesktop.org

Mike Blumenkrantz: Unpopular Opinion

A Big Day For Graphics

Today is a big day for graphics. We got shiny new extensions and a new RM2026 profile, huzzah.

VK_EXT_descriptor_heap is huge. I mean in terms of surface area, the sheer girth of the spec, and the number of years it's been under development. Seriously, check out that contributor list. Is it the longest ever? I'm not about to do comparisons, but it might be.

So this is a big deal, and everyone is out in the streets (I assume to celebrate such a monumental leap forward), and I'm not.

All hats off. Person to person, let's talk.

Power Overwhelming

It's true that descriptor heap is incredibly powerful. It perfectly exemplifies everything that Vulkan is: low-level, verbose, flexible. vkd3d-proton will make good use of it (eventually), as this more closely relates to the DX12 mechanics it translates. Game engines will finally have something that allows them to footgun as hard as they deserve. This functionality even maps more closely to certain types of hardware, as described by a great gfxstrand blog post.

There is, to my knowledge, just about nothing you can't do with VK_EXT_descriptor_heap. It's really, really good, and I'm proud of what the Vulkan WG has accomplished here.

But I don't like it.

What Is This Incredibly Hot Take?

It's a risky position; I don't want anyone's takeaway to be "Mike shoots down new descriptor extension as worst idea in history". We're all smart people, and we can comprehend nuance, like the difference between rb and ab in EGL patch review (protip: if anyone ever gives you an rb, they're fucking lying because nobody can fully comprehend that code).

In short, I don't expect zink to ever move to descriptor heap. If it does, it'll be years from now as a result of taking on some other even more amazing extension which depends on heaps. Why is this, I'm sure you ask. Well, there's a few reasons:

Code Complexity

Like all things Vulkan, "getting it right" with descriptors meant creating an API so verbose that I could write novels with fewer characters than some of the struct names. Everything is brand new, with no sharing/reuse of any existing code. As anyone who has ever stepped into an unfamiliar bit of code and thought "this is garbage, I should rewrite it all" knows too well, existing code is always the worst code, but it's also the code that works and is tied into all the other existing code. Pretty soon, attempting to parachute in a new descriptor API becomes rewriting literally everything because it's all incompatible. Great for those with time and resources to spare, not so great for everyone else.

Gone are image views, which is cool and good, except that everything else in Vulkan still uses them, meaning now all image descriptors need an extra pile of code to initialize the new structs which are used only for heaps. Hope none of that was shared between rendering and descriptor use, because now there will be rendering use and descriptor use and they are completely separate. Do I hate image views? Undoubtedly, and I like this direction, but hit me up in a few more years when I can delete them everywhere.

Shader interfaces are going to be the source of most pain. Sure, it's very possible to keep existing shader infrastructure and use the mapping API with its glorious nested structs. But now you have an extra 1000 lines of mapping API structs to juggle on top. Alternatively, you can get AI to rewrite all your shaders to use the new spirv extension and have direct heap access.

Performance

Descriptor heap maps closer to hardware, which should enable users to get more performant execution by eliminating indirection with direct heap access. This is great. Full stop.

…Unless you're like zink, where the only way to avoid shredding 47 CPUs every time you change descriptors is to use a "sliding" offset for descriptors and update it each draw (i.e., VK_DESCRIPTOR_MAPPING_SOURCE_HEAP_WITH_PUSH_INDEX_EXT). Then you can't use direct heap access. Which means you're still indirecting your descriptor access (which has always been the purported perf pain point of 1.0 descriptors and EXT_descriptor_buffer). You do not pass Go, you do not collect $200. All you do is write a ton of new code.
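For the curious, the "sliding offset" pattern looks roughly like this. This is not zink code, and it's written in Go purely as shorthand for the bookkeeping shape; the descriptor type and method names are made up for illustration:

```go
package main

import "fmt"

// descriptor stands in for whatever opaque blob the hardware wants per binding.
type descriptor [16]byte

// slidingHeap appends each draw's descriptors at a forward-moving cursor
// instead of rewriting slots in place, so descriptors already consumed by
// earlier draws in the same command buffer are never touched.
type slidingHeap struct {
	heap   []descriptor
	cursor int
}

// bindForDraw writes one draw's descriptors at the cursor and returns the
// base index to push to the shader (the HEAP_WITH_PUSH_INDEX idea). The
// shader then reads heap[pushedBase+binding], which is exactly the one
// level of indirection that direct heap access was supposed to remove.
func (h *slidingHeap) bindForDraw(descs []descriptor) (base int) {
	base = h.cursor
	h.heap = append(h.heap, descs...)
	h.cursor += len(descs)
	return base
}

func main() {
	var h slidingHeap
	fmt.Println("draw 0 base:", h.bindForDraw(make([]descriptor, 3))) // 0
	fmt.Println("draw 1 base:", h.bindForDraw(make([]descriptor, 2))) // 3
}
```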

Opinionated Development

There's a tremendous piece of exposition outlining the reasons why EXT_descriptor_heap exists in the proposal. None of these items are incorrect. I've even contributed to this document. If I were writing an engine from scratch, I would certainly expect to use heaps for portability reasons (i.e., in theory, it should eventually be available on all hardware).

But as flexible and powerful as descriptor heap is, there are some annoying cases where it passes the buck to the user. Specifically, I'm talking about management of the sampler heap. 1.0 descriptors and descriptor buffer just handwave away the exact hardware details, but with VK_EXT_descriptor_heap, you are now the captain of your own destiny and also the manager of exactly how the hardware is allocating its samplers. So if you're on NVIDIA, where you have exactly 4096 available samplers as a hardware limit, you now have to juggle that limit yourself instead of letting the driver handle it for you.

This also applies to border colors, which has its own note in the proposal. At an objective, high-level view, it's awesome to have such fine-grained control over the hardware. Then again, it's one more thing the driver is no longer managing.
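To make that concrete, here's the kind of bookkeeping that lands on the app: a toy fixed-capacity slot allocator for the 4096-sampler case. A sketch in Go for brevity, not real driver or zink code:

```go
package main

import "fmt"

// samplerSlab hands out slots in a fixed-size sampler heap. This is the kind
// of bookkeeping a driver used to do internally; with an explicit sampler
// heap it becomes the application's problem.
type samplerSlab struct {
	free []int // stack of unused sampler indices
}

func newSamplerSlab(capacity int) *samplerSlab {
	s := &samplerSlab{free: make([]int, capacity)}
	for i := range s.free {
		s.free[i] = capacity - 1 - i // so alloc() yields 0, 1, 2, ...
	}
	return s
}

// alloc returns a free sampler index, or -1 once the hardware limit is hit,
// at which point the app (not the driver) must start deduplicating or
// recycling samplers.
func (s *samplerSlab) alloc() int {
	if len(s.free) == 0 {
		return -1
	}
	idx := s.free[len(s.free)-1]
	s.free = s.free[:len(s.free)-1]
	return idx
}

// release returns a sampler index to the pool.
func (s *samplerSlab) release(idx int) {
	s.free = append(s.free, idx)
}

func main() {
	slab := newSamplerSlab(4096) // e.g. the NVIDIA hardware limit
	a, b := slab.alloc(), slab.alloc()
	fmt.Println("got slots:", a, b)
	slab.release(a)
	fmt.Println("after release, next alloc:", slab.alloc())
}
```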

I Don't Have A Better Solution

That's certainly the takeaway here. I'm not saying go back to 1.0 descriptors. Nobody should do that. I'm not saying stick with descriptor buffers either. Descriptor heap has been under development since before I could legally drive, and I'm certainly not smarter than everyone (or anyone, most likely) who worked on it.

Maybe this is the best we'll get. Maybe the future of descriptors really is micromanaging every byte of device memory and material stored within because we haven't read every blog post in existence and don't trust driver developers to make our shit run good. Maybe OpenGL, with its drivers that "just worked" under the hood (with the caveat that you, the developer, can't be an idiot), wasn't what we all wanted.

Maybe I was wrong, and we do need like five trillion more blog posts about Vulkan descriptor models. Because releasing a new descriptor extension is definitely how you get more of those blog posts.

I'm tired, boss.

23 Jan 2026 12:00am GMT

21 Jan 2026

planet.freedesktop.org

Simon Ser: Status update, January 2026

Hi!

Last week I released Goguma v0.9! This new version brings a lot of niceties; see the release notes for more details. New since last month are audio previews implemented by delthas, images for users, channels & networks, and usage hints when typing a command. Jean THOMAS has been hard at work to update the iOS port and publish Goguma on AltStore PAL.

It's been a while since I've started a NPotM, but this time I have something new to show you: nagjo is a small IRC bot for Forgejo. It posts messages on activity in Forgejo (issue opened, pull request merged, commits pushed, and so on), and it expands references to issues and pull requests in messages (writing "can you look at #42?" will reply with the issue's title and link). It's very similar to glhf, its GitLab counterpart, but the configuration file enables much more flexible channel routing. I hope that bot can be useful to others too!
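As a rough illustration of the expansion idea (not nagjo's actual code), here is a sketch in Go against Forgejo's issue endpoint, which serves both issues and pull requests under the same index space; the repository coordinates in main() are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"regexp"
)

// issueRef matches "#42"-style references in a chat message.
var issueRef = regexp.MustCompile(`#([0-9]+)`)

// expandRefs builds one reply line per reference found in msg, resolving
// each one via the Forgejo issue API.
func expandRefs(baseURL, owner, repo, msg string) ([]string, error) {
	var replies []string
	for _, m := range issueRef.FindAllStringSubmatch(msg, -1) {
		url := fmt.Sprintf("%s/api/v1/repos/%s/%s/issues/%s", baseURL, owner, repo, m[1])
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusOK {
			resp.Body.Close()
			return nil, fmt.Errorf("lookup of #%s failed: %s", m[1], resp.Status)
		}
		var issue struct {
			Title   string `json:"title"`
			HTMLURL string `json:"html_url"`
		}
		err = json.NewDecoder(resp.Body).Decode(&issue)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		replies = append(replies, fmt.Sprintf("#%s: %s <%s>", m[1], issue.Title, issue.HTMLURL))
	}
	return replies, nil
}

func main() {
	// Placeholder repository coordinates, for illustration only.
	replies, err := expandRefs("https://codeberg.org", "someowner", "somerepo", "can you look at #42?")
	if err != nil {
		panic(err)
	}
	for _, r := range replies {
		fmt.Println(r)
	}
}
```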

Up until now, many of my projects have moved to Codeberg from SourceHut, but the issue tracker was still stuck on todo.sr.ht due to the lack of a migration tool. I've hacked together srht2forgejo, a tiny script to create Forgejo issues and comments from a todo.sr.ht archive. It's not perfect since the author of each migrated item is the migration user instead of the original one, but it's good enough. I've now completely migrated all of my projects to Codeberg!

I've added a server implementation and tests to go-smee, a small Go library for a Web push forwarding service. It comes in handy when implementing Web push receivers because it's very simple to set up; I've used it when working on nagjo.

I've extended the haproxy PROXY protocol to add a new client certificate TLV to relay the raw client certificate from a TLS terminating reverse proxy to a backend server. My goal is enabling client certificate authentication when the soju IRC bouncer sits behind tlstunnel. I've also sent patches for the kimchi HTTP server and go-proxyproto.
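Wire-wise, a PROXY v2 TLV is just a 1-byte type, a 2-byte big-endian length, and the value. Here's a minimal Go sketch of encoding one; note that the concrete type number used for the client certificate below is my assumption (the spec reserves 0xE0-0xEF for custom types), not necessarily what the patch proposes:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// appendTLV appends one PROXY protocol v2 TLV to dst: a 1-byte type,
// a 2-byte big-endian length, then the value bytes.
func appendTLV(dst []byte, typ byte, value []byte) ([]byte, error) {
	if len(value) > 0xFFFF {
		return nil, fmt.Errorf("TLV value too large: %d bytes", len(value))
	}
	var length [2]byte
	binary.BigEndian.PutUint16(length[:], uint16(len(value)))
	dst = append(dst, typ)
	dst = append(dst, length[:]...)
	return append(dst, value...), nil
}

func main() {
	// 0xE0 is the start of the spec's custom TLV range (PP2_TYPE_MIN_CUSTOM);
	// the type actually assigned to the client certificate TLV may differ.
	const tlvTypeClientCert = 0xE0
	derCert := []byte{0x30, 0x82, 0x01, 0x0a} // stand-in for a DER-encoded certificate
	tlvs, err := appendTLV(nil, tlvTypeClientCert, derCert)
	if err != nil {
		panic(err)
	}
	fmt.Printf("% x\n", tlvs)
}
```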

Because sending a haproxy patch involved git-send-email, I noticed that I'd started hitting a long-standing hydroxide signature bug when sending a message. I wasn't previously impacted by this, but some users were. It took a bit of time to hunt down the root cause (some breaking changes in ProtonMail's crypto library), but now it's fixed.

Félix Poisot has added two new color management options to Sway: the color_profile command now has separate gamma22 and srgb transfer functions (some monitors use one, some use the other), and a --device-primaries flag to read color primaries from the EDID (as an alternative to supplying a full ICC profile).

With the help of Alexander Orzechowski, we've fixed multiple wlroots issues regarding toplevel capture (aka. window capture) when the toplevel is completely hidden. It should all work fine now, except one last bug which results in a frozen capture if you're unlucky (aka. you lose the race).

I've shipped a number of drmdb improvements. Plane color pipelines are now supported and printed on the snapshot tree and properties table. A warning icon is displayed next to properties which have only been observed on tainted or unstable kernels (as is usually the case for proprietary or vendor kernel modules with custom properties). The device list now shows vendor names for platform devices (extracted from the kernel table). Devices using the new "faux" bus (e.g. vkms) are now properly handled, and all of the possible cursor sizes advertised via the SIZE_HINTS property are now printed. I've also done some SQLite experiments, however they turned out unsuccessful (see that thread and the merge request for more details).

delthas has added a new allow_proxy_ip directive to the kimchi HTTP server to mark IP addresses as trusted proxies, and has made it so Forwarded/X-Forwarded-For header fields are not overwritten when the previous hop is a trusted proxy. That way, kimchi can be used in more scenarios: behind another HTTP reverse proxy, and behind a TCP proxy which doesn't have a loopback IP address (e.g. tlstunnel in Docker).

See you next month!

21 Jan 2026 10:00pm GMT

Christian Schaller: Can AI help ‘fix’ the patent system?

So one thing I think anyone involved with software development for the last few decades can see is the problem of the "forest of bogus patents". I have recently been trying to use AI to look at patents in various ways. So one idea I had was "could AI help improve the quality of patents and free us from obvious ones?"

Let's start with the justification for patents existing at all. The most common argument for the patent system I hear is this one: "Patents require public disclosure of inventions in exchange for protection. Without patents, inventors would keep innovations as trade secrets, slowing overall technological progress." This reasoning is something that makes sense to me, but it is also screamingly obvious to me that for it to hold true you need to ensure the patents granted are genuinely inventions that otherwise would stay hidden as trade secrets. If you allow patents on things that are obvious to someone skilled in the art, you are not enhancing technological progress, you are hampering it, because the next person along will be blocked from doing it.

So based on this justification, the question then becomes: does, for example, the US Patent Office do a good job of filtering out obvious patents? And I believe the answer is "No, they don't". Having been working in the space of software for many decades now, it is very clear to me that the patent office does very little to avoid patents getting approved for obvious things. And I think there are many reasons why that happens. First of all, if you are an expert in a field you would likely not be working as a case worker at the patent office; no disrespect to the people working there, but if you are a really good software engineer, for instance, there are much better-paying options to be found elsewhere, and the same goes for medical researchers. So we have people with a limited understanding of the field of a patent trying to evaluate whether the patent is obvious or not.

Secondly, the patent office is not really incentivized to deny patents: it is a self-funded entity whose "success" and revenue are based on how many patent applications are submitted. So the lower the bar of approval, the more patent applications get submitted, and thus the more money the patent office makes and the more "successful" it is. This is a problem, but if we fix the first issue I mentioned, it may be possible to reduce the impact of this second one.

So what can AI do to help here? Having been trained on these fields, it can help reviewers at the patent office evaluate patents for obviousness. If we required every patent application to be run through an AI engine for review, then the patent office reviewer would have to justify why he or she believes something is non-obvious after the AI has looked at it and concluded it is not. My assumption would be that they would very rarely do so.

To test this out, I decided to ask Gemini to review an H.264 video patent that is still valid and has been used in litigation before, to see if it could do a decent job of reviewing the patent application and judging whether this is a true invention or not. Here is what Gemini came up with.

US Patent 7,916,782 (Samsung Electronics) is one of the most famous "battleground" patents in the history of smartphone litigation. It has been asserted against Apple, Huawei, and others.

It covers a specific part of the H.264 standard known as CAVLC (Context-Adaptive Variable Length Coding).

The Patent: US 7,916,782, "Method and apparatus for video coding and decoding"
The Mechanism: Efficiently coding the count of zero-valued pixels in a video block ("TotalZeros").

The Problem
In video compression, you divide the image into small 4×4 pixel blocks. After processing (transform and quantization), most of the 16 pixels in the block end up being zero. You only have a few non-zero numbers left.

Goal: You need to tell the decoder how many zeros are in the block so it can reconstruct the empty space.

Constraint: You already told the decoder how many non-zero coefficients exist (let's say you have 5 non-zeros).

The "Invention"
The patent claims a method where the encoder selects a specific lookup table (VLC table) to encode the "TotalZeros" value, and, crucially, this selection is based on the number of non-zero coefficients (TotalCoeff) that were just processed. If TotalCoeff is 1, use Table A (allows for many zeros). If TotalCoeff is 10, use Table B (allows for fewer zeros).
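To make the claimed mechanism concrete, here is a minimal sketch of that table selection in Go. It is not from the patent or the spec: the bit patterns are placeholders (the real code assignments live in the H.264 spec's TotalZeros tables); only the selection logic matters here:

```go
package main

import "fmt"

// vlcCode is one variable-length code: a bit pattern and its length in bits.
// The patterns generated below are made-up stand-ins; a real encoder would
// hard-code the assignments from the H.264 spec's TotalZeros tables.
type vlcCode struct {
	bits  uint32
	nbits int
}

// tzTables[tc-1] is the code table used when the 4x4 block has tc non-zero
// coefficients. With tc non-zeros, TotalZeros can only range over 0..16-tc,
// so each table has a different size: that shrinking value range is exactly
// what makes a per-TotalCoeff table pay off.
var tzTables = func() [15][]vlcCode {
	var t [15][]vlcCode
	for tc := 1; tc <= 15; tc++ {
		codes := make([]vlcCode, 16-tc+1)
		for z := range codes {
			codes[z] = vlcCode{bits: uint32(z), nbits: 5} // placeholder fixed-length codes
		}
		t[tc-1] = codes
	}
	return t
}()

// encodeTotalZeros is the claimed step: pick the lookup table based on
// TotalCoeff, then look up the code for TotalZeros in that table.
func encodeTotalZeros(totalCoeff, totalZeros int) (vlcCode, error) {
	if totalCoeff < 1 || totalCoeff > 15 {
		return vlcCode{}, fmt.Errorf("TotalCoeff out of range: %d", totalCoeff)
	}
	if totalZeros < 0 || totalZeros > 16-totalCoeff {
		return vlcCode{}, fmt.Errorf("TotalZeros %d impossible with TotalCoeff %d", totalZeros, totalCoeff)
	}
	table := tzTables[totalCoeff-1] // the table selection keyed on TotalCoeff
	return table[totalZeros], nil
}

func main() {
	code, err := encodeTotalZeros(5, 3) // 5 non-zero coefficients, 3 zeros
	if err != nil {
		panic(err)
	}
	fmt.Printf("emit %d bits: %b\n", code.nbits, code.bits)
}
```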

The "Skilled Engineer" Test
Imagine you are a video coding engineer in 2002. You are tasked with compressing the "TotalZeros" value. Here is the logical deduction chain you would inevitably follow:

21 Jan 2026 6:35pm GMT