08 Jan 2026
Planet Mozilla
Matthew Gaudet: Non-Traditional Profiling
Also known as "you can just put whatever you want in a jitdump you know?"
When you profile JIT code, you have to tell a profiler what on earth is going on in those JIT bytes you wrote out. Otherwise the profiler will shrug and just give you some addresses.
There's a decent and fairly common format called jitdump, which originated in perf but has since been adopted elsewhere. The basic thrust of the parts we care about is: you have names associated with ranges.
Of course, the basic range you'd expect to name is "function foo() was compiled to bytes 0x1000-0x1400".
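For concreteness, here's a rough sketch of the record that carries such a mapping, a JIT_CODE_LOAD record - field names approximated from the perf jitdump spec, so treat it as illustrative rather than authoritative:

// Simplified sketch of a jitdump JIT_CODE_LOAD record (field names
// approximated from the perf jitdump spec).
#include <cstdint>

struct RecordHeader {
  uint32_t id;          // record type; JIT_CODE_LOAD is 0
  uint32_t total_size;  // size of the whole record, header included
  uint64_t timestamp;   // used to order records
};

struct CodeLoadRecord {
  RecordHeader header;
  uint32_t pid;
  uint32_t tid;
  uint64_t vma;         // virtual address the code runs at
  uint64_t code_addr;   // address of the emitted bytes
  uint64_t code_size;   // length of the range being named
  uint64_t code_index;  // increases with each emitted blob
  // Followed in the file by a null-terminated function name - whatever
  // string you put here is what the profiler displays for this range -
  // and then the native code bytes themselves.
};

The important part for this post: the "function name" is just a string, and nothing stops you from putting arbitrary text in it.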
Suppose you get that working. You might get a profile that looks like this one.

This profile is pretty useful: you can see from the flame chart which execution tier created the code being executed, and you can see code from inline caches, etc.
Before I left for Christmas break though, I had a thought: to a first approximation, both optimized and baseline code generation are fairly 'template' style. That is to say, we emit (relatively) stable chunks of code either for each of our bytecodes, in the case of our baseline compiler, or for each of our intermediate-representation nodes, in the case of Ion, our top-tier compiler.
What if we looked more closely at that?
Some of our code is already tagged with AutoCreatedBy, an RAII class which pushes a creator string on construction and pops it off on destruction. I went through and added AutoCreatedBy to each of the LIR ops' codegen methods (e.g. CodeGenerator::visit*). Then I rigged up our JITDump support so that instead of dumping functions, we dump the function name plus the whole chain of AutoCreatedBy entries as the 'function name' for each sequence of instructions generated while the AutoCreatedBy was live.
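To give a flavour of the mechanism, here's a minimal sketch of an AutoCreatedBy-style helper - illustrative only, not SpiderMonkey's actual implementation:

// Minimal sketch of an AutoCreatedBy-style RAII helper (illustrative,
// not SpiderMonkey's actual code): push a creator label on construction,
// pop it on destruction, so the assembler always knows the current chain
// of "who is emitting this code".
#include <vector>

struct Assembler {
  std::vector<const char*> creators;  // innermost creator at the back
};

class AutoCreatedBy {
  Assembler& masm_;
 public:
  AutoCreatedBy(Assembler& masm, const char* name) : masm_(masm) {
    masm_.creators.push_back(name);
  }
  ~AutoCreatedBy() { masm_.creators.pop_back(); }
};

void visitExample(Assembler& masm) {
  AutoCreatedBy acb(masm, "visitExample");
  // ...emit code; any range recorded while `acb` is live can have the
  // full creator chain attached to its jitdump name...
}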
That gets us this profile:

While it doesn't look that different, the key is in how the frames are named. Of course, the vast majority of frames are just the name of the call instruction... that only makes sense. However, you can see some interesting things if you invert the call tree:

For example, for a single self-hosted function we spend 1.9% of the profiled time in 'visitHasShape', which is basically:
// Load the object's shape.
masm.loadObjShapeUnsafe(obj, output);
// Compare it against the expected shape, writing the boolean result to output.
masm.cmpPtrSet(Assembler::Equal, output,
               ImmGCPtr(ins->mir()->shape()), output);
Which is not particularly complicated.
Ok, so that proves out the value. What if we just say... hmm, I actually want to aggregate across all compilations: ignore the function name, just tell me the compilation path here.
Woah. Ok, now we've got something quite different, if really hard to interpret:

Even more interesting (easier to interpret) is the inverted call tree:

So across the whole program, we're spending basically 5% of the time doing guardShape. I think that's a super interesting slicing of the data.
Is it actionable? I don't know yet. I haven't really opened any bugs on this yet; a lot of the highlighted code is stuff where it's not clear that there is a faster way to do what's being done, outside of engine architectural innovation.
The reason to write this blog post is basically to share that... man, we can slice-and-dice our programs in so many interesting ways. I'm sure there's more to think of. For example, not shown here was an experiment: I added AutoCreatedBy inside a single macro-assembler method set (around barriers) to try and see if I could actually see GC barrier cost (it's low on the benchmarks I checked).
So yeah. You can just... put stuff in your JIT dump file.
Edited to Add: I should mention this code is nowhere. Given I don't entirely know how actionable this ends up being, and the code quality is subpar, I haven't even pushed this code. Think of this as an inspiration, not a feature announcement.
08 Jan 2026 9:46pm GMT
The Mozilla Blog: Owners, not renters: Mozilla’s open source AI strategy

The future of intelligence is being set right now, and the path we're on leads somewhere I don't want to go. We're drifting toward a world where intelligence is something you rent - where your ability to reason, create, and decide flows through systems you don't control, can't inspect, and didn't shape. In that world, the landlord can change the terms anytime, and you have no recourse but to accept what you're given.
I think we can do better. Making that happen is now central to what Mozilla is doing.
What we did for the web
Twenty-five years ago, Microsoft Internet Explorer controlled 95% of the browser market, which meant Microsoft controlled how most people experienced the internet and who could build what on what terms. Mozilla was born to change this, and Firefox succeeded beyond what most people thought possible - dropping Internet Explorer's market share to 55% in just a few years and ushering in the Web 2.0 era. The result was a fundamentally different internet. It was faster and richer for everyday users, and for developers it was a launchpad for open standards and open source that decentralized control over the core technologies of the web.
There's a reason the browser is called a "user agent." It was designed to be on your side - blocking ads, protecting your privacy, giving you choices that the sites you visited never would have offered on their own. That was the first fight, and we held the line for the open web even as social networks and mobile platforms became walled gardens.
Now AI is becoming the new intermediary. It's what I've started calling "Layer 8" - the agentic layer that mediates between you and everything else on the internet. These systems will negotiate on our behalf, filter our information, shape our recommendations, and increasingly determine how we interact with the entire digital world.
The question we have to ask is straightforward: Whose side will your new user agent be on?
Why closed systems are winning (for now)
We need to be honest about the current state of play: Closed AI systems are winning today because they are genuinely easier to use. If you're a developer with an idea you want to test, you can have a working prototype in minutes using a single API call to one of the major providers. GPUs, models, hosting, guardrails, monitoring, billing - it all comes bundled together in a package that just works. I understand the appeal firsthand, because I've made the same choice myself on late-night side projects when I just wanted the fastest path from an idea in my head to something I could actually play with.
The open-source AI ecosystem is a different story. It's powerful and advancing rapidly, but it's also deeply fragmented - models live in one repository, tooling in another, and the pieces you need for evaluation, orchestration, guardrails, memory, and data pipelines are scattered across dozens of independent projects with different assumptions and interfaces. Each component is improving at remarkable speed, but they rarely integrate smoothly out of the box, and assembling a production-ready stack requires expertise and time that most teams simply don't have to spare. This is the core challenge we face, and it's important to name it clearly: What we're dealing with isn't a values problem where developers are choosing convenience over principle. It's a developer experience problem. And developer experience problems can be solved.
The ground is already shifting
We've watched this dynamic play out before and the history is instructive. In the early days of the personal computer, open systems were rough, inconsistent, and difficult to use, while closed platforms offered polish and simplicity that made them look inevitable. Openness won anyway - not because users cared about principles, but because open systems unlocked experimentation and scale that closed alternatives couldn't match. The same pattern repeated on the web, where closed portals like AOL and CompuServe dominated the early landscape before open standards outpaced them through sheer flexibility and the compounding benefits of broad participation.
AI has the potential to follow the same path - but only if someone builds it. And several shifts are already reshaping the landscape:
- Small models have gotten remarkably good. At 1 to 8 billion parameters, tuned for specific tasks, they run on hardware that organizations already own;
- The economics are changing too. As enterprises feel the constraints of closed dependencies, self-hosting is starting to look like sound business rather than ideological commitment (companies like Pinterest have attributed millions of dollars in savings to migrating to open-source AI infrastructure);
- Governments want control over their supply chain. They are becoming increasingly unwilling to depend on foreign platforms for capabilities they consider strategically important, driving demand for sovereign systems; and,
- Consumer expectations keep rising. People want AI that responds instantly, understands their context, and works across their tools without locking them into a single platform.
The capability gap that once justified the dominance of closed systems is closing fast. What remains is a gap in usability and integration. The lesson I take from history is that openness doesn't win by being more principled than the alternatives. Openness wins when it becomes the better deal - cheaper, more capable, and just as easy to use.
Where the cracks are forming
If openness is going to win, it won't happen everywhere at once. It will happen at specific tipping points - places where the defaults haven't yet hardened, where a well-timed push can change what becomes normal. We see four.

The first is developer experience. Developers are the ones who actually build the future - every default they set, every stack they choose, every dependency they adopt shapes what becomes normal for everyone else. Right now, the fastest path runs through closed APIs, and that's where most of the building is happening. But developers don't want to be locked in any more than users do. Give them open tools that work as well as the closed ones, and they'll build the open ecosystem themselves.
The second is data. For a decade, the assumption has been that data is free to scrape - that the web is a commons to be harvested without asking. That norm is breaking, and not a moment too soon. The people and communities who create valuable data deserve a say in how it's used and a share in the value it creates. We're moving toward a world of licensed, provenance-based, permissioned data. The infrastructure for that transition is still being built, which means there's still a chance to build it right.
The third is models. The dominant architecture today favors only the biggest labs, because only they can afford to train massive dense transformers. But the edges are accelerating: small models, mixtures of experts, domain-specific models, multilingual models. As these approaches mature, the ability to create and customize intelligence spreads to communities, companies, and countries that were previously locked out.
The fourth is compute. This remains the choke point. Access to specialized hardware still determines who can train and deploy at scale. More doors need to open - through distributed compute, federated approaches, sovereign clouds, idle GPUs finding productive use.
What an open stack could look like
Today's dominant AI platforms are building vertically integrated stacks: closed applications on top of closed models trained on closed data, running on closed compute. Each layer reinforces the next - data improves models, models improve applications, applications generate more data that only the platform can use. It's a powerful flywheel. If it continues unchallenged, we arrive at an AI era equivalent to AOL, except far more centralized. You don't build on the platform; you build inside it.
There's another path. The LAMP stack - Linux, Apache, MySQL, and PHP - won because that combination became easier to use than the proprietary alternatives, and because it let developers build things that no commercial platform would have prioritized. The web we have today exists because that stack existed.
We think AI can follow the same pattern. Not one stack controlled by any single party, but many stacks shaped by the communities, countries, and companies that use them:
- Open developer interfaces at the top. SDKs, guardrails, workflows, and orchestration that don't lock you into a single vendor;
- Open data standards underneath. Provenance, consent, and portability built in by default, so you know where your training data came from and who has rights to it;
- An open model ecosystem below that. Smaller, specialized, interchangeable models that you can inspect, tune to your values, and run where you need them; and
- Open compute infrastructure at the foundation. Distributed and federated hardware across cloud and edge, not routed through a handful of hyperscalers.
Pieces of this stack already exist - good ones, built by talented people. The task now is to fill in the gaps, connect what's there, and make the whole thing as easy to use as the closed alternatives. That's the work.
Why open source matters here
If you've followed Mozilla, you know the Manifesto. For almost 20 years, it's guided what we build and how - not as an abstract ideal, but as a tool for making principled decisions every single day. Three of its principles are especially urgent in the age of AI:
- Human agency. In a world of AI agents, it's more important than ever that technology lets people shape their own experiences - and protects privacy where it matters most;
- Decentralization and open source. An open, accessible internet depends on innovation and broad participation in how technology gets created and used. The success of open-source AI, built around transparent community practices, is critical to making this possible; and
- Balancing commercial and public benefit. The direction of AI is being set by commercial players. We need strong public-benefit players to create balance in the overall ecosystem.
Open-source AI is how these principles become real. It's what makes plurality possible - many intelligences shaped by many communities, not one model to rule them all. It's what makes sovereignty possible - owning your infrastructure rather than renting it. And it's what keeps the door open for public-benefit alternatives to exist alongside commercial ones.
What we'll do in 2026
The window to shape these defaults is still open, but it won't stay open forever. Here's where we're putting our effort - not because we have all the answers, but because we think these are the places where openness can still reset the defaults before they harden.
Make open AI easier than closed. Mozilla.ai is building any-suite, a modular framework that integrates the scattered components of the open AI stack - model routing, evaluation, guardrails, memory, orchestration - into something coherent that developers can actually adopt without becoming infrastructure specialists. The goal is concrete: Getting started with open AI should feel as simple as making a single API call.
Shift the economics of data. The Mozilla Data Collective is building a marketplace for data that is properly licensed, clearly sourced, and aligned with the values of the communities it comes from. It gives developers access to high-quality training data while ensuring that the people and institutions who contribute that data have real agency and share in the economic value it creates.
Learn from real deployments. Strategy that isn't grounded in practical experience is just speculation, so we're deepening our engagement with governments and enterprises adopting sovereign, auditable AI systems. These engagements are the feedback loops that tell us where the stack breaks and where openness needs reinforcement.
Invest in the ecosystem. We're not just building; we're backing others who are building too. Mozilla Ventures is investing in open-source AI companies that align with these principles. Mozilla Foundation is funding researchers and projects through targeted grants. We can't do everything ourselves, and we shouldn't try. The goal is to put resources behind the people and teams already doing the work.
Show up for the community. The open-source AI ecosystem is vast, and it's hard to know what's working, what's hype, and where the real momentum is building. We want to be useful here. We're launching a newsletter to track what's actually happening in open AI. We're running meetups and hackathons to bring builders together. We're fielding developer surveys to understand what people actually need. And at MozFest this year, we're adding a dedicated developer track focused on open-source AI. If you're doing important work in this space, we want to help it find the people who need to see it.
Are you in?
Mozilla is one piece of a much larger movement, and we have no interest in trying to own or control it - we just want to help it succeed. There's a growing community of people who believe the open internet is still worth defending and who are working to ensure that AI develops along a different path than the one the largest platforms have laid out. Not everyone in that community uses the same language or builds exactly the same things, but something like a shared purpose is emerging. Mozilla sees itself as part of that effort.
We kept the web open not by asking anyone's permission, but by building something that worked better than the alternatives. We're ready to do that again.
So: Are you in?
If you're a developer building toward an open source AI future, we want to work with you. If you're a researcher, investor, policymaker, or founder aligned with these goals, let's talk. If you're at a company that wants to build with us rather than against us, the door is open. Open alternatives have to exist - that keeps everyone honest.
The future of intelligence is being set now. The question is whether you'll own it, or rent it.
We're launching a newsletter to track what's happening in open-source AI - what's working, what's hype, and where the real momentum is building. Sign up here to follow along as we build.
Read more here about our emerging strategy, and how we're rewiring Mozilla for the era of AI.
08 Jan 2026 7:05pm GMT
Firefox Add-on Reviews: 2025 Staff Pick Add-ons
While nearly half of all Firefox users have installed an add-on, it's safe to say nearly all Firefox staffers use add-ons. I polled a few of my peers and here are some of our staff favorite add-ons of 2025…
Falling Snow Animated Theme
Enjoy the soothing mood of Falling Snow Animated Theme. This motion-animated dark theme turns Firefox into a calm wintry night as snowflakes cascade around the corners of your browser.
Privacy Badger
The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is built to look for a certain set of actions that indicate a web page is trying to secretly track you.
Zero setup required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage "supercookies," canvas fingerprinting, and other sneaky tracking methods.
Adaptive Tab Bar Color
Turn Firefox into an internet chameleon. Adaptive Tab Bar Color changes the colors of Firefox to match whatever website you're visiting.
It's beautifully simple and sublime. No setup required, but you're free to make subtle adjustments to color contrast patterns and assign specific colors for websites.
Rainy Spring Sakura by MaDonna
Created by one of the most prolific theme designers in the Firefox community, MaDonna, we love Rainy Spring Sakura's bucolic mix of calming colors.
It's like instant Zen mode for Firefox.
Return YouTube Dislike
Do you like the Dislike? YouTube removed the thumbs-down display, but fortunately Return YouTube Dislike came along to restore our view into the sometimes brutal truth of audience sentiment.
Other Firefox users seem to agree…
"Does exactly what the name suggests. Can't see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool."
Firefox user OFG
"i have never smashed 5 stars faster."
Firefox user 12918016
Return YouTube Dislike re-enables a beloved feature.
LeechBlock NG
Block time-wasting websites with LeechBlock NG - easily one of our staff-favorite productivity tools.
Lots of customization features help you stay focused and free from websites that have a way of dragging you down. Key features:
- Block entire websites or just portions (e.g. allow YouTube video pages but block the homepage)
- Block websites based on time of day, day of the week, or both
- Time limit customization (e.g. only 1 hour of Reddit per day)
DarkSpaceBlue
Drift through serene outer space as you browse the web. DarkSpaceBlue celebrates the infinite wonder of life among the stars.
LanguageTool - Grammar and Spell Checker
Improve your prose anywhere you write on the web. LanguageTool - Grammar and Spell Checker will make you a better writer in 25+ languages.
Much more than a basic spell checker, this privacy-centric writing aid is packed with great features:
- Offers alternate phrasing for brevity and clarity
- Recognizes common misuses of similar sounding words (e.g. there/their, your/you're)
- Works with all web-based email and social media
- Provides synonyms for overused words
LanguageTool can help with subtle syntax improvements.
Sink It for Reddit!
Imagine a more focused and free feeling Reddit - that's Sink It for Reddit!
Some of our staff-favorite features include:
- Custom content muting (e.g. ad blocking, remove app install and login prompts)
- Color-coded comments
- Streamlined navigation
- Adaptive dark mode
Sushi Nori
Turns out we have quite a few sushi fans at Firefox. We celebrate our love of sushi with the savory theme Sushi Nori.
08 Jan 2026 2:59pm GMT
07 Jan 2026
Planet Mozilla
Mozilla Localization (L10N): Mozilla Localization in 2025
A Year in Data
As is tradition, we're wrapping up 2025 for Mozilla's localization efforts and offering a sneak peek at what's in store for 2026 (you can find last year's blog post here).
Pontoon's metrics in 2025 show a stable picture for both new sign-ups and monthly active users. While we always hope to see signs of strong growth, this flat trend is a positive achievement when viewed against the challenges surrounding community involvement in Open Source, even beyond Mozilla. Thank you to everyone actively participating on Pontoon, Matrix, and elsewhere for making Mozilla localization such an open and welcoming community.
- 30 projects and 469 locales (+100 compared to 2024) set up in Pontoon.
- 5,019 new user registrations.
- 1,190 active users submitting at least one translation - on average 233 users per month (+5% YoY).
- 551,378 submitted translations (+18% YoY).
- 472,195 approved translations (+22% YoY).
- 13,002 new strings to translate (-38% YoY).
The number of strings added has decreased significantly overall, but not for Firefox, where the number of new strings was 60% higher than in 2024 (check out the increase of Fluent strings alone). That is not surprising, given the number of new features (selectable profiles, unified trust panel, backup) and the upcoming settings redesign.
As in 2024, the relentless growth in the number of locales is driven by Common Voice, which now has 422 locales enabled in Pontoon (+33%).
Before we move forward, thank you to all the volunteers who contributed their time, passion, and expertise to Mozilla's localization over the last 12 months - or plan to do so in 2026. There is always space for new contributors!
Pontoon Development
A significant part of the work on Pontoon in 2025 isn't immediately visible to users, but it lays the groundwork for improvements that will start showing up in 2026.
One of the biggest efforts was switching to a new data model to represent all strings across all supported formats. Pontoon currently needs to handle around ten different formats, as transparently as possible for localizers, and this change is a step toward reducing complexity and technical debt. As a concrete outcome, we can now support proper pluralization in Android projects, and we landed the first string using this model in Firefox 146. This removes long-standing UX limitations (no more "Bookmarks saved: %1$s" instead of "%1$s bookmarks saved") and allows languages to provide more natural-sounding translations.
In parallel, we continued investing in a unified localization library, moz-l10n, with the goal of having a centralized, well-maintained place to handle parsing and serialization across formats in both JavaScript and Python. This work is essential to keep Pontoon maintainable as we add support for new technologies and workflows.
Pontoon as a project remains very active. In 2025 alone, Pontoon saw more than 200 commits from over 20 contributors, not including work happening in external libraries such as moz-l10n.
Finally, we've been improving API support, another area that is largely invisible to end users. We moved away from GraphQL and migrated to Django REST, and we're actively working toward feature parity with Transvision to better support automation and integrations.
Community
Our main achievement in 2025 was organizing a pilot in-person event in Berlin, reconnecting localizers from around Europe after a long hiatus. Fourteen volunteers from 11 locales spent a weekend together at the Mozilla Berlin office, sharing ideas, discussing challenges, and deepening relationships that had previously existed only online. For many attendees, this was the first time they met fellow contributors they had collaborated with for years, and the energy and motivation that came out of those days clearly showed the value of human connection in sustaining our global community.
This doesn't mean we stopped exploring other ways to connect. For example, throughout the year we continued publishing Contributor Spotlights, showcasing the amazing work of individual volunteers from different parts of the world. These stories highlight not just what our contributors do, but who they are and why they make Mozilla's localization work possible.
Internally, these spotlights have played an important role for advocating on behalf of the community. By bringing real voices and contributions to the forefront, we've helped reinforce the message that investing in people - not just tools - is essential to the long-term health of Mozilla's localization ecosystem.
What's coming in 2026
"As we move into the new year, our focus will shift to exploring alternative deployment solutions. Our goal is to make Pontoon faster, more reliable, and better equipped to meet the needs of our users."
This excerpt comes from last year's blog post, and while it took longer than expected, the good news is that we're finally there. On January 6, we moved Pontoon to a new hosting platform. We expect this change to bring better reliability and performance, especially in response to peaks in bot traffic that have previously made Pontoon slow or unresponsive.
In parallel, we "silently" launched the Mozilla Language Portal, a unified hub that reflects Mozilla's unique approach to localization while serving as a central resource for the global translator community. While we still plan to expand its content, the main infrastructure is now in place and publicly available, bringing together searchable translation memories, documentation, blog posts, and other resources to support knowledge-sharing and collaboration.
On the technology side, we plan to extend plural support to iOS projects and continue improving Pontoon's translation memory support. These improvements aim to make it easier to reuse translations across projects and formats, for example by matching strings independently of placeholder syntax differences, and to translate Fluent strings with multiple values.
We also aim to explore improvements in our machine translation options, evaluating how large language models could help with quality assessment or serve as alternative providers for MT suggestions.
Last but not least, we plan to keep investing in our community. While we don't know yet what that will look like in practice, keep an eye on this blog for updates.
If you have any thoughts or ideas about this plan, let us know on Mastodon or Matrix!
Thank you!
As we look toward 2026, we're grateful for the people who make Mozilla's localization possible. Through shared effort and collaboration, we'll continue breaking down barriers and building a web that works for everyone. Thank you for being part of this journey.
07 Jan 2026 1:51pm GMT
Ludovic Hirlimann: Are Mozilla's forks any good?
To answer that question, we first need to understand how complex writing or maintaining a web browser is.
A "modern" web browser is :
- a network stack,
- and html+[1] parser,
- and image+[2] decoder,
- a javascript[3] interpreter compiler,
- a User's interface,
- integration with the underlying OS[4],
- And all the other things I'm currently forgetting.
Of course, all the above points interact with one another in different ways. In order for "the web" to work, standards are developed and then implemented in the different browsers' rendering engines.
In order to "make" the browser, you need engineers to write and maintain the code, which is probably around 30 Million lines of code[5] for Firefox. Once the code is written, it needs to be compiled [6] and tested [6]. This requires machines that run the operating system the browser ships to (As of this day, mozilla officially ships on Linux, Microslop Windows and MacOS X - community builds for *BSD do exists and are maintained). You need engineers to maintain the compile (build) infrastructure.
Once the engineers responsible for releases[7] have decided which code and features are mature enough, they assemble the bits of code and, like the other engineers, build, test and ship the results to the people using said web browser.
When I was employed at Mozilla (the company that makes Firefox), around 900 engineers were tasked with the above, and a few more were working on research and development. These engineers work 5 days a week, 8 hours per day - that's 900 × 8 × 5 × 52 = 1,872,000 hours of engineering brain power spent every year on making Firefox versions (it's actually less, because I have not taken vacations into account). On top of that, you need to add the cost of building and running the tests before a new version reaches the end user.
The current browsing landscape looks dark: there are currently three choices of rendering engine - WebKit (descended from KHTML), Blink (itself a fork of WebKit) and Gecko. 90+% of the market is dominated by WebKit/Blink-based browsers. This leads to less standards work: if the major engine implements a feature, others need to play catch-up to stay relevant. This happened in the 2000s when IE dominated the browser landscape[8], making it difficult to use Mac OS 9 or X (I'm not even mentioning Linux here :)). This also leads to most web developers using Chrome and only once in a while testing with Firefox or even Safari. But if there's a little glitch, they can still ship, because of market share.
The Mozilla codebase was started back in 1998, when embedding software was not really a thing, given all the platforms that were to be supported. Firefox is very hard to embed (i.e. to use as a software library and add stuff on top). I know that for a fact because both Camino and Thunderbird embed Gecko.
In the last few years, Mozilla has been irritating the people I connect with, who are very privacy-focused and take a dim view of what Mozilla does with Firefox. I believe that Mozilla does this in order to stay relevant to normal users. It needs to stay relevant for at least two things:
- to keep the web standards open, so anyone can implement a web browser / web service;
- to have enough traffic to be able to pay all the engineers working on Gecko.
Now that I've explained a few important things, let's answer the question: "Are Mozilla's forks any good?"
I am biased, as I've worked for the company before. But how can a few people, even if they are good and have plenty of free time, cope with what maintaining a fork requires:
- following security patches and porting said patches;
- following development and maintaining their branch with changes coming from all over the place;
- and testing - how do they test?
If you are comfortable with that, then using a fork because Mozilla is pushing stuff you don't want is probably doable. If not, you can always kill the features you don't like with some `about:config` magic.
Now, I've set a tone above that foresees a dark future for open web technologies. What can you do to keep the web open and with some privacy focus?
- Keep using Mozilla Nightly
- Give servo a try
[1] HTML is markup that needs to be parsed and then rendered; the "+" covers the related formats (CSS and friends) that come along with it.
[2] In order to draw an image or a photo on a screen, you need to be able to decode it. Many file formats are available.
[3] JavaScript is a programming language that makes web pages interactive for the person using the web browser. See https://developer.mozilla.org/en-US/docs/Glossary/JavaScript
[4] Operating systems need, at the very least, to know which program to open files with. The OS landscape has changed a lot over the last 25 years. These days you need to support 3 major OSes, while in the 2000s you had more systems - IRIX, for example. You can still find portions of the Mozilla code base that support these long-dead systems.
[5] https://math.answers.com/math-and-arithmetic/How_many_lines_of_code_in_mozillafirefox
[6] Testing implies testing the code and also having engineers or users use the unfinished product to check that it doesn't regress. Testing Mozilla is explained at https://ehsanakhgari.org/wp-content/uploads/talks/test-mozilla/
[7] Read: a release equals a version. Version 1.5 is a release, as is version 3.0.1.
[8] https://en.wikipedia.org/wiki/Browser_wars
07 Jan 2026 1:26pm GMT
Wladimir Palant: Backdoors in VStarcam cameras
VStarcam is an important brand of cameras based on the PPPP protocol. Unlike the LookCam cameras I looked into earlier, these are often being positioned as security cameras. And they in fact do a few things better like… well, like having a mostly working authentication mechanism. In order to access the camera one has to know its administrator password.
So much for the theory. When I looked into the firmware of the cameras I discovered a surprising development: over the past years this protection has been systematically undermined. Various mechanisms have been added that leak the access password, and in several cases these cannot be explained as accidents. The overall tendency is clear: for some reason VStarcam really wants to have access to their customers' passwords.
A reminder: "P2P" functionality based on the PPPP protocol means that these cameras will always communicate with and be accessible from the internet, even when located on a home network behind NAT. Short of installing a custom firmware, this can only be addressed by configuring the network firewall to deny internet access.
How to recognize affected cameras
Not every VStarcam camera has "VStarcam" printed on the side. I have seen reports of VStarcam cameras being sold under the brand names Besder, MVPower, AOMG, OUSKI, and there are probably more.
Most cameras should be recognizable by the app used to manage them. Any camera managed by one of these apps should be a VStarcam camera: Eye4, EyeCloud, FEC Smart Home, HOTKam, O-KAM Pro, PnPCam, VeePai, VeeRecon, Veesky, VKAM, VsCam, VStarcam Ultra.
Downloading the firmware
VStarcam cameras have a mechanism to deliver firmware updates (LookCam cameras prove that this shouldn't be taken for granted). The app managing the camera will request update information from an address like http://api4.eye4.cn:808/firmware/1.2.3.4/EN where 1.2.3.4 is the firmware version. If a firmware update is available the response will contain a download server and a download path. The app sends these to the device which then downloads and installs the updated firmware.
Both requests are performed over plain HTTP, and this is already the first issue. If an attacker can produce a manipulated response on the network that either the app or the device is connected to, they will be able to install a malicious update on the camera. The former is particularly problematic, as the camera owner may connect to an open WiFi or similarly untrusted network while out.
The last part of a firmware version is a build number which is ignored for the update requests. The first part is a vendor ID where only a few options seem relevant (I checked 10, 48 and 66). The rest of the version number can be easily enumerated. Many firmware branches don't have an active update, and when they do, some updates won't download because the servers in question appear to be no longer operational. Still, I found 380 updates this way.
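For illustration, the enumeration could look roughly like this - the URL pattern and the three vendor IDs come from above, while the loop bounds are my guesses and everything else is just a sketch:

// Sketch of enumerating firmware-update metadata (build with: g++ enum_fw.cpp -lcurl).
// The URL pattern and vendor IDs come from the text above; the loop bounds
// are guesses, and a real script would parse the response rather than just
// printing any non-empty body.
#include <curl/curl.h>
#include <cstdio>
#include <string>

static size_t collect(char* data, size_t size, size_t nmemb, void* userp) {
  static_cast<std::string*>(userp)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  for (int vendor : {10, 48, 66}) {
    for (int model = 0; model < 256; ++model) {      // guessed bound
      for (int minor = 0; minor < 256; ++minor) {    // guessed bound
        char url[128];
        snprintf(url, sizeof(url),
                 "http://api4.eye4.cn:808/firmware/%d.%d.%d.0/EN",
                 vendor, model, minor);
        std::string body;
        CURL* curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10L);
        if (curl_easy_perform(curl) == CURLE_OK && !body.empty())
          printf("%d.%d.%d -> %s\n", vendor, model, minor, body.c_str());
        curl_easy_cleanup(curl);
      }
    }
  }
  curl_global_cleanup();
}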
I managed to unpack all but one of these updates. Firmware version 10.1.110.2 wasn't for a camera but rather some device with an HDMI connector and without any P2P functionality - probably a Network Video Recorder (NVR). Firmware version 10.121.160.42 wasn't using PPPP but something called NHEP2P and an entirely different application-level protocol. Ten updates weren't updating the camera application but only the base system. This left 367 firmware versions for this investigation.
Caveats of this survey
I do not own any VStarcam hardware, nor would it be feasible to investigate hundreds of different firmware versions with real hardware. The results of this article are based solely on reverse engineering, emulation, and automated analysis via running Ghidra in headless mode. While I can easily emulate a PPPP server, doing the same for the VStarcam cloud infrastructure isn't possible, I simply don't know how it behaves. Similarly, the firmware's interaction with hardware had to be left out of the emulation. While I'm still quite confident in my results, these limitations could introduce errors.
More importantly, there are only so many firmware versions that I checked manually. Most of them were checked automatically, and I typically only looked at a few lines of decompiled code that my scripts extracted. There is potential for false negatives here; I expect that there are more issues with VStarcam firmware than what's listed here.
VStarcam's authentication approach
When an app communicates with a camera, it sends commands like GET /check_user.cgi?loginuse=admin&loginpas=888888&user=admin&pwd=888888. Despite the looks of it, these aren't HTTP requests passed on to a web server. Instead, the firmware handles these in the function P2pCgiParamFunction, which doesn't even attempt to parse the request. The processing code looks for substrings like check_user.cgi to identify the command (yes, you'd better not set check_user.cgi as your access password). Parameter extraction works via similar substring matching.
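To illustrate why that parenthetical warning matters, here is a toy version of substring-based dispatch - not the firmware's actual code, just the failure mode it invites:

// Toy illustration of substring-based "parsing" (not the firmware's actual
// code): the command is identified with strstr() over the whole request, so
// a password that happens to contain "check_user.cgi" matches too.
#include <cstdio>
#include <cstring>

void P2pCgiParamFunction(const char* request) {
  if (strstr(request, "check_user.cgi")) {
    const char* pwd = strstr(request, "loginpas=");
    printf("treated as login attempt, loginpas: %s\n", pwd ? pwd + 9 : "(none)");
  }
}

int main() {
  // This is a different command entirely, but the dispatcher fires anyway
  // because the password parameter contains "check_user.cgi".
  P2pCgiParamFunction("GET /get_params.cgi?loginuse=admin&loginpas=check_user.cgi");
}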
It's worth noting that these cameras have a very peculiar authentication system which VStarcam calls "dual authentication." Here is how the Eye4 application describes it:
The dual authentication mechanism is a measure to upgrade the whole system security
- The device will double check the identity of the visitor and does not support the old version of app.
- Considering the security risk of possible leakage, the plaintext password mode of the device was turned off and ciphertext access was used.
- After the device is added for the first time, it will not be allowed to be added for a second time, and it will be shared by the person who has added it.
I'm not saying that this description is utter bullshit but there is a considerable mismatch with the reality that I can observe. The VStarcam firmware cannot accept anything other than plaintext passwords. Newer firmware versions employ obfuscation on the PPPP-level but this hardly deserves the name "ciphertext".
What I can see is: once a device is enrolled into dual authentication, the authentication is handled by function GetUserPri_doubleVerify rather than GetUserPri. There isn't a big difference between the two, both will try the credentials from the loginuse/loginpas parameters and fall back to the user/pwd credentials pair. Function GetUserPri_doubleVerify merely checks a different password.
From the applications I get the impression that the dual authentication password is automatically generated and probably not even shared with the user but stored in their cloud account. This is an improvement over the regular password that defaults to 888888 and allowed these cameras to be enrolled into a botnet. But it's still a plaintext password used for authentication.
There is a second aspect to dual authentication. When dual authentication is used, the app is supposed to make a second authentication call to eye4_authentication.cgi. The loginAccount and loginToken parameters here appear to belong to the user's cloud account, apparently meant to make sure that only the right user can access a device.
Yet in many firmware versions I've seen the eye4_authentication.cgi request always succeeds. The function meant to perform a web request is simply hardcoded to return the success code 200. Other firmware versions actually make a request to https://verification.eye4.cn, yet this server also seems to produce a 200 response regardless of what parameters I try. It seems that VStarcam never made this feature work the way they intended it.
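In decompiled form, the non-functional variant amounts to something like this (an illustrative reconstruction; the name and signature are mine):

// Illustrative reconstruction (name and signature are mine): the function
// that is supposed to validate loginAccount/loginToken against the cloud
// never performs a request and unconditionally reports success.
int eye4AuthenticationRequest(const char* loginAccount, const char* loginToken) {
  (void)loginAccount;
  (void)loginToken;
  return 200;  // hardcoded success code
}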
None of this stopped VStarcam from boasting on their website merely a year ago:

You can certainly count on anything saying "financial grade encryption" being bullshit. I have no idea where AES comes into the picture here, I haven't seen it being used anywhere. Maybe it's their way of saying "we use TLS when connecting to our cloud infrastructure."
Endpoint protection
A reasonable approach to authentication is: authentication is required before any requests unrelated to authentication can be made. This is not the approach taken by VStarcam firmware. Instead, some firmware versions decide for each endpoint individually whether authentication is necessary. Other versions put a bunch of endpoints outside of the code enforcing authentication.
The calls explicitly excluded from authentication differ by firmware version but are for example: get_online_log.cgi, show_prodhwfg.cgi, ircut_test.cgi, clear_log.cgi, alexa_ctrl.cgi, server_auth.cgi. For most of these it isn't obvious why they should be accessible to unauthenticated users. But get_online_log.cgi caught my attention in particular.
Unauthenticated log access
So a request like GET /get_online_log.cgi?enable=1 can be sent to a camera without any authentication. This isn't a request that any of the VStarcam apps seem to support, what does it do?
Despite the name, this isn't a download request; it rather sets a flag for the current connection. The logic behind this involves many moving parts, including a Linux kernel module, but the essence is this: whenever the application logs something via the LogSystem_WriteLog function, the application won't merely print it to stderr and write it to the log file on the SD card but will also send it to any connection that has this flag set.
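A rough sketch of that fan-out - LogSystem_WriteLog is real, everything else here is illustrative:

// Rough sketch of the log fan-out (LogSystem_WriteLog is the real function
// name; the rest is illustrative). Each connection carries an "online log"
// flag, settable without authentication, and every log line is sent to
// every connection that has it set.
#include <cstdarg>
#include <cstdio>
#include <vector>

struct Connection {
  bool onlineLogEnabled = false;  // set by get_online_log.cgi?enable=1
  void send(const char* msg) { (void)msg; /* transmit over PPPP (stub) */ }
};

std::vector<Connection*> gConnections;

void LogSystem_WriteLog(const char* file, const char* func, int line,
                        int level, const char* fmt, ...) {
  char buf[1024];
  va_list args;
  va_start(args, fmt);
  vsnprintf(buf, sizeof(buf), fmt, args);
  va_end(args);
  fprintf(stderr, "%s:%s:%d [%d] %s\n", file, func, line, level, buf);
  // ...also append to the log file on the SD card...
  for (Connection* c : gConnections)
    if (c->onlineLogEnabled)
      c->send(buf);  // anyone who set the flag sees every log line
}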
What does the application log? Lots and lots of stuff. On average, VStarcam firmware has around 1500 such logging calls. For example, it could log security tokens:
LogSystem_WriteLog("qiniu.c", "upload_qiniu", 497, 0,
"upload_qiniu*** filename = %s, fileid = %s, uptoken = %s\n", …);
LogSystem_WriteLog("pushservice.c", "parsePushServerRequest_cjson", 5281, 1,
"address=%s token =%s master= %d timestamp = %d", …);
LogSystem_WriteLog("queue.c", "CloudUp_Manage_Pth", 347, 2,
"token=%s", …);
It could log cloud server responses:
LogSystem_WriteLog("pushservice.c", "curlPostMqttAuthCb", 4407, 3,
"\n\nrspBuf = %s\n", …);
LogSystem_WriteLog("post/postFileToCloud.c", "curl_post_file_cb", 74, 0,
"\n\nrspBuf = %s\n", …);
LogSystem_WriteLog("pushserver.c", "curl_Eye4Authentication_write_data_cb", 2822, 0,
"rspBuf = %s", …);
And of course it will log the requests coming in via PPPP:
LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0,
"sit %d, pcmd: %s", …);
Reminder: these requests contain the authentication password as a parameter. So an attacker can connect to a vulnerable device, request logs and wait for the legitimate device owner to connect. Once they do, their password will show up in the logs - voila, the attacker has access now.
VStarcam appears to be at least somewhat aware of this issue because some firmware versions contain code "censoring" password parameters prior to logging:
// Copy the request, then truncate the copy at the first "loginuse"
// parameter so the credentials don't make it into the log line.
memcpy(pcmd, request, sizeof(pcmd));
char* pos = strstr(pcmd, "loginuse");
if (pos)
*pos = 0;
LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0,
"sit %d, pcmd: %s", sit, pcmd);
But that's only the beginning of the story of course.
Explicit password leaking via logs
In addition to the logging calls where the password leaks as a (possibly unintended) side-effect, some logging calls are specifically designed to write the device password to the log. For example, the function GetUserPri meant to handle authentication when dual authentication isn't enabled will often do something like this on a failed login attempt:
LogSystem_WriteLog("sysparamapp.c", "GetUserPri", 177, 0,
"loginuse=%s&loginpas=%s&user=admin&pwd=888888&", gUser, gPassword);
These aren't the parameters of a received login attempt but rather what the parameters should look like for the request to succeed. And if the attacker enabled log access for their connection, they will get the device credentials handed to them on a silver platter - without even having to wait for the device owner to connect.
If dual authentication is enabled, function GetUserPri_doubleVerify often contains a similar call:
LogSystem_WriteLog("web.c", "GetUserPri_doubleVerify", 536, 0,
"pri[%d] system OwnerPwd[%s] app Pwd[%s]",
pri, gOwnerPassword, gAppPassword);
Log uploading
What got me confused at first were the firmware versions that would log the "correct" password on failed authentication attempts but lacked the capability for unauthenticated log access. When I looked closer I found the function DoSendLogToNodeServer. The firmware receives a "node configuration" from a server which includes a "push IP" and the corresponding port number. It then opens a persistent TCP connection to that address (unencrypted of course), so that DoSendLogToNodeServer can send messages to it.
Despite the name, this function doesn't upload all of the application logs. There are only three to four DoSendLogToNodeServer calls in the firmware versions I looked at, and two are invariably found in function P2pCgiParamFunction, in code running on the first failed authentication attempt:
// Format the correct passwords into a message, then send both the failed
// request and the correct credentials to the server.
sprintf(buffer,"password error [doublePwd][%s], [PassWd][%s]", gOwnerPassword, gPassword);
DoSendLogToNodeServer(request);
DoSendLogToNodeServer(buffer);
This is sending both the failed authentication request and the correct passwords to a VStarcam server. So while the password isn't being leaked here to everybody who knows how to ask, it's still being leaked to VStarcam themselves. And anybody who is eavesdropping on the device's traffic of course.
A few firmware versions have log upload functionality in a function called startUploadLogToServer, here really all logging output is being uploaded to the server. This one isn't called unconditionally however but rather enabled by the setLogUploadEnable.cgi endpoint. An endpoint which, you guessed it, can be accessed without authentication. But at least these firmware versions don't seem to have any explicit password logging, only the "regular" logging of requests.
Password-leaking backdoor
With some considerable effort, all of the above could be explained as debugging functionality which was mistakenly shipped to production. VStarcam wouldn't be the first company to fail to realize that functionality labeled "for debugging purposes only" will still be abused if released with the production build of their software. But I found yet another password leak, one which can only be described as a backdoor.
At some point VStarcam introduced a second version of their get_online_log.cgi API. When that second version is requested the device will respond with something like:
result=0;
index=12345678;
str=abababababab;
The result=0 part is typical and indicates that authentication (or lack thereof in this case) was successful. The other two values are unusual, and eventually I decided to check what they were about. Turned out, str is a hex-encoded version of the device password after it was XOR'ed with a random byte. And index is an obfuscated representation of that byte.
I can only explain it like this: somebody at VStarcam thought that leaking passwords via log output was too obvious, people might notice. So they decided to expose the device password in a more subtle way, one that only they knew how to decode (unless somebody notices this functionality and spends two minutes studying it in the firmware).
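Recovering the password takes nothing beyond what's described above. Since there are only 256 possible key bytes, the index obfuscation can be ignored entirely - a brute-force sketch:

// Brute-force sketch: hex-decode `str`, then try all 256 XOR keys and print
// candidates that are fully printable (a real password will stand out).
// The hex value below is the placeholder from the example response.
#include <cctype>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

int main() {
  const std::string hex = "abababababab";  // the `str` value from the response
  std::vector<unsigned char> raw;
  for (size_t i = 0; i + 1 < hex.size(); i += 2)
    raw.push_back((unsigned char)strtol(hex.substr(i, 2).c_str(), nullptr, 16));

  for (int key = 0; key < 256; ++key) {
    std::string candidate;
    bool printable = true;
    for (unsigned char b : raw) {
      char c = (char)(b ^ key);
      if (!isprint((unsigned char)c)) { printable = false; break; }
      candidate += c;
    }
    if (printable)
      printf("key 0x%02x -> %s\n", key, candidate.c_str());
  }
}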
Mind you, even though this is clearly a backdoor I'm still not ruling out incompetence. Maybe VStarcam made a large enough mess with their dual authentication that their customer support needs to recover device access on a regular basis. However, they do have device reset functionality that should normally be used for this scenario.
In the end, for their customers it doesn't matter what the intention was. The result is a device that cannot be trusted with protecting access. For a security camera this is an unforgivable flaw.
Establishing a timeline
Now we are coming to the tough questions. Why do some firmware versions have this backdoor functionality while others don't? When was this introduced? In what order? What is the current state of affairs?
You might think that after compiling the data on 367 firmware versions the answers would be obvious. But the data is so inconsistent that any conclusions are really difficult. Thing is, we aren't dealing with a single evolving codebase here. We aren't even dealing with two codebases or a dozen of them. 367 firmware versions are 367 different codebases. These codebases are related, they share some code here and there, but they are all being developed independently.
I've seen this development model before. What VStarcam appears to be doing is: for every new camera model they take some existing firmware and fork it. They adjust that firmware for the new hardware, they probably add new features as well. None of this work makes it into the original firmware unless it is explicitly backported. And since VStarcam is maintaining hundreds of firmware variants, the older ones are usually only receiving maintenance changes if any at all.
To make this mess complete, VStarcam's firmware version numbers don't make any sense at all. And I don't mean the fact that VStarcam releases the same camera under 30 different model names, so there is no chance of figuring out the model to firmware version mapping. It's also the firmware version numbers themselves.
As I've already mentioned, the last part of the firmware version is the build number, increased with each release. The first part is the vendor ID: firmware versions starting with 48 are VStarcam's global releases whereas 66 is reserved for their Russian distributor (or rather was I think). Current VStarcam firmware is usually released with vendor ID 10 however, standing for… who knows, VeePai maybe? This leaves the two version parts in between, and I couldn't find any logic here whatsoever. Like, firmware versions sharing the third part of the version number would sometimes be closely related, but only sometimes. At the same time the second part of the version number is supposed to represent the camera model, but that's clearly not always correct either.
I ended up extracting all the logging calls from all the firmware versions and using that data to calculate a distance between every firmware version pair. I then fed this data into GraphViz and asked it to arrange the graph for me. It gave me the VStarcam spiral galaxy:

Click the image above to see the larger and slightly interactive version (it shows additional information when the mouse pointer is at a graph node). The green nodes are the ones that don't allow access to device logs. Yellow are the ones providing unauthenticated log access, always logging incoming requests including their password parameters. The orange ones have additional logging that exposes the correct password on failed authentication attempts - or they call DoSendLogToNodeServer function to send the correct password to a VStarcam server. The red ones have the backdoor in the get_online_log.cgi API leaking passwords. Finally pink are the ones which pretend to improve things by censoring parameters of logged requests - yet all of these without exception leak the password via the backdoor in the get_online_log.cgi API.
Note: Firmware version 10.165.19.37 isn't present in the graph because it is somehow based on an entirely different codebase with no relation to the others. It would be red in the graph however, as the backdoor has been implemented here as well.
Not only does this graph show the firmware versions as clusters, it's also possible to approximately identify the direction of time for each cluster. Let's add cluster names and time arrows to the image:

Of course this isn't a perfect representation of the original data, and I wasn't sure whether it could be trusted. Are these clusters real or merely an artifact produced by the graph algorithm? I verified things manually and could confirm that the clusters are in fact distinctly different on the technical level, particularly when considering the update format:
- Clusters A and B represent firmware for ARM processors. I'm unsure what caused the gap between the two clusters but cluster A contains firmware from years 2019 and 2020, cluster B on the other hand is mostly years 2021 and 2022. Development pretty much stopped here, the only exception being the four red firmware versions which are recent. Updates use the "classic" ZIP format here.
- Cluster C covers years 2019 to 2022. Quite remarkably, in these years the firmware from this cluster moved from ARM processors and LiteOS to MIPS processors and Linux. The original updates based on VStarcam Pack System were replaced by the VeePai-branded ZIP format and later by Ingenic updates with LZO compression. All that happened without introducing significant changes to the code but rather via incremental development.
- Cluster D contains firmware for the MIPS processors from years 2022 and 2023. Updates are using the VeePai-branded ZIP format.
- Cluster E formed around 2023, there is still some development being done here. It uses MIPS processors like cluster D, yet the update format is different (what I called VeePai updates in my previous blog post).
- Cluster F has seen continuous development since approximately 2022, this is firmware based on Ingenic's MIPS hardware and the most active branch of VStarcam development. Originally the VeePai-branded ZIP format was used for updates, this was later transitioned to Ingenic updates with LZO compression and finally to the same format with jzlcma compression.
With the firmware versions ordered like this I could finally make some conclusions about the introduction of the problematic features:
- Unauthenticated logs access via the get_online_log.cgi API was introduced in cluster B around 2022.
- Logging the correct password on failed attempts was introduced independently in cluster C. In fact, some firmware versions had this in 2020 already.
- In 2021 cluster C also added the innovation that was the DoSendLogToNodeServer function, sending the correct password to a VStarcam server on the first failed login attempt.
- Unauthenticated logs access and logging the correct password appear to have been combined in cluster D in 2023.
- Cluster E initially also adopted the approach of exposing log access and logging the device password on failed attempts, adding the sending of the correct password to a VStarcam server to the mix. However, starting in 2024 firmware versions with the get_online_log.cgi backdoor start popping up here, and these have all other password leaks removed. These even censor passwords in logged request parameters. Either there were security considerations at play or the other ways to expose the password were considered unnecessary at this point and too obvious.
- Cluster F also introduced logging the device password on failed attempts around 2023. This cluster appears to be the origin of the get_online_log.cgi backdoor, which was introduced here around 2024. Unlike with cluster E, this backdoor didn't replace the existing password leaks here but only complemented them. In fact, while cluster F was initially "censoring" parameters so that logged requests wouldn't leak passwords, this measure appears to have been dropped later in 2024. Current cluster F firmware tends to have all the issues described in this post simultaneously. Whatever security considerations may have driven the changes in cluster E, the people in charge of cluster F clearly disagreed.
The impact
So, how bad is it? Knowing the access password allows access to the camera's main functionality: audio and video recordings. But these cameras have been known for vulnerabilities allowing execution of arbitrary commands. Also, newer cameras have an API that will start a telnet server with hardcoded and widely known administrator credentials (older cameras had this telnet server start by default). So we have to assume that a compromised camera could become part of a botnet or be used as a starting point for attacks against a network.
But this requires accessing the camera first, and most VStarcam cameras won't be exposed to the internet directly. They will only be reachable via the PPPP protocol. And for that the attackers would need to know the device ID. How would they get it?
There are a number of ways, most of which I've already discussed before. For example, anybody who was briefly connected to your network could have collected the device IDs of your cameras. The script to do that currently won't work with newer VStarcam cameras, because these obfuscate the traffic on the PPPP level, but the necessary adjustments aren't exactly complicated.
PPPP networks still support "supernodes," devices that help route traffic. Back in 2019 Paul Marrapese abused that functionality to register a rogue supernode and collect device IDs en masse. There is no indication that this trick stopped working, and the VStarcam networks are likely susceptible as well.
Users also tend to leak their device IDs themselves: they will post screenshots or videos of the app's user interface. At first glance this is less problematic with the O-KAM Pro app, because it will display only a vendor-specific device ID (it looks similar to a PPPP device ID but has seven digits and only four letters in the verification code). That is, until you notice that the app uses a public web API to translate vendor-specific device IDs into PPPP device IDs.
Anybody who can intercept some PPPP traffic can extract the device IDs from it. Even when VStarcam networks obfuscate the traffic rather than using plaintext transmission, the static keys are well known, and removing the obfuscation isn't hard.
And finally, simply guessing device IDs is still possible. With only 5 million possible verification codes for each device ID and servers not implementing rate limiting, brute-force attacks are quite realistic.
Let's not forget the elephant in the room, however: VStarcam themselves know all the device IDs, of course. Not just that, they know which devices are active and where. With a password they can access the cameras of interest to them (or their government) at any time.
Coordinated disclosure attempt
Given the intentional nature of these issues, I was unsure how to deal with this. I mean, what's the point of reporting vulnerabilities to VStarcam that they are clearly aware of? In the end I decided to give them a chance to address the issues before they become public knowledge.
When I looked for a way to do that, however, all I found was VStarcam boasting about their ISO 27001:2022 compliance. My understanding is that this requires them to have a dedicated person responsible for vulnerability management, but they are not obliged to list any security contact that can be reached from outside the company - and so they don't. I ended up emailing all company addresses I could find, asking whether there is any way to report security issues to them.
I haven't received any response, an experience that others apparently have had with VStarcam as well. So I went with my initial publication schedule rather than waiting 90 days as I normally would.
Recommendations
Whatever motives VStarcam had to backdoor their cameras, the consequence for the customers is: these cameras cannot be trusted. Their access protection should be considered compromised. Even with firmware versions shown as green on my map, there is no guarantee that I haven't missed something or that these will still be green after the next update.
If you want to keep using a VStarcam camera, the only safe way to do so is disconnecting it from the internet. It doesn't have to be disconnected physically; internet routers often have a way to prohibit internet traffic to and from particular devices. My router, for example, has this feature under parental controls.
Of course this will mean that you will only be able to control your camera while connected to the same network. It might be possible to explicitly configure port forwarding for the camera's RTSP port, allowing you to access at least the video stream from outside. Just make sure that your RTSP password isn't known to VStarcam.
07 Jan 2026 1:01pm GMT
This Week In Rust: This Week in Rust 633
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
Newsletters
Project/Tooling Updates
- Danube Messaging v0.6 - Introduces Schema Registry
- Releasing Fjall 3.0: log-structured key-value storage engine
Observations/Thoughts
- 1160 PRs to improve Rust in 2025
- [series] Who Owns the Memory? Part 3: How Big Is your Type?
- Even Safer Rust with Miri
- [uv] OnceMap: Rust Pattern for Running Concurrent Work Exactly Once
- Rust At Scale: Scaleway's Big Bet To Become THE European Hyperscaler
Rust Walkthroughs
- Introduction to SIMD programming in pure Rust
- Stop Forwarding Errors, Start Designing Them
- Designing APIs for the Pit of Success
- Rusty CDK, an Infrastructure as Code Experiment
- Ergonomic Async trait objects in Rust
- [video] Unlocking Cargo. Towards concurrent cargo builds and cross workspace caching
- [audio] Netstack.FM episode 21 - GraphQL and Rust with Tom Houlé
- That mockingbird won't sing: a mock API server in Rust
- [ES] GoF Design Patterns in Rust: Necessary or Optional?
- [video] Tock, an embedded OS in Rust, overview and demo (3 videos in playlist)
Research
Crate of the Week
This week's crate is kameo, an asynchronous actor framework with clear, trait-based abstractions for actors and typed messages.
Thanks to edgimar for the suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.
Let us know if you would like your feature to be tracked as a part of this list.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- RustWeek 2026 | CFP closes 2026-01-18 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
- RustConf 2026 | CFP closes 2026-02-16 | Montreal, Quebec, Canada | 2026-09-08 - 2026-09-10
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
341 pull requests were merged in the last week
Compiler
Library
- `oneshot` channel
- add `VecDeque::splice`
- add specialization for `deque1.prepend(deque2.drain(range))` (`VecDeque::prepend` and `extend_front`)
- avoid index check in `char::to_lowercase` and `char::to_uppercase`
- make specialization of `Vec::extend` and `VecDeque::extend_front` work for `vec::IntoIter` with any `Allocator`, not just `Global`
- implement `TryFrom<char>` for `usize`
- improve alloc `Vec::retain_mut` performance
Cargo
- feat(report): add `cargo report rebuilds`
- feat(test-support): Use test name for dir when running tests
- fix(log): add `dependencies` field to `UnitRegistered`
- any build scripts can now use `cargo::metadata=KEY=VALUE`
- implement fine grain locking for `build-dir`
- refactor: migrate some cases to expect/reason
Clippy
- `manual_div_ceil`: Added check for variant `x.next_multiple_of(y) / y`
- `transmuting_null`: Check single expression const blocks and blocks
- do not make suggestion machine-applicable if it may change semantics
- fix `bool_assert_comparison` suggests wrongly for macros
- fix `implicit_saturating_sub` suggests wrongly on untyped int literal
- fix `multiple_inherent_impl` false negatives for generic impl blocks
- fix `needless_for_each` false negative when `for_each` is in the expr of a block
- fix `new_without_default` misses where clause in `new`
- fix `redundant_pattern_matching` misses `)` in suggestion span
- fix `cmp_owned` wrongly unmangled macros
- move `multiple_bound_locations` to style
Rust-Analyzer
- add useless prefix `try_into_` for `suggest_name`
- allow finding references from doc comments
- add `#[rust_analyzer::macro_style()]` attribute to control macro completion brace style
- add location links for generic parameter type hints
- fix incorrect dyn hint in `impl Trait for`
- fix source text
- don't fire `non_camel_case_types` lint for structs/enums marked with `repr(C)`
- have an `upvars_mentioned()` query that only computes what upvars a closure captures
- suppress false positive missing assoc item diag on specialization
- implement `Span::line()` and `Span::column()` for proc-macro server
- migrate `move_arm_cond_to_match_guard` assist to use `SyntaxEditor`
- compress token trees for best memory usage
- only compute lang items for `#![feature(lang_items)]` crates
- re-use scratch allocations for `try_evaluate_obligations`
- pre-allocate intern storages with 64kb of data / 1024 elements
- proc-macro-srv: support `file` and `local_file` via bidirectional callbacks
Rust Compiler Performance Triage
Not many PRs were merged, as it was still mostly a holiday week. #149681 caused small regressions across the board; this is pending investigation.
Triage done by @kobzol. Revision range: 112a2742..7c04f5d2
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.5% | [0.1%, 1.4%] | 146 |
| Regressions ❌ (secondary) | 0.6% | [0.0%, 3.5%] | 91 |
| Improvements ✅ (primary) | -3.1% | [-4.7%, -1.5%] | 2 |
| Improvements ✅ (secondary) | -0.7% | [-6.4%, -0.1%] | 15 |
| All ❌✅ (primary) | 0.4% | [-4.7%, 1.4%] | 148 |
2 Regressions, 0 Improvements, 7 Mixed; 4 of them in rollups. 51 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- build-std: context
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- refactor: remove Ord bound from BinaryHeap::new etc
- regression: "the parameter type `T` may not live long enough" in `offset_of!`
- Tracking Issue for `peekable_next_if_map`
No Items entered Final Comment Period this week for Cargo, Rust RFCs, Leadership Council, Language Team, Language Reference or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
Upcoming Events
Rusty Events between 2026-01-07 - 2026-02-04 🦀
Virtual
- 2026-01-07 | Virtual (Girona, ES) | Rust Girona
- 2026-01-07 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2026-01-08 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2026-01-08 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2026-01-13 | Virtual | libp2p Events
- 2026-01-13 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-01-13 | Virtual (Tel Aviv-yafo, IL) | Code Mavens 🦀 - 🐍 - 🐪
- 2026-01-15 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2026-01-15 | Virtual (Berlin, DE) | Rust Berlin
- 2026-01-18 | Virtual (Tel Aviv-yafo, IL) | Code Mavens 🦀 - 🐍 - 🐪
- 2026-01-20 | Virtual (Washington, DC, US) | Rust DC
- 2026-01-21 | Virtual (Girona, ES) | Rust Girona
- 2026-01-21 | Virtual (Vancouver, BC, CA) | Vancouver Rust
- 2026-01-27 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-01-28 | Virtual (Girona, ES) | Rust Girona
- 2026-01-29 | Virtual (Amsterdam, NL) | Bevy Game Development
- 2026-01-29 | Virtual (Berlin, DE) | Rust Berlin
- 2026-01-29 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2026-02-04 | Virtual (Indianapolis, IN, US) | Indy Rust
Asia
- 2026-01-07 | Tel Aviv-yafo, IL | Rust 🦀 TLV
- 2026-01-08 | Seoul, KR | Seoul Rust (Programming Language) Meetup
- 2026-01-17 | Delhi, IN | Rust Delhi
Europe
- 2026-01-07 | Amsterdam, NL | Rust Developers Amsterdam Group
- 2026-01-08 | Geneva, CH | Post Tenebras Lab
- 2026-01-14 | Girona, ES | Rust Girona
- 2026-01-14 | Reading, UK | Reading Rust Workshop
- 2026-01-16 | Edinburgh, UK | Rust and Friends
- 2026-01-20 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
- 2026-01-20 | Paris, FR | Rust Paris
- 2026-01-21 | Cambridge, UK | Cambridge Rust Meetup
- 2026-01-26 | Augsburg, DE | Rust Meetup Augsburg
- 2026-01-28 | Dortmund, DE | Rust Dortmund
- 2026-02-04 | Oxford, UK | Oxford ACCU/Rust Meetup.
North America
- 2026-01-08 | Lehi, UT, US | Utah Rust
- 2026-01-08 | Mountain View, CA, US | Hacker Dojo
- 2026-01-08 | Portland, OR, US | PDXRust
- 2026-01-08 | San Diego, CA, US | San Diego Rust
- 2026-01-10 | Boston, MA, US | Boston Rust Meetup
- 2026-01-13 | New York, NY, US | Rust NYC
- 2026-01-14 | Chicago, IL, US | Chicago Rust Meetup
- 2026-01-15 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2026-01-17 | Boston, MA, US | Boston Rust Meetup
- 2026-01-17 | Herndon, VA, US | NoVaLUG
- 2026-01-20 | San Francisco, CA, US | San Francisco Rust Study Group
- 2026-01-21 | Austin, TX, US | Rust ATX
- 2026-01-22 | Boston, MA, US | Boston Rust Meetup
- 2026-01-22 | Mountain View, CA, US | Hacker Dojo
- 2026-01-24 | Boston, MA, US | Boston Rust Meetup
- 2026-01-28 | Los Angeles, CA, US | Rust Los Angeles
- 2026-01-29 | Atlanta, GA, US | Rust Atlanta
- 2026-01-29 | Nashville, TN, US | Music City Rust Developers
- 2026-01-31 | Boston, MA, US | Boston Rust Meetup
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
I find it amazing that by using Rust and Miri I am using tools that are on the edge of fundamental research in Programming Languages. Actual practically usable tools that anyone can use, not arcane code experiments passed around between academics.
Thanks to Kyllingene for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
07 Jan 2026 5:00am GMT
06 Jan 2026
Planet Mozilla
Olivier Mehani: Pausing a background process
It's common, in a Unix shell, to pause a foreground process with Ctrl+Z. However, today I needed to pause a _background_ process.
tl;dr: SIGTSTP and SIGCONT
The context was a queue processor spinning too fast, and preventing us from dequeuing unwanted messages.
Unsurprisingly, there are standard POSIX signals to pause and resume a target PID.
So we just need to grab the PID, and kill away.
$ kill -TSTP ${PID}
[... do what's needed ...]
$ kill -CONT ${PID}
The post Pausing a background process first appeared on Narf.
06 Jan 2026 12:48am GMT
05 Jan 2026
Planet Mozilla
Jonathan Almeida: Rebase all WIPs to the new main
A small pet-peeve with fetching the latest main on jujutsu is that I like to move all my WIP patches to the new one. That's also nice because jj doesn't make me fix the conflicts immediately!
The solution from a co-worker (kudos to skippyhammond!) is to query all immediate descendants of the previous main after the fetch.
jj git fetch
# assuming 'z' is the rev-id of the previous main.
jj rebase -s "mutable()&z+" -d main
I haven't learnt how to make aliases accept params yet, so this will have to do for now.
Update: After a bit of searching, it seems that today this is only possible by wrapping it in a shell script. Based on the examples in the jj documentation, an alias would look like this:
[aliases]
# Rebase all WIP revs onto the latest main; pass the rev-id of the previous main.
hoist = ["util", "exec", "--", "bash", "-c", """
set -euo pipefail
jj rebase -s "mutable()&$1+" -d "main"
""", ""]
05 Jan 2026 11:10pm GMT
Wladimir Palant: Analysis of PPPP “encryption”
My first article on the PPPP protocol already said everything there was to say about PPPP "encryption":
- Keys are static and usually trivial to extract from the app.
- No matter how long the original key, it is mapped to an effective key that's merely four bytes long.
- The "encryption" is extremely susceptible to known-plaintext attacks, usually allowing reconstruction of the effective key from a single encrypted packet.
So this thing is completely broken, why look any further? Because there is at least one situation where you don't know the app being used, so you cannot extract the key, and you don't have any traffic to analyze either: when you are trying to scan your local network for potential hidden cameras.
This script will currently only work for cameras using plaintext communication. Other cameras expect a properly encrypted "LAN search" packet and will ignore everything else. How can this be solved without listing all possible keys in the script? By sending all possible ciphertexts of course!
TL;DR: What would be completely ridiculous with any reasonable protocol turned out to be quite possible with PPPP. There are at most 157,092 ways in which a "LAN search" packet can be encrypted. I've opened a pull request to have the PPPP device detection script adjusted.
Note: Cryptanalysis isn't my topic, I am by no means an expert here. These issues are simply too obvious.
Mapping keys to effective keys
The key which is specified as part of the app's "init string" is not used for encryption directly. Nor is it fed into any of the established key stretching algorithms. Instead, the key, represented as the byte sequence $b_1, b_2, \dots, b_n$, is mapped to four bytes that become the effective key ($\lfloor x \rfloor$ means rounding down, $\oplus$ stands for the bitwise XOR operation). The three bytes relevant to the analysis below, call them $K_1$, $K_2$ and $K_3$, are calculated as follows:

$$K_1 = \Big(\sum_{i=1}^{n} \lfloor b_i/3 \rfloor\Big) \bmod 256 \qquad K_2 = \Big(\sum_{i=1}^{n} b_i\Big) \bmod 256 \qquad K_3 = b_1 \oplus b_2 \oplus \dots \oplus b_n$$
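In code, the mapping looks roughly like this (a sketch based on the formulas above; the byte order within the effective key is my reconstruction, and the redundant fourth byte is omitted):

// Maps a PPPP key to the three independent bytes of the effective key.
fn effective_key(key: &[u8]) -> (u8, u8, u8) {
    let k1 = key.iter().map(|&b| (b / 3) as u32).sum::<u32>() as u8; // sum of floor(b/3), mod 256
    let k2 = key.iter().map(|&b| b as u32).sum::<u32>() as u8;       // sum of all bytes, mod 256
    let k3 = key.iter().fold(0u8, |acc, &b| acc ^ b);                // XOR of all bytes
    (k1, k2, k3)
}

fn main() {
    // Hypothetical key, purely for illustration.
    let (k1, k2, k3) = effective_key(b"EXAMPLE-KEY-123");
    println!("{k1:02X} {k2:02X} {k3:02X}");
}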
In theory, a 4 byte long effective key means $2^{32}$ possible values. But that would only be the case if these bytes were independent of each other.
Redundancies within the effective key
Of course the bytes of the effective key are not independent. This is most obvious with the fourth byte, which is completely determined by the others. This means that we can ignore it, bringing the number of possible effective keys down to $2^{24}$.
Now let's have a look at the relationship between $K_2$ and $K_3$. Addition and bitwise XOR operations are very similar, the latter merely ignores carry. This difference affects all the bits of the result but the lowest one, as no carry ever reaches it. This means that the lowest bits of $K_2$ and $K_3$ are always identical. So $K_3$ has only 128 possible values for any value of $K_2$, bringing the total number of effective keys down to $2^{23}$.
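The low-bit identity is easy to verify exhaustively for two bytes; it extends to any number of summands by induction (a quick sketch):

// XOR is addition without carry, and carry never reaches bit 0.
fn main() {
    for a in 0u16..256 {
        for b in 0u16..256 {
            assert_eq!((a + b) & 1, (a ^ b) & 1);
        }
    }
    println!("lowest bit of sum and XOR always matches");
}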
And that's how far we can get considering only redundancies. It can be shown that a key can be constructed resulting in any combination of $K_1$ and $K_2$ values. Similarly, it can be shown that any combination of $K_2$ and $K_3$ is possible as long as the lowest bit is identical.
ASCII to the rescue
But the keys we are dealing with here aren't arbitrary bytes. They aren't limited to alphanumeric characters, some keys also contain punctuation, but they are all invariably limited to the ASCII range. And that means that the highest bit is never set in any of the $b_i$ values.
Which in turn means that the highest bit is never set in $K_3$, due to the nature of the bitwise XOR operation. We can once again rule out half of the effective keys: for any given value of $K_2$ there are only 64 possible values of $K_3$. We now have $2^{22}$ possible effective keys.
How large is n?
Now let's have a thorough look at how $K_1$ relates to $K_2$, ignoring the modulo operation at first. For $K_1$ we are taking one third of each byte, rounding it down and summing that up. What if we were to sum up first and round down at the end, how would that relate? Well, it definitely cannot be smaller than rounding down in each step, so we have an upper bound here:

$$\sum_{i=1}^{n} \lfloor b_i/3 \rfloor \;\le\; \Big\lfloor \frac{1}{3} \sum_{i=1}^{n} b_i \Big\rfloor$$
How much smaller can the left side get? Each time we round down this removes at most two thirds, and we do this $n$ times. So altogether these rounding operations reduce the result by at most $\frac{2n}{3}$. This gives us a lower bound:

$$\sum_{i=1}^{n} \lfloor b_i/3 \rfloor \;\ge\; \Big\lfloor \frac{1}{3} \sum_{i=1}^{n} b_i \Big\rfloor - \frac{2n}{3}$$
If $n$ is arbitrary these bounds don't help us at all. But $n$ isn't arbitrary, the keys used for PPPP encryption tend to be fairly short. Let's say that we are dealing with keys of length 16 at most, which is a safe bet. If we know the sum of the bytes, these bounds allow us to narrow the (un-reduced) value of $K_1$ down to $\lfloor 2 \cdot 16 / 3 \rfloor + 1 = 11$ possible values.
But we don't know the sum of the bytes. What we have is $K_2$, which is that sum modulo 256, and the sum is actually $K_2 + 256m$ where $m$ is some nonnegative integer. How large can $m$ get? Remembering that we are dealing with ASCII keys, each byte has at most the value 127. And we have at most 16 bytes. So the sum of the bytes cannot be higher than $16 \cdot 127 = 2032$ (or 7F0 in hexadecimal). Consequently, $m$ is 7 at most.
Let's write down the bounds for $K_1$ now ($K_1$ being the sum in the middle taken modulo 256):

$$\Big\lfloor \frac{K_2 + 256m}{3} \Big\rfloor - \frac{2n}{3} \;\le\; \sum_{i=1}^{n} \lfloor b_i/3 \rfloor \;\le\; \Big\lfloor \frac{K_2 + 256m}{3} \Big\rfloor$$
We have to consider this for eight possible values of $m$. Wait, do we really?
Once we move into modulo 256 space again, the $\big\lfloor (K_2 + 256m)/3 \big\rfloor$ part of our bounds (which is the only part dependent on $m$) will assume the same value after every three $m$ values, since increasing $m$ by 3 adds exactly 256. So only three values of $m$ are really relevant, say 0, 1 and 2. Meaning that for each value of $K_2$ we have $3 \cdot 11 = 33$ possible values for $K_1$.
This gives us $256 \cdot 64 \cdot 33 = 540{,}672$ as the number of possible effective keys: 256 values of $K_2$, 64 values of $K_3$ and 33 values of $K_1$ for each of them. My experiments with random keys indicate that this should be pretty much as far down as it goes. There may still be more edge conditions rendering some effective keys impossible, but if these exist their impact is insignificant.
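A quick sketch that cross-checks this count by enumerating the candidate triples directly (using the notation above; keys of at most 16 ASCII bytes assumed):

fn main() {
    let mut count: u64 = 0;
    for k2 in 0i32..256 {
        for k3 in 0i32..256 {
            // K3 shares its lowest bit with K2 and never has the top bit set.
            if (k3 & 1) != (k2 & 1) || (k3 & 0x80) != 0 {
                continue;
            }
            // Only m = 0, 1, 2 matter; each allows 11 consecutive candidate
            // values below the upper bound, taken modulo 256.
            for m in 0i32..3 {
                let upper = (k2 + 256 * m) / 3;
                for d in 0i32..11 {
                    let _k1 = (upper - d).rem_euclid(256);
                    count += 1;
                }
            }
        }
    }
    println!("{count}"); // prints 540672
}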
Not all effective keys are equally likely however: the values at the outer edges of the possible range are very unlikely. So one could prioritize the keys by probability - if the total number weren't already low enough to render this exercise moot.
How many ciphertexts is that?
We have the four byte plaintext F1 30 00 00 and we have 540,672 possible effective keys. How many ciphertexts does this translate to? With any reasonable encryption scheme the answer would be: slightly less than 540,672 due to a few unlikely collisions which could occur here.
But PPPP doesn't use a reasonable encryption scheme. With merely four bytes of plaintext there is a significant chance that PPPP will only use part of the effective key for encryption, resulting in identical ciphertexts for every key sharing that part. I didn't bother analyzing this possibility mathematically, my script simply generated all possible ciphertexts. So the exact answer is: 540,672 effective keys produce 157,092 ciphertexts.
And that's why you should leave cryptography to experts.
Understanding the response
Now let's say we send 157,092 encrypted requests. An encrypted response comes back. How do we decrypt it without knowing which of the requests was accepted?
All PPPP packets start with the magic byte F1, so the first byte of our response's plaintext must be F1 as well. The "encryption" scheme used by PPPP allows translating that knowledge directly into the value of $K_2$. Now one could probably (definitely) guess more plaintext parts and with some clever tricks deduce the rest of the effective key. But there are only $64 \cdot 33 = 2{,}112$ possible effective keys for each value of $K_2$ anyway. It's much easier to simply try out all 2,112 possibilities and see which one results in a response that makes sense.
The response here is 24 bytes large, making ambiguous decryptions less likely. Still, my experiments show that in approximately 4% of the cases closely related keys will produce valid but different decryption results. So you will get two or more similar device IDs and any one of them could be correct. I don't think that this ambiguity can be resolved without further communication with the device, but at least with my changes the script reliably detects when a PPPP device is present on the network.
05 Jan 2026 3:50pm GMT
The Rust Programming Language Blog: Project goals update — December 2025
The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.
Flagship goals
"Beyond the `&`"
| Progress | |
| Point of contact | |
| Champions | compiler (Oliver Scherer), lang (TC) |
| Task owners |
1 detailed update available.
- Key developments: forbid manual impl of `Unpin` for `#[pin_v2]` types.
- Blockers: PRs waiting for review:
  - impl `Drop::pin_drop` (the submodule issue)
  - coercion of `&pin mut|const T` <-> `&[mut] T`
- Help wanted: None yet.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
5 detailed updates available.
Since we have chosen virtual places as the new approach, we reviewed what open questions are most pressing for the design. Our discussion resulted in the following five questions:
- Should we have 1-level projections xor multi-level projections?
- What is the semantic meaning of the borrow checker rules (`BorrowKind`)?
- How should we add "canonical projections" for types such that we have nice and short syntax (like `x~y` or `x.@y`)?
- What to do about non-indirected containers (Cell, MaybeUninit, Mutex, etc)?
- How does one inspect/query `Projection` types?
We will focus on these questions in December as well as implementing FRTs.
Canonical Projections
We have discussed canonical projections and come up with the following solution:
pub trait CanonicalReborrow: HasPlace {
type Output<'a, P: Projection<Source = Self::Target>>: HasPlace<Target = P::Target>
where
Self: PlaceBorrow<'a, P, Self::Output<'a, P>>;
}
Implementing this trait permits using the syntax @$place_expr where the place's origin is of the type Self (for example @x.y where x: Self and y is an identifier or tuple index, or @x.y.z etc). It is desugared to be:
@<<Self as CanonicalReborrow>::Output<'_, projection_from_place_expr!($place_expr)>> $place_expr
(The names of the trait, associated type and syntax are not final, better suggestions welcome.)
Reasoning
- We need the `Output` associated type to support the `@x.y` syntax for `Arc` and `ArcRef`.
- We put the FRT and lifetime parameter on `Output` in order to force implementers to always provide a canonical reborrow, so if `@x.a` works, then `@x.b` also works (when `b` also is a field of the struct contained by `x`).
  - This (sadly or luckily) also has the effect that making `@x.a` and `@x.b` return different wrapper types is more difficult to implement and requires a fair bit of trait dancing. We should think about discouraging this in the documentation.
Non-Indirected Containers
Types like MaybeUninit<T>, Cell<T>, ManuallyDrop<T>, RefCell<T> etc. currently do not fit into our virtual places model, since they don't have an indirection. They contain the place directly inline (and some are even repr(transparent)). For this reason, we currently don't have projections available for &mut MaybeUninit<T>.
Enter our new trait PlaceWrapper which these types implement in order to make projections available for them. We call these types place wrappers. Here is the definition of the trait:
pub unsafe trait PlaceWrapper<P: Projection<Source = Self::Target>>: HasPlace {
type WrappedProjection: Projection<Source = Self>;
fn wrap_projection(p: P) -> Self::WrappedProjection;
}
This trait should only be implemented when Self doesn't contain the place as an indirection (so for example Box must not implement the trait). When this trait is implemented, then Self has "virtual fields" available (actually all kinds of place projections). The name of these virtual fields/projections is the same as the ones of the contained place. But their output type is controlled by this trait.
As an example, here is the implementation for MaybeUninit:
impl<T, P: Projection<Source = T>> PlaceWrapper<P> for MaybeUninit<T> {
type WrappedProjection = TransparentProjection<P, MaybeUninit<T>, MaybeUninit<P::Target>>;
fn wrap_projection(p: P) -> Self::WrappedProjection {
TransparentProjection(p, PhantomData, PhantomData)
}
}
Where TransparentProjection will be available in the standard library defined as:
pub struct TransparentProjection<P, Src, Tgt>(P, PhantomData<Src>, PhantomData<Tgt>);
impl<P: Projection, Src, Tgt> Projection for TransparentProjection<P, Src, Tgt> {
type Source = Src;
type Target = Tgt;
fn offset(&self) -> usize {
self.0.offset()
}
}
When there is ambiguity, because the wrapper and the wrapped types both have the same field, the wrapper's field takes precedence (this is the same as it currently works for Deref). It is still possible to refer to the wrapped field by first dereferencing the container, so x.field refers to the wrapper's field and (*x).field refers to the field of the wrapped type.
Field-by-Field Projections vs One-Shot Projections
We have used several different names for these two ways of implementing projections. The first is also called 1-level projections and the second multi-level projections.
The field-by-field approach uses field representing types (FRTs), which represent a single field of a struct with no indirection. When writing something like @x.y.z, we perform the place operation twice, first using the FRT field_of!(X, y) and then again with field_of!(T, z) where T is the resulting type of the first projection.
The second approach called one-shot projections instead extends FRTs with projections, these are compositions of FRTs, can be empty and dynamic. Using these we desugar @x.y.z to a single place operation.
Field-by-field projections have the advantage that they simplify the implementation for users of the feature, the compiler implementation and the mental model that people will have to keep in mind when interacting with field projections. However, they also have pretty big downsides, which either are fundamental to their design or would require significant complification of the feature:
- They have less expressiveness than one-shot projections. For example, when moving out a subsubfield of `x: &own Struct` by doing `let a = @x.field.a`, we have to move out `field`, which prevents us from later writing `let b = @x.field.b`. One-shot projections allow us to track individual subsubfields with the borrow checker.
- Field-by-field projections also make it difficult to define type-changing projections in an inference-friendly way. Projecting through multiple fields could result in several changes of types in between, so we would have to require only canonical projections in certain places. However, this requires certain intermediate types for which defining the safety invariants is very complex.
We additionally note that the single function call desugaring is also a simplification that also lends itself much better when explaining what the @ syntax does.
All of this points in the direction of proceeding with one-shot projections and we will most likely do that. However, we must note that the field-by-field approach might yield easier trait definitions that make implementing the various place operations more manageable. There are several open issues on how to design the field-by-field API in the place variation (the previous proposal did have this mapped out clearly, but it does not translate very well to places), which would require significant effort to solve. So at this point we cannot really give a fair comparison. Our initial scouting of the solutions revealed that they all have some sort of limitation (as we explained above for intermediate projection types for example), which make field-by-field projections less desirable. So for the moment, we are set on one-shot projections, but when the time comes to write the RFC we need to revisit the idea of field-by-field projections.
Wiki Project
We started a wiki project at https://rust-lang.github.io/beyond-refs to map out the solution space. We intend to grow it into the single source of truth for the current state of the field projection proposal as well as unfinished and obsolete ideas and connections between them. Additionally, we will aim to add the same kind of information for the in-place initialization effort, since it has overlap with field projections and, more importantly, has a similarly large solution space.
In the beginning you might find many stub pages in the wiki, which we will work on making more complete. We will also mark pages that contain old or abandoned ideas as such as well as mark the current proposal.
This issue will continue to receive regular detailed updates, which are designed for those keeping reasonably up-to-date with the feature. For anyone out of the loop, the wiki project will be a much better place when it contains more content.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Purpose
A refresher on what we want to achieve here: the most basic form of reborrowing we want to enable is this:
// Note: not Clone or Copy
#[derive(Reborrow)]
struct MyMutMarker<'a>(...);
// ...
let marker: MyMutMarker = MyMutMarker::new();
some_call(marker);
some_call(marker);
i.e. make it possible for an owned value to be passed into a call twice, having Rust inject a reborrow at each call site to produce a new bitwise copy of the original value for passing purposes, and mark the original value as disabled for reads and writes for the duration of the borrow.
A notable complication appears when implementing such reborrowing in userland using explicit calls when dealing with returned values:
return some_call(marker.reborrow());
If the borrowed lifetime escapes through the return value, then this will not compile as the borrowed lifetime is based on a value local to this function. Alongside convenience, this is the major reason for the Reborrow traits work.
CoerceShared is a secondary trait that enables equivalent reborrowing that only disables the original value for writes, i.e. matching the &mut T to &T coercion.
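For intuition, this is roughly the pattern the traits are meant to automate, written with today's explicit reborrows (a hypothetical example; the derive would remove the hand-written method):

struct MyMutMarker<'a>(&'a mut i32);

impl<'a> MyMutMarker<'a> {
    // The boilerplate that #[derive(Reborrow)] is meant to make unnecessary.
    fn reborrow(&mut self) -> MyMutMarker<'_> {
        MyMutMarker(&mut *self.0) // explicit reborrow of the inner reference
    }
}

fn some_call(_marker: MyMutMarker<'_>) {}

fn main() {
    let mut value = 0;
    let mut marker = MyMutMarker(&mut value);
    some_call(marker.reborrow()); // without reborrowing, this move...
    some_call(marker.reborrow()); // ...would make the second call an error
}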
Update
We have the Reborrow trait working, albeit currently with a bug in which the marker must be bound as let mut. We are working towards a working CoerceShared trait in the following form:
trait CoerceShared<Target: Copy> {}
Originally the trait had an associated `type Target`, but this turned out to be unnecessary, as there is no reason to particularly disallow multiple coercion targets. The original reason for using an associated type to disallow multiple coercion targets was that the trait also had an unsafe method, at which point unscrupulous users could have used the trait as a generic coercion trait. Because the trait method was found to be unnecessary, the fear is also unnecessary.
This means that the trait has better chances of working with multiple coercing lifetimes (think a collection of &muts all coercing to &s, or only some of them). However, we are currently avoiding any support of multiple lifetimes as we want to avoid dealing with rmeta before we have the basic functionality working.
"Flexible, fast(er) compilation"
| Progress | |
| Point of contact | |
| Champions | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) |
| Task owners |
1 detailed update available.
rust-lang/rfcs#3873 is waiting on one checkbox before entering the final comment period. We had our sync meeting on the 11th and decided that we would enter FCP on rust-lang/rfcs#3874 and rust-lang/rfcs#3875 after rust-lang/rfcs#3873 is accepted. We've responded to almost all of the feedback on the next two RFCs and expect the FCP to act as a forcing function so that the relevant teams take a look; they can always register concerns if there are things we need to address, and if we need to make any major changes then we'll restart the FCP.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | bjorn3, Folkert de Vries, Trifecta Tech Foundation |
1 detailed update available.
We did not receive the funding we needed to work on this goal, so no progress has been made.
Overall I think the improvements we felt comfortable promising are on the low side. The amount of time spent in codegen for realistic changes to real code bases was smaller than expected, meaning that the improvements that Cranelift can deliver for the end-user experience are smaller.
We still believe larger gains can be made with more effort, but did not feel confident in promising hard numbers.
So for now, let's close this.
| Progress | |
| Point of contact | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | @dropbear32, @osiewicz |
"Higher-level Rust"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett) |
| Task owners |
1 detailed update available.
Key developments
- A fence length limit was added in response to T-lang feedback (https://github.com/rust-lang/rust/pull/149358)
- Whether to disallow or lint for CR inside of a frontmatter is under discussion (https://github.com/rust-lang/rust/pull/149823)
Blockers
- https://github.com/rust-lang/rust/pull/146377
- rustdoc deciding on and implementing how they want frontmatter handled in doctests
"Unblocking dormant traits"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Taylor Cramer & others |
1 detailed update available.
Current status:
- The RFC for `auto impl` supertraits has been updated to address SemVer compatibility issues.
- There is a parsing PR kicking off an experimental implementation. The tracking issue for this experimental implementation is here.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
We've continued to fix a bunch of smaller issues over the last month. Tim (Theemathas Chirananthavat) helped uncover a new potential issue due to non-fatal overflow which we'll have to consider before stabilizing the new solver: https://github.com/rust-lang/trait-system-refactor-initiative/issues/258.
I fixed two issues myself in https://github.com/rust-lang/rust/pull/148823 and https://github.com/rust-lang/rust/pull/148865.
tiif with help by Boxy fixed query cycles when evaluating constants in where-clauses: https://github.com/rust-lang/rust/pull/148698.
@adwinwhite fixed a subtle issue involving coroutine witnesses in https://github.com/rust-lang/rust/pull/149167 after having diagnosed the underlying issue there last month. They've also fixed a smaller diagnostics issue in https://github.com/rust-lang/rust/pull/149299. Finally, they've also fixed an edge case of impl well-formedness checking in https://github.com/rust-lang/rust/pull/149345.
Shoyu Vanilla fixed a broken interaction of aliases and fudging in https://github.com/rust-lang/rust/pull/149320. Looking into fudging and HIR typeck Expectation handling also uncovered a bunch of broken edge cases, and I've opened https://github.com/rust-lang/rust/issues/149379 to track these separately.
I have recently spent some time thinking about the remaining necessary work and posted a write-up on my personal blog: https://lcnr.de/blog/2025/12/01/next-solver-update.html. I am currently trying to get a clearer perspective on our cycle handling while slowly working towards an RFC for the changes there. This is challenging as we don't have a good theoretical foundation here yet.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
2 detailed updates available.
This month's key developments were:
- borrowck support in `a-mir-formality` has been progressing steadily - it has its own dedicated updates in https://github.com/rust-lang/rust-project-goals/issues/122 with more details. We were also able to find a suitable project for the master's student project on a-mir-formality (they accepted and should start around February), which will help expand our testing coverage for the polonius alpha as well.
- tiif has kept making progress on fixing opaque type soundness issue https://github.com/rust-lang/trait-system-refactor-initiative/issues/159. It is the one remaining blocker for passing all tests. By itself it will not immediately fix the two remaining (soundness) issues with opaque type region liveness, but we'll be able to use the same supporting code to ensure the regions are indeed live where they need to be.
- I quickly cleaned up some inefficiencies in constraint conversion; it hasn't landed yet, but it may not need to because of the next item
- but most of the time this month was spent on this final item: we have the first interesting results from the rewriting effort. After a handful of false starts, I have a branch almost ready that switches the constraint graph to be lazy and computed during traversal. It removes the need to index the numerous constraints, or to convert liveness data to a different shape. It thus greatly reduces the current alpha overhead (some rare cases look faster than NLL, but I don't yet know why - maybe due to being able to better use the sparseness and low connectivity of the constraint graph, and a small number of loans). The overhead wasn't entirely removed, of course: the worst offending benchmark still has a +5% wall-time regression, and icounts look worse (+13%). This was also only benchmarking the algorithm itself, without the improvements to the rest of borrowck mentioned in previous updates. I should be able to open a PR in the next couple of days, once I figure out how to best convert the polonius mermaid graph dump to the new lazy localized constraint generation.
- and finally, happy holidays everyone!
- "I should be able to open a PR in the next couple days" - done in https://github.com/rust-lang/rust/pull/150551
Goals looking for help
Other goal updates
| Progress | |
| Point of contact | |
| Champions |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
4 detailed updates available.
PR https://github.com/rust-lang/a-mir-formality/pull/206 contains a "first draft" for the NLL rules. It checks for loan violations (e.g., mutating borrowed data) as well as some notion of outlives requirements. It does not check for move errors and there aren't a lot of tests yet.
The PR also includes two big improvements to the a-mir-formality framework:
- support for `(for_all)` rules that can handle "iteration"
- tracking proof trees, making it much easier to tell why something is accepted that should not be
Update: opened https://github.com/rust-lang/a-mir-formality/pull/207 which contains support for &mut, wrote some new tests (including one FIXME), and added a test for NLL Problem Case #3 (which behaved as expected).
One interesting thing (cc Ralf Jung) is that we have diverged from MiniRust in a few minor ways:
- We do not support embedding value expressions in place expressions.
- Where MiniRust has an `AddrOf` operator that uses the `PtrType` to decide what kind of operation it is, we have added a `Ref` MIR operation. This is in part because we need information that is not present in MiniRust, specifically a lifetime.
- We have also opted to extend `goto` with the ability to take multiple successors, so that `goto b1, b2` can be seen as "goto either b1 or b2 non-deterministically" (the actual opsem would probably be to always go to b1, making this a way to add "fake edges", but the analysis should not assume that).
Update: opened https://github.com/rust-lang/a-mir-formality/pull/210 with today's work. We are discussing how to move the checker to support polonius-alpha. To that end, we introduced feature gates (so that a-mir-formality can model nightly features) and did some refactoring of the type checker aiming at allowing outlives to become flow-sensitive.
| Progress | |
| Point of contact | |
| Champions | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
3 detailed updates available.
Since the last update both of my PRs I mentioned have landed, allowing for constructing ADTs in const arguments while making use of generic parameters. This makes MGCA effectively a "full" prototype where it can now fully demonstrate the core concept of the feature. There's still a lot of work left to do but now we're at the point of finishing out the feature :)
Once again huge thanks to camelid for sticking with me throughout this. Also thanks to errs, oli and lcnr for reviewing some of the work and chatting with me about possible impl decisions.
Some examples of what is possible with MGCA as of the end of this goal cycle:
#![feature(const_default, const_trait_impl, min_generic_const_args)]
trait Trait {
#[type_const]
const ASSOC: usize;
}
fn mk_array<T: const Default + Trait>() -> [T; T::ASSOC] {
[const { T::default() }; _]
}
#![feature(adt_const_params, min_generic_const_args)]
fn foo<const N: Option<u32>>() {}
trait Trait {
#[type_const]
const ASSOC: usize;
}
fn bar<T: Trait, const N: u32>() {
// the initializer of `_0` is a `N` which is a legal const argument
// so this is ok.
foo::<{ Some::<u32> { 0: N } }>();
// this is allowed as mgca supports uses of assoc consts in the
// type system. ie `<T as Trait>::ASSOC` is a legal const argument
foo::<{ Some::<u32> { 0: <T as Trait>::ASSOC } }>();
// this on the other hand is not allowed as `N + 1` is not a legal
// const argument
foo::<{ Some::<u32> { 0: N + 1 } }>(); // ERROR
}
As for adt_const_params we now have a zulip stream specifically for discussion of the upcoming RFC and the drafting of the RFC: #project-const-generics/adt_const_params-rfc. I've gotten part of the way through actually writing the RFC itself though it's gone slower than I had originally hoped as I've also been spending more time thinking through the implications of allowing private data in const generics.
I've debugged the remaining two ICEs making adt_const_params not fully ready for stabilization and written some brief instructions on how to resolve them. One ICE has been incidentally fixed (though more masked) by some work that Kivooeo has been doing on MGCA. The other has been picked up by someone whose GitHub handle I'm not sure of, so that will also be getting fixed soon.
Ah I forgot to mention, even though MGCA has a tonne of work left to do I expect it should be somewhat approachable for people to help out with. So if people are interested in getting involved now is a good time :)
Ah, another thing I forgot to mention: David Wood spent some time looking into the name mangling scheme for adt_const_params stuff to make sure it would be fine to stabilize, and it seems it is - so that's another step closer to adt_const_params being stabilizable.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur) |
| Task owners | Pete LeVasseur, Contributors from Ferrous Systems and others TBD |
1 detailed update available.
Meeting notes here: FLS team meeting 2025-12-12
Key developments: We're close to completing the FLS release for 1.91.0/1.91.1. We've started to operate as a team, merging a PR with the changelog entries and then opening up issues for each change required: ✅ #624 (https://github.com/rust-lang/fls/issues/624), ✅ #625 (https://github.com/rust-lang/fls/issues/625), ✅ #626 (https://github.com/rust-lang/fls/issues/626), ⚠️ #623 (https://github.com/rust-lang/fls/issues/623). #623 is still pending, as it requires a bit of alignment with the Reference on definitions and the creation of a new example. Blockers: None currently. Help wanted: We'd love more folks from the safety-critical community to contribute by picking up issues, or to open an issue if you notice something is missing.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Here's our December status update!
- We have revised our prototype of the pre-RFC based on Ralf Jung's feedback. Now, instead of having two different retag functions for operands and places, we emit a single `__rust_retag` intrinsic in every situation. We also track interior mutability precisely. At this point, the implementation is mostly stable and seems to be ready for an MCP.
There's been some discussion here and in the pre-RFC about whether or not Rust will still have explicit MIR retag statements. We plan on revising our implementation so that we no longer rely on MIR retags to determine where to insert our lower-level retag calls. This should be a relatively straightforward change to the current prototype. If anything, it should make these changes easier to merge upstream, since they will no longer affect Miri.
-
- BorrowSanitizer continues to gain new features, and we've started testing it on our first real crate (lru), which has uncovered a few new bugs in our implementation. The two core Tree Borrows features that we have left to support are error reporting and garbage collection. Once these are finished, we will be able to expand our testing to more real-world libraries and confirm that we are passing each of Miri's test cases (and likely find more bugs lurking in our implementation). Our instrumentation pass ignores global and thread-local state for now, and it does not support atomic memory accesses outside of atomic `load` and `store` instructions. These operations should be relatively straightforward to add once we've finished higher-priority items.
- Performance is slow. We do not know exactly how slow yet, since we've been focusing on feature support over benchmarking and optimization. Based on what we're seeing from profiling, this is at least partially due to the lack of garbage collection. We will have a better sense of what our performance is like once we can compare against Miri on more real-world test cases.
As for what's next, we plan on posting an MCP soon, now that it's clear that we will be able to do without MIR retags. You can expect a more detailed status update on BorrowSanitizer by the end of January. This will discuss our implementation and plans for 2026. We will post that here and on our project website.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby |
1 detailed update available.
In addition to further ongoing work on reference material (some of which is on track to be merged), we've had some extensive discussions about reference processes, maintenance, and stability markers. Niko Matsakis is putting together a summary and proposal for next steps.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | compiler (Manuel Drehwald), lang (TC) |
| Task owners | Manuel Drehwald, LLVM offload/GPU contributors |
2 detailed updates available.
It's only been two weeks, but we got a good number of updates, so I already wanted to share them.
autodiff
- On the autodiff side, we landed support for rlib and better docs. This means that our autodiff frontend is "almost" complete, since there are almost no cases left where you can't apply autodiff. There are a few features like custom derivatives or support for `dyn` arguments that I'd like to add, but they are currently waiting for better docs on the Enzyme side. There is also a long-term goal of replacing the fat-lto requirement with the less invasive embed-bc requirement, but this proved to be tricky in the past and only affects compile times.
- @sgasho picked up my old PR to dlopen Enzyme, and found the culprit of it failing after my last rebase. A proper fix might take a bit longer, but it might be worth waiting for. As a reminder, using dlopen in the future allows us to ship autodiff on nightly without increasing the size of rustc and therefore without making our infra team sad.
All in all, we have landed most of the hard work here, so that's a very comfortable position to be in before enabling it on nightly.
offload
- We have landed the intrinsic implementation of Marcelo Domínguez, so now you can offload functions with almost arbitrary arguments. In my first prototype, I had limited it to pointers to 256 f64 values. The updated usage example continues to live here in our docs. As you can see, we still require `#[cfg(target_os=X)]` annotations. Under the hood, the LLVM-IR which we generate is also still a bit convoluted. In his next PRs, he'll clean up the generated IR and introduce an offload macro that users shall call instead of the internal offload intrinsic.
- I spent more time on enabling offload in our CI, to enable `std::offload` in nightly. After multiple iterations and support from LLVM offload devs, we found a cmake config that does not run into bugs, should not increase Rust CI time too much, and works with both in-tree llvm/clang builds as well as external clangs (the current case in our Rust CI).
- I spent more time on simplifying the usage instructions in the dev guide. We started with two cargo calls, one rustc call, two clang calls, and two clang-helper binary calls. I was able to remove the rustc call and one of the clang-offload-packager calls by directly calling the underlying LLVM APIs. I also have an unmerged PR which removes the two clang calls. Once I've cleaned it up and landed it, we will be down to only two cargo calls and one binary call to `clang-linker-wrapper`. Once I've automated this last wrapper (and enabled offload in CI), nightly users should be able to experiment with `std::offload`.
Time for the next round of updates. Again, most of the updates were on the GPU side, but with some notable autodiff improvements too.
autodiff:
- @sgasho finished his work on using dlopen to load Enzyme, and the PR landed. This allowed Jakub Beránek and me to start working on distributing Enzyme via a standalone component.
- As a first step, I added a nicer error if we fail to find or dlopen our Enzyme backend. I also removed most of our autodiff fallbacks; we now unconditionally enable our macro frontend on nightly: https://github.com/rust-lang/rust/pull/150133. You may notice that `cargo expand` now works on autodiff code. This also allowed the first bug reports about ICEs (internal compiler errors) in our macro parser logic.
- Kobzol opened a PR to build Enzyme in CI. In theory, I should have been able to download that artifact, put it into my sysroot, and use the latest nightly to automatically load it. If that had worked, we could have just merged his PR, and everyone could have started using AD on nightly. Of course, things are never that easy. Even though Enzyme, LLVM, and rustc were all built in CI, the LLVM version shipped along with rustc does not seem compatible with the LLVM version Enzyme was built against. We assume some slight cmake mismatch during our CI builds, which we will have to debug.
offload:
- On the GPU side, Marcelo Domínguez finished his cleanup PR, and along the way also fixed using multiple kernels within a single codebase. When developing the offload MVP I had taken a lot of inspiration from the LLVM-IR generated by clang - and it looks like I had gotten one of the (way too many) LLVM attributes wrong. That caused some metadata to be fused when multiple kernels were present, confusing our offload backend. We started to find more bugs when working on benchmarks; more about the fixes for those in the next update.
- I finished cleaning up my offload build PR, and Oliver Scherer reviewed and approved it. Once the dev-guide gets synced, you should see much simpler usage instructions. Now it's just up to me to automate the last part, then you can compile offload code purely with cargo or rustc. I also improved how we build offload, which allows us to build it both in CI and locally. CI had some very specific requirements to avoid increasing build times, since our x86-64-dist runner is already quite slow.
- Our first benchmarks linked directly against NVIDIA and AMD intrinsics at the LLVM-IR level. However, we have had an nvptx Rust module for a while, and since recently also an amdgpu module, which nicely wrap those intrinsics. I just synced the stdarch repository into rustc a few minutes ago, so from now on we can replace both with the corresponding Rust functions. In the near future we should get a higher-level GPU module which abstracts away naming differences between vendors.
- Most of my past rustc contributions were related to LLVM projects or plugins (Offload and Enzyme), and I increasingly found myself asking other people for updates or backports of our LLVM submodule, since upstream LLVM has fixes which were not yet merged into it. Our llvm working group is quite small and I didn't want to burden them too much with my requests, so I recently asked to join it, which got approved. In the future I intend to help a little with the maintenance here.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | (depending on the flag) |
1 detailed update available.
Update from the 2025-12-03 meeting:
-Zharden-sls
Wesley reviewed it again and provided a qualification; more changes were requested.
| Progress | |
| Point of contact | |
| Champions | lang (Josh Triplett), lang-docs (TC) |
| Task owners | |
2 detailed updates available.
Update from the 2025-12-03 meeting.
Deref / Receiver
Ding keeps working on the Reference draft. The idea has still not spread widely, and people are not convinced this is a good way to go. We hope the method-probing section in the Reference PR can clear things up.
We're keeping the supertrait auto-impl experiment as an alternative.
RFC #3851: Supertrait Auto-impl
Ding addressed Predrag's requests on SemVer compatibility. He's also opened an implementation PR: https://github.com/rust-lang/rust/pull/149335. Here's the tracking issue: https://github.com/rust-lang/rust/issues/149556.
derive(CoercePointee)
Ding opened a PR to require additional checks for DispatchFromDyn: https://github.com/rust-lang/rust/pull/149068
In-place initialization
Ding will prepare material for a discussion at the LPC (Linux Plumbers Conference). We're looking to hear feedback on the end-user syntax for it.
The feature is growing quite large; Ding will check with Tyler on whether this might need a series of RFCs.
The various proposals on the table continue to be discussed, and there are signs (albeit slow) of convergence. The placing-function and guaranteed-return proposals are superseded by the outpointer one. The more ergonomic ideas can be built on top. The guaranteed-value-placement proposal would be valuable in the compiler regardless, and we're waiting for Olivier to refine it.
The feeling is that we've now clarified the constraints that the proposals must operate under.
Field projections
Nadri's Custom places proposal is looking good at least for the user-facing bits, but the whole thing is growing into a large undertaking. Benno's been focused on academic work that's getting wrapped up soon. The two will sync afterwards.
Quick bit of great news: Rust in the Linux kernel is no longer treated as an experiment; it's here to stay 🎉
https://lwn.net/SubscriberLink/1050174/63aa7da43214c3ce/
| Progress | |
| Point of contact | |
| Champions | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols) |
| Task owners | |
3 detailed updates available.
Ed Page: hey, I would like to contribute to this. I reached out on Zulip; bumping the post in case it might have gone under the radar.
Hi @sladyn98 - feel free to ping me on Zulip about this.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
1 detailed update available.
The RFC draft was reviewed in detail and Ralf Jung pointed out that the proposed semantics introduce issues because they rely on "no-behavior" (NB) with regard to choosing an address for a local. This can lead to surprising "time-traveling" behavior where the set of possible addresses that a local may have (and whether two locals can have the same address) depends on information from the future. For example:
// This program has DB (defined behavior).
let mut x = String::new();
let xaddr = &raw const x;
let y = x; // Move out of x and de-initialize it.
let yaddr = &raw const y;
x = String::new(); // assuming this does not change the address of x
// x and y are both live here. Therefore, they can't have the same address.
assume(xaddr != yaddr);
drop(x);
drop(y);
// This program has UB (undefined behavior).
let mut x = String::new();
let xaddr = &raw const x;
let y = x; // Move out of x and de-initialize it.
let yaddr = &raw const y;
// So far, there has been no constraint that would force the addresses to be different.
// Therefore we can demonically choose them to be the same. Therefore, this is UB.
assume(xaddr != yaddr);
// If the addresses are the same, this next line triggers NB. But actually this next
// line is unreachable in that case because we already got UB above...
x = String::new();
// x and y are both live here.
drop(x);
drop(y);
With that said, there is still a possibility of achieving the optimization, but the scope will need to be scaled down a bit. Specifically, we would need to:
- no longer perform a "partial free"/"partial allocation" when initializing or moving out of a single field of a struct. The lifetime of a local starts when any part of it is initialized and ends when it is fully moved out.
- allow a local's address to change when it is re-initialized after having been fully moved out, which eliminates the need for NB.
This reduces the optimization opportunities since we can't merge arbitrary sub-field moves, but it still allows for eliminating moves when constructing a struct from multiple values.
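As a quick illustration of the relaxed rule (a minimal sketch of the intended semantics, not normative wording from the RFC draft):

fn main() {
    let mut x = String::from("a");
    let addr_before = &raw const x as usize;
    let y = x;             // fully moves out of `x` and de-initializes it
    x = String::from("b"); // re-initialization: the address of `x` may change
    let addr_after = &raw const x as usize;
    // Under the descoped rules, nothing may assume addr_before == addr_after.
    println!("{addr_before:#x} {addr_after:#x}");
    drop(y);
}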
The next step is for me to rework the RFC draft to reflect this.
| Progress | |
| Point of contact | |
| Task owners | |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
2 detailed updates available.
Key developments: The HTML replay logic has merged. Once it reaches nightly, cargo report timings can open timing reports you have previously logged.
- https://github.com/rust-lang/cargo/pull/16377
- https://github.com/rust-lang/cargo/pull/16378
- https://github.com/rust-lang/cargo/pull/16382
Blockers: No, except my own availability
Help wanted: Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575
Key developments:
Headline: if you are using nightly and want timing info always available, you should enable build analysis locally:
[unstable]
build-analysis = true
[build.analysis]
enabled = true
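With that enabled on a nightly toolchain, a session might look like the following (a sketch; exact flags and output may change while the feature is unstable):

$ cargo +nightly build            # build events are logged for this session
$ cargo +nightly report timings   # replay the timing report from the log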
- More log events are emitted: https://github.com/rust-lang/cargo/pull/16390
- dependency resolution time
- unit-graph construction
- unit-registration (which contains unit metadata)
- Timing replay from cargo report timings now has almost feature parity with cargo build --timings, except CPU usage: https://github.com/rust-lang/cargo/pull/16414
- Renamed the rebuild event to unit-fingerprint; it is now also emitted for fresh units: https://github.com/rust-lang/cargo/pull/16408
- Proposed a new cargo report sessions command so that people can retrieve previous session IDs rather than only using the latest one: https://github.com/rust-lang/cargo/pull/16428
- Proposed removing --timings=json, for which timing info in log files should be a great replacement: https://github.com/rust-lang/cargo/pull/16420
- Documentation efforts for man pages for nested cargo report commands: https://github.com/rust-lang/cargo/pull/16430 and https://github.com/rust-lang/cargo/pull/16432
Besides implementation work, we also discussed:
- The interaction of --message-format and the structured logging system, as well as log event schemas and formats: https://rust-lang.zulipchat.com/#narrow/channel/246057-t-cargo/topic/build.20analysis.20log.20format/with/558294271
- A better name for RunId. We may lean towards SessionId, which is a common name in the logging/tracing ecosystem.
- Nested Cargo calls having a sticky session ID, or at least a way to show they were invoked from the same top-level Cargo call.
Blockers: No, except my own availability
Help wanted: Same as https://github.com/rust-lang/rust-project-goals/issues/398#issuecomment-3571897575
| Progress | |
| Point of contact | |
| Champions | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett) |
| Task owners | oli-obk |
1 detailed update available.
Updates
- https://github.com/rust-lang/rust/pull/148820 adds a way to mark functions and intrinsics as only callable during CTFE
- https://github.com/rust-lang/rust/pull/144363 has been unblocked and just needs some minor cosmetic work
Blockers
- https://github.com/rust-lang/rust/pull/146923 (reflection MVP) has not been reviewed yet
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
1 detailed update available.
Status update December 23, 2025
The majority of December was spent iterating on https://github.com/rust-lang/cargo/pull/16155 . As mentioned in the previous update, the original locking design was not correct and we have been working through other solutions.
As locking is tricky to get right and there are many scenarios Cargo needs to support, we are trying to descope the initial implementation to an MVP, even if that means we lose some of the concurrency. Once we have an MVP on nightly, we can start gathering feedback on the scenarios that need improvement and iterate.
I'm hopeful that we get an unstable -Zfine-grain-locking on nightly in January for folks to try out in their workflows.
We are also considering adding an opt-in for the new build-dir layout using an env var (CARGO_BUILD_DIR_LAYOUT_V2=true) to allow tool authors to begin migrating to the new layout. https://github.com/rust-lang/cargo/pull/16336
Before stabilizing this, we are doing a crater run to test the impact of the changes and proactively reaching out to projects to minimize breakage as much as possible. https://github.com/rust-lang/rust/pull/149852
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
| Progress | |
| Point of contact | |
| Task owners | [Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec) |
1 detailed update available.
Based on the gathered feedback I opened a new MCP for the proposed new Tier 2 targets with sanitizers enabled. (https://github.com/rust-lang/compiler-team/issues/951)
| Progress | |
| Point of contact | |
| Task owners | vision team |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
1 detailed update available.
We have enabled the second x64 machine, so we now have benchmarks running in parallel 🎉 There are some smaller things to improve, but next year we can move on to running benchmarks on Arm collectors.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
1 detailed update available.
Opened the stabilization PR, but there are blockers I hadn't heard of before, so stabilization will be postponed until those are resolved.
| Progress | |
| Point of contact | |
| Champions | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras) |
| Task owners | |
3 detailed updates available.
I haven't made any progress on Deref::Target yet; I have been focusing on landing rust-lang/rust#143924, which has gone through two rounds of review and will hopefully be approved soon.
Update: David and I chatted on Zulip. Key points:
David has made "progress on the non-Sized Hierarchy part of the goal, the infrastructure for defining scalable vector types has been merged (with them being Sized in the interim) and that'll make it easier to iterate on those and find issues that need solving".
On the Sized hierarchy part of the goal, no progress. We discussed options for migrating. There seem to be three big options:
(A) The conservative-but-obvious route where T: Deref in the old edition is expanded to T: Deref<Target: SizeOfVal> (but in the new edition it means T: Deref<Target: Pointee>, i.e., no additional bounds). The main downside is that new-edition code using T: Deref can't call old-edition code using T: Deref, as the old-edition code has stronger bounds. Therefore new-edition code must either use stronger bounds than it needs or wait until the old-edition code has been updated.
(B) You do something smart with Edition.Old code where you figure out whether the bound can be loose or strict by bottom-up computation. So T: Deref in the old edition could mean either T: Deref<Target: Pointee> or T: Deref<Target: SizeOfVal>, depending on what the function actually does.
(C) You make Edition.Old code always mean T: Deref<Target: Pointee> and you still allow calls to size_of_val but have them cause post-monomorphization errors if used inappropriately. In Edition.New you use stricter checking.
Options (B) and (C) have the downside that changes to the function body (adding a call to size_of_val, specifically) in the old edition can stop callers from compiling. In the case of Option (B), that breakage is at type-check time, because it can change the where-clauses. In Option (C), the breakage is post-monomorphization.
Option (A) has the disadvantage that it takes longer for the new bounds to roll out.
Given this, (A) seems the preferred path. We discussed options for how to encourage that roll-out. We discussed the idea of a lint that would warn Edition.Old code that its bounds are stronger than needed and suggest rewriting to T: Deref<Target: Pointee> to explicitly disable the stronger Edition.Old default. This lint could be implemented in one of two ways
- at type-check time, by tracking what parts of the environment are used by the trait solver. This may be feasible in the new trait solver, someone from @rust-lang/types would have to say.
- at post-mono time, by tracking which functions actually call
size_of_val and propagating that information back to callers. You could then compare against the generic bounds declared on the caller.
The former is more useful (knowing what parts of the environment are necessary could be useful for more things, e.g., better caching); the latter may be easier or more precise.
Update to the previous post.
Tyler Mandry pointed me at this thread, where lcnr posted this nice blog post that he wrote detailing more about (C).
Key insights:
- Because the use of size_of_val would still cause post-mono errors when invoked on types that are not SizeOfVal, you know that adding SizeOfVal into the function's where-clause bounds is not a breaking change, even though adding a where clause is a breaking change more generally.
- But, to David Wood's point, it does mean that there is a change to Rust's semver rules: adding a call to size_of_val would become a breaking change, where it is not today.
This may well be the best option though, particularly as it allows us to make changes to the defaults across-the-board. A change to Rust's semver rules is not a breaking change in the usual sense. It is a notable shift.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
1 detailed update available.
This month I've written some documentation for how const generics are implemented in the compiler. This mostly covers the implementation of the stable functionality, as the unstable features are very much in flux right now. These docs can be found here: https://rustc-dev-guide.rust-lang.org/const-generics.html
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
05 Jan 2026 12:00am GMT
Jonathan Almeida: Update jj bookmarks to the latest revision
Got this one from another colleague as well; it seems like most folks use some version of this daily, so it might be good to have it built-in.
Before I can jj git push my current bookmark to my remote, I need to update where my (tracked) bookmark is, to the latest change:
@ ptuqwsty git@jonalmeida.com 2026-01-05 16:00:22 451384bf <-- move 'main' here.
│ TIL: Update remote bookmark to the latest revision
◆ xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main git_head() 9ad7ce11
│ TIL: Preserve image scale with ImageMagick
~
A quick one-liner jj tug does that for me:
@ ptuqwsty git@jonalmeida.com 2026-01-05 16:03:54 main* 6e7173b4
│ TIL: Update remote bookmark to the latest revision
◆ xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main@origin git_head() 9ad7ce11
│ TIL: Preserve image scale with ImageMagick
~
The alias is quite straightforward:
[aliases]
# Update your bookmarks to your latest rev.
tug = ["bookmark", "move", "--from", "heads(::@ & bookmarks())", "--to", "@"]
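Reading the revset: ::@ & bookmarks() selects the ancestors of the working copy that carry a bookmark, and heads(...) keeps only the closest of those, so the nearest tracked bookmark gets moved to @. After that, the push goes through:

$ jj tug        # move the nearest ancestor bookmark up to @
$ jj git push   # push the updated bookmark to the remote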
05 Jan 2026 12:00am GMT
31 Dec 2025
Planet Mozilla
This Week In Rust: This Week in Rust 632
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Project/Tooling Updates
- reqwest v0.13 - rustls by default
- rama 0.3.0-alpha.4 is released - modular service framework to move and transform network packets
- Ratatui 0.30.0 is released! - a Rust library for cooking up terminal user interfaces
Observations/Thoughts
- Four Years of Rust: An Odyssey of Failures, Achievements, and Hard Lessons
- Simple Bidirectional Type Inference
- serde's borrowing can be treacherous
- Garbage collection in Rust got a little better
- [audio] Netstack.FM episode 20 - Netstack.FM New Year Special, 2025 Wrap-Up
Rust Walkthroughs
- Why is calling my asm function from Rust slower than calling it from C?
- Rust Errors Without Dependencies
- [video] Building your first APP using the new Hotaru Web Framework!
Miscellaneous
- [audio] 2025 Holiday Special - Rust in Production Podcast
- Investigating and fixing a nasty clone bug
Crate of the Week
This week's crate is wgsl-bindgen, a binding generator for WGSL, the WebGPU shading language, to be used with wgpu.
Thanks to Artem Borisovskiy for the suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- Rustup 1.29.0 beta: Call for Testing!
- Testing steps: See the "How to Test" section from the above link.
- No calls for testing were issued this week by Rust, Cargo or Rust language RFCs.
Let us know if you would like your feature to be tracked as a part of this list.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
- Spindalis - Create an AST parser
- Spindalis - Add procedural macro for definite integral
- Spindalis - Add a function and macro that can expand polynomials
- Spindalis - Add display trait to functions in spindalis core
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- RustWeek 2026 | CFP closes 2026-01-18 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
- RustConf 2026 | CFP closes 2026-02-16 | Montreal, Quebec, Canada | 2026-09-08 - 2026-09-10
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
297 pull requests were merged in the last week
Compiler
- recursive delegation improvements
- miri: fix ICE for particular data race situations
- miri: show a warning when combining native-lib mode and many-seeds
- miri: tree Borrows: improve protector end access child skipping
Library
- add MaybeDangling to core
- alloc: specialize String::extend for slices of str
- implement Duration::div_duration_{floor,ceil}
- implement flatten for Option<&Option<T>> and Option<&mut Option<T>>
- optimized implementation for uN::{gather,scatter}_bits
- rewrite String::replace_range
- stabilize lazy_get
Cargo
- index: Stabilize pubtime
- report: new command cargo report sessions
- report: support --manifest-path in cargo report timings
- resolver: List features when no close match
- toml: TOML 1.1 parse support
- vendor: recursively filter git files in subdirectories
- vendor: unpack from local-registry cache path
- build-rs: Reduce from 'build' to 'check' where possible
- experiment: render timing pipeline in SVG
- patch: Display where the patch was defined in patch-related error messages
Rustdoc
- if line number setting is disabled, do not make line numbers take space
- fix copy code example with line numbers
- fix duplicate Re-exports sections
- fix incorrect type filter name in help popup
Clippy
- fix assertions_on_constants false positive when there is a non-constant value in the condition expr
- fix double_parens false positive on macro repetition patterns
- fix obfuscated_if_else wrongly unmangled macros
- fix result_large_err false negative on closures
- preserve explicit lifetime information when removing
mut - various fixes for handling of macros
Rust-Analyzer
- add bidirectional messaging proc-macro-srv prototype
- add macro segment completion
- implement configuration to change sub command for test, bench and doctest
- provide a setting to disable showing rename conflicts
- stabilize type mismatch diagnostic 🎉
- indent for
convert_to_guarded_return - fix LSP configuration request handling
- fix parsing of
format_args!("...", keyword=...) - fix type inference when hovering on
_ - reenable fixpoint variance
- do not really expand builtin derives, instead treat them specifically
- pre-allocate some buffers in parsing
- reduce channel lock contention for drop-threads
- prompt the user in VSCode to add the rust-analyzer component to the toolchain file
Rust Compiler Performance Triage
Not a lot of changes this week. Overall result is positive, largely thanks to #142881, which makes computing an expensive data structure for JumpThreading MIR optimization lazy.
Triage done by @panstromek. Revision range: e1212ea7..112a2742
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.5% | [0.1%, 1.7%] | 11 |
| Regressions ❌ (secondary) | 0.2% | [0.1%, 0.5%] | 6 |
| Improvements ✅ (primary) | -0.5% | [-1.3%, -0.1%] | 74 |
| Improvements ✅ (secondary) | -0.6% | [-1.8%, -0.2%] | 71 |
| All ❌✅ (primary) | -0.4% | [-1.3%, 1.7%] | 85 |
2 Regressions, 0 Improvements, 3 Mixed; 1 of them in rollups. 37 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week: * No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- Proposal for a dedicated test suite for the parallel frontend
- Promote tier 3 riscv32 ESP-IDF targets to tier 2
- Proposal for Adapt Stack Protector for Rust
- Give integer literals a sign instead of relying on negation expressions
- Also enable ICE file dumps on stable
- New Tier-3 target proposal: loongarch64-linux-android
No Items entered Final Comment Period this week for Cargo, Rust, Rust RFCs, Leadership Council, Language Team, Language Reference or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
Tracking Issues & PRs
New and Updated RFCs
Upcoming Events
Rusty Events between 2025-12-31 - 2026-01-28 🦀
Virtual
- 2026-01-03 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2026-01-07 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2026-01-08 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2026-01-08 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2026-01-13 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-01-13 | Virtual | libp2p Events
- 2026-01-13 | Virtual (Tel Aviv-yafo, IL) | Code Mavens 🦀 - 🐍 - 🐪
- 2026-01-14 | Virtual (Girona, ES) | Rust Girona
- 2026-01-15 | Virtual (Berlin, DE) | Rust Berlin
- 2026-01-20 | Virtual (Washington, DC, US) | Rust DC
- 2026-01-21 | Virtual (Girona, ES) | Rust Girona
- 2026-01-21 | Virtual (Vancouver, BC, CA) | Vancouver Rust
- 2026-01-27 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-01-28 | Virtual (Girona, ES) | Rust Girona
Asia
- 2026-01-07 | Tel Aviv-yafo, IL | Rust 🦀 TLV
- 2026-01-08 | Seoul, KR | Seoul Rust (Programming Language) Meetup
- 2026-01-17 | Delhi, IN | Rust Delhi
Europe
- 2026-01-07 | Amsterdam, NL | Rust Developers Amsterdam Group
- 2026-01-07 | Girona, ES | Rust Girona
- 2026-01-08 | Geneva, CH | Post Tenebras Lab
- 2026-01-14 | Reading, UK | Reading Rust Workshop
- 2026-01-20 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
- 2026-01-20 | Paris, FR | Rust Paris
North America
- 2026-01-01 | Saint Louis, MO, US | STL Rust
- 2026-01-03 | Boston, MA, US | Boston Rust Meetup
- 2026-01-08 | Lehi, UT, US | Utah Rust
- 2026-01-08 | Mountain View, CA, US | Hacker Dojo
- 2026-01-10 | Boston, MA, US | Boston Rust Meetup
- 2026-01-13 | New York, NY, US | Rust NYC
- 2026-01-13 | Spokane, WA, US | Spokane Rust
- 2026-01-15 | Seattle, WA, US | Seattle Rust User Group
- 2026-01-17 | Boston, MA, US | Boston Rust Meetup
- 2026-01-17 | Herndon, VA, US | NoVaLUG
- 2026-01-20 | San Francisco, CA, US | San Francisco Rust Study Group
- 2026-01-21 | Austin, TX, US | Rust ATX
- 2026-01-22 | Boston, MA, US | Boston Rust Meetup
- 2026-01-24 | Boston, MA, US | Boston Rust Meetup
- 2026-01-28 | Los Angeles, CA, US | Rust Los Angeles
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
what even is time?!?
Thanks to llogiq for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
31 Dec 2025 5:00am GMT
24 Dec 2025
Planet Mozilla
This Week In Rust: This Week in Rust 631
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
- What do people love about Rust?
- Please submit 2026 Project goal proposals
- December 2025 Project Director Update
- Program management update - End of 2025
- Rustup 1.29.0 beta: Call for Testing!
Newsletters
Project/Tooling Updates
- What's "new" in Miri (and also, there's a Miri paper!)
- cargo-coupling: Visualizing Coupling in Rust Projects
- Announcing Asterinas 0.17.0
- Tuitar - A portable guitar training tool & DIY kit
- Gitoxide in December
- Announcing GotaTun, the future of WireGuard at Mullvad VPN
- wgpu v28.0.0 - Mesh Shaders, Immediates, and More!
- rustc_codegen_gcc: Progress Report #39
Observations/Thoughts
- Syntactic musings on the fallibility effect
- Rust's Block Pattern
- [audio] Netstack.FM episode 19 - Firezone and Zero-Trust Network Security with Thomas Eizinger
Rust Walkthroughs
- Rust Unit Testing: Basic HTTP Server
- Async Rust Bluetooth Plumbing: Where the Throughput Goes
- [series] Part 2: Tensor Operations, Building an LLM from Scratch in Rust
Crate of the Week
This week's crate is arcshift, an Arc replacement for read-heavy workloads that supports lock-free atomic replacement.
Thanks to rustkins for the suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- RustWeek 2026 | CFP closes 2026-01-18 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
- RustConf 2026 | CFP closes 2026-02-16 | Montreal, Quebec, Canada | 2026-09-08 - 2026-09-10
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
475 pull requests were merged in the last week
Compiler
- add target_feature = "gc" for Wasm
- better closure requirement propagation
- correctly encode doc attribute metadata
- don't treat asserts as a call in cross-crate inlining
- improve filenames encoding and misc
- make closure capturing have consistent and correct behaviour around patterns
- support recursive delegation
Library
- add try_as_dyn and try_as_dyn_mut
- add const default for OnceCell and OnceLock
- expand str_as_str to more types
- make const BorrowMut require const Borrow and make const Fn require const FnMut
- hashbrown: add hash_map::{OccupiedEntry::into_entry, VacantEntryRef::insert_entry_with_key}, make EntryRef use ToOwned again
- hashbrown: add hash_table::OccupiedEntry::replace_entry_with to mirror the HashMap API
- hashbrown: add hash_table::UnsafeIter, iter() method to various iterators
Rustdoc
- Add missing close tags in extern crate reexports
- Fix invalid handling of field followed by negated macro call
- generate macro expansion for rust compiler crates docs
- handle macro expansions in types
Clippy
- transmuting_null: Check const integer casts
- allow multiline suggestions in map-unwrap-or
- do not attempt to use nth with non-usize argument
- don't emit collapsible_else_if lint when all arms contain only if {} else {} expressions
- fix cmp_null missing parens in the example
- fix empty_enum_variants_with_brackets misses removing brackets in patterns
- fix if_then_some_else_none suggests wrongly when then ends with comment
- fix needless_type_cast suggesting invalid code for non-literal initializers
- fix println_empty_string suggestion caused error
- fix use_self false positive on type in const generics
- fix an incorrect error message regarding the size of usize and isize in cast_precision_loss
- move collapsible_else_if to pedantic
- new lint - same_length_and_capacity
Rust-Analyzer
- add 'Use of AI tools' section to CONTRIBUTING.md
- add BreakExpr completion suggest
- add an lsp extension to get failed obligations for a given function
- add default varname for TryEnum postfix completion
- add guess braces doc
T![] for T_ - add ide-assist:
add_explicit_method_call_deref - complete reference
&T→&&T - introduce
crate_attrsfield inrust-project.json - pretty print attributes up to
cfg(false) - fix applicable on non naked if for
move_guardassist - fix guess renamed macro braces
- fix indent for
convert_iter_for_each_to_for - fix indent for
merge_nested_if - fix match arm nested body invalid expected type
- fix nested if-let for
merge_nested_if - fix flycheck generations not being synced for multiple workspaces
- more perf improvements, made possible after non-Salsa interneds
- non-Salsa-interned solver types - with GC for them
- remove conflicting advice
- support undotted-self for
this param closure
Rust Compiler Performance Triage
Very quiet week, with essentially no change in performance.
Triage done by @simulacrum. Revision range: 21ff67df..e1212ea7
1 Regression, 1 Improvement, 3 Mixed; 2 of them in rollups. 36 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week: * No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- Raise travel grant limit to $100,000 for 2026
- Fund program management program for 2026
- Raise automatic travel grant to $2000
No Items entered Final Comment Period this week for Rust RFCs, Language Team, Language Reference or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
Upcoming Events
Rusty Events between 2025-12-24 - 2026-01-21 🦀
Virtual
- 2025-12-30 | Virtual (Tel Aviv-yafo, IL) | Code Mavens 🦀 - 🐍 - 🐪
- 2026-01-03 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2026-01-07 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2026-01-08 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2026-01-08 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2026-01-13 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-01-13 | Virtual | libp2p Events
- 2026-01-15 | Virtual (Berlin, DE) | Rust Berlin
- 2026-01-20 | Virtual (Washington, DC, US) | Rust DC
- 2026-01-21 | Virtual (Vancouver, BC, CA) | Vancouver Rust
Asia
- 2026-01-07 | Tel Aviv-yafo, IL | Rust 🦀 TLV
Europe
- 2026-01-07 | Amsterdam, NL | Rust Developers Amsterdam Group
- 2026-01-07 | Girona, ES | Rust Girona
- 2026-01-08 | Geneva, CH | Post Tenebras Lab
- 2026-01-14 | Reading, UK | Reading Rust Workshop
- 2026-01-20 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
- 2026-01-20 | Paris, FR | Rust Paris
North America
- 2025-12-27 | Boston, MA, US | Boston Rust Meetup
- 2026-01-03 | Boston, MA, US | Boston Rust Meetup
- 2026-01-08 | Mountain View, CA, US | Hacker Dojo
- 2026-01-10 | Boston, MA, US | Boston Rust Meetup
- 2026-01-15 | Seattle, WA, US | Seattle Rust User Group
- 2026-01-17 | Boston, MA, US | Boston Rust Meetup
- 2026-01-20 | San Francisco, CA, US | San Francisco Rust Study Group
- 2026-01-21 | Austin, TX, US | Rust ATX
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
they should just rename unsafe to C so people can shut up
- /u/thisismyfavoritename on /r/rust
Thanks to Brian Kung for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
24 Dec 2025 5:00am GMT
20 Dec 2025
Planet Mozilla
Tarek Ziadé: all the code are belong to claude*
I have been writing code for a long time, long enough to be suspicious of tools that claim to fundamentally change how I work. And yet, here we are.
The latest iterations of Claude Code are genuinely impressive. Not in a flashy demo way, but in the quiet, dangerous way where you suddenly realize you have delegated large parts of your thinking to it. This post is about that experience, how Claude helped me build rustnn, what worked remarkably well, and where I had to consciously pull myself back.
Claude as a serious coding partner
For rustnn, I leaned heavily on Claude Code. The quality of the generated Rust was consistently high. Beyond producing correct syntax, it reasoned about what the code was supposed to do. It was context-aware in a way that made iterative design feel natural. I could ask for refactors, architectural changes, or alternative approaches, and get answers that actually respected the existing codebase and long-running tests.
This mirrors what many developers have been reporting toward the end of 2025. Claude Code's agent-oriented design and large-context reasoning make it particularly strong for repository-wide work: multi-file refactors, non-trivial debugging sessions, and architectural changes that need to fit an existing mental model. Compared to Codex-style systems, which still shine for fast edits and local completions, Claude tends to perform better when the task requires sustained reasoning and understanding of project-wide constraints.
Anthropic's recent Claude releases have reinforced that positioning. Improvements in long-context handling, reasoning depth, and agentic workflows make it easier to treat Claude as something closer to a collaborator than an autocomplete engine.
The turning point for me was when I stopped treating Claude like a chat bot and started treating it like a constrained agent.
That is where CLAUDE.md comes in.
Tuning CLAUDE.md
I stumbled upon an excellent LangChain article on how to turn Claude Code into a domain-specific coding agent.
It clicked immediately. Instead of repeatedly explaining the same constraints, goals, and conventions, I encoded them once. Rust style rules. Project intent. Explicit boundaries. How to react to test failures.
The effect was immediate. Output quality improved, and the amount of back-and-forth dropped significantly. Claude stopped proposing things that were clearly out of scope and started behaving like someone who had actually read and understood the project.
For rustnn, I went one step further and anchored development around WPT conformance tests. That gave both Claude and me a shared, objective target. Tests either pass or they do not. No bikeshedding.
Tweaking CLAUDE.md quickly revealed itself as a never-ending process. There are plenty of articles describing different approaches, and none of them are definitive. The current direction seems to be layering information across multiple files, structuring project documentation so it is optimized for agent consumption while remaining readable for humans, and doing so without duplicating the same knowledge in multiple places.
That balance turns out to be just as important as the model itself.
The slippery slope
There is a trap though, and it is a subtle one.
Once Claude is good enough, you start routing everything through it.
- Re-running tests.
- Interpreting obvious build errors.
- Copying and pasting logs that you already understand.
It feels efficient, but it is not free. Each interaction has a cost, and when you are in a tight edit-build-test loop, those costs add up fast. Worse, you start outsourcing mechanical thinking that you should probably still be doing yourself.
I definitely fell into that trap.
Reducing costs
The solution, for me, was to drastically reduce how much I talk to Claude, and to stop using its prompt environment as a catch-all interface to the project.
Claude became an extra terminal. One I open for very specific tasks, then close. It is not a substitute for my own brain, nor for the normal edit-build-test loop.
Reducing the context window is also critical. A concrete example is Python tracebacks. They are verbose, repetitive, and largely machine-generated noise. Sending full tracebacks back to the model is almost always wasteful.
That is why I added a hook to rewrite them on the fly into a compact form.
The idea is simple: keep the signal, drop the boilerplate. Same information, far fewer tokens. In practice, this not only lowers costs, it often produces better answers because the model is no longer drowning in irrelevant frames and runtime noise. On Python-heavy codebases, this change alone reduced my usage costs by roughly 20%.
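The hook itself isn't shown in the post, but the flavor of the transformation is easy to sketch. Here is a hypothetical compactor (function name and heuristics invented for illustration, written in Rust rather than whatever his actual hook uses) that keeps the innermost frame headers and the final exception line while dropping the rest:

/// Compact a Python traceback: keep only the innermost `keep_frames`
/// frame headers and the final exception line.
fn compact_traceback(tb: &str, keep_frames: usize) -> String {
    let lines: Vec<&str> = tb.lines().collect();
    // Python frame headers look like: `  File "app.py", line 3, in main`.
    let frames: Vec<usize> = lines
        .iter()
        .enumerate()
        .filter(|(_, l)| l.trim_start().starts_with("File \""))
        .map(|(i, _)| i)
        .collect();
    let mut out = Vec::new();
    if frames.len() > keep_frames {
        out.push(format!("[... {} outer frames elided ...]", frames.len() - keep_frames));
    }
    // Keep the innermost frame headers; drop the echoed source lines.
    for &i in frames.iter().rev().take(keep_frames).rev() {
        out.push(lines[i].trim_start().to_string());
    }
    // The last line carries the exception type and message: always keep it.
    if let Some(last) = lines.last() {
        out.push(last.trim_start().to_string());
    }
    out.join("\n")
}

fn main() {
    let tb = "Traceback (most recent call last):\n  File \"app.py\", line 10, in <module>\n    run()\n  File \"app.py\", line 6, in run\n    parse()\n  File \"app.py\", line 2, in parse\n    raise ValueError(\"bad input\")\nValueError: bad input";
    println!("{}", compact_traceback(tb, 2));
}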
Pre-compacting inputs turned out to be one of the most effective cost-control strategies I have found so far, especially when combined with a more deliberate, intentional way of interacting with the model.
Memory across sessions actually matters
Another pain point is session amnesia. You carefully explain design decisions, trade-offs, and long-term goals, only to repeat them again tomorrow.
A well-crafted CLAUDE.md mitigates part of this problem. It works well for static knowledge: coding style, project constraints, architectural boundaries, and things that rarely change. It gives Claude a stable baseline and avoids a lot of repetitive explanations.
But it does not capture evolving context.
It does not remember why a specific workaround exists, which approach you rejected last week, or what subtle behavior a particular test exposed yesterday. As soon as the session ends, that knowledge is gone, and you are back to re-teaching the same mental model.
This is where cross-session, cross-project memory becomes interesting.
I am currently experimenting with claude-mem.
The idea is simple but powerful: maintain a centralized, persistent memory that is automatically updated based on interactions. Instead of manually curating context, relevant facts, decisions, and preferences are summarized and carried forward. Over time, this builds a lightweight but durable understanding of how you work and how your projects evolve.
Compared to CLAUDE.md, this kind of memory is dynamic rather than declarative. It captures intent, not just rules. It also scales across projects, which matters when you jump between repositories that share design philosophy, tooling, or constraints.
It is still early, and it is not magic. You need to be careful about what gets remembered and how summaries are formed. But the direction feels right. Persistent memory reduces cognitive reset costs, shortens warm-up time, and makes the interaction feel less like starting over and more like continuing a conversation you paused yesterday.
That difference adds up.
Final thoughts
Claude Code is good. Very good. Good enough that you need discipline to use it well.
With a tuned CLAUDE.md, clear test-driven goals like WPT conformance, and some tooling to reduce noise and cost, it becomes a powerful accelerator. Without that discipline, it is easy to overuse it and slowly burn budget on things you already know how to do.
I do not think this replaces engineering skill. If anything, it amplifies both good and bad habits. The trick is to make sure it is amplifying the right ones.
References
- My Claude tools
- How to Turn Claude Code into a Domain-Specific Coding Agent
- OpenAI Codex vs GitHub Copilot vs Claude
- Anthropic bolsters AI model Claude's coding and agentic abilities with Opus 4.5
- claude-mem
*The title is a deliberate reference to "All your base are belong to us." The grammar is broken on purpose. It is a joke, but also a reminder that when tools like Claude get this good, it is easy to give them more control than you intended.
20 Dec 2025 12:00am GMT
19 Dec 2025
Planet Mozilla
Mozilla Privacy Blog: Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025)
Welcome to the blog series "Behind the Manifesto," where we unpack core issues that are critical to Mozilla's mission. The Mozilla Manifesto represents our commitment to advancing an open, global internet that gives people meaningful choice in their online experiences, promotes transparency and innovation and protects the public interest over private walled gardens. This blog series digs deeper on our vision for the web and the people who use it and how these goals are advanced in policymaking and technology.
In 2025, global tech policy raced to keep up with technological change and opportunity. In the midst of this evolution, Mozilla sought to ensure that solutions remained centered on openness, competition and user agency.
From AI Agents and the future of the open web to watershed antitrust cases, competition debates surged. Efforts to drive leadership and innovation in AI led governments across the globe to evaluate priorities. Perennial privacy and security questions remained on the radar, with US states intensifying efforts to pass laws and the EU working to streamline rules on AI, cybersecurity and data. Debates amongst industry, civil society and policymakers reflected the intensity of these moments.
Just as we have for over 20 years, Mozilla showed up to build, convene, debate and advocate. It's clear that more than ever, there must be urgency to truly put people first. Below are a selection of some key moments we're reflecting on, as we head into 2026.
FEBRUARY 2025
Mozilla Participates in Paris AI Action Summit as Part of the Steering Committee
Mozilla participated in the Paris AI Action Summit as Part of the Steering Committee with an 'action packed' schedule that included appearances on panels, a live recording of the podcast "Computer Says Maybe" and a reception to reflect on discussions and thank all the officials and researchers who had worked so hard to make the Summit a success.
Additionally, Mozilla and other partners, including Hugging Face, Microsoft and OpenAI, launched Robust Open Online Safety Tools (ROOST) at the Paris AI Action Summit. The entity is designed to create open source foundations for safer and more responsible AI development, ensuring that safety and transparency remain central to innovation.
The launch of ROOST happened at exactly the right time and in the right place. The Paris AI Action Summit provided a global backdrop for launching work that will ultimately help make AI safety a field that everyone can shape and improve.
Mozilla Event: AI & Competition featuring the President of the German Competition Authority
On February 12, we hosted a public event in Berlin on AI & competition, in partnership with German daily newspaper Tagesspiegel. Addressing the real risk of market concentration at various elements of the AI stack, the President of the German competition authority (Bundeskartellamt), Andreas Mundt, delivered a keynote address setting out his analysis of competition in AI and the role of his authority in ensuring contestable markets as technology rapidly evolves.
MARCH 2025
America's AI Action Plan
In March, Mozilla responded to the White House's request for information on AI policy, urging policymakers to ensure that AI remained open, competitive and accountable. The comments also warned that concentrated control by a few tech giants threatened innovation and public trust, and called for stronger support of open source AI, public AI infrastructure, transparent energy use and workforce development. Mozilla underscored these frameworks are essential to building an AI ecosystem that serves the public interest rather than purely corporate bottom lines.
Mozilla Mornings: Promoting a privacy-preserving online ads ecosystem
The same month, we also hosted a special edition of Mozilla Mornings focused on the future of online advertising and the role Privacy-Enhancing Technologies (PETs) can play in reshaping it. The conversation came at a critical moment in Europe, amidst discussions on updating privacy legislation while enforcing existing rules.
The session brought together policymakers, technologists, and civil-society experts to examine how Europe can move toward a fairer and more privacy-respecting advertising ecosystem. Speakers explored the limitations of today's surveillance-driven model and outlined how PETs and Privacy-Preserving Technologies (PPTs) could offer a viable alternative that protects users while sustaining the economic foundations of the open web. The event underscored Mozilla's commitment to advancing privacy-respecting technologies and ensuring that both policy and technical design converge toward a healthier online advertising ecosystem.
MAY 2025
CPDP: The Evolution of PETs in Digital Ads
At the Brussels 2025 International CPDP Conference, Mozilla organized and participated in a panel titled "The Evolution of PETs in Digital Ads: Genuine Privacy Innovation or Market Power Play?" The discussion explored how Privacy-Enhancing Technologies (PETs) - tools designed to minimize data collection and protect user privacy - are reshaping the digital advertising landscape. Panelists debated how to encourage genuine privacy innovation without reinforcing existing power structures, and how regulations like the GDPR and the Digital Markets Act (DMA) can help ensure PETs foster transparency and competition.
Competition in Focus: U.S. vs Google
The U.S. v. Google remedies trial was a defining moment - not just for 2025, but for the future of browser and search competition. While the remedies phase was about creating competition in the search market, some of the proposed remedies risked weakening independent browsers like Firefox, the very players that make real choice possible.
In early May, Mozilla's CFO, Eric Muhlheim, testified to this very point. Muhlheim's testimony, and Mozilla's amicus brief in the case, spoke to the vital role of small, independent browsers in driving competition and innovation across the web and warned about the risks of harming their ability to select the search default that best serves their users. Ensuring a competitive search ecosystem while avoiding harm to browser competition remains an important issue in 2026.
JUNE 2025
Open by Design: How Nations Can Compete in the Age of AI
The choices governments make today, about who gets to build, access and benefit from AI, will shape economic competitiveness, national security and digital rights for decades. In June, Mozilla supported a new report by the UK think tank Demos, exploring how and why embracing openness in key AI resources can spur innovation and adoption. Enabling safer, more transparent development and boosting digital sovereignty is a recipe, if there ever was one, for 'winning' at AI.
EU Digital Summit: Advocating for Open and Secure Digital Ecosystems
Digital competitiveness depends on open, secure, and interoperable ecosystems that foster innovation while respecting users' rights. We spoke at the 2025 European Digital Summit, a flagship forum bringing together policymakers, regulators, industry leaders, and civil society, and argued that openness and security reinforce each other, that smart regulation has the potential to lower entry barriers and curb gatekeeping power, and that innovation does not require sacrificing privacy when incentives are aligned toward rights-respecting designs. The takeaway was clear: enforcing interoperability, safeguarding pro-competition rules, and embedding privacy-by-design incentives are essential to a resilient, innovative, and trustworthy open web.
JULY 2025
Joint Letter to the UK Secretary of State on DMCCA
When choice disappears, innovation stalls. In July, Mozilla sent an open letter to UK Ministers and the Competition & Markets Authority to urge faster implementation of the UK Digital Markets, Competition & Consumers Act (DMCCA). As an organisation that exists to create an internet that is open and accessible to all, Mozilla has long supported competitive digital markets. Since the EU Digital Markets Act took effect in 2024, users have begun to benefit from genuine choice for the first time, with interventions like browser choice screens offering people a real choice of browser. The result? People are choosing independent alternatives to gatekeepers' defaults: Firefox daily active users on iOS rose by 150% across the EU. The UK's DMCCA could be similarly revolutionary for UK consumers and the many challenger businesses taking on market dominance.
SEPTEMBER 2025
Digital Bootcamp: Bringing Internet Architecture to the Heart of EU Policymaking
In September, Mozilla officially launched its Digital Bootcamp initiative, developed in partnership with Cloudflare, Proton and CENTR, to strengthen policymakers' understanding of how the internet actually works and why this technical foundation is essential for effective regulation. We delivered interactive sessions across EU institutions, including a workshop for Members of the European Parliament, the European Commission, and representatives of the EU member states.
Across these workshops, we demystified the layered architecture of the internet, explained how a single website request moves through the stack, and clarified which regulatory obligations apply at each layer. By bridging the gap between engineering and policymaking, Digital Bootcamp is helping ensure EU digital laws remain grounded in technical reality, supporting evidence-based decisions that protect innovation, security and the long-term health of the open web.
OCTOBER 2025
Mozilla Meetup: The Future of Competition
On October 8, Mozilla hosted a Meetup on Competition in Washington, D.C., bringing together leading voices in tech policy - including Alissa Cooper (Knight-Georgetown Institute), Amba Kak (AI Now Institute), Luke Hogg (Foundation for American Innovation) and Kush Amlani (Mozilla) - to discuss the future of browser competition, antitrust enforcement and AI's growing influence on the digital landscape. Moderated by Bloomberg's Leah Nylen, the event reinforced our ongoing efforts to establish a more open and competitive internet, highlighting how policy decisions in these areas directly shape user choice, innovation, and the long-term health of the open web.
Global Encryption Day
On October 21, Mozilla marked Global Encryption Day by reaffirming our commitment to strong encryption as a cornerstone of online privacy, security, and trust. For years, Mozilla has played an active role in shaping the broader policy debate on encryption by consistently pushing back against efforts to weaken it and working with partners around the world to safeguard the technology that helps to keep people secure online - from joining the Global Encryption Coalition Steering Committee, to challenging U.S. legislation like the EARN IT Act and leading multi-year efforts in the EU to address encryption risks in the eIDAS Regulation.
California's Opt Me Out Act: A Continuation of the Fight For Privacy
The passage of California's Opt Me Out Act (AB 566) marked a major step forward in Mozilla's ongoing effort to strengthen digital privacy and give users control of their personal data. For years, Mozilla has spoken in support of Global Privacy Control (GPC) - a tool already integrated into Firefox - as a model for privacy-by-design solutions that can be both effective and user-friendly.
NOVEMBER 2025
Mozilla Submits Recommendations on the Digital Fairness Act
In November, Mozilla submitted its response to the European Commission's consultation on the Digital Fairness Act (DFA), framing it as a key opportunity to modernise consumer protection for AI-driven and highly personalised digital services. Mozilla argued that effective safeguards must tackle both interface design and underlying system choices, prohibit harmful design practices, and set clear fairness standards for personalization and advertising. A well-designed DFA can complement existing EU laws, strengthen user autonomy, provide legal certainty for innovators, and support a more competitive digital ecosystem built on genuine user choice.
Mozilla hosts AI breakfast in UK Parliament
Mozilla President, Mark Surman, hosted MPs and Peers for a breakfast in Parliament to discuss how policymakers can nurture AI that supports public good. As AI policy moves from principle to implementation, the breakfast offered insight into the models, trade-offs and opportunities that will define the next phase of the UK's AI strategy.
DECEMBER 2025
Mozilla Joins Tech Leaders at US House AI Caucus Briefing
Internet Works, an association of "Middle Tech" companies, organized a briefing with the Congressional AI Caucus. The goal was to provide members of congress and their staff a better understanding of the Middle Tech ecosystem and how smaller companies are adopting and scaling AI technologies. Mozilla spoke on the panel, lending valued technical expertise and setting out how we're thinking about keeping the web open for innovation, competition and user choice with this new technology stack.
eIDAS2 Regulation: Defending Web Security and Trust
In December, the EU published the final implementing rules for eIDAS2, closing a multi-year fight over proposals that would have required browsers to automatically trust government-mandated website certificates-putting encryption, user trust, and the open web at risk. Through sustained advocacy and deep technical engagement, Mozilla helped secure clear legal safeguards preserving independent browser root programs and strong TLS security. We also ensured that the final standards respect existing security norms and reflect how the web actually works. With all rules now published, users can continue to rely on browsers to verify websites independently with strict security requirements, governments are prevented from weakening web encryption by default, and a dangerous global precedent for state-controlled trust on the internet has been avoided.
This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla's policy priorities.
The post Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025) appeared first on Open Policy & Advocacy.
19 Dec 2025 3:23pm GMT