21 Nov 2025
Planet Mozilla
Niko Matsakis: Move Expressions
This post explores another proposal in the space of ergonomic ref-counting that I am calling move expressions. To my mind, these are an alternative to explicit capture clauses, one that addresses many (but not all) of the goals from that design with improved ergonomics and readability.
TL;DR
The idea itself is simple: within a closure (or future), we add the option to write `move($expr)`. This is a value expression ("rvalue") that desugars into a temporary value that is moved into the closure. So
|| something(&move($expr))
is roughly equivalent to something like:
{
    let tmp = $expr;
    || something(&{tmp})
}
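To see the desugaring concretely, here is a small runnable sketch in today's Rust (the names `something`, `expr`, and `tmp` are placeholders for illustration):

```rust
fn something(s: &String) -> usize {
    s.len()
}

fn main() {
    let expr = String::from("hello");

    // The "rough" desugaring from above: evaluate the operand *now*,
    // bind it to a temporary, and let the `{tmp}` block move that
    // temporary into the closure when the closure is built.
    let closure = {
        let tmp = expr; // $expr is evaluated at closure creation time
        || something(&{ tmp })
    };

    assert_eq!(closure(), 5);
}
```

Note that the `{ tmp }` block consumes `tmp`, so this hand-written closure captures it by value and is `FnOnce` - one reason the desugaring above is only "roughly" equivalent.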
How it would look in practice
Let's go back to one of our running examples, the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks. As a reminder, this is how the code looks today - note the let _some_value = ... lines for dealing with captures:
// task: listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});
Under this proposal it would look something like this:
tokio::task::spawn(async {
    do_something_else_with(
        move(self.some_a.clone()),
        move(self.some_b.clone()),
        move(self.some_c.clone()),
    )
});
There are times when you would want multiple clones. For example, if you want to move something into a FnMut closure that will then give away a copy on each call, it might look like
data_source_iter
    .inspect(|item| {
        inspect_item(item, move(tx.clone()).clone())
        //                 ----------------  -------
        //                         |            |
        //                   move a clone       |
        //                   into the closure   |
        //                                      |
        //                               clone the clone
        //                               on each iteration
    })
    .collect();
// some code that uses `tx` later...
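In today's Rust, the same clone-once-then-clone-per-call pattern needs a named temporary outside the closure. A runnable sketch, using an `mpsc` sender as a stand-in for `tx` and a `send` as a stand-in for `inspect_item` (both substitutions are illustrative):

```rust
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel::<i32>();

    // Today's equivalent of `move(tx.clone())`: clone once outside,
    // then move that clone into the closure.
    let tx_for_closure = tx.clone();
    let collected: Vec<i32> = vec![1, 2, 3]
        .into_iter()
        .inspect(move |item| {
            // Equivalent of the trailing `.clone()`: clone the moved
            // clone on each call before giving it away.
            let tx_per_call = tx_for_closure.clone();
            tx_per_call.send(*item).unwrap();
        })
        .collect();

    // `tx` itself is still usable after the closure was created.
    drop(tx); // close the channel so the receiver iteration ends
    let received: Vec<i32> = rx.iter().collect();

    assert_eq!(collected, vec![1, 2, 3]);
    assert_eq!(received, vec![1, 2, 3]);
}
```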
Credit for this idea
This idea is not mine. It's been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it's come up before that. Most recently it was proposed by Zachary Harrold on Zulip, who has also created a prototype called soupa. Zachary's proposal, like earlier proposals I've heard, used the super keyword. Later on @simulacrum proposed using move, which to me is a major improvement, and that's the version I ran with here.
This proposal makes closures more "continuous"
The reason that I love the move variant of this proposal is that it makes closures more "continuous" and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach move closures at the end, as a convenient default:
A Rust closure captures the places you use in the "minimal way that it can" - so `|| vec.len()` will capture a shared reference to the `vec`, `|| vec.push(22)` will capture a mutable reference, and `|| drop(vec)` will take ownership of the vector.

You can use `move` expressions to control exactly what is captured: so `|| move(vec).push(22)` will move the `vec` into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:

|| {
    let vec = move(input.vec);       // take full ownership of vec
    let data = move(&cx.data);       // take a reference to data
    let output_tx = move(output_tx); // take ownership of the output channel
    process(&vec, &mut output_tx, data)
}

As a shorthand, you can write `move ||` at the top of the closure, which will change the default so that the closure takes ownership of every captured variable. You can still mix-and-match with `move` expressions to get more control. So the previous closure might be written more concisely like so:

move || {
    process(&input.vec, &mut output_tx, move(&cx.data))
    //       ---------       ---------  --------
    //           |               |          |
    //           |               |      closure still
    //           |               |      captures a ref
    //           |               |      `&cx.data`
    //           |               |
    //   because of the `move` keyword on the closure,
    //   these two are captured "by move"
}
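The default capture modes described above can be checked against today's compiler; a minimal runnable sketch (variable names are illustrative):

```rust
fn main() {
    let vec = vec![1, 2, 3];
    // `|| vec.len()` captures a shared reference to `vec`...
    let len = || vec.len();
    assert_eq!(len(), 3);
    assert_eq!(vec[0], 1); // ...so `vec` is still readable here

    let mut vec2 = vec![1, 2, 3];
    // `|| vec2.push(22)` captures a mutable reference...
    let mut push = || vec2.push(22);
    push();
    assert_eq!(vec2.len(), 4); // ...usable again once the borrow ends

    // `move ||` takes ownership of everything it captures:
    let own = move || vec2.len();
    assert_eq!(own(), 4);
    // `vec2` is no longer usable after this point
}
```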
This proposal makes move "fit in" for me
It's a bit ironic that I like this, because it's doubling down on part of Rust's design that I was recently complaining about. In my earlier post on Explicit Capture Clauses I wrote that:
To be honest, I don't like the choice of `move` because it's so operational. I think if I could go back, I would try to refashion our closures around two concepts:

- Attached closures (what we now call `||`) would always be tied to the enclosing stack frame. They'd always have a lifetime even if they don't capture anything.
- Detached closures (what we now call `move ||`) would capture by-value, like `move` today.

I think this would help to build up the intuition of "use `detach ||` if you are going to return the closure from the current stack frame and use `||` otherwise".
move expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don't have "ref closures" and "move closures" - you just have closures that sometimes capture moves, and a "move" closure is just a shorthand for using move expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it's quite elegant.
Why not suffix?
One question is whether a move expression should be a prefix or a postfix operator. So e.g.
|| something(&$expr.move)
instead of &move($expr).
My feeling is that it's not a good fit for a postfix operator because it doesn't just take the final value of the expression and do something with it; it actually impacts when the entire expression is evaluated. Consider this example:
|| process(foo(bar()).move)
When does bar() get called? If you think about it, it has to be closure creation time, but it's not very "obvious".
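The timing point can be illustrated with today's Rust by hoisting the expression out of the closure by hand (the functions here are made up for illustration):

```rust
fn bar() -> i32 { 1 }
fn foo(x: i32) -> i32 { x + 1 }
fn process(x: i32) -> i32 { x * 2 }

fn main() {
    // Without a move expression, `bar()` runs on *every* call:
    let per_call = || process(foo(bar()));

    // `process(foo(bar()).move)` would instead evaluate `foo(bar())`
    // once, at closure creation time - the hand-written equivalent:
    let hoisted = {
        let tmp = foo(bar()); // bar() runs here, exactly once
        move || process(tmp)
    };

    assert_eq!(per_call(), 4);
    assert_eq!(hoisted(), 4);
}
```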
We reached a similar conclusion when we were considering .unsafe operators. I think there is a rule of thumb that things which delineate a "scope" of code ought to be prefix - though I suspect unsafe(expr) might actually be nice, and not just unsafe { expr }.
Edit: I added this section after-the-fact in response to questions.
Conclusion
I'm going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its simplicity and the way it generalizes Rust's existing design. I love that. To me, it joins the set of "yep, we should clearly do that" pieces in this puzzle:
- Add a `Share` trait (I've gone back to preferring the name `share`)
- Add `move` expressions
These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in an earlier post:
"low-level enough for a Kernel, usable enough for a GUI"
but they are moving in the right direction.
21 Nov 2025 10:45am GMT
The Servo Blog: Servo Sponsorship Tiers
The Servo project is happy to announce the following new sponsorship tiers to encourage more donations to the project:
- Platinum: 10,000 USD/month
- Gold: 5,000 USD/month
- Silver: 1,000 USD/month
- Bronze: 100 USD/month
Organizations and individual sponsors donating in these tiers will be acknowledged on the servo.org homepage with their logo or name. Please note that such donations should come with no obligations to the project, i.e., they should be "no strings attached" donations. All the information about these new tiers is available on the Sponsorship page of this website.
Please contact us at join@servo.org if you are interested in sponsoring the project through one of these tiers.
Use of donations is decided transparently via the Technical Steering Committee's public funding request process, and active proposals are tracked in servo/project#187.
Last, but not least, we're excited to welcome our first bronze sponsor, LambdaTest, which has recently started donating to the Servo project. Thank you very much!
21 Nov 2025 12:00am GMT
20 Nov 2025
Planet Mozilla
Mozilla Localization (L10N): Localizer spotlight: Robb
About You
My profile in Pontoon is robbp, but I go by Robb. I'm based in Romania and have been contributing to Mozilla localization since 2018 - first between 2018 and 2020, and now again after a break. I work mainly on Firefox (desktop and mobile), Thunderbird, AMO, and SUMO. When I'm not volunteering for open-source projects, I work as a professional translator in Romanian, English, and Italian.
Getting Started
Q: How did you first get interested in localization? Do you remember how you got involved in Mozilla localization?
A: I've used Thunderbird for many years, and I never changed the welcome screen. I'd always see that invitation to contribute somehow.
Back in 2018, I was using freeware only - including Thunderbird - and I started feeling guilty that I wasn't giving back. I tried donating, but online payments seemed shady back then, and I thought a small, one-time donation wouldn't make a difference.
Around the same time, my mother kept asking questions like, "What is this trying to do on my phone? I think they're asking me something, but it's in English!" My generation learned English from TV, Cartoon Network, and software, but when the internet reached the older generation, I realized how big of a problem language barriers could be. I wasn't even aware that there was such a big wave of localizing everything seen on the internet. I was used to having it all in English (operating system, browser, e-mail client, etc.).
After translating for my mom for a year, I thought, why not volunteer to localize, too? Mozilla products were the first choice - Thunderbird was "in my face" all day, all night, telling me to go and localize. I literally just clicked the button on Thunderbird's welcome page - that's where it all started.
I had also tried contributing to other open-source projects, but Mozilla's Pontoon just felt more natural to me. The interface is very close to the CAT tools I am used to.
Your Localization Journey
Q: What do you do professionally? How does that experience influence your Mozilla work and motivate you to contribute to open-source localization?
A: I've been a professional translator since 2012. I work in English, Romanian, and Italian - so yes, I type all the time.
In Pontoon, I treat the work as any professional project. I check for quality, consistency, and tone - just like I would for a client.
I was never a writer. I love translating. That's why I became a translator (professionally). And here… I actually got more feedback than in my professional translation projects. I think that's why I stayed for so long, and that's why I came back.
It is a change of scenery when I don't localize professionally, a long way from the texts I usually deal with. This is where I unwind, where I translate for the joy of translation, where I find my translator freedom.
Q: At what moment did you realize that your work really mattered?
A: When my mom stopped asking me what buttons to click! Now she just uses her phone in Romanian. I can't help but smile when I see that. It makes me think I'm a tiny little part of that confidence she has now.
Community & Collaboration
Q: Since your return, Romanian coverage has risen from below 70% to above 90%. You translate, review suggestions, and comment on other contributors' work. What helps you stay consistent and motivated?
A: I set small goals - I like seeing the completion percentage climb. I celebrate every time I hit a milestone, even if it's just with a cup of coffee.
I didn't realize it was such a big deal until the localization team pointed it out. It's hard to see the bigger picture when you work in isolation. But it's the same motivation that got me started and brought me back - you just need to find what makes you hum.
Q: Do you conduct product testing after you localize the strings or do you test them by being an active user?
A: I'm an active user of both Firefox and Thunderbird - I use them daily and quite intensely. I also have Firefox Nightly installed in Romanian, and I like to explore it to see what's changed and where. But I'll admit, I'm not as thorough as I should be! Our locale manager gives me a heads-up about things to check, which helps me stay on top of updates. I have to admit that much of the testing is done by the team manager. He actively monitors everything that goes on in Pontoon and checks how strings land in the products and reach end users.
Q: How do you collaborate with other contributors and support new ones?
A: I'm more of an independent worker, but in Pontoon, I wanted to use the work that was already done by the "veterans" and see how I could fit in. We had email conversations over terms, their collaboration, their contributions, personal likes and dislikes etc. I think they actually did me a favor with the email conversations, given I am not active on any channels or social media and email was my only way of talking to them.
This year I started leaving comments in Pontoon - it's such an easy way to communicate directly on specific strings. Given I was limited to emails until now, I think comments will help me reach out to other members of the team and start collaborating with them, too.
I keep in touch with the Romanian managers by email or Telegram. One of them helps me with technical terms; he helped get the Firefox project to 100% before the deadline. He contacts me with information on how to use Pontoon options I didn't know about, and with ideas on wording (after he tests and reviews strings). Collaboration doesn't always mean meetings; sometimes it's quiet cooperation over time.
Mentoring is a big word, but I'm willing for the willing. If someone reaches out, I'll always try to help.
Q: Have you noticed improvements in Pontoon since 2020? How does it compare to professional tools you use, and what features do you wish it had?
A: It's fast - and I love that.
There's no clutter - and that's a huge plus. Some of the "much-tooted" professional tools are overloaded with features and menus that slow you down instead of helping. Pontoon keeps things simple and focused.
I also appreciate being able to see translations in other languages. I often check the French and Italian versions, just to compare terms.
The comments section is another great feature - it makes collaboration quick and to the point, perfect for discussing terms or string-specific questions. Machine translation has also improved a lot across the board, and Pontoon is keeping pace.
As for things that could be better - I'd love to try the pre-translation feature, but I've noticed that some imported strings confirm the wrong suggestion out of several options. That's when a good translation-memory cleanup becomes necessary. It would be helpful if experienced contributors could trim the TM, removing obsolete or outdated terms so new contributors won't accidentally use them.
Pontoon sometimes lags when I move too quickly through strings - like when approving matches or applying term changes across projects. And, unlike professional CAT tools, it doesn't automatically detect repeated strings or propagate translations for identical text. That's a small but noticeable gap compared to professional tools.
Personal Reflections
Q: Professional translators often don't engage in open-source projects because their work is paid elsewhere. What could attract more translators - especially women - to contribute?
A: It's tricky. Translation is a profession, not a hobby, and people need to make a living.
But for me, working on open-source projects is something different - a way to learn new things, use different tools, and have a different mindset. Maybe if more translators saw it as a creative outlet instead of extra work, they'd give it a try.
Involvement in open source is a personal choice. First, one has to hear about it, understand it, and realize that the software they use for free is made by people - then decide they want to be part of that.
I don't think it's a women's thing. Many come and many go. Maybe it's just the thrill at the beginning. Some try, but maybe translation is not for them…
Q: What does contributing to Mozilla mean to you today?
A: It's my way of giving back - and of helping people like my mom, who just want to understand new technology without fear or confusion. That thought makes me smile every time I open Firefox or Thunderbird.
Q: Any final words…
A: I look forward to more blogs featuring fellow contributors, and to learning from and being inspired by their personal stories.
20 Nov 2025 6:46pm GMT
The Mozilla Blog: Rewiring Mozilla: Doing for AI what we did for the web

AI isn't just another tech trend - it's at the heart of most apps, tools and technology we use today. It enables remarkable things: new ways to create and collaborate and communicate. But AI is also letting us down, filling the internet with slop, creating huge social and economic risks - and further concentrating power over how tech works in the hands of a few.
This leaves us with a choice: push the trajectory of AI in a direction that's good for humanity - or just let the slop pour out and the monopolies grow. For Mozilla, the choice is clear. We choose humanity.
Mozilla has always been focused on making the internet a better place. Which is why pushing AI in a different direction than it's currently headed is the core focus of our strategy right now. As AI becomes a fundamental component of everything digital - everything people build on the internet - it's imperative that we step in to shape where it goes.
This post is the first in a series that will lay out Mozilla's evolving strategy to do for AI what we did for the web.
What did we do for the web?
Twenty-five years ago, Microsoft Internet Explorer had 95% browser market share - controlling how most people saw the internet, and who could build what and on what terms. Mozilla was born to change this. Firefox challenged Microsoft's monopoly control of the web, and dropped Internet Explorer's market share to 55% in just a few short years.
The result was a very different internet. For most people, the internet was different because Firefox made it faster and richer - and blocked the annoying pop up ads that were pervasive at the time. It did even more for developers: Firefox was a rocketship for the growth of open standards and open source, decentralizing who controlled the technology used to build things on the internet. This ushered in the web 2.0 era.
How did Mozilla do this? By building a non-profit tech company around the values in the Mozilla Manifesto - values like privacy, openness and trust. And by gathering a global community of tens of thousands - a rebel alliance of sorts - to build an alternative to the big tech behemoth of the time.
What does success look like?
This is what we intend to do again: grow an alliance of people, communities, and companies who envision - and want to build - a different future for AI.
What does 'different' look like? There are millions of good answers to this question. If your native tongue isn't a major internet language like English or Chinese, it might be AI that has nuance in the language you speak. If you are a developer or a startup, it might be having open source AI building blocks that are affordable, flexible and let you truly own what you create. And if you are, well, anyone, it's probably apps and services that become more useful and delightful as they add AI - and that are genuinely trustworthy and respectful of who we are as humans. The common threads: agency, diversity, choice.
Our task is to create a future for AI that is built around these values. We've started to rewire Mozilla to take on this task - and developed a new strategy focused just as much on AI as it is on the web. At the heart of this strategy is a double bottom line framework - a way to measure our progress against both mission and money:
| Double bottom line | In the world | In Mozilla |
| Mission | Empower people with tech that promotes agency and choice - make AI for and about people. | Build AI that puts humanity first. 100% of Mozilla orgs building AI that advances the Mozilla Manifesto. |
| Money | Decentralize the tech industry - and create a tech ecosystem where the 'people part' of AI can flourish. | Radically diversify our revenue. 20% yearly growth in non-search revenue. 3+ companies with $25M+ revenue. |
Mozilla has always had an implicit double bottom line. The strategy we developed this year makes this double bottom line explicit - and ties it back to making AI more open and trustworthy. Over the next three years, all of the organizations in Mozilla's portfolio will design their strategies - and measure their success - against this double bottom line.
What will we build?
As we've rewired Mozilla, we've not only laid out a new strategy - we have also brought in new leaders and expanded our portfolio of responsible tech companies. This puts us on a strong footing. The next step is the most important one: building new things - real technology and products and services that start to carve a different path for AI.
While it is still early days, all of the organizations across Mozilla are well underway with this piece of the puzzle. Each is focused on at least one of three areas of focus in our strategy:
| Open source AI - for developers | Public interest AI - by and for communities | Trusted AI experiences - for everyone |
| Focus: grow a decentralized open source AI ecosystem that matches the capabilities of Big AI - and that enables people everywhere to build with AI on their own terms. | Focus: work with communities everywhere to build technology that reflects their vision of how AI and tech should work, especially where the market won't build it for them. | Focus: create trusted AI-driven products that give people new ways to interact with the web - with user choice and openness as guiding principles. |
| Early examples: Mozilla.ai's Choice First Stack, a unified open-source stack that simplifies building and testing modern AI agents. Also, llamafile for local AI. | Early examples: the Mozilla Data Collective, home to Common Voice, which makes it possible to train and tune AI models in 300+ languages, accents and dialects. | Early examples: recent Firefox AI experiments, which will evolve into AI Window in early 2026 - offering an opt-in way to choose models and add AI features in a browser you trust. |
The classic versions of Firefox and Thunderbird are still at the heart of what Mozilla does. These remain our biggest areas of investment - and neither of these products will force you to use AI. At the same time, you will see much more from Mozilla on the AI front in coming years. And, you will see us invest in other double bottom line companies trying to point AI in a better direction.
We need to do this - together
These are the stakes: if we can't push AI in a better direction, the internet - a place where 6 billion of us now spend much of our lives - will get much, much worse. If we want to shape the future of the web and the internet, we also need to shape the future of AI.
For Mozilla, whether or not to tackle this challenge isn't a question anymore. We need to do this. The question is: how? The high level strategy that I've laid out is our answer. It doesn't prescribe all the details - but it does give us a direction to point ourselves and our resources. Of course, we know there is still a HUGE amount to figure out as we build things - and we know that we can't do this alone.
Which means it's incredibly important to figure out: who can we walk beside? Who are our allies? There is a growing community of people who believe the internet is alive and well - and who are dedicating themselves to bending the future of AI to keep it that way. They may not all use the same words or be building exactly the same thing, but a rebel alliance of sorts is gathering. Mozilla sees itself as part of this alliance. Our plan is to work with as many of you as possible. And to help the alliance grow - and win - just as we did in the web era.
You can read the full strategy document here. Next up in this series: Building A LAMP Stack for AI. Followed by: A Double Bottom Line for Tech and The Mozilla Manifesto in the Era of AI.
The post Rewiring Mozilla: Doing for AI what we did for the web appeared first on The Mozilla Blog.
20 Nov 2025 3:00pm GMT
Mozilla Thunderbird: Thunderbird Pro November 2025 Update

Welcome back to the latest update on our progress with Thunderbird Pro, a set of additional subscription services designed to enhance the email client you know, while providing a powerful open-source alternative to many of the big tech offerings available today. These services include Appointment, an easy to use scheduling tool; Send, which offers end-to-end encrypted file sharing; and Thundermail, an email service from the Thunderbird team. If you'd like more information on the broader details of each service and the road to getting here you can read our past series of updates here. Do you want to receive these and other updates and be the first to know when Thunderbird Pro is available? Be sure to sign up for the waitlist.
With that said, here's how progress has shaped up on Thunderbird Pro since the last update.
Current Progress
Thundermail
It took a lot of work to get here, but Thundermail accounts are now in production testing. Internal testing with our own team members has begun, ensuring everything is in place for support and onboarding of the Early Bird wave of users. On the visual side, we've implemented improved designs for the new Thundermail dashboard, where users can view and edit their settings, including adding custom domains and aliases.

The new Thunderbird Pro add-on now features support for Thundermail, which will allow future users who sign up through the add-on to automatically add their Thundermail account in Thunderbird. Work to boost infrastructure and security has also continued, and we've migrated our data hosting from the Americas to Germany and the EU where possible. We've also been improving our email delivery to reduce the chances of Thundermail messages landing in spam folders.

Appointment
The team has been busy with design work, getting Zoom and CalDAV better integrated, and addressing workflow, infrastructure, and bugs. Appointment received a major visual update in the past few months, which is being applied across all of Thunderbird Pro. While some of these updates have already been implemented, there's still lots of remodelling happening and under discussion - all in preparation for the Early Bird beta release.

Send
One of the main focuses for Send has been migrating it from its own add-on to the new Thunderbird Pro add-on, which will make using it in Thunderbird desktop much smoother. Progress continues on improving file safety through better reporting and prevention of illegal uploads. Our security review is now complete, with an external assessor validating all issues scheduled for fixing; once finalized, this report will be shared publicly with our community. Finally, we've refined the Send user experience by optimizing mobile performance, improving upload and download speeds, enhancing the first-time user flow, and much more.

Bringing it all together
Our new Thunderbird Pro website is now live, marking a major milestone in bringing the project to life. The website offers more details about Thunderbird Pro and serves as the first step for users to sign up, sign in and access their accounts.
Our initial subscription tier, the Early Bird Plan, priced at $9 per month, will include all three services: Thundermail, Send, and Appointment. Email hosting, file storage, and the security behind all of this come at a cost, and Thunderbird Pro will never be funded by selling user data, showing ads, or compromising its independence. This introductory rate directly supports Thunderbird Pro's early development and growth, positioning it for long-term sustainability. We will also be actively listening to your feedback and reviewing the pricing and plans we offer. Once the rough edges are smoothed out and we're ready to open the doors to everyone, we plan to introduce additional tiers to better meet the needs of all our users.
What's next
Thunderbird Pro is now awaiting its initial closed test run, which will include a core group of community contributors. This group will help conduct a broader test and identify critical issues before we gradually open Early Bird access to our waitlist subscribers in waves. While these services will still be considered under active development, with your help this early release will continue to test and refine them for all future users.
Be sure you sign up for our Early Bird waitlist at tb.pro and help us shape the future of Thunderbird Pro. See you soon!
The post Thunderbird Pro November 2025 Update appeared first on The Thunderbird Blog.
20 Nov 2025 12:00pm GMT
The Rust Programming Language Blog: Switching to Rust's own mangling scheme on nightly
TL;DR: Starting in nightly-2025-11-21, rustc will use its own "v0" mangling scheme by default on nightly versions instead of the previous default, which reused C++'s mangling scheme.
Context
When Rust is compiled into object files and binaries, each item (functions, statics, etc) must have a globally unique "symbol" identifying it.
In C, the symbol name of a function is just the name that the function was defined with, such as strcmp. This is straightforward and easy to understand, but requires that each item have a globally unique name that doesn't overlap with any symbols from libraries that it is linked against. If two items had the same symbol then when the linker tried to resolve a symbol to an address in memory (of a function, say), then it wouldn't know which symbol is the correct one.
Languages like Rust and C++ define "symbol mangling schemes", leveraging information from the type system to give each item a unique symbol name. Without this, it would be possible to produce clashing symbols in a variety of ways - for example, every instantiation of a generic or templated function (or an overload in C++), all of which have the same name in the surface language, would end up with clashing symbols; or items with the same name in different modules, such as `a::foo` and `b::foo`, would clash.
Rust originally used a symbol mangling scheme based on the Itanium ABI's name mangling scheme used by C++ (sometimes). Over the years, it was extended in an inconsistent and ad-hoc way to support Rust features that the mangling scheme wasn't originally designed for. Rust's current legacy mangling scheme has a number of drawbacks:
- Information about generic parameter instantiations is lost during mangling
- It is internally inconsistent - some paths use an Itanium ABI-style encoding but some don't
- Symbol names can contain `.` characters, which aren't supported on all platforms
- Symbol names include an opaque hash which depends on compiler internals and can't easily be replicated by other compilers or tools
- There is no straightforward way to differentiate between Rust and C++ symbols
If you've ever tried to use Rust with a debugger or a profiler and found it hard to work with because you couldn't work out which functions were which, it's probably because information was being lost in the mangling scheme.
Rust's compiler team started working on our own mangling scheme back in 2018 with RFC 2603 (see the "v0 Symbol Format" chapter in rustc book for our current documentation on the format). Our "v0" mangling scheme has multiple advantageous properties:
- An unambiguous encoding for everything that can end up in a binary's symbol table
- Information about generic parameters is encoded in a reversible way
- Mangled symbols are decodable, such that it should be possible to identify concrete instances of generic functions
- It doesn't rely on compiler internals
- Symbols are restricted to only `A-Z`, `a-z`, `0-9` and `_`, helping ensure compatibility with tools on varied platforms
- It tries to stay efficient, avoiding unnecessarily long names and computationally-expensive decoding
However, rustc is not the only tool that interacts with Rust symbol names: the aforementioned debuggers, profilers and other tools all need to be updated to understand Rust's v0 symbol mangling scheme so that Rust's users can continue to work with Rust binaries using all the tools they're used to without having to look at mangled symbols. Furthermore, all of those tools need to have new releases cut and then those releases need to be picked up by distros. This takes time!
Fortunately, the compiler team now believes that support for our v0 mangling scheme is sufficiently widespread that it can start to be used by default by rustc.
Benefits
Rust backtraces, as well as debuggers, profilers and other tools that operate on compiled Rust code, will be able to show much more useful and readable names. This will especially help with async code, closures and generic functions.
It's easy to see the new mangling scheme in action; consider the following example:
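A minimal program along these lines produces the backtraces shown below. This is a hedged reconstruction from the backtraces, not the post's exact source; the original was a single file `f.rs` with the panic on line 2:

```rust
// Reconstructed example: a generic function that panics, so its
// concrete instantiation appears in the backtrace.
fn foo<T>(_: T) {
    panic!(); // corresponds to the panic at f.rs:2:5 in the backtraces
}

fn main() {
    // Instantiates foo::<Vec<(String, &[u8; 123])>>, matching the
    // demangled name in the v0 backtrace.
    foo(vec![(String::new(), &[0u8; 123])]);
}
```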
With the legacy mangling scheme, all of the useful information about the generic instantiation of foo is lost in the symbol f::foo..
thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
0: std::panicking::begin_panic
at /rustc/d6c...582/library/std/src/panicking.rs:769:5
1: f::foo
2: f::main
3: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
..but with the v0 mangling scheme, the useful details of the generic instantiation are preserved with f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>:
thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
0: std::panicking::begin_panic
at /rustc/d6c...582/library/std/src/panicking.rs:769:5
1: f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>
2: f::main
3: <fn() as core::ops::function::FnOnce<()>>::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Possible drawbacks
Symbols using the v0 mangling scheme can be larger than symbols with the legacy mangling scheme, which can result in a slight increase in linking times and binary sizes if symbols aren't stripped (which they aren't by default). Fortunately this impact should be minor, especially with modern linkers like lld, which Rust will now default to on some targets.
Some old versions of tools/distros, or niche tools that the compiler team are unaware of, may not have had support for the v0 mangling scheme added. When using these tools, the only consequence is that users may encounter mangled symbols, which rustfilt can be used to demangle.
In any case, using the new mangling scheme can be disabled if any problem occurs: use the -Csymbol-mangling-version=legacy -Zunstable-options flag to revert to using the legacy mangling scheme.
Explicitly enabling the legacy mangling scheme requires nightly; it is not intended to be stabilised, so that support for it can eventually be removed.
Adding v0 support in your tools
If you maintain a tool that interacts with Rust symbols and does not support the v0 mangling scheme, there are Rust and C implementations of a v0 symbol demangler available in the rust-lang/rustc-demangle repository that can be integrated into your project.
Summary
rustc will use our "v0" mangling scheme on nightly for all targets starting in tomorrow's rustup nightly (nightly-2025-11-21).
Let us know if you encounter problems by opening an issue on GitHub.
If that happens, you can use the legacy mangling scheme with the -Csymbol-mangling-version=legacy -Zunstable-options flag, either by adding it to the usual RUSTFLAGS environment variable or to a project's .cargo/config.toml configuration file, like so:
[build]
rustflags = ["-Csymbol-mangling-version=legacy", "-Zunstable-options"]
If you like the sound of the new symbol mangling version and would like to start using it on stable or beta channels of Rust, then you can similarly use the -Csymbol-mangling-version=v0 flag today via RUSTFLAGS or .cargo/config.toml:
[build]
rustflags = ["-Csymbol-mangling-version=v0"]
20 Nov 2025 12:00am GMT
19 Nov 2025
Planet Mozilla
Nick Fitzgerald: A Function Inliner for Wasmtime and Cranelift
Note: I cross-posted this to the Bytecode Alliance blog.
Function inlining is one of the most important compiler optimizations, not because of its direct effects, but because of the follow-up optimizations it unlocks. It may reveal, for example, that an otherwise-unknown function parameter value is bound to a constant argument, which makes a conditional branch unconditional, which in turn exposes that the function will always return the same value. Inlining is the catalyst of modern compiler optimization.
Wasmtime is a WebAssembly runtime that focuses on safety and fast Wasm execution. But despite that focus on speed, Wasmtime has historically chosen not to perform inlining in its optimizing compiler backend, Cranelift. There were two reasons for this surprising decision: first, Cranelift is a per-function compiler designed such that Wasmtime can compile all of a Wasm module's functions in parallel. Inlining is inter-procedural and requires synchronization between function compilations; that synchronization reduces parallelism. Second, Wasm modules are generally produced by an optimizing toolchain, like LLVM, that already did all the beneficial inlining. Any calls remaining in the module will not benefit from inlining - perhaps they are on slow paths marked [[unlikely]] or the callee is annotated with #[inline(never)]. But WebAssembly's component model changes this calculus.
With the component model, developers can compose multiple Wasm modules - each produced by different toolchains - into a single program. Those toolchains only had a local view of the call graph, limited to their own module, and they couldn't see cross-module or fused adapter function definitions. None of them, therefore, had an opportunity to inline calls to such functions. Only the Wasm runtime's compiler, which has the final, complete call graph and function definitions in hand, has that opportunity.
Therefore we implemented function inlining in Wasmtime and Cranelift. Its initial implementation landed in Wasmtime version 36; however, it remains off by default and is still baking. You can test it out via the -C inlining=y command-line flag or the wasmtime::Config::compiler_inlining method. The rest of this article describes function inlining in more detail, digs into the guts of our implementation and the rationale for its design choices, and finally looks at some early performance results.
Function Inlining
Function inlining is a compiler optimization where a call to a function f is replaced by a copy of f's body. This removes function call overheads (spilling caller-save registers, setting up the call frame, etc.) which can be beneficial on its own. But inlining's main benefits are indirect: it enables subsequent optimization of f's body in the context of the call site. That context is important - a parameter's previously unknown value might be bound to a constant argument and exposing that to the optimizer might cascade into a large code clean up.
Consider the following example, where function g calls function f:
fn f(x: u32) -> bool {
return x < u32::MAX / 2;
}
fn g() -> u32 {
let a = 42;
if f(a) {
return a;
} else {
return 0;
}
}
After inlining the call to f, function g looks something like this:
fn g() -> u32 {
let a = 42;
let x = a;
let f_result = x < u32::MAX / 2;
if f_result {
return a;
} else {
return 0;
}
}
Now the whole subexpression that defines f_result only depends on constant values, so the optimizer can replace that subexpression with its known value:
fn g() -> u32 {
let a = 42;
let f_result = true;
if f_result {
return a;
} else {
return 0;
}
}
This reveals that the if-else conditional will, in fact, unconditionally transfer control to the consequent, and g can be simplified into the following:
fn g() -> u32 {
let a = 42;
return a;
}
In isolation, inlining f was a marginal transformation. When considered holistically, however, it unlocked a plethora of subsequent simplifications that ultimately led to g returning a constant value rather than computing anything at run-time.
Implementation
Cranelift's unit of compilation is a single function, which Wasmtime leverages to compile each function in a Wasm module in parallel, speeding up compile times on multi-core systems. But inlining a function at a particular call site requires that function's definition, which implies parallelism-hurting synchronization or some other compromise, like additional read-only copies of function bodies. So this was the first goal of our implementation: to preserve as much parallelism as possible.
Additionally, although Cranelift is primarily developed for Wasmtime by Wasmtime's developers, it is independent from Wasmtime. It is a reusable library and is reused, for example, by the Rust project as an alternative backend for rustc. But a large part of inlining, in practice, is the heuristics for deciding when inlining a call is likely beneficial, and those heuristics can be domain specific. Wasmtime generally wants to leave most calls out-of-line, inlining only cross-module calls, while rustc wants something much more aggressive to boil away its Iterator combinators and the like. So our second implementation goal was to separate how we inline a function call from the decision of whether to inline that call.
These goals led us to a layered design where Cranelift has an optional inlining pass, but the Cranelift embedder (e.g. Wasmtime) must provide a callback to it. The inlining pass invokes the callback for each call site, and the callback returns a command: either "leave the call as-is" or "here is a function body, replace the call with it". Cranelift is responsible for the inlining transformation and the embedder is responsible for deciding whether to inline a function call and, if so, getting that function's body (along with whatever synchronization that requires).
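To make that division of labor concrete, here is a toy sketch of the callback-driven shape described above. The types and names are invented for illustration, not Cranelift's actual API:

```rust
// Toy IR: a function body is just a list of instruction strings, where
// "call NAME" marks a call site. All names here are illustrative.
#[derive(Clone)]
struct ToyFunc {
    body: Vec<String>,
}

// The embedder's answer for a given call site.
enum InlineCommand {
    KeepCall,
    Inline(ToyFunc),
}

// The "Cranelift side": walk the call sites and apply whatever the
// embedder-provided callback decides, splicing bodies in place.
fn inline_pass(func: &mut ToyFunc, oracle: impl Fn(&str) -> InlineCommand) {
    let mut new_body = Vec::new();
    for inst in func.body.drain(..) {
        let callee = inst.strip_prefix("call ").map(str::to_string);
        match callee {
            Some(name) => match oracle(&name) {
                InlineCommand::Inline(f) => new_body.extend(f.body),
                InlineCommand::KeepCall => new_body.push(inst),
            },
            None => new_body.push(inst),
        }
    }
    func.body = new_body;
}
```

In the real system the embedder side also handles whatever synchronization is needed to obtain function bodies; here it is just a closure.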
The mechanics of the inlining transformation - wiring arguments to parameters, renaming values, and copying instructions and basic blocks into the caller - are, well, mechanical. Cranelift makes extensive use of arenas for various entities in its IR, and we begin by appending the callee's arenas to the caller's arenas, renaming entity references from the callee's arena indices to their new indices in the caller's arenas as we do so. Next we copy the callee's block layout into the caller and replace the original call instruction with a jump to the caller's inlined version of the callee's entry block. Cranelift uses block parameters, rather than phi nodes, so the call arguments simply become jump arguments. Finally, we translate each instruction from the callee into the caller. This is done via a pre-order traversal to ensure that we process value definitions before value uses, simplifying instruction operand rewriting. The changes to Wasmtime's compilation orchestration are more interesting.
The following pseudocode describes Wasmtime's compilation orchestration before Cranelift gained an inlining pass and also when inlining is disabled:
// Compile each function in parallel.
let objects = parallel map for func in wasm.functions {
compile(func)
};
// Combine the functions into one region of executable memory, resolving
// relocations by mapping function references to PC-relative offsets.
return link(objects)
The naive way to update that process to use Cranelift's inlining pass might look something like this:
// Optionally perform some pre-inlining optimizations in parallel.
parallel for func in wasm.functions {
pre_optimize(func);
}
// Do inlining sequentially.
for func in wasm.functions {
func.inline(|f| if should_inline(f) {
Some(wasm.functions[f])
} else {
None
})
}
// And then proceed as before.
let objects = parallel map for func in wasm.functions {
compile(func)
};
return link(objects)
Inlining is performed sequentially, rather than in parallel, which is a bummer. But if we tried to make that loop parallel by logically running each function's inlining pass in its own thread, then a callee function we are inlining might or might not have had its transitive function calls inlined already depending on the whims of the scheduler. That leads to non-deterministic output, and our compilation must be deterministic, so it's a non-starter.1 But whether a function has already had transitive inlining done or not leads to another problem.
With this naive approach, we are either limited to one layer of inlining or else potentially duplicating inlining effort, repeatedly inlining e into f each time we inline f into g, h, and i. This is because f may come before or after g in our wasm.functions list. We would prefer it if f already contained e and was already optimized accordingly, so that every caller of f didn't have to redo that same work when inlining calls to f.
This suggests we should topologically sort our functions based on their call graph, so that we inline in a bottom-up manner, from leaf functions (those that do not call any others) towards root functions (those that are not called by any others, typically main and other top-level exported functions). Given a topological sort, we know that whenever we are inlining f into g either (a) f has already had its own inlining done or (b) f and g participate in a cycle. Case (a) is ideal: we aren't repeating any work because it's already been done. Case (b), when we find cycles, means that f and g are mutually recursive. We cannot fully inline recursive calls in general (just as you cannot fully unroll a loop in general) so we will simply avoid inlining these calls.2 So topological sort avoids repeating work, but our inlining phase is still sequential.
At the heart of our proposed topological sort is a call graph traversal that visits callees before callers. To parallelize inlining, you could imagine that, while traversing the call graph, we track how many still-uninlined callees each caller function has. Then we batch all functions whose associated counts are currently zero (i.e. they aren't waiting on anything else to be inlined first) into a layer and process them in parallel. Next, we decrement each of their callers' counts and collect the next layer of ready-to-go functions, continuing until all functions have been processed.
let call_graph = CallGraph::new(wasm.functions);
let counts = { f: call_graph.num_callees_of(f) for f in wasm.functions };
let layer = [ f for f in wasm.functions if counts[f] == 0 ];
while layer is not empty {
parallel for func in layer {
func.inline(...);
}
let next_layer = [];
for func in layer {
for caller in call_graph.callers_of(func) {
counts[caller] -= 1;
if counts[caller] == 0 {
next_layer.push(caller)
}
}
}
layer = next_layer;
}
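The pseudocode above can be turned into a runnable, sequential Rust sketch over a toy call graph of function names. The real implementation processes each layer's work in parallel; this sketch just computes the layers in order, and assumes the call graph is acyclic:

```rust
use std::collections::HashMap;

// Group functions into "layers": a function joins a layer only once
// every function it calls has been placed in an earlier layer.
// `calls[f]` lists the (distinct) functions that `f` calls. Assumes
// the call graph is acyclic; members of a cycle would never be
// scheduled, which is exactly the termination problem discussed below.
fn inline_layers<'a>(calls: &HashMap<&'a str, Vec<&'a str>>) -> Vec<Vec<&'a str>> {
    // counts[f] = number of f's callees not yet processed.
    let mut counts: HashMap<&'a str, usize> =
        calls.iter().map(|(&f, cs)| (f, cs.len())).collect();
    // callers[g] = functions that call g (the reversed edges).
    let mut callers: HashMap<&'a str, Vec<&'a str>> = HashMap::new();
    for (&f, cs) in calls {
        for &g in cs {
            callers.entry(g).or_default().push(f);
        }
    }
    // The first layer: leaf functions, which call nothing.
    let mut layer: Vec<&'a str> = counts
        .iter()
        .filter(|&(_, &c)| c == 0)
        .map(|(&f, _)| f)
        .collect();
    layer.sort(); // deterministic order, as the post requires
    let mut layers = Vec::new();
    while !layer.is_empty() {
        let mut next = Vec::new();
        for &f in &layer {
            // Each caller of f now has one fewer callee to wait for.
            for &caller in callers.get(f).into_iter().flatten() {
                let c = counts.get_mut(caller).unwrap();
                *c -= 1;
                if *c == 0 {
                    next.push(caller);
                }
            }
        }
        next.sort();
        layers.push(std::mem::take(&mut layer));
        layer = next;
    }
    layers
}
```

For the chain where g calls f and f calls e, this yields the layers [e], [f], [g]: each function is inlined into its callers only after its own inlining is complete.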
This algorithm will leverage available parallelism, and it avoids repeating work via the same dependency-based scheduling that topological sorting did, but it has a flaw. It will not terminate when it encounters recursion cycles in the call graph. If function f calls function g which also calls f, for example, then it will not schedule either of them into a layer because they are both waiting for the other to be processed first. One way we can avoid this problem is by avoiding cycles.
If you partition a graph's nodes into disjoint sets, where each set contains every node reachable from every other node in that set, you get that graph's strongly-connected components (SCCs). If a node does not participate in a cycle, then it will be in its own singleton SCC. The members of a cycle, on the other hand, will all be grouped into the same SCC, since those nodes are all reachable from each other.
In the following example, the dotted boxes designate the graph's SCCs:
Ignoring edges between nodes within the same SCC, and only considering edges across SCCs, gives us the graph's condensation. The condensation is always acyclic, because the original graph's cycles are "hidden" within the SCCs.
Here is the condensation of the previous example:
We can adapt our parallel-inlining algorithm to operate on strongly-connected components, and now it will correctly terminate because we've removed all cycles. First, we find the call graph's SCCs and create the reverse (or transpose) condensation, where an edge a→b is flipped to b→a. We do this because we will query this graph to find the callers of a given function f, not the functions that f calls. I am not aware of an existing name for the reverse condensation, so, at Chris Fallin's brilliant suggestion, I have decided to call it an evaporation. From there, the algorithm largely remains as it was before, although we keep track of counts and layers by SCC rather than by function.
let call_graph = CallGraph::new(wasm.functions);
let components = StronglyConnectedComponents::new(call_graph);
let evaporation = Evaporation::new(components);
let counts = { c: evaporation.num_callees_of(c) for c in components };
let layer = [ c for c in components if counts[c] == 0 ];
while layer is not empty {
parallel for func in scc in layer {
func.inline(...);
}
let next_layer = [];
for scc in layer {
for caller_scc in evaporation.callers_of(scc) {
counts[caller_scc] -= 1;
if counts[caller_scc] == 0 {
next_layer.push(caller_scc);
}
}
}
layer = next_layer;
}
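The `StronglyConnectedComponents::new(call_graph)` step above can be sketched with a small recursive Tarjan's algorithm. This is an illustrative implementation over `usize` node IDs, not Wasmtime's actual code:

```rust
use std::collections::{HashMap, HashSet};

// Minimal recursive Tarjan's SCC algorithm, just enough to see how
// mutually recursive functions collapse into one component.
struct Tarjan<'g> {
    graph: &'g HashMap<usize, Vec<usize>>,
    next_index: usize,
    indices: HashMap<usize, usize>,
    lowlink: HashMap<usize, usize>,
    stack: Vec<usize>,
    on_stack: HashSet<usize>,
    sccs: Vec<Vec<usize>>,
}

impl Tarjan<'_> {
    fn visit(&mut self, v: usize) {
        self.indices.insert(v, self.next_index);
        self.lowlink.insert(v, self.next_index);
        self.next_index += 1;
        self.stack.push(v);
        self.on_stack.insert(v);
        for w in self.graph.get(&v).cloned().unwrap_or_default() {
            if !self.indices.contains_key(&w) {
                // Tree edge: recurse, then propagate the lowlink up.
                self.visit(w);
                let low = self.lowlink[&v].min(self.lowlink[&w]);
                self.lowlink.insert(v, low);
            } else if self.on_stack.contains(&w) {
                // Back edge into the current SCC candidate.
                let low = self.lowlink[&v].min(self.indices[&w]);
                self.lowlink.insert(v, low);
            }
        }
        // v is the root of an SCC: pop its members off the stack.
        if self.lowlink[&v] == self.indices[&v] {
            let mut scc = Vec::new();
            loop {
                let w = self.stack.pop().unwrap();
                self.on_stack.remove(&w);
                scc.push(w);
                if w == v {
                    break;
                }
            }
            scc.sort();
            self.sccs.push(scc);
        }
    }
}

fn sccs(graph: &HashMap<usize, Vec<usize>>, nodes: &[usize]) -> Vec<Vec<usize>> {
    let mut t = Tarjan {
        graph,
        next_index: 0,
        indices: HashMap::new(),
        lowlink: HashMap::new(),
        stack: Vec::new(),
        on_stack: HashSet::new(),
        sccs: Vec::new(),
    };
    for &v in nodes {
        if !t.indices.contains_key(&v) {
            t.visit(v);
        }
    }
    t.sccs
}
```

For a graph where functions 0 and 1 call each other and function 2 calls 0, this groups 0 and 1 into one component and leaves 2 in its own singleton, matching the dotted boxes in the figures above.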
This is the algorithm we use in Wasmtime, modulo minor tweaks here and there to engineer some data structures and combine some loops. After parallel inlining, the rest of the compiler pipeline continues in parallel for each function, yielding unlinked machine code. Finally, we link all that together and resolve relocations, same as we did previously.
Heuristics are the only implementation detail left to discuss, but there isn't much to say that hasn't already been said. Wasmtime prefers not to inline calls within the same Wasm module, while cross-module calls are a strong hint that we should consider inlining. Beyond that, our heuristics are extremely naive at the moment, and only consider the code sizes of the caller and callee functions. There is a lot of room for improvement here, and we intend to make those improvements on-demand as people start playing with the inliner. For example, there are many things we don't consider in our heuristics today, but possibly should:
- Hints from WebAssembly's compilation-hints proposal
- The number of edges to a callee function in the call graph
- Whether any of a call's arguments are constants
- Whether the call is inside a loop or a block marked as "cold"
- Etc.
Some Initial Results
The speed up you get (or don't get) from enabling inlining is going to vary from program to program. Here are a couple synthetic benchmarks.
First, let's investigate the simplest case possible, a cross-module call of an empty function in a loop:
(component
;; Define one module, exporting an empty function `f`.
(core module $M
(func (export "f")
nop
)
)
;; Define another module, importing `f`, and exporting a function
;; that calls `f` in a loop.
(core module $N
(import "m" "f" (func $f))
(func (export "g") (param $counter i32)
(loop $loop
;; When counter is zero, return.
(if (i32.eq (local.get $counter) (i32.const 0))
(then (return)))
;; Do our cross-module call.
(call $f)
;; Decrement the counter and continue to the next iteration
;; of the loop.
(local.set $counter (i32.sub (local.get $counter)
(i32.const 1)))
(br $loop))
)
)
;; Instantiate and link our modules.
(core instance $m (instantiate $M))
(core instance $n (instantiate $N (with "m" (instance $m))))
;; Lift and export the looping function.
(func (export "g") (param "n" u32)
(canon lift (core func $n "g"))
)
)
We can inspect the machine code that this compiles down to via the wasmtime compile and wasmtime objdump commands. Let's focus only on the looping function. Without inlining, we see a loop around a call, as we would expect:
00000020 wasm[1]::function[1]:
;; Function prologue.
20: pushq %rbp
21: movq %rsp, %rbp
;; Check for stack overflow.
24: movq 8(%rdi), %r10
28: movq 0x10(%r10), %r10
2c: addq $0x30, %r10
30: cmpq %rsp, %r10
33: ja 0x89
;; Allocate this function's stack frame, save callee-save
;; registers, and shuffle some registers.
39: subq $0x20, %rsp
3d: movq %rbx, (%rsp)
41: movq %r14, 8(%rsp)
46: movq %r15, 0x10(%rsp)
4b: movq 0x40(%rdi), %rbx
4f: movq %rdi, %r15
52: movq %rdx, %r14
;; Begin loop.
;;
;; Test our counter for zero and break out if so.
55: testl %r14d, %r14d
58: je 0x72
;; Do our cross-module call.
5e: movq %r15, %rsi
61: movq %rbx, %rdi
64: callq 0
;; Decrement our counter.
69: subl $1, %r14d
;; Continue to the next iteration of the loop.
6d: jmp 0x55
;; Function epilogue: restore callee-save registers and
;; deallocate this function's stack frame.
72: movq (%rsp), %rbx
76: movq 8(%rsp), %r14
7b: movq 0x10(%rsp), %r15
80: addq $0x20, %rsp
84: movq %rbp, %rsp
87: popq %rbp
88: retq
;; Out-of-line traps.
89: ud2
╰─╼ trap: StackOverflow
When we enable inlining, then M::f gets inlined into N::g. Despite N::g becoming a leaf function, we will still push %rbp and all that in the prologue and pop it in the epilogue, because Wasmtime always enables frame pointers. But because it no longer needs to shuffle values into ABI argument registers or allocate any stack space, it doesn't need to do any explicit stack checks, and nearly all the rest of the code also goes away. All that is left is a loop decrementing a counter to zero:3
00000020 wasm[1]::function[1]:
;; Function prologue.
20: pushq %rbp
21: movq %rsp, %rbp
;; Loop.
24: testl %edx, %edx
26: je 0x34
2c: subl $1, %edx
2f: jmp 0x24
;; Function epilogue.
34: movq %rbp, %rsp
37: popq %rbp
38: retq
With this simplest of examples, we can just count the difference in number of instructions in each loop body:
- 12 without inlining (7 in `N::g` and 5 in `M::f`, which are 2 to push the frame pointer, 2 to pop it, and 1 to return)
- 4 with inlining
But we might as well verify that the inlined version really is faster via some quick-and-dirty benchmarking with hyperfine. This won't measure only Wasm execution time, it also measures spawning a whole Wasmtime process, loading code from disk, etc., but it will work for our purposes if we crank up the number of iterations:
$ hyperfine \
"wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm" \
"wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm"
Benchmark 1: wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm
Time (mean ± σ): 138.2 ms ± 9.6 ms [User: 132.7 ms, System: 6.7 ms]
Range (min ⦠max): 128.7 ms ⦠167.7 ms 19 runs
Benchmark 2: wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm
Time (mean ± σ): 37.5 ms ± 1.1 ms [User: 33.0 ms, System: 5.8 ms]
Range (min ⦠max): 35.7 ms ⦠40.8 ms 77 runs
Summary
'wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm' ran
3.69 ± 0.28 times faster than 'wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm'
Okay so if we measure Wasm doing almost nothing but empty function calls and then we measure again after removing function call overhead, we get a big speed up - it would be disappointing if we didn't! But maybe we can benchmark something a tiny bit more realistic.
A program that we commonly reach for when benchmarking is a small wrapper around the pulldown-cmark markdown library that parses the CommonMark specification (which is itself written in markdown) and renders that to HTML. This is Real World⢠code operating on Real World⢠inputs that matches Real World⢠use cases people have for Wasm. That is, good benchmarking is incredibly difficult, but this program is nonetheless a pretty good candidate for inclusion in our corpus. There's just one hiccup: in order for our inliner to activate normally, we need a program using components and making cross-module calls, and this program doesn't do that. But we don't have a good corpus of such benchmarks yet because this kind of component composition is still relatively new, so let's keep using our pulldown-cmark program but measure our inliner's effects via a more circuitous route.
Wasmtime has tunables to enable the inlining of intra-module calls4 and rustc and LLVM have tunables for disabling inlining5. Therefore we can roughly estimate the speed ups our inliner might unlock on a similar, but extensively componentized and cross-module calling, program by:
-
Disabling inlining when compiling the Rust source code to Wasm
-
Compiling the resulting Wasm binary to native code with Wasmtime twice: once with inlining disabled, and once with intra-module call inlining enabled
-
Comparing those two different compilations' execution speeds
Running this experiment with Sightglass, our internal benchmarking infrastructure and tooling, yields the following results:
execution :: instructions-retired :: pulldown-cmark.wasm
Δ = 7329995.35 ± 2.47 (confidence = 99%)
with-inlining is 1.26x to 1.26x faster than without-inlining!
[35729153 35729164.72 35729173] without-inlining
[28399156 28399169.37 28399179] with-inlining
Conclusion
Wasmtime and Cranelift now have a function inliner! Test it out via the -C inlining=y command-line flag or via the wasmtime::Config::compiler_inlining method. Let us know if you run into any bugs or whether you see any speed-ups when running Wasm components containing multiple core modules.
Thanks to Chris Fallin and Graydon Hoare for reading early drafts of this piece and providing valuable feedback. Any errors that remain are my own.
-
Deterministic compilation gives a number of benefits: testing is easier, debugging is easier, builds can be byte-for-byte reproducible, it is well-behaved in the face of incremental compilation and fine-grained caching, etc. ↩
-
For what it is worth, this still allows collapsing chains of mutually-recursive calls (`a` calls `b` calls `c` calls `a`) into a single, self-recursive call (`abc` calls `abc`). Our actual implementation does not do this in practice, preferring additional parallelism instead, but it could in theory. ↩
Cranelift cannot currently remove loops without side effects, and generally doesn't mess with control-flow at all in its mid-end. We've had various discussions about how we might best fit control-flow-y optimizations into Cranelift's mid-end architecture over the years, but it also isn't something that we've seen would be very beneficial for actual, Real World™ Wasm programs, given that (a) LLVM has already done much of this kind of thing when producing the Wasm, and (b) we do some branch-folding when lowering from our mid-level IR to our machine-specific IR. Maybe we will revisit this sometime in the future if it crops up more often after inlining. ↩
-
`-C cranelift-wasmtime-inlining-intra-module=yes` ↩
-
`-Cllvm-args=--inline-threshold=0`, `-Cllvm-args=--inlinehint-threshold=0`, and `-Zinline-mir=no` ↩
19 Nov 2025 8:00am GMT
This Week In Rust: This Week in Rust 626
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
- Launching the 2025 State of Rust Survey
- Google Summer of Code 2025 results
- Project goals update - October 2025
- Project goals update - September 2025
Newsletters
- Scientific Computing in Rust #12 (November 2025)
- Secure-by-design firmware development with Wasefire
- Rust Trends Issue #72: From Experimental to Enterprise: Rust's Production Moment
Project/Tooling Updates
Observations/Thoughts
- [audio] Netstack.FM Episode 14 - Roto And Cascade with Terts and Arya from NLnet Labs
- Improving the Incremental System in the Rust Compiler
- Truly First-Class Custom Smart Pointers
- Pinning is a kind of static borrow
- Rust in Android: move fast and fix things
- Match it again Sam
- Humanity is stained by the sins of C and no LLM can rewrite them away to Rust
- UV and Ruff: Turbocharging Python Development with Rust-Powered Tools
- A Function Inliner for Wasmtime and Cranelift
Rust Walkthroughs
- Rust Unit Tests: Assertion libraries
- Rust Unit Tests: Using a mocking library
- A Practical Guide to Transitioning to Memory-Safe Languages
- Building WebSocket Protocol in Apache Iggy using io_uring and Completion Based I/O Architecture
- Building serverless applications with Rust on AWS Lambda
- Disallow code usage with a custom `clippy.toml`
Miscellaneous
- Absurd Rust? Never!
- [video] Linus Torvalds - Speaks up on the Rust Divide and saying NO
- October 2025 Rust Jobs Report
- Rust's Strategic Advantage
Crate of the Week
This week's crate is cargo cat, a cargo-subcommand to put a random ascii cat face on your terminal.
Thanks to Alejandra GonzƔles for the self-suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
RFCs
Rust
Rustup
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
- GuardianDB - Create and translate documentation to English
- GuardianDB - Increase test coverage (currently 13%)
- GuardianDB - Create cohesive usage examples
- GuardianDB - Backend Iroh IPFS Node
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- Rustikon 2026 | CFP closes: 2025-11-24 23:59 | Warsaw, Poland | Event: 2026-03-19 - 2026-03-20 | Event website
- TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20
- RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
427 pull requests were merged in the last week
Compiler
- add new `function_casts_as_integer` lint
- miri: initial implementation of wildcard provenance for tree borrows
Library
- new `format_args!()` and `fmt::Arguments` implementation
- `vec_recycle`: implementation
- implement `Read::read_array`
- stabilize `char_max_len`
- stabilize `duration_from_nanos_u128`
- stabilize `extern_system_varargs`
- stabilize `vec_into_raw_parts`
- constify `ManuallyDrop::take`
- constify `mem::take`
- remove `rustc_inherit_overflow_checks` from `position()` in slice iterators
Cargo
- cli: add support for completing `--config` values in Bash
- tree: support long forms for `--format` variables
- config: fallback to non-canonical path for `workspace-path-hash`
- manifest: point out when a key belongs to config
- package: make all tar entries' timestamps the same
- do not lock the artifact-dir for check builds
- add unstable rustc-unicode flag
Rustdoc
- Fix invalid jump to def macro link generation
- don't ignore path distance for doc aliases
- don't pass `RenderOptions` to `DocContext`
- microoptimize `render_item`, move stuff out of common path
- quality of life changes
Clippy
- ok_expect: add autofix
- {unnecessary,panicking}_unwrap: lint field accesses
- equatable_if_let: don't suggest == in const contexts
- rc_buffer: don't touch the path to Rc/Arc in the suggestion
- incompatible_msrv: don't check the contents of any std macro
- add a doc_paragraphs_missing_punctuation lint
- fix single_range_in_vec_init false positive for explicit Range
- fix sliced_string_as_bytes false positive with a RangeFull
- fix website history interactions
- rework missing_docs_in_private_items
Rust-Analyzer
Rust Compiler Performance Triage
Positive week, most notably because of the new format_args!() and fmt::Arguments implementation from #148789. Another notable improvement came from moving some computations from one compiler stage to another to save memory and avoid unnecessary tree traversals in #148706.
Triage done by @panstromek. Revision range: 055d0d6a..6159a440
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.6% | [0.2%, 5.6%] | 11 |
| Regressions ❌ (secondary) | 0.3% | [0.1%, 1.1%] | 26 |
| Improvements ✅ (primary) | -0.8% | [-4.5%, -0.1%] | 161 |
| Improvements ✅ (secondary) | -1.4% | [-38.1%, -0.1%] | 168 |
| All ❌✅ (primary) | -0.6% | [-4.5%, 5.6%] | 172 |
2 Regressions, 4 Improvements, 10 Mixed; 4 of them in rollups. 48 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
- No New or Updated RFCs were created this week.
Upcoming Events
Rusty Events between 2025-11-19 - 2025-12-17 🦀
Virtual
- 2025-11-19 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-11-19 | Virtual (Girona, ES) | Rust Girona
- 2025-11-20 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-11-20 | Virtual (Berlin, DE) | Rust Berlin
- 2025-11-20 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2025-11-23 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-11-25 | Virtual (Boulder, CO, US) | Boulder Elixir
- 2025-11-25 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-11-25 | Virtual (London, UK) | Women in Rust
- 2025-11-26 | Virtual (Girona, ES) | Rust Girona | Silicon Girona
- 2025-11-27 | Virtual (Buenos Aires, AR) | Rust en EspaƱol
- 2025-11-30 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-02 | Virtual (London, UK) | Women in Rust
- 2025-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
- 2025-12-03 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2025-12-04 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-05 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-06 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2025-12-07 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-09 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-10 | Virtual (Girona, ES) | Rust Girona
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2025-12-16 | Virtual (Washington, DC, US) | Rust DC
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-17 | Virtual (Girona, ES) | Rust Girona
Asia
- 2025-11-20 | Tokyo, JP | Tokyo Rust Meetup
Europe
- 2025-11-19 | Ostrava, CZ | TechMeetup Ostrava
- 2025-11-20 | Aarhus, DK | Rust Aarhus
- 2025-11-20 | Amsterdam, NL | Rust Developers Amsterdam Group
- 2025-11-20 | Luzern, CH | Rust Luzern
- 2025-11-26 | Bern, CH | Rust Bern
- 2025-11-27 | Augsburg, DE | Rust Meetup Augsburg
- 2025-11-27 | Barcelona, ES | BcnRust
- 2025-11-27 | Edinburgh, UK | Rust and Friends
- 2025-11-28 | Prague, CZ | Rust Prague
- 2025-12-03 | Girona, ES | Rust Girona
- 2025-12-03 | Oxford, UK | Oxford ACCU/Rust Meetup.
- 2025-12-08 | Paris, FR | Rust Paris
- 2025-12-10 | München, DE | Rust Munich
- 2025-12-10 | Reading, UK | Reading Rust Workshop
- 2025-12-16 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
North America
- 2025-11-19 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-11-20 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-11-20 | Spokane, WA, US | Spokane Rust
- 2025-11-23 | Boston, MA, US | Boston Rust Meetup
- 2025-11-26 | Austin, TX, US | Rust ATX
- 2025-11-26 | Phoenix, AZ, US | Desert Rust
- 2025-11-27 | Mountain View, CA, US | Hacker Dojo
- 2025-11-29 | Boston, MA, US | Boston Rust Meetup
- 2025-12-02 | Chicago, IL, US | Chicago Rust Meetup
- 2025-12-04 | México City, MX | Rust MX
- 2025-12-04 | Saint Louis, MO, US | STL Rust
- 2025-12-05 | New York, NY, US | Rust NYC
- 2025-12-06 | Boston, MA, US | Boston Rust Meetup
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Lehi, UT, US | Utah Rust
- 2025-12-11 | San Diego, CA, US | San Diego Rust
- 2025-12-13 | Boston, MA, US | Boston Rust Meetup
- 2025-12-16 | San Francisco, CA, US | San Francisco Rust Study Group
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
Oceania
- 2025-12-11 | Brisbane City, QL, AU | Rust Brisbane
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
We adopted Rust for its security and are seeing a 1000x reduction in memory safety vulnerability density compared to Android's C and C++ code. But the biggest surprise was Rust's impact on software delivery. With Rust changes having a 4x lower rollback rate and spending 25% less time in code review, the safer path is now also the faster one.
- Jeff Vander Stoep on the Google Android blog
Thanks to binarycat for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
19 Nov 2025 5:00am GMT
The Rust Programming Language Blog: Project goals update ā September 2025
The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.
Flagship goals
"Beyond the `&`"
| Progress | |
| Point of contact | |
| Champions |
compiler (Oliver Scherer), lang (TC) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Key Developments
- coordinating with # to ensure compatibility between the two features (allow custom pin projections to be the same as the ones for &pin mut T)
- identified connection to auto reborrowing
- https://github.com/rust-lang/rust-project-goals/issues/399
- https://github.com/rust-lang/rust/issues/145612
- held a design meeting
- very positive feedback from the language team
- approved lang experiment
- got a vibe check on design axioms
- created a new Zulip channel #t-lang/custom-refs for all new features needed to make custom references more similar to &T/&mut T, such as field projections, auto reborrowing, and more
- created the tracking issue for #![feature(field_projections)]
- opened https://github.com/rust-lang/rust/pull/146307 to implement field representing types (FRTs) in the compiler
Next Steps
- Get https://github.com/rust-lang/rust/pull/146307 reviewed & merged
Help Wanted
- When the PR for FRTs lands, try out the feature & provide feedback on FRTs
- if possible, use the field-projection crate and provide feedback on projections
Internal Design Updates
Shared & Exclusive Projections
We want users to be able to have two different types of projections analogous to &T and &mut T. Each field can be projected independently and a single field can only be projected multiple times in a shared way. The current design uses two different traits to model this. The two traits are almost identical, except for their safety documentation.
We have been wondering whether it is possible to unify them into a single trait and have coercions, similar to auto-reborrowing, that would allow the borrow checker to change the behavior depending on which type is projected.
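As a sanity check on this model, plain references already behave the way the two traits describe. The following sketch (with an invented Point type) shows one field projected in a shared way multiple times while another field is simultaneously projected exclusively:

```rust
// Illustrative only: native references already follow the sharing rules the
// two projection traits model. `Point` is an invented example type.
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let mut p = Point { x: 1, y: 2 };
    let a = &p.x; // shared projection of `x`...
    let b = &p.x; // ...allowed multiple times,
    let c = &mut p.y; // while `y` is simultaneously projected exclusively
    *c += 1;
    assert_eq!((*a + *b, p.y), (2, 3));
}
```

Custom projection traits aim to give smart-pointer types this same per-field borrow behavior that the borrow checker already grants native references.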
Syntax
There are lots of different possibilities for which syntax we can choose; here are a couple of options: @x->f/@mut x->f, @x.f/@mut x.f, x.@f/x.mut@f, x.ref.@f/x.@f. There are also many alternatives for the sigils used: x@f, x~f, x.@.f.
We have yet to decide on a direction we want to go in. If we are able to merge the two project traits, we can also settle on a single syntax which would be great.
Splitting Projections into Containers & Pointers
There are two categories of projections: Containers and Pointers:
- Containers are types like MaybeUninit<T>, Cell<T>, UnsafeCell<T>, ManuallyDrop<T>. They are repr(transparent) and apply themselves to each field, so MaybeUninit<MyStruct> has a field of type MaybeUninit<MyField> (if MyStruct has a field of type MyField).
- Pointers are types like &T, &mut T, cell::Ref[Mut]<'_, T>, *const T/*mut T, NonNull<T>. They support projecting Pointer<'_, Struct> to Pointer<'_, Field>.
In the current design, these two classes of projections are unified by implementing Pointer<'_, Container<Struct>> -> Pointer<'_, Container<Field>> manually for the common use cases (for example &mut MaybeUninit<Struct> -> &mut MaybeUninit<Field>). However, this means that types like &Cell<MaybeUninit<Struct>> don't get native projections unless we explicitly implement them.
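For intuition, the manually implemented common case can be sketched in today's Rust. The Point type and project_x helper below are invented for illustration and stand in for the &mut MaybeUninit<Struct> -> &mut MaybeUninit<Field> projection mentioned above:

```rust
use std::mem::MaybeUninit;
use std::ptr::addr_of_mut;

// Invented example type; `project_x` hand-writes the projection that a
// Pointer-through-Container impl would provide.
struct Point {
    x: f64,
    y: f64,
}

// Sound sketch: MaybeUninit<T> is repr(transparent), and addr_of_mut! only
// computes a field offset without reading the (possibly uninit) data.
fn project_x(p: &mut MaybeUninit<Point>) -> &mut MaybeUninit<f64> {
    unsafe {
        let base = p.as_mut_ptr();
        let field = addr_of_mut!((*base).x);
        &mut *field.cast::<MaybeUninit<f64>>()
    }
}

fn main() {
    let mut p = MaybeUninit::<Point>::uninit();
    project_x(&mut p).write(1.5); // initialize `x` through the projection
    unsafe { addr_of_mut!((*p.as_mut_ptr()).y).write(2.5) }; // initialize `y`
    let p = unsafe { p.assume_init() }; // all fields are now initialized
    assert_eq!((p.x, p.y), (1.5, 2.5));
}
```

The value of native projections is precisely that this unsafe boilerplate would not have to be hand-written (and hand-audited) per container and pointer pair.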
We could try to go for a design that has two different ways to implement projections -- one for containers and one for pointers. But this has the following issues:
- there are two ways to implement projections, which means that some people will get confused which one they should use.
- making projections through multiple container types work out of the box is great, however this means that when defining a new container type and making it available for projections, one needs to consider all other container types and swear coherence with them. If we instead have an explicit way to opt in to projections through multiple container types, the implementer of that trait only has to reason about the types involved in that operation.
- so to rephrase, the current design allows more container types that users actually use to be projected whereas the split design allows arbitrary nestings of container types to be projected while disallowing certain types to be considered container types.
- The same problem exists for allowing all container types to be projected by pointer types, if I define a new pointer type I again need to reason about all container types and if it's sound to project them.
We might be able to come up with a sensible definition of "container type" which then resolves these issues, but further investigation is required.
Projections for &Custom<U>
We want to be able to have both a blanket impl<T, F: Field<Base = T>> Project<F> for &T as well as allow people to have custom projections on &Custom<U>. The motivating example for custom projections is the Rust-for-Linux Mutex that wants these projections for safe RCU abstractions.
During the design meeting, it was suggested we could add a generic to Project that only the compiler is allowed to insert, this would allow disambiguation between the two impls. We have now found an alternative approach that requires less specific compiler magic:
- Add a new marker trait ProjectableBase that's implemented for all types by default.
- People can opt out of implementing it by writing impl !ProjectableBase for MyStruct; (needs negative impls for marker traits).
- We add where T: ProjectableBase to the impl Project for &T.
- The compiler needs to consider the negative impls in the overlap check for users to be able to write their own impl<U, F> Project<F> for &Custom<U> where ... (needs negative impl overlap reasoning).
We probably want negative impls for marker traits as well as improved overlap reasoning for different reasons too, so it is probably fine to depend on them here.
enum support
enums and unions shouldn't be available for projections by default. Take for example &Cell<Enum>: if we project to a variant, someone else could overwrite the value with a different variant, invalidating our &Cell<Field>. This also needs a new trait, probably AlwaysActiveField (needs more name bikeshedding, but it's too early for that), that marks fields in structs and tuples.
To properly project an enum, we need:
- a new CanProjectEnum (TBB) trait that provides a way to read the discriminant that's currently inhabiting the value
  - it also needs to guarantee that the discriminant doesn't change while fields are being projected (this rules out implementing it for &Cell)
- a new match operator that will project all mentioned fields (for &Enum this already is the behavior of match)
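The soundness hazard motivating this can be demonstrated with today's Cell; the enum E is invented for illustration, and the unsound projection exists only in the comments:

```rust
use std::cell::Cell;

// Invented example enum for illustration.
enum E {
    A(i32),
    B(f32),
}

fn main() {
    let c = Cell::new(E::A(1));
    // Suppose &Cell<E> could be projected to a &Cell<i32> pointing into the
    // payload of `A`. This perfectly ordinary set() would then replace the
    // variant underneath that projection:
    c.set(E::B(2.0));
    // ...leaving the hypothetical &Cell<i32> aliasing an f32. Hence enum
    // projection needs a trait guaranteeing the discriminant stays fixed.
    assert!(matches!(c.into_inner(), E::B(_)));
}
```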
Field Representing Types (FRTs)
While implementing https://github.com/rust-lang/rust/pull/146307 we identified the following problems/design decisions:
- an FRT is considered local for the orphan check when each container base type involved in the field path is local or a tuple (see the top comment on the PR for more info)
- FRTs cannot implement Drop
- the Field trait is not user-implementable
- types with fields that are dynamically sized don't have a statically known offset, which complicates the UnalignedField trait
I decided to simplify the first implementation of FRTs and restrict them to sized structs and tuples. It also doesn't support packed structs. Future PRs will add support for enums, unions and packed structs as well as dynamically sized types.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
"Flexible, fast(er) compilation"
| Progress | |
| Point of contact | |
| Champions |
cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) |
| Task owners |
1 detailed update available.
Recently we've been working on feedback on the multi-staged format of the RFC. We've also shared the RFC outside of our sync call group to people from a variety of project teams and potential users too.
We're now receiving feedback that is much more detail-oriented, as opposed to being about the direction and scope of the RFC, which is a good indication that the overall strategy for shipping this RFC seems promising. We're continuing to address feedback to ensure the RFC is clear, consistent and technically feasible. David's feeling is that we've probably got another couple rounds of feedback from currently involved people and then we'll invite more people from various groups before publishing parts of the RFC formally.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
bjorn3, Folkert de Vries, [Trifecta Tech Foundation] |
| Progress | |
| Point of contact | |
| Task owners |
Help test the deadlock code in the issue list and try to reproduce the issue
1 detailed update available.
- Key developments: We have added more tests for deadlock issues, and we can say that the deadlock problems are almost resolved. We are currently addressing issues related to reproducible builds, and some of these have already been resolved.
- Blockers: none
- Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
"Higher-level Rust"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions |
cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett) |
| Task owners |
1 detailed update available.
Key developments:
- Overall polish
- https://github.com/rust-lang/rust/pull/145751
- https://github.com/rust-lang/rust/pull/145754
- https://github.com/rust-lang/rust/pull/146106
- https://github.com/rust-lang/rust/pull/146137
- https://github.com/rust-lang/rust/pull/146211
- https://github.com/rust-lang/rust/pull/146340
- https://github.com/rust-lang/rust/pull/145568
- https://github.com/rust-lang/cargo/pull/15878
- https://github.com/rust-lang/cargo/pull/15886
- https://github.com/rust-lang/cargo/pull/15899
- https://github.com/rust-lang/cargo/pull/15914
- https://github.com/rust-lang/cargo/pull/15927
- https://github.com/rust-lang/cargo/pull/15939
- https://github.com/rust-lang/cargo/pull/15952
- https://github.com/rust-lang/cargo/pull/15972
- https://github.com/rust-lang/cargo/pull/15975
- rustfmt work
- https://github.com/rust-lang/rust/pull/145617
- https://github.com/rust-lang/rust/pull/145766
- Reference work
- https://github.com/rust-lang/reference/pull/1974
"Unblocking dormant traits"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Taylor Cramer, Taylor Cramer & others |
1 detailed update available.
Current status: there is an RFC for auto impl supertraits that has received some discussion and updates (thank you, Ding Xiang Fei!).
The major open questions currently are:
Syntax
The current RFC proposes:
trait Subtrait: Supertrait {
auto impl Supertrait {
// Supertrait items defined in terms of Subtrait items, if any
}
}
Additionally, there is an open question around the syntax of auto impl for unsafe supertraits. The current proposal is to require unsafe auto impl Supertrait.
Whether to require impls to opt-out of auto impls
The current RFC proposes that
impl Supertrait for MyType {}
impl Subtrait for MyType {
// Required in order to manually write `Supertrait` for MyType.
extern impl Supertrait;
}
This makes it explicit via opt-out whether an auto impl is being applied. However, this is in conflict with the goal of allowing auto impls to be added to existing trait hierarchies. The RFC proposes to resolve this via a temporary attribute which triggers a warning. See my comment here.
Note that properly resolving whether or not to apply an auto impl requires coherence-like analysis.
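For context, the boilerplate the RFC targets looks like this in today's Rust (Greet, LoudGreet, and English are invented for illustration); with auto impl, the hand-written supertrait impl below could be generated from the subtrait:

```rust
// Invented illustration of the pattern the RFC addresses: the supertrait
// impl is fully determined by the subtrait's items, yet must be hand-written.
trait Greet {
    fn greeting(&self) -> String;
}

trait LoudGreet: Greet {
    fn loud_greeting(&self) -> String;
}

struct English;

impl LoudGreet for English {
    fn loud_greeting(&self) -> String {
        "HELLO".into()
    }
}

// With an `auto impl Greet { ... }` in the `LoudGreet` definition, this impl
// could be generated automatically instead of being spelled out per type.
impl Greet for English {
    fn greeting(&self) -> String {
        self.loud_greeting().to_lowercase()
    }
}

fn main() {
    assert_eq!(English.greeting(), "hello");
}
```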
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Goals looking for help
No goals listed.
Other goal updates
| Progress | |
| Point of contact | |
| Champions |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions |
compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Just removed the duplicate posts, guessing from a script that had a bad day.
| Progress | |
| Point of contact | |
| Champions |
bootstrap (Jakub BerƔnek), lang (Niko Matsakis), spec (Pete LeVasseur) |
| Task owners |
Pete LeVasseur, Contributors from Ferrous Systems and others TBD, |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Key developments:
- libtest2
- libtest env variables were deprecated, reducing the API surface for custom test harnesses, https://github.com/rust-lang/rust/pull/145269
- libtest2 was updated to reflect deprecations
- https://github.com/assert-rs/libtest2/pull/105
- libtest2 is now mostly in shape for use
- json schema
- https://github.com/assert-rs/libtest2/pull/107
- https://github.com/assert-rs/libtest2/pull/108
- https://github.com/assert-rs/libtest2/pull/111
- https://github.com/assert-rs/libtest2/pull/120
- starting exploration of extension through custom messages, see https://github.com/assert-rs/libtest2/pull/122
New areas found for further exploration
- Fallible discovery
- Nested discovery
| Progress | |
| Point of contact | |
| Champions |
compiler (Manuel Drehwald), lang (TC) |
| Task owners |
Manuel Drehwald, LLVM offload/GPU contributors |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
(depending on the flag) |
| Progress | |
| Point of contact | |
| Champions |
lang (Josh Triplett), lang-docs (TC) |
| Task owners |
| Progress | |
| Point of contact |
|
| Champions |
cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact |
|
| Task owners |
|
1 detailed update available.
Key developments:
- https://github.com/crate-ci/cargo-plumbing/pull/53
- https://github.com/crate-ci/cargo-plumbing/pull/62
- https://github.com/crate-ci/cargo-plumbing/pull/68
- https://github.com/crate-ci/cargo-plumbing/pull/96
- Further schema discussions at https://github.com/crate-ci/cargo-plumbing/discussions/18
- Writing up https://github.com/crate-ci/cargo-plumbing/issues/82
Major obstacles
- Cargo, being designed for itself, doesn't allow working with arbitrary data, see https://github.com/crate-ci/cargo-plumbing/issues/82
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions |
compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett) |
| Task owners |
oli-obk |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Task owners |
[Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec) |
| Progress | |
| Point of contact | |
| Task owners |
vision team |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
It is now possible to run the system with two different machines on two different architectures; however, there is work to be done to make this more robust.
We have worked on ironing out the last bits and pieces for dequeuing benchmarks, as well as creating a new user interface to reflect multiple collectors doing work. Presently, work is mostly on polishing the UI and handling edge cases through manual testing.
Queue Work:
- https://github.com/rust-lang/rustc-perf/pull/2212
- https://github.com/rust-lang/rustc-perf/pull/2214
- https://github.com/rust-lang/rustc-perf/pull/2216
- https://github.com/rust-lang/rustc-perf/pull/2221
- https://github.com/rust-lang/rustc-perf/pull/2226
- https://github.com/rust-lang/rustc-perf/pull/2230
- https://github.com/rust-lang/rustc-perf/pull/2231
UI:
- https://github.com/rust-lang/rustc-perf/pull/2217
- https://github.com/rust-lang/rustc-perf/pull/2220
- https://github.com/rust-lang/rustc-perf/pull/2224
- https://github.com/rust-lang/rustc-perf/pull/2227
- https://github.com/rust-lang/rustc-perf/pull/2232
- https://github.com/rust-lang/rustc-perf/pull/2233
- https://github.com/rust-lang/rustc-perf/pull/2236
| Progress | |
| Point of contact |
|
| Champions | |
| Task owners |
|
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions |
compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
19 Nov 2025 12:00am GMT
The Rust Programming Language Blog: Project goals update ā October 2025
The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.
Flagship goals
"Beyond the `&`"
| Progress | |
| Point of contact | |
| Champions |
compiler (Oliver Scherer), lang (TC) |
| Task owners |
1 detailed update available.
Status update:
Regarding the TODO list in the next 6 months, here is the current status:
Introduce &pin mut|const place borrowing syntax
- [x] parsing: #135731, merged.
- [ ] lowering and borrowck: not started yet.
I've got some primitive ideas about borrowck, and I probably need to confirm with someone who is familiar with MIR/borrowck before starting to implement.
A pinned borrow consists of two MIR statements:
- a borrow statement that creates the mutable reference,
- and an ADT aggregate statement that puts the mutable reference into the Pin struct.
I may have to add a new borrow kind so that pinned borrows can be recognized, then traverse the dataflow graph to make sure that pinned places cannot be moved.
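At the surface level, the two-statement shape can be sketched as follows. This is a hedged illustration using Pin::new, which is applicable only because the invented Task type is Unpin; the proposed &pin mut borrow would fuse both steps into one place expression tracked by borrowck:

```rust
use std::pin::Pin;

// Invented example type; it is Unpin, so Pin::new is safe to use here.
struct Task {
    value: i32,
}

fn main() {
    let mut t = Task { value: 1 };
    let r: &mut Task = &mut t; // (1) borrow statement creating the &mut
    let p: Pin<&mut Task> = Pin::new(r); // (2) aggregate wrapping it in Pin
    p.get_mut().value += 1; // get_mut is available because Task: Unpin
    assert_eq!(t.value, 2);
}
```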
Pattern matching of &pin mut|const T types
In the past few months, I have struggled with the !Unpin stuff (the original design sketch, Alternative A): trying to implement it, refactoring, discussing on Zulip, and being constantly confused; luckily, we have finally reached a new agreement on the Alternative B version.
- [ ] #139751 under review (reimplemented following Alternative B).
Support drop(&pin mut self) for structurally pinned types
- [ ] adding a new Drop::pin_drop(&pin mut self) method: draft PR #144537
Supporting both Drop::drop(&mut self) and Drop::drop(&pin mut self) seems to introduce method overloading to Rust, which I think might need some more general way of handling (maybe a rustc attribute?). So instead, I'd like to implement this via a new method Drop::pin_drop(&pin mut self) first.
Introduce &pin pat pattern syntax
Not started yet (I'd prefer doing that when pattern matching of &pin mut|const T types is ready).
Support &pin mut|const T -> &|&mut T coercion (requires T: Unpin for &pin mut T -> &mut T)
Not started yet. (It's quite independent; probably someone else can help with it.)
Support auto borrowing of &pin mut|const place in method calls with &pin mut|const self receivers
Seems to be handled by Autoreborrow traits?
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
There have been lots of internal developments since the last update:
- field representing types and chained projections have received a fundamental overhaul: disallowing field paths and requiring projections to decompose. Additionally, we explored how const generics could emulate FRTs.
- we discussed a potential solution to having only a single project operator & trait through a decay operation with special borrow checker treatment.
- we were able to further simplify the project trait by moving the generic argument of the represented field to the project function. We also discovered the possibility that FRTs are not fundamentally necessary for field projections -- however, they are still very useful in other applications, and my gut feeling is that they are also right for field projections. So we will continue our experiment with them.
- we talked about making Project::project a safe function by introducing a new kind of type.
Next Steps:
- we're still planning to merge https://github.com/rust-lang/rust/pull/146307, after I have updated it with the new FRT logic and it has been reviewed
- once that PR lands, I plan to update the library experiment to use the experimental FRTs
- then the testing using that library can begin in the Linux kernel and other projects (this is where anyone interested in trying field projections can help out!)
4 detailed updates available.
Decomposing Projections
A chained projection operation should naturally decompose, so foo.@bar.@baz should be the same as writing (foo.@bar).@baz. Until now, the different parenthesizations would have allowed different outcomes. This behavior is confusing and also makes many implementation details more complicated than they need to be.
Field Representing Types
Since projections now decompose, we have no design-level need for multi-level FRTs, so field_of!(Foo, bar.baz) is no longer required to work. Thus we have decided to restrict FRTs to a single field and get rid of the path. This simplifies the implementation in the compiler and also avoids certain difficult questions, such as the locality of FRTs (if we had a path, we would have to walk it, and it would be local only if all structs in the path are local). Now, with only a single field, the FRT is local if the struct is.
We also discovered that it is a good idea to make FRTs inhabited (they still are ZSTs), since then it allows the following pattern to work:
fn project_free_standing<F: Field>(_: F, r: &F::Base) -> &F::Type { ... }
// can now call the function without turbofish:
let my_field = project_free_standing(field_of!(MyStruct, my_field), &my_struct);
FRTs via const Generics
We also spent some time thinking about const generics and FRTs on zulip:
- https://rust-lang.zulipchat.com/#narrow/channel/144729-t-types/topic/const.20generics.3A.20implementing.20field.20representing.20types/with/544617587
- https://rust-lang.zulipchat.com/#narrow/channel/144729-t-types/topic/field.20representing.20values.20.26.20.60Field.3Cconst.20F.3A.20.3F.3F.3F.3E.60.20trait/with/542855620
In short, this won't be happening any time soon. However, it could be a future implementation of the field_of! macro depending on how reflection through const generics evolves (but also only in the far-ish future).
Single Project Operator & Trait via Exclusive Decay
It would be great if we only had to add a single operator and trait while obtaining the same features as we have with two. The current reason for having two operators is to allow both shared and exclusive projections. This could be solved by another operation that decays an exclusive reference (or a custom, exclusive smart-pointer type) into a shared reference (or the custom, shared version of the smart pointer). This decay operation would need borrow checker support in order to allow simultaneous projections of one field exclusively and another field shared (and possibly multiple times).
This goes into a similar direction as the reborrowing project goal https://github.com/rust-lang/rust-project-goals/issues/399, however, it needs extra borrow checker support.
fn add(x: cell::RefMut<'_, i32>, step: i32) {
*x = *x + step;
}
struct Point {
x: i32,
y: i32,
}
fn example(p: cell::RefMut<'_, Point>) {
    let y: cell::Ref<'_, i32> = coerce_shared!(p.@y);
    let y2 = coerce_shared!(p.@y); // can project twice if both are coerced
    add(p.@x, *y);
    add(p.@x, *y2);
    assert_eq!(*y, *y2); // can still use them afterwards
}
Problems:
- explicit syntax is annoying for these "coercions", but
- we cannot make this implicit:
  - if this were an implicit operation, only the borrow checker would know when one had to coerce,
  - this operation is allowed to change the type,
  - this results in borrow check feeding back into typecheck, which is not possible, or at least extremely difficult
Syntax
Not much movement here; it depends on the question discussed in the previous section: if we only have one operator, we could choose .@, -> or ~; if we have to have two, then we need additional syntax to differentiate them.
Simplifying the Project trait
There have been some developments in pin ergonomics https://github.com/rust-lang/rust/issues/130494: "alternative B" is now the main approach. This means that Pin<&mut T> has linear projections: its output type does not change depending on the concrete field (really depending on the field itself, not only its type). It therefore falls into the general projection pattern Pin<&mut Struct> -> Pin<&mut Field>, which means that Pin doesn't need any where clauses when implementing Project.
Additionally we have found out that RCU also doesn't need where clauses, as we can also make its projections linear by introducing a MutexRef<'_, T> smart pointer that always allows projections and only has special behavior for T = Rcu<U>. Discussed on zulip after this message.
For this reason we can get rid of the generic argument to Project and mandate that all types that support projections support them for all fields. So the new Project trait looks like this:
// still need a common super trait for `Project` & `ProjectMut`
pub trait Projectable {
    type Target: ?Sized;
}

pub unsafe trait Project: Projectable {
    type Output<F: Field<Base = Self::Target>>;

    unsafe fn project<F: Field<Base = Self::Target>>(
        this: *const Self,
    ) -> Self::Output<F>;
}
Are FRTs even necessary?
With this change we can also think about getting rid of FRTs entirely. For example we could have the following Project trait:
pub unsafe trait Project: Projectable {
    type Output<F>;

    unsafe fn project<const OFFSET: usize, F>(
        this: *const Self,
    ) -> Self::Output<F>;
}
However, there are other applications of FRTs that are very useful for Rust-for-Linux. For example, storing field information for intrusive data structures directly in that structure as a generic parameter.
More concretely, in the kernel there are workqueues that allow you to run code in parallel to the currently running thread. In order to insert an item into a workqueue, an intrusive linked list is used. However, we need to be able to insert the same item into multiple lists. This is done by storing multiple instances of the Work struct. Its definition is:
pub struct Work<T, const ID: u64> { ... }
Where the ID generic must be unique inside of the struct.
struct MyDriver {
    data: Arc<MyData>,
    main_work: Work<Self, 0>,
    aux_work: Work<Self, 1>,
    // more fields ...
}
// Then you call a macro to implement the unsafe `HasWork` trait safely.
// It asserts that there is a field of type `Work<MyDriver, 0>` at the given field
// (and also exposes its offset).
impl_has_work!(impl HasWork<MyDriver, 0> for MyDriver { self.main_work });
impl_has_work!(impl HasWork<MyDriver, 1> for MyDriver { self.aux_work });
// Then you implement `WorkItem` twice:
impl WorkItem<0> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the main work here");
    }
}

impl WorkItem<1> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the aux work here");
    }
}
// And finally you can call `enqueue` on a `Queue`:
let my_driver = Arc::new(MyDriver::new());
let queue: &'static Queue = kernel::workqueue::system_highpri();
queue.enqueue::<_, 0>(my_driver.clone()).expect("my_driver is not yet enqueued for id 0");
// there are different queues
let queue = kernel::workqueue::system_long();
queue.enqueue::<_, 1>(my_driver.clone()).expect("my_driver is not yet enqueued for id 1");
// cannot insert multiple times:
assert!(queue.enqueue::<_, 1>(my_driver.clone()).is_err());
FRTs could be used instead of this id, making the definition be Work<F: Field> (also merging the T parameter).
struct MyDriver {
    data: Arc<MyData>,
    main_work: Work<field_of!(Self, main_work)>,
    aux_work: Work<field_of!(Self, aux_work)>,
    // more fields ...
}
impl WorkItem<field_of!(MyDriver, main_work)> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the main work here");
    }
}

impl WorkItem<field_of!(MyDriver, aux_work)> for MyDriver {
    type Pointer = Arc<Self>;

    fn run(this: Self::Pointer) {
        println!("doing the aux work here");
    }
}
let my_driver = Arc::new(MyDriver::new());
let queue: &'static Queue = kernel::workqueue::system_highpri();
queue
    .enqueue(my_driver.clone(), field_of!(MyDriver, main_work))
    // ^ using Gary's idea to avoid turbofish
    .expect("my_driver is not yet enqueued for main_work");
let queue = kernel::workqueue::system_long();
queue
    .enqueue(my_driver.clone(), field_of!(MyDriver, aux_work))
    .expect("my_driver is not yet enqueued for aux_work");
assert!(queue.enqueue(my_driver.clone(), field_of!(MyDriver, aux_work)).is_err());
This makes it overall a lot more readable (by providing sensible names instead of magic numbers), and maintainable (we can add a new variant without worrying about which IDs are unused). It also avoids the unsafe HasWork trait and the need to write the impl_has_work! macro for each Work field.
I still think that having FRTs is going to be the right call for field projections as well, so I'm going to keep that experiment going. However, we should fully explore their necessity and rationale for a future RFC.
Making Project::project safe
In the current proposal the Project::project function is unsafe, because it takes a raw pointer as an argument. This is pretty unusual for an operator trait (it would be the first). Tyler Mandry thought about a way of making it safe by introducing "partial struct types". This new type is spelled Struct.F where F is an FRT of that struct. It's like Struct, but with the restriction that only the field represented by F can be accessed. So for example &Struct.F would point to Struct, but only allow one to read that single field. This way we could design the Project trait in a safe manner:
// governs conversion of `Self` to `Narrowed<F>` & replaces Projectable
pub unsafe trait NarrowPointee {
    type Target;
    type Narrowed<F: Field<Base = Self::Target>>;
}

pub trait Project: NarrowPointee {
    type Output<F: Field<Base = Self::Target>>;

    fn project<F: Field<Base = Self::Target>>(narrowed: Self::Narrowed<F>) -> Self::Output<F>;
}
The NarrowPointee trait allows a type to declare that it supports conversions of its Target type to Target.F. For example, we would implement it for RefMut like this:
unsafe impl<'a, T> NarrowPointee for RefMut<'a, T> {
    type Target = T;
    type Narrowed<F: Field<Base = T>> = RefMut<'a, T.F>;
}
Then we can make the narrowing a builtin operation in the compiler that gets prepended on the actual coercion operation.
However, this "partial struct type" has a fatal flaw that Oliver Scherer found (edit by oli: it was actually boxy who found it): it conflicts with mem::swap. If Struct.F has the same layout as Struct, then writing to such a variable will overwrite all bytes, thus also overwriting fields that aren't F. Even if we made an exception for these types and moves/copies, this wouldn't work, as a user today can rely on the fact that writing size_of::<T>() bytes to a *mut T yields a valid value of that type at that location. Tyler Mandry suggested making it !Sized and even !MetaSized to prevent overwriting values of that type (maybe the Overwrite trait could come in handy here as well). But this might make "partial struct types" too weak to be truly useful. Additionally, this poses many more questions that we haven't yet tackled.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Initial implementation of a Reborrow trait for types with only lifetimes (with exclusive reference semantics) is working but not yet upstreamed nor in review. The CoerceShared implementation is not yet started.
Proper composable implementation will likely require a different tactic than the current one. Safety and validity checks are currently absent as well and will require more work.
"Flexible, fast(er) compilation"
| Progress | |
| Point of contact | |
| Champions | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) |
| Task owners |
1 detailed update available.
We've now opened our first batch of RFCs: rust-lang/rfcs#3873, rust-lang/rfcs#3874 and rust-lang/rfcs#3875
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | bjorn3, Folkert de Vries, [Trifecta Tech Foundation] |
| Progress | |
| Point of contact | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
"Higher-level Rust"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
3 detailed updates available.
I posted this blog post that proposes that we ought to name the trait Handle and define it as a trait where clone produces an "entangled" value -- i.e., a second handle to the same underlying value.
Before that, there's been a LOT of conversation that hasn't made its way onto this tracking issue. Trying to fix that! Here is a brief summary, in any case:
- It began with the first Rust Project Goals program in 2024H2, where Jonathan Kelley from Dioxus wrote a thoughtful blog post about a path to high-level Rust that eventually became a 2024H2 project goal towards ergonomic ref-counting.
- I wrote a series of blog posts about a trait I called `Claim`.
- Josh Triplett and I talked and Josh Triplett opened RFC #3680, which proposed a `use` keyword and `use ||` closures. Reception, I would say, was mixed; yes, this is tackling a real problem, but there were lots of concerns on the approach. I summarized the key points here.
- Santiago Pastorino implemented experimental support for (a variant of) RFC #3680 as part of the 2025H1 project goal.
- I authored a 2025H2 project goal proposing that we create an alternative RFC focused on higher-level use-cases, which prompted Josh Triplett and me to have a long and fruitful conversation in which he convinced me that this was not the right approach.
- We had a lang-team design meeting on 2025-08-27 in which I presented this survey and summary of the work done thus far.
- And then at the RustConf 2025 Unconf we had a big group discussion on the topic that I found very fruitful, as well as various follow-up conversations with smaller groups. The name `Handle` arose from this and I plan to be posting further thoughts as a result.
RFC #3680: https://github.com/rust-lang/rfcs/pull/3680
I wrote up a brief summary of my current thoughts on Zulip; I plan to move this content into a series of blog posts, but I figured it was worth laying it out here too for those watching this space:
(1) I don't think clones/handles are categorically different when it comes to how much you want to see them made explicit; some applications want them both to be explicit, some want them automatic, some will want a mix -- and possibly other kinds of categorizations.
(2) But I do think that if you are making everything explicit, it's useful to see the difference between a general purpose clone and a handle.
(3) I also think there are many classes of software where there is value in having everything explicit -- and that those classes are often the ones most in Rust's "sweet spot". So we should make sure that it's possible to have everything be explicit ergonomically.
(4) This does not imply that we can't make automatic clones/handles possible too -- it is just that we should treat both use cases (explicit and automatic) as first-class in importance.
(5) Right now I'm focused on the explicit case. I think this is what the use-use-everywhere was about, though I prefer a different proposal now -- basically just making handle and clone methods understood and specially handled by the compiler for optimization and desugaring purposes. There are pros and cons to that, obviously, and that's what I plan to write up in more detail.
(6) On a related note, I think we also need explicit closure captures, which is a whole interesting design space. I don't personally find it "sufficient" for the "fully explicit" case but I could understand why others might think it is, and it's probably a good step to take.
(7) I go back and forth on profiles -- basically a fancy name for lint-groups based on application domain -- and whether I think we should go that direction, but I think that if we were going to go automatic, that's the way I would do it: i.e., the compiler will automatically insert calls to clone and handle, but it will lint when it does so; the lint can be deny-by-default at first but applications could opt into allow for either or both.
I previously wanted allow-by-default but I've decided this is a silly hill to die on, and it's probably better to move in smaller increments.
Update:
There has been more discussion about the Handle trait on Zulip and elsewhere. Some of the notable comments:
- Downsides of the current name: it's a noun, which doesn't follow Rust naming convention, and the verb `handle` is very generic and could mean many things.
- Alternative names proposed: `Entangle`/`entangle` or `entangled`, `Share`/`share`, `Alias`/`alias`, or `Retain`/`retain`; if we want to go seriously hardcore on the science names, `Mitose`/`mitose` or `Fission`/`fission`.
- There has been some criticism pointing out that focusing on handles means that other types which might be "cheaply cloneable" don't qualify.
For now I will go on using the term Handle, but I agree with the critique that it should be a verb, and currently prefer Alias/alias as an alternative.
I'm continuing to work my way through the backlog of blog posts about the conversations from RustConf. The purpose of these blog posts is not just to socialize the ideas more broadly but also to help myself think through them. Here is the latest post:
https://smallcultfollowing.com/babysteps/blog/2025/10/13/ergonomic-explicit-handles/
The point of this post is to argue that, whatever else we do, Rust should have a way to create handles/clones (and closures that work with them) which is at once explicit and ergonomic.
To give a preview of my current thinking, I am working now on the next post, which will discuss how we should add an explicit capture clause syntax. This is somewhat orthogonal but not really, in that an explicit syntax would make closures that clone more ergonomic (but only mildly). I don't have a proposal I fully like for this syntax though and there are a lot of interesting questions to work out. As a strawperson, though, you might imagine [this older proposal I wrote up](https://hackmd.io/@nikomatsakis/SyI0eMFXO?type=view), which would mean something like this:
let actor1 = async move(reply_tx.handle()) {
    reply_tx.send(...);
};

let actor2 = async move(reply_tx.handle()) {
    reply_tx.send(...);
};
This is an improvement on
let actor1 = {
    let reply_tx = reply_tx.handle();
    async move {
        reply_tx.send(...);
    }
};
but only mildly.
The next post I intend to write would be a variant on "use, use everywhere" that recommends method call syntax and permitting the compiler to elide handle/clone calls, so that the example becomes
let actor1 = async move {
    reply_tx.handle().send(...);
    // -------- due to optimizations, this would cause the handle creation to happen only when the future is *created*
};
This would mean that cloning of strings and things might benefit from the same behavior:
let actor1 = async move {
    reply_tx.handle().send(some_id.clone());
    // -------- the `some_id.clone()` would occur at future creation time
};
The rationale that got me here is minimizing perceived complexity and focusing on muscle memory (just add .clone() or .handle() to fix use-after-move errors, no matter when/where they occur). The cost of course is that (a) Handle/Clone become very special; and (b) it blurs the lines on when code execution occurs. Despite the .handle() occurring inside the future (resp. closure) body, it actually executes when the future (resp. closure) is created in this case (in other cases, such as a closure that implements Fn or FnMut and hence executes more than once, it might occur during each execution as well).
| Progress | |
| Point of contact | |
| Champions | cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett) |
| Task owners |
"Unblocking dormant traits"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Taylor Cramer, Taylor Cramer & others |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts |
1 detailed update available.
This is the first update we're posting for the in-place init work. Overall things are progressing well, with lively discussion happening on the newly minted t-lang/in-place-init Zulip channel. Here are the highlights since the lang team design meeting at the end of July:
- Zulip: we now have a dedicated zulip channel that includes all topics surrounding in-place initialization: #t-lang/in-place-init.
- Guaranteed value emplacement: Olivier FAURE shared a new version of C++ inspired emplacement in #t-lang/in-place-init > RFC Draft: Guaranteed Value Emplacement inspired by C++'s emplacement system.
- Rosetta code sample: to help guide the comparison of the various proposals, we've started collecting examples to compare against each other. The first one was contributed by Alice Ryhl and is: "How can we construct a `Box<Mutex<MyType>>` in-place inside the `Box`". For more see #t-lang/in-place-init > Shared example: emplacing into `Box`.
- Evolution of the outptr proposal: Taylor Cramer's original outptr-based emplacement proposal used concrete types as part of her proposal. Since then there has been significant discussion about alternative ways to represent out-pointers, including: #t-lang/in-place-init > out-pointer type and MIR semantics consideration.
- Placing functions as a high-level notation: Yoshua Wuyts has begun reworking the "placing functions" proposal as a high-level sugar on top of one of the other proposals, instead of directly desugaring to `MaybeUninit`. For more see: #t-lang/in-place-init > Placing functions as sugar for low-level emplacement.
- Generic fallibility for the `Init` proposal: following feedback from the lang team meeting, Alice Ryhl posted an update showing how the `Init` trait could be made generic over all `Try` types instead of being limited to just `Result`. For more see: #t-lang/in-place-init > Making `impl Init` generic over `Result`/`Option`/infallible.
- Interactions between emplacement and effects: Yoshua Wuyts has begun documenting the expected interactions between placing functions and other function-transforming effects (e.g. `async`, `try`, `gen`). For more see: #t-lang/in-place-init > placing functions and interactions with effects.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Since the last update we've fixed the hang in rayon in https://github.com/rust-lang/rust/pull/144991 and https://github.com/rust-lang/rust/pull/144732 which relied on https://github.com/rust-lang/rust/pull/143054 https://github.com/rust-lang/rust/pull/144955 https://github.com/rust-lang/rust/pull/144405 https://github.com/rust-lang/rust/pull/145706. This introduced some search graph bugs which we fixed in https://github.com/rust-lang/rust/pull/147061 https://github.com/rust-lang/rust/pull/147266.
We're mostly done with the opaque type support now. Doing so required a lot of quite involved changes:
- https://github.com/rust-lang/rust/pull/145244 non-defining uses in borrowck
- https://github.com/rust-lang/rust/pull/145925 non-defining uses in borrowck closure support
- https://github.com/rust-lang/rust/pull/145711 non-defining uses in hir typeck
- https://github.com/rust-lang/rust/pull/140375 eagerly compute sub_unification_table again
- https://github.com/rust-lang/rust/pull/146329 item bounds
- https://github.com/rust-lang/rust/pull/145993 function calls
- https://github.com/rust-lang/rust/pull/146885 method selection
- https://github.com/rust-lang/rust/pull/147249 fallback
We also fixed some additional self-contained issues and perf improvements: https://github.com/rust-lang/rust/pull/146725 https://github.com/rust-lang/rust/pull/147138 https://github.com/rust-lang/rust/pull/147152 https://github.com/rust-lang/rust/pull/145713 https://github.com/rust-lang/rust/pull/145951
We have also migrated rust-analyzer to entirely use the new solver instead of chalk. This required a large effort, mainly by Jack Huey, Chayim Refael Friedman and Shoyu Vanilla. That's some really impressive work on their end! See this list of merged PRs for an overview of what this required on the r-a side. Chayim Refael Friedman also landed some changes to the trait solver itself to simplify the integration: https://github.com/rust-lang/rust/pull/145377 https://github.com/rust-lang/rust/pull/146111 https://github.com/rust-lang/rust/pull/147723 https://github.com/rust-lang/rust/pull/146182.
We're still tracking the remaining issues in https://github.com/orgs/rust-lang/projects/61/views/1. Most of these issues are comparatively simple and I expect us to fix most of them over the next few months, getting us close to stabilization. We're currently doing another crater triage which may surface a few more issues.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Here's another summary of the most interesting developments since the last update:
- reviews and updates have been done on the polonius alpha, and it has since landed
- the last 2 trivial diagnostics failures were fixed
- we've done perf runs, crater runs, completed gathering stats on crates.io for avg and outliers in CFG sizes, locals, loan and region counts, dataflow framework behavior on unexpected graph shapes and bitset invalidations
- I worked on dataflow for borrowck: single pass analyses on acyclic CFGs, dataflow analyses on SCCs for cyclic CFGs
- some more pieces of amanda's SCC rework have landed, with lcnr's help
- lcnr's opaque type rework, borrowcking of nested items, and so on, also fixed some issues we mentioned in previous updates with member constraints for computing when loans are going out of scope
- we also studied recent papers in flow-sensitive pointer analysis
- I also started the loans-in-scope algorithm rework, and also have reachability acceleration with the CFG SCCs
- the last 2 actual failures in the UI tests are soundness issues related to liveness of captured regions for opaque types: some regions that should be live are not. This was done to help with precise capture and to limit the impact of capturing unused regions that cannot actually be used in the hidden type. The unsoundness should not be observable with NLL, but the polonius alpha relies on liveness to propagate loans throughout the CFG: these dead regions prevent detecting some error-causing loan invalidations. The easiest fix would cause breakage in code that's now accepted. niko, jack and I have another possible solution and I'm trying to implement it now
Goals looking for help
Other goal updates
| Progress | |
| Point of contact | |
| Champions |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
We had a design meeting on 2025-09-10, minutes available here, aiming at these questions:
There are a few concrete things I would like to get out of this meeting, listed sequentially in order of most to least important:
- Would you be comfortable stabilizing the initial ADTs-only extensions?
- This would be properly RFC'd before stabilization, this ask is just a "vibe check".
- Are you interested in seeing Per-Value Rejection for enums with undesirable variants?
- How do you feel about the idea of Lossy Conversion as an approach in general, what about specifically for the References and Raw Pointers extensions?
- How do you feel about the idea of dropping the One Equality ideal in general, what about specifically for `-0.0` vs `+0.0`, what about specifically for `NaN` values?
The vibe checks on the first one were as follows:
Vibe check
The main ask:
Would you be comfortable stabilizing the initial ADTs-only extensions?
(plus the other ones)
nikomatsakis
I am +1 on working incrementally and focusing first on ADTs. I am supportive of stabilization overall but I don't feel like we've "nailed" the way to talk or think about these things. So I guess my "vibe" is +1 but if this doc were turned into an RFC kind of "as is" I would probably wind up -1 on the RFC, I think more work is needed (in some sense, the question is, "what is the name of the opt-in trait and why is it named that"). This space is complex and I think we have to do better at helping people understand the fine-grained distinctions between runtime values, const-eval values, and type-safe values.
Niko: if we add some sort of derive of a trait name, how much value are we getting from the derive, what should the trait be named?
tmandry
I think we'll learn the most by stabilizing ADTs in a forward compatible way (including an opt-in) now. So +1 from me on the proposed design.
It's worth noting that this is a feature that interacts with many other features, and we will be considering extensions to the MVP for the foreseeable future. To some extent the lang team has committed to this already but we should know what we're signing ourselves up for.
scottmcm
scottmcm: concern over the private fields restriction (see question below), but otherwise for the top ask, yes happy to just do "simple" types (no floats, no cells, no references, etc).
TC
As Niko said, +1 on working incrementally, and I too am supportive overall.
As a vibe, per-value rejection seems fairly OK to me in that we decided to do value-based reasoning for other const checks. It occurs to me there's some parallel with that.
https://github.com/rust-lang/rust/pull/119044
As for the opt-in on types, I see the logic. I do have reservations about adding too many opt-ins to the language, and so I'm curious about whether this can be safely removed.
Regarding floats, I see the question on these as related to our decision about how to handle padding in structs. If it makes sense to normalize or otherwise treat `-0.0` and `+0.0` as the same, then it'd also make sense in my view to normalize or otherwise treat two structs with the same values but different padding (or where only one has initialized padding) as the same.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur) |
| Task owners | Pete LeVasseur, Contributors from Ferrous Systems and others TBD |
2 detailed updates available.
After much discussion, we have decided to charter this team as a t-spec subteam. Pete LeVasseur and I are working to make that happen now.
PR with charters:
https://github.com/rust-lang/team/pull/2028
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Here's our first status update!
- We've been experimenting with a few different ways of emitting retags in codegen, as well as a few different forms that retags should take at this level. We think we've settled on a set of changes that's worth sending out to the community for feedback, likely as a pre-RFC. You can expect more engagement from us on this level in the next couple of weeks.
- We've used these changes to create an initial working prototype for BorrowSanitizer that supports finding Tree Borrows violations in tiny, single-threaded Rust programs. We're working on getting Miri's test suite ported over to confirm that everything is working correctly and that we've quashed any false positives or false negatives.
- This coming Monday, I'll be presenting on BorrowSanitizer and this project goal at the Workshop on Supporting Memory Safety in LLVM. Please reach out if you're attending and would like to chat more in person!
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby |
1 detailed update available.
The work on this goal has led to many ongoing discussions on the current status of the Reference. Those discussions are still in progress.
Meanwhile, many people working on this goal have successfully written outlines or draft chapters, at various stages of completeness. There's a broken-out status report at https://github.com/rust-lang/project-goal-reference-expansion/issues/11 .
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | compiler (Manuel Drehwald), lang (TC) |
| Task owners | Manuel Drehwald, LLVM offload/GPU contributors |
1 detailed update available.
A longer update on the changes over the fall. We had two GSoC contributors and a lot of smaller improvements for std::autodiff. The first two improvements were already mentioned as draft PRs in the previous update, but have since been merged. I also upstreamed more std::offload changes.
- Marcelo Domínguez refactored the autodiff frontend to be a proper rustc intrinsic, rather than just hacked into the frontend like I first implemented it. This already solved multiple open issues, reduced the code size, and made it generally easier to maintain going forward.
- Karan Janthe upstreamed a first implementation of "TypeTrees", which lowers rust type and layout information to Enzyme, our autodiff backend. This makes it more likely that you won't see compilation failures with the error message "Can not deduce type of ". We might refine in the future what information exactly we lower.
- Karan Janthe made sure that std::autodiff has support for f16 and f128 types.
- One more of my offload PRs landed. I also figured out why the LLVM-IR generated by the std::offload code needed some manual adjustments in the past. We were inconsistent when communicating with LLVM's offload module, about whether we'd want a magic, extra, dyn_ptr argument, that enables kernels to use some extra features. We don't use these features yet, but for consistency we now always generate and expect the extra pointer. The bugfix is currently under review, once it lands upstream, rustc is able to run code on GPUs (still with a little help of clang).
- Marcelo Domínguez refactored my offload frontend, again introducing a proper rustc intrinsic. That code will still need to go through review, but once it lands it will get us a lot closer to a usable frontend. He also started to generate type information for our offload backend to know how many bytes to copy to and from the devices. This is a very simplified version of our autodiff typetrees.
- At RustChinaConf, I was lucky to run into David Lattimore, the author of the Wild linker, who helped me create a draft PR that can dlopen Enzyme at runtime. This means we could ship it via rustup for people interested in std::autodiff, and don't have to link it in at build time, which would increase binary size even for those users that are not interested in it. There are some open issues, so please reach out if you have time to help get the PR ready!
- @sgasho spent a lot of time trying to get Rust into the Enzyme CI. Unfortunately that is a tricky process due to Enzyme's CI requirements, so it's not merged yet.
- I tried to simplify building std::autodiff by marking it as compatible with download-llvm-ci. Building LLVM from source was previously by far the slowest part of building rustc with autodiff, so this has large potential. Unfortunately the CI experiments revealed some issues around this setting. We think we know why Enzyme's CMake causes issues here and are working on a fix to make it more reliable.
- Osama Abdelkader and bjorn3 looked into automatically enabling fat-lto when autodiff is enabled. In the past, forgetting to enable fat-lto resulted in incorrect (zero) derivatives. The first approach unfortunately wasn't able to cover all cases, so we need to see whether we can handle it nicely. If that turns out to be too complicated, we will revert it and instead "just" provide a nice error message, rather than returning incorrect derivatives.
All in all, I spent a lot more time on infra (dlopen, cmake, download-llvm-ci, ...) than I'd like, but on the happy side there are only so many features left that I want to support here, so there is an end in sight. I am also about to give a tech talk at the upcoming LLVM dev meeting about safe GPU programming in Rust.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | (depending on the flag) |
3 detailed updates available.
I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!).
- [merged] Sanitizers target modificators / https://github.com/rust-lang/rust/pull/138736
- [merged] Add assembly test for -Zreg-struct-return option / https://github.com/rust-lang/rust/pull/145382
- [merged] CI: rfl: move job forward to Linux v6.17-rc5 to remove temporary commits / https://github.com/rust-lang/rust/pull/146368
- -Zharden-sls / https://github.com/rust-lang/rust/pull/136597 - Waiting on review
- #![register_tool] / https://github.com/rust-lang/rust/issues/66079 - Waiting on https://github.com/rust-lang/rfcs/pull/3808
- -Zno-jump-tables / https://github.com/rust-lang/rust/pull/145974 - Active FCP, waiting on 2 check boxes
-Cunsigned-char
We've discussed adding an option analogous to -funsigned-char in GCC and Clang that would allow you to set whether std::ffi::c_char is represented by i8 or u8. Right now, this is platform-specific and should map onto whatever char is in C on the same platform. However, Linux explicitly sets char to be unsigned, and then our Rust code conflicts with that. And in this case the sign is significant.
Rust for Linux works around this with their rust::ffi module, but now that they've switched to the standard library's CStr type, they're running into it again with the as_ptr method.
Tyler mentioned https://docs.rs/ffi_11/latest/ffi_11/ which preserves the char / signed char / unsigned char distinction.
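To make the signedness issue concrete, here is a small sketch (not tied to any particular flag proposal); the cast-through-u8 pattern is one way FFI code stays portable across targets today:

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

// c_char is i8 on e.g. x86_64 Linux but u8 on aarch64 Linux; code that
// assumes one or the other breaks when the C side is built with the
// equivalent of -funsigned-char.
fn first_byte(p: *const c_char) -> u8 {
    // Casting through u8 sidesteps the platform-specific signedness.
    unsafe { *p.cast::<u8>() }
}

fn main() {
    let s = CStr::from_bytes_with_nul(b"A\0").unwrap();
    assert_eq!(first_byte(s.as_ptr()), b'A');
    println!("ok");
}
```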
Grouping target modifier flags
The proposed unsigned-char option is essentially a target modifier. We have several more of these (e.g. llvm-args, no-redzone) in the Rust compiler, and Josh suggested we distinguish them somehow, e.g. by giving them the same prefix or possibly creating a new config option (right now we have -C and -Z; maybe we could add -T for target modifiers) so they're distinct from e.g. the codegen options.
Josh started a Zulip thread here: https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Grouping.20target.20modifier.20options.3F/with/546524232
#![register_tool] / rust#66079 / RFC#3808
Tyler looked at the RFC. The Crubit team started using register_tool but then moved to using an attribute instead. He proposed we could do something similar here, although it would require a new feature and RFC.
The team was open to seeing how it would work.
| Progress | |
| Point of contact | |
| Champions | lang (Josh Triplett), lang-docs (TC) |
| Task owners |
3 detailed updates available.
I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!).
Deref/Receiver
- Ding Xiang Fei keeps updating the PR: https://github.com/rust-lang/rust/pull/146095
- They're also working on a document to explain the consequences of this split
Arbitrary Self Types
- https://github.com/rust-lang/rust/issues/44874
- Waiting on the Deref/Receiver work, no updates
derive(CoercePointee)
- https://github.com/rust-lang/rust/pull/133820
- Waiting on Arbitrary self types
Pass pointers to const in asm! blocks
- RFC: https://github.com/rust-lang/rfcs/pull/3848
- The Lang team went through the RFC with Alice Ryhl on 2025-10-08 and it's in FCP now
Field projections
- Benno Lossin opened a PR here: https://github.com/rust-lang/rust/pull/146307
- Being reviewed by the compiler folks
Providing \0 terminated file names with #[track_caller]
- The feature has been implemented and stabilized with file_as_c_str as the method name: https://github.com/rust-lang/rust/pull/145664
Supertrait auto impl RFC
- Ding Xiang Fei opened the RFC and works with the reviewers: https://github.com/rust-lang/rfcs/pull/3851
Other
- Miguel Ojeda spoke to Linus about rustfmt and they came to agreement.
Layout of core::any::TypeId
Danilo asked about the layout of TypeId -- specifically its size and whether they can rely on it because they want to store it in a C struct. The struct's size is currently 16 bytes, but that's an implementation detail.
As a vibe check, Josh Triplett and Tyler Mandry were open to guaranteeing that it's going to be at most 16 bytes, but they wanted to reserve the option to reduce the size at some point. The next step is to have the full Lang and Libs teams discuss the proposal.
Danilo will open a PR to get that discussion started.
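The size under discussion can be observed directly on current compilers; note that this is explicitly an implementation detail, so the snippet below reflects today's layout, not a guarantee:

```rust
use std::any::TypeId;
use std::mem::size_of;

fn main() {
    // Currently TypeId wraps a 128-bit hash, hence 16 bytes. The vibe
    // check above was about guaranteeing "at most 16", not exactly 16.
    assert!(size_of::<TypeId>() <= 16);
    println!("TypeId is {} bytes", size_of::<TypeId>());
}
```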
rustfmt
Miguel brought up the "trailing empty comment" workaround for the formatting issue that made the rounds on the Linux kernel a few weeks ago. The kernel style places each import on its own line:
use crate::{
fmt,
page::AsPageIter,
};
rustfmt compresses this to:
use crate::{fmt, page::AsPageIter};
The workaround is to put an empty trailing comment at the end
use crate::{
fmt,
page::AsPageIter, //
};
This was deemed acceptable (for the time being) and merged into the mainline kernel: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4a9cb2eecc78fa9d388481762dd798fa770e1971
Miguel is in contact with rustfmt to support this behaviour without a workaround.
// PANIC: ... comments / clippy#15895
This is a proposal to add a lint that would require a PANIC comment (modeled after the SAFETY comment) to explain the circumstances during which the code will or won't panic.
Alejandra GonzƔlez was open to the suggestion and Henry Barker stepped up to implement it.
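As an illustration of the proposal (the lint does not exist yet, and the function below is made up), a PANIC comment would sit above a potentially panicking expression the same way a SAFETY comment sits above an unsafe block:

```rust
// Hypothetical example: `first` can panic, so the proposed lint would
// require a PANIC comment documenting when that can (or cannot) happen.
fn first(v: &[u8]) -> u8 {
    // PANIC: callers guarantee `v` is non-empty (validated at parse time).
    v[0]
}

fn main() {
    assert_eq!(first(&[42]), 42);
    println!("ok");
}
```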
Deref/Receiver
During the experimentation work, Ding ran into an issue with overlapping impls (that was present even with #[unstable_feature_bound(..)]). We ran out of time but we'll discuss this offline and return to it at the next meeting.
| Progress | |
| Point of contact | |
| Champions | cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Task owners | |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Cargo tracking issue: https://github.com/rust-lang/cargo/issues/15844. The first implementation was https://github.com/rust-lang/cargo/pull/15845 in August, which added build.analysis.enabled = true to unconditionally generate timing HTML. Further implementation tasks are listed in https://github.com/rust-lang/cargo/issues/15844#issuecomment-3192779748.
There hasn't been any progress in September.
| Progress | |
| Point of contact | |
| Champions | compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett) |
| Task owners | oli-obk |
1 detailed update available.
I implemented an initial MVP supporting only tuples and primitives (though those are just opaque things you can't interact with further), and getting offsets for the tuple fields as well as the size of the tuple: https://github.com/rust-lang/rust/pull/146923
There are two designs of how to expose this from a libs perspective, but after a sync meeting with scottmcm yesterday we came to the conclusion that neither is objectively better at this stage so we're just going to go with the nice end-user UX version for now. For details see the PR description.
Once the MVP lands, I will mentor various interested contributors who will keep adding fields to the Type struct and variants to the TypeKind enum.
The next major step is restricting what information you can get from structs outside of the current module or crate. We want to honor visibility, so an initial step would be to just never show private fields, but we want to explore allowing private fields to be shown either just within the current module or via some opt-in marker trait.
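The kind of information the MVP returns (field offsets and overall sizes) can be approximated on stable Rust with std::mem; this is not the new reflection API, just a picture of the data it exposes, using a made-up Pair type:

```rust
use std::mem::{offset_of, size_of};

// Made-up example type; #[repr(C)] gives a predictable layout.
#[repr(C)]
struct Pair {
    a: u32,
    b: u64,
}

fn main() {
    // Field offsets and the overall size, as the reflection MVP would
    // report them for a tuple or struct.
    assert_eq!(offset_of!(Pair, a), 0);
    assert_eq!(offset_of!(Pair, b), 8); // u64 aligned to 8 bytes
    assert_eq!(size_of::<Pair>(), 16);
    println!("ok");
}
```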
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Status update October 6, 2025
The build-dir was split out of target-dir as part of https://github.com/rust-lang/cargo/issues/14125 and scheduled for stabilization in Rust 1.91.0. 🎉
Before re-organizing the build-dir layout we wanted to improve the existing layout tests to make sure we do not make any unexpected changes. This testing harness improvement was merged in https://github.com/rust-lang/cargo/pull/15874.
The initial build-dir layout reorganization PR has been posted https://github.com/rust-lang/cargo/pull/15947 and discussion/reviews are under way.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Task owners | [Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec) |
| Progress | |
| Point of contact | |
| Task owners | vision team |
1 detailed update available.
Update:
Niko and I gave a talk at RustConf 2025 (and I represented that talk at RustChinaConf 2025) where we gave an update on this (and some intermediate insights).
We have started to seriously plan the shape of the final doc. We have some "blind spots" that we'd like to cover before finishing up, but overall we're feeling close to the finish line on interviews.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
We moved forward with the implementation, and the new job queue system is now being tested in production on a single test pull request. Most things seem to be working, but there are a few things to iron out and some profiling to be done. I expect that within a few weeks we could be ready to switch to the new system fully in production.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras) |
| Task owners |
1 detailed update available.
Sized hierarchy
The focus right now is on the "non-const" parts of the proposal, as the "const" parts are blocked on the new trait solver (https://github.com/rust-lang/rust-project-goals/issues/113). Now that the types team FCP https://github.com/rust-lang/rust/pull/144064 has completed, work can proceed to land the implementation PRs. David Wood plans to split the RFC to separate out the "non-const" parts of the proposal so it can move independently, which will enable extern types.
To that end, there are three interesting T-lang design questions to be considered.
Naming of the traits
The RFC currently proposes the following names
- Sized
- MetaSized
- PointeeSized
However, these names do not follow the "best practice" of naming the trait after the capability that it provides. As champion Niko is recommending we shift to the following names:
- Sized -- should rightly be called SizeOf, but oh well, not worth changing.
- SizeOfVal -- named after the method size_of_val that you get access to.
- Pointee -- the only thing you can do is point at it.
The last trait name is already used by the (unstable) std::ptr::Pointee trait. We do not want to have these literally be the same trait because that trait adds a Metadata associated type which would be backwards incompatible; if existing code uses T::Metadata to mean <T as SomeOtherTrait>::Metadata, it could introduce ambiguity if now T: Pointee due to defaults. My proposal is to rename std::ptr::Pointee to std::ptr::PointeeMetadata for now, since that trait is unstable and the design remains under some discussion. The two traits could either be merged eventually or remain separate.
Note that PointeeMetadata would be implemented automatically by the compiler for anything that implements Pointee.
Syntax opt-in
The RFC proposes that an explicit bound like T: MetaSized disables the default T: Sized bound. However, this gives no signal that this trait bound is "special" or different than any other trait bound. Naming conventions can help here, signalling to users that these are special traits, but that leads to constraints on naming and may not scale as we consider using this mechanism to relax other defaults as proposed in my recent blog post. One idea is to use some form of syntax, so that T: MetaSized is just a regular bound, but (for example) T: =MetaSized indicates that this bound "disables" the default Sized bound. This gives users some signal that something special is going on. This = syntax is borrowing from semver constraints, although it's not a precise match (it does not mean that T: Sized doesn't hold, after all). Other proposals would be some other sigil (T: ?MetaSized, but it means "opt out from the traits above you"; T: #MetaSized, ...) or a keyword (no idea).
To help us get a feel for it, I'll use T: =Foo throughout this post.
Implicit trait supertrait bounds, edition interaction
In Rust 2024, a trait is implicitly ?Sized which gets mapped to =SizeOfVal:
trait Marker {} // cannot be implemented by extern types
This is not desirable but changing it would be backwards incompatible if traits have default methods that take advantage of this bound:
trait NotQuiteMarker {
fn dummy(&self) {
let s = size_of_val(self);
}
}
We need to decide how to handle this. Options are
- Just change it, breakage will be small (have to test that).
- Default to =SizeOfVal but let users explicitly write =Pointee if they want that. Bad because all traits will be incompatible with extern types.
- Default to =SizeOfVal only if defaulted methods are present. Bad because it's a backwards incompatible change to add a defaulted method now.
- Default to =Pointee but add where Self: =SizeOfVal implicitly to defaulted methods. Now it's not backwards incompatible to add a new defaulted method, but it is backwards incompatible to change an existing method to have a default.
If we go with one of the latter options, Niko proposes that we should relax this in the next Edition (Rust 2026?) so that the default becomes Pointee (or maybe not even that, if we can).
Relaxing associated type bounds
Under the RFC, existing ?Sized bounds would be equivalent to =SizeOfVal. This is mostly fine but will cause problems in (at least) two specific cases: closure bounds and the Deref trait. For closures, we can adjust the bound since the associated type is unstable and due to the peculiarities of our Fn() -> T syntax. Failure to adjust the Deref bound in particular would prohibit the use of Rc<E> where E is an extern type, etc.
For deref bounds, David Wood is preparing a PR that simply changes the bound in a backwards incompatible way to assess breakage on crater. There is some chance the breakage will be small.
If the breakage proves problematic, or if we find other traits that need to be relaxed in a similar fashion, we do have the option of:
- In Rust 2024, T: Deref becomes equivalent to T: Deref<Target: SizeOfVal> unless written like T: Deref<Target: =Pointee>. We add that annotation throughout stdlib.
- In Rust 202X, we change the default, so that T: Deref does not add any special bounds, and existing Rust 2024 T: Deref is rewritten to T: Deref<Target: SizeOfVal> as needed.
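For background on why the Deref bound matters: Target is already allowed to be unsized today (e.g. str), and the question above is whether it can be relaxed further to permit extern types. A minimal stable-Rust illustration with a made-up Wrapper type:

```rust
use std::ops::Deref;

// Made-up type that dereferences to an unsized target.
struct Wrapper(String);

impl Deref for Wrapper {
    // str is !Sized, so Target already admits some unsized types; the
    // discussion above is about also allowing extern types here.
    type Target = str;
    fn deref(&self) -> &str {
        &self.0
    }
}

fn main() {
    let w = Wrapper("hi".into());
    assert_eq!(&*w, "hi");
    assert_eq!(w.len(), 2); // deref coercion gives access to &str methods
    println!("ok");
}
```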
Other notes
One topic that came up in discussion is that we may eventually wish to add a level "below" Pointee, perhaps Value, that signifies webassembly external values which cannot be pointed at. That is not currently under consideration but should be backwards compatible.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
19 Nov 2025 12:00am GMT
18 Nov 2025
Planet Mozilla
Mozilla Thunderbird: Thunderbird Adds Native Microsoft Exchange Email Support
If your organization uses Microsoft Exchange-based email, you'll be happy to hear that Thunderbird's latest monthly release, version 145, now officially supports native access via the Exchange Web Services (EWS) protocol. With EWS built directly into Thunderbird, a third-party add-on is no longer required for email functionality. Calendar and address book support for Exchange accounts remain on the roadmap, but email integration is here and ready to use!
What changes for Thunderbird users
Until now, Thunderbird users in Exchange hosted environments often relied on IMAP/POP protocols or third-party extensions. With full native Exchange support for email, Thunderbird now works more seamlessly in Exchange environments, including full folder listings, message synchronization, folder management both locally and on the server, attachment handling, and more. This simplifies life for users who depend on Exchange for email but prefer Thunderbird as their client.
How to get started
For many people switching from Outlook to Thunderbird, the most common setup involves Microsoft-hosted Exchange accounts such as Microsoft 365 or Office 365. Thunderbird now uses Microsoft's standard sign-in process (OAuth2) and automatically detects your account settings, so you can start using your email right away without any extra setup.
If this applies to you, setup is straightforward:
- Create a new account in Thunderbird 145 or newer.
- In the new Account Hub, select Exchange (or Exchange Web Services in legacy setup).
- Let Thunderbird handle the rest!
Important note: If you see something different, or need more details or advice, please see our support page and wiki page. Also, some authentication configurations are not supported yet and you may need to wait for a further update that expands compatibility; please refer to the table below for more details.
What functionality is supported now and what's coming soon
As mentioned earlier, EWS support in version 145 currently enables email functionality only. Calendar and address book integration are in active development and will be added in future releases. The chart below provides an at-a-glance view of what's supported today.
| Feature area | Supported now | Not yet supported |
| --- | --- | --- |
| Email - account setup & folder access | Creating accounts via auto-config with EWS, server-side folder manipulation | - |
| Email - message operations | Viewing messages, sending, replying/forwarding, moving/copying/deleting | - |
| Email - attachments | Attachments can be saved and displayed with detach/delete support. | - |
| Search & filtering | Search subject and body, quick filtering | Filter actions requiring full body content are not yet supported. |
| Accounts hosted on Microsoft 365 | Domains using the standard Microsoft OAuth2 endpoint | Domains requiring custom OAuth2 application and tenant IDs will be supported in the future. |
| Accounts hosted on-premise | Password-based Basic authentication | Password-based NTLM authentication and OAuth2 for on-premise servers are on the roadmap. |
| Calendar support | - | Not yet implemented - calendar syncing is on the roadmap. |
| Address book / contacts support | - | Not yet implemented - address book support is on the roadmap. |
| Microsoft Graph support | - | Not yet implemented - Microsoft Graph integration will be added in the future. |
Exchange Web Services and Microsoft Graph
While many people and organizations still rely on Exchange Web Services (EWS), Microsoft has begun gradually phasing it out in favor of a newer, more modern interface called Microsoft Graph. Microsoft has stated that EWS will continue to be supported for the foreseeable future, but over time, Microsoft Graph will become the primary way to connect to Microsoft 365 services.
Because EWS remains widely used today, we wanted to deliver full support for it first to ensure compatibility for existing users. At the same time, we're actively working to add support for Microsoft Graph, so Thunderbird will be ready as Microsoft transitions to its new standard.
Looking ahead
While Exchange email is available now, calendar and address book integration is on the way, bringing Thunderbird closer to being a complete solution for Exchange users. For many people, having reliable email access is the most important step, but if you depend on calendar and contact synchronization, we're working hard to bring this to Thunderbird in the near future, making Thunderbird a strong alternative to Outlook.
Keep an eye on future releases for additional support and integrations, but in the meantime, enjoy a smoother Exchange email experience within your favorite email client!
If you want to know more about Exchange support in Thunderbird, please refer to the dedicated page on support.mozilla.org. Organization admins can also find out more on the Mozilla wiki page. To follow ongoing and future work in this area, please refer to the relevant meta-bug on Bugzilla.
The post Thunderbird Adds Native Microsoft Exchange Email Support appeared first on The Thunderbird Blog.
18 Nov 2025 3:15pm GMT
The Rust Programming Language Blog: Google Summer of Code 2025 results
As we have announced previously this year, the Rust Project participated in Google Summer of Code (GSoC) for the second time. Almost twenty contributors have been working very hard on their projects for several months. Same as last year, the projects had various durations, so some of them have ended in September, while the last ones have been concluded in the middle of November. Now that the final reports of all projects have been submitted, we are happy to announce that 18 out of 19 projects have been successful! We had a very large number of projects this year, so we consider this number of successfully finished projects to be a great result.
We had awesome interactions with our GSoC contributors over the summer, and through a video call, we also had a chance to see each other and discuss the accepted GSoC projects. Our contributors have learned a lot of new things and collaborated with us on making Rust better for everyone, and we are very grateful for all their contributions! Some of them have even continued contributing after their project has ended, and we hope to keep working with them in the future, to further improve open-source Rust software. We would like to thank all our Rust GSoC 2025 contributors. You did a great job!
Same as last year, Google Summer of Code 2025 was overall a success for the Rust Project, this time with more than double the number of projects. We think that GSoC is a great way of introducing new contributors to our community, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our GSoC project idea list and our guide for new contributors.
Below you can find a brief summary of our GSoC 2025 projects. You can find more information about the original goals of the projects here. For easier navigation, here is an index of the project descriptions in alphabetical order:
- ABI/Layout handling for the automatic differentiation feature by Marcelo Domínguez
- Add safety contracts by Dawid Lachowicz
- Bootstrap of rustc with rustc_codegen_gcc by Michał Kostrubiec
- Cargo: Build script delegation by Naman Garg
- Distributed and resource-efficient verification by Jiping Zhou
- Enable Witness Generation in cargo-semver-checks by Talyn Veugelers
- Extend behavioural testing of std::arch intrinsics by Madhav Madhusoodanan
- Implement merge functionality in bors by Sakibul Islam
- Improve bootstrap by Shourya Sharma
- Improve Wild linker test suites by Kei Akiyama
- Improving the Rustc Parallel Frontend: Parallel Macro Expansion by Lorrens Pantelis
- Make cargo-semver-checks faster by Joseph Chung
- Make Rustup Concurrent by Francisco Gouveia
- Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices by Julien Robert
- Modernising the libc Crate by Abdul Muiz
- Prepare stable_mir crate for publishing by Makai
- Prototype an alternative architecture for cargo fix using cargo check by Glen Thalakottur
- Prototype Cargo Plumbing Commands by Vito Secona
And now strap in, as there is a ton of great content to read about here!
ABI/Layout handling for the automatic differentiation feature
- Contributor: Marcelo Domínguez
- Mentors: Manuel Drehwald, Oli Scherer
- Final report
The std::autodiff module allows computing gradients and derivatives in the calculus sense. It provides two autodiff macros, which can be applied to user-written functions and automatically generate modified versions of those functions, which also compute the requested gradients and derivatives. This functionality is very useful especially in the context of scientific computing and implementation of machine-learning models.
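To illustrate what the generated functions compute (the code below is a hand-written sketch, not the actual macro output or API), the derivative of f(x) = x² is 2x, which a finite-difference check confirms:

```rust
fn square(x: f64) -> f64 {
    x * x
}

// What a reverse-mode autodiff pass would produce for `square`: the
// gradient 2x (written by hand here purely for illustration).
fn d_square(x: f64) -> f64 {
    2.0 * x
}

fn main() {
    let (x, h) = (3.0, 1e-6);
    // Central finite difference approximates the derivative numerically.
    let approx = (square(x + h) - square(x - h)) / (2.0 * h);
    assert!((approx - d_square(x)).abs() < 1e-4);
    println!("ok");
}
```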
Our autodiff frontend was facing two challenges.
- First, we would generate a new function through our macro expansion, however, we would not have a suitable function body for it yet. Our autodiff implementation relies on an LLVM plugin to generate the function body. However, this plugin only gets called towards the end of the compilation pipeline. Earlier optimization passes, either on the LLVM or the Rust side, could look at the placeholder body and either "optimize" or even delete the function since it has no clear purpose yet.
- Second, the flexibility of our macros was causing issues, since it allows requesting derivative computations on a per-argument basis. However, when we start to lower Rust arguments to our compiler backends like LLVM, we do not always have a 1:1 match of Rust arguments to LLVM arguments. As a simple example, an array with two double values might be passed as two individual double values on LLVM level, whereas an array with three doubles might be passed via a pointer.
Marcelo helped rewrite our autodiff macros to not generate hacky placeholder function bodies, but instead introduced a proper autodiff intrinsic. This is the proper way for us to declare that an implementation of this function is not available yet and will be provided later in the compilation pipeline. As a consequence, our generated functions were not deleted or incorrectly optimized anymore. The intrinsic PR also allowed removing some previous hacks and therefore reduced the total lines of code in the Rust compiler by over 500! You can find more details in this PR.
Beyond autodiff work, Marcelo also initiated work on GPU offloading intrinsics, and helped with multiple bugs in our argument handling. We would like to thank Marcelo for all his great work!
Add safety contracts
- Contributor: Dawid Lachowicz
- Mentor: Michael Tautschnig
- Final report
The Rust Project has an ambitious goal to instrument the Rust standard library with safety contracts, moving from informal comments that specify safety requirements of unsafe functions to executable Rust code. This transformation represents a significant step toward making Rust's safety guarantees more explicit and verifiable. To prioritize which functions should receive contracts first, there is a verification contest ongoing.
Given that Rust contracts are still in their early stages, Dawid's project was intentionally open-ended in scope and direction. This flexibility allowed Dawid to identify and tackle several key areas that would add substantial value to the contracts ecosystem. His contributions were in the following three main areas:
-
Pragmatic Contracts Integration: Refactoring contract HIR lowering to ensure no contract code is executed when contract-checks are disabled. This has major impact as it ensures that contracts do not have runtime cost when contract checks are disabled.
-
Variable Reference Capability: Adding the ability to refer to variables from preconditions within postconditions. This fundamental enhancement to the contracts system has been fully implemented and merged into the compiler. This feature provides developers with much more expressive power when writing contracts, allowing them to establish relationships between input and output states.
-
Separation Logic Integration: The bulk of Dawid's project involved identifying, understanding, and planning the introduction of owned and block ownership predicates for separation-logic style reasoning in contracts for unsafe Rust code. This work required extensive research and collaboration with experts in the field. Dawid engaged in multiple discussions with authors of Rust validation tools and Miri developers, both in person and through Zulip discussion threads. The culmination of this research is captured in a comprehensive MCP (Major Change Proposal) that Dawid created.
Dawid's work represents crucial foundational progress for Rust's safety contracts initiative. By successfully implementing variable reference capabilities and laying the groundwork for separation logic integration, he has positioned the contracts feature for significant future development. His research and design work will undoubtedly influence the direction of this important safety feature as it continues to mature. Thank you very much!
Bootstrap of rustc with rustc_codegen_gcc
- Contributor: Michał Kostrubiec
- Mentor: antoyo
- Final report
The goal of this project was to improve the Rust GCC codegen backend (rustc_codegen_gcc), so that it would be able to compile the "stage 2"1 Rust compiler (rustc) itself again.
You might remember that Michał already participated in GSoC last year, where he was working on his own .NET Rust codegen backend, and he did an incredible amount of work. This year, his progress was somehow even faster. Even before the official GSoC implementation period started (!), he essentially completed his original project goal and managed to build rustc with GCC. This was no small feat, as he had to investigate and fix several miscompilations that occurred when functions marked with #[inline(always)] were called recursively or when the compiled program was trying to work with 128-bit integers. You can read more about this initial work at his blog.
After that, he immediately started working on stretch goals of his project. The first one was to get a "stage-3" rustc build working, for which he had to vastly reduce the memory consumption of the codegen backend.
Once that was done, he moved on to yet another goal, which was to build rustc for a platform not supported by LLVM. He made progress on this for DEC Alpha and m68k. He also attempted to compile rustc on AArch64, which led him to find an ABI bug. Ultimately, he managed to build a rustc for m68k (with a few workarounds that we will need to fix in the future). That is a very nice first step to porting Rust to new platforms unsupported by LLVM, and is important for initiatives such as Rust for Linux.
Michał had to spend a lot of time staring into assembly code and investigating arcane ABI problems. In order to make this easier for everyone, he implemented support for fuzzing and automatically checking ABI mismatches in the GCC codegen backend. You can read more about his testing and fuzzing efforts here.
We were really impressed with what Michał was able to achieve, and we really appreciated working with him this summer. Thank you for all your work, Michał!
Cargo: Build script delegation
- Contributor: Naman Garg
- Mentor: Ed Page
- Final report
Cargo build scripts come at a compile-time cost, because even to run cargo check, they must be built as if you ran cargo build, so that they can be executed during compilation. Even though we try to identify ways to reduce the need to write build scripts in the first place, that may not always be doable. However, if we could shift build scripts from being defined in every package that needs them, into a few core build script packages, we could both reduce the compile-time overhead, and also improve their auditability and transparency. You can find more information about this idea here.
The first step required to delegate build scripts to packages is to be able to run multiple build scripts per crate, so that is what Naman was primarily working on. He introduced a new unstable multiple-build-scripts feature to Cargo, implemented support for parsing an array of build scripts in Cargo.toml, and extended Cargo so that it can now execute multiple build scripts while building a single crate. He also added a set of tests to ensure that this feature will work as we expect it to.
Then he worked on ensuring that the execution of build scripts is performed in a deterministic order, and that crates can access the output of each build script separately. For example, if you have the following configuration:
[package]
build = ["windows-manifest.rs", "release-info.rs"]
then the corresponding crate is able to access the OUT_DIRs of both build scripts using env!("windows-manifest_OUT_DIR") and env!("release-info_OUT_DIR").
As future work, we would like to implement the ability to pass parameters to build scripts through metadata specified in Cargo.toml and then implement the actual build script delegation to external build scripts using artifact-dependencies.
We would like to thank Naman for helping improving Cargo and laying the groundwork for a feature that could have compile-time benefits across the Rust ecosystem!
Distributed and resource-efficient verification
- Contributor: Jiping Zhou
- Mentor: Michael Tautschnig
- Final report
The goal of this project was to address critical scalability challenges of formally verifying Rust's standard library by developing a distributed verification system that intelligently manages computational resources and minimizes redundant work. The Rust standard library verification project faces significant computational overhead when verifying large codebases, as traditional approaches re-verify unchanged code components. With Rust's standard library containing thousands of functions and continuous development cycles, this inefficiency becomes a major bottleneck for practical formal verification adoption.
Jiping implemented a distributed verification system with several key innovations:
- Intelligent Change Detection: The system uses hash-based analysis to identify which parts of the codebase have actually changed, allowing verification to focus only on modified components and their dependencies.
- Multi-Tool Orchestration: The project coordinates multiple verification backends including Kani model checker, with careful version pinning and compatibility management.
- Distributed Architecture: The verification workload is distributed across multiple compute nodes, with intelligent scheduling that considers both computational requirements and dependency graphs.
- Real-time Visualization: Jiping built a comprehensive web interface that provides live verification status, interactive charts, and detailed proof results. You can check it out here!
You can find the created distributed verification tool in this repository. Jiping's work established a foundation for scalable formal verification that can adapt to the growing complexity of Rust's ecosystem, while maintaining verification quality and completeness, which will go a long way towards ensuring that Rust's standard library remains safe and sound. Thank you for your great work!
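The change-detection idea above can be sketched in a few lines. Everything here (the names, the use of Rust's default hasher, the string-keyed map) is illustrative, not the project's actual implementation:

```rust
use std::collections::HashMap;

// Hash the contents of a verification unit (e.g. a function body).
fn content_hash(contents: &str) -> u64 {
    use std::hash::{Hash, Hasher};
    let mut h = std::collections::hash_map::DefaultHasher::new();
    contents.hash(&mut h);
    h.finish()
}

// Return only the items whose hash differs from the previous run,
// so verification can skip everything unchanged.
fn changed(prev: &HashMap<String, u64>, current: &[(String, String)]) -> Vec<String> {
    current
        .iter()
        .filter(|(name, body)| prev.get(name) != Some(&content_hash(body)))
        .map(|(name, _)| name.clone())
        .collect()
}

fn main() {
    let mut prev = HashMap::new();
    prev.insert("core::mem::swap".to_string(), content_hash("fn swap(a, b) { ... }"));
    let current = vec![
        ("core::mem::swap".to_string(), "fn swap(a, b) { ... }".to_string()),
        ("core::mem::take".to_string(), "fn take(x) { ... }".to_string()),
    ];
    // Only the new item needs re-verification.
    println!("{:?}", changed(&prev, &current));
}
```

A real system would also re-verify dependents of changed items by walking the dependency graph, as the project does.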
Enable Witness Generation in cargo-semver-checks
- Contributor: Talyn Veugelers
- Mentor: Predrag Gruevski
- Final report
cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. Talyn's project aimed to lay the groundwork for it to tackle our most vexing limitation: the inability to catch SemVer breakage due to type changes.
Imagine a crate makes the following change to its public API:
// baseline version (signature illustrative)
pub fn example(value: i64) {}

// new version
pub fn example(value: String) {}
This is clearly a major breaking change, right? And yet cargo-semver-checks with its hundreds of lints is still unable to flag this. While this case seems trivial, it's just the tip of an enormous iceberg. Instead of changing i64 to String, what if the change was from i64 to impl Into<i64>, or worse, to some monstrosity combining several nested impl Trait bounds?
Figuring out whether such a change is breaking requires checking whether the original i64 parameter type can "fit" into that monstrosity of an impl Trait type. But reimplementing a Rust type checker and trait solver inside cargo-semver-checks is out of the question! Instead, we turn to a technique created for a previous study of SemVer breakage on crates.io: we generate a "witness" program that will fail to compile if, and only if, there's a breaking change between the two versions.
The witness program is a separate crate that can be made to depend on either the old or the new version of the crate being scanned. If our example function comes from a crate called upstream, its witness program would look something like:
// take the same parameter type as the baseline version
fn witness(value: i64) {
    upstream::example(value);
}
This example is cherry-picked to be easy to understand. Witness programs are rarely this straightforward!
Attempting to cargo check the witness while plugging in the new version of upstream forces the Rust compiler to decide whether i64 matches the new impl Trait parameter. If cargo check passes without errors, there's no breaking change here. But if there's a compilation error, then this is concrete, incontrovertible evidence of breakage!
Over the past 22+ weeks, Talyn worked tirelessly to move this from an idea to a working proof of concept. For every problem we foresaw needing to solve, ten more emerged along the way. Talyn did a lot of design work to figure out an approach that would be able to deal with crates coming from various sources (crates.io, a path on disk, a git revision), would support multiple rustdoc JSON formats for all the hundreds of existing lints, and do so in a fashion that doesn't get in the way of adding hundreds more lints in the future.
Even the above list of daunting challenges fails to do justice to the complexity of this project. Talyn created a witness generation prototype that lays the groundwork for robust checking of type-related SemVer breakages in the future. The success of this work is key to the cargo-semver-checks roadmap for 2026 and beyond. We would like to thank Talyn for their work, and we hope to continue working with them on improving witness generation in the future.
Extend behavioural testing of std::arch intrinsics
- Contributor: Madhav Madhusoodanan
- Mentor: Amanieu d'Antras
- Final report
The std::arch module contains target-specific intrinsics (low-level functions that typically correspond to single machine instructions) which are intended to be used by other libraries. These are intended to match the equivalent intrinsics available as vendor-specific extensions in C.
The intrinsics are tested with three approaches. We test that:
- The signatures of the intrinsics match the one specified by the architecture.
- The intrinsics generate the correct instruction.
- The intrinsics have the correct runtime behavior.
These behavior tests are implemented in the intrinsics-test crate. Initially, this test framework only covered the AArch64 and AArch32 targets, where it was very useful in finding bugs in the implementation of the intrinsics. Madhav's project was about refactoring and improving this framework to make it easier (or really, possible) to extend it to other CPU architectures.
First, Madhav split the codebase into a module with shared (architecturally independent) code and a module with ARM-specific logic. Then he implemented support for testing intrinsics for the x86 architecture, which is Rust's most widely used target. In doing so, he allowed us to discover real bugs in the implementation of some intrinsics, which is a great result! Madhav also did a lot of work in optimizing how the test suite is compiled and executed, to reduce CI time needed to run tests, and he laid the groundwork for supporting even more architectures, specifically LoongArch and WebAssembly.
We would like to thank Madhav for all his work on helping us make sure that Rust intrinsics are safe and correct!
Implement merge functionality in bors
- Contributor: Sakibul Islam
- Mentor: Jakub Beránek
- Final report
The main Rust repository uses a pull request merge queue bot that we call bors. Its current Python implementation has a lot of issues and is difficult to maintain. The goal of this GSoC project was thus to implement the primary merge queue functionality in our Rust rewrite of this bot.
Sakibul first examined the original Python codebase to figure out what it was doing, and then he implemented several bot commands that allow contributors to approve PRs, set their priority, delegate approval rights, temporarily close the merge tree, and many others. He also implemented an asynchronous background process that checks whether a given pull request is mergeable or not (this process is relatively involved, due to how GitHub works), which required implementing a specialized synchronized queue for deduplicating mergeability check requests to avoid overloading the GitHub API. Furthermore, Sakibul also reimplemented (a nicer version of) the merge queue status webpage that can be used to track which pull requests are currently being tested on CI, which ones are approved, etc.
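The deduplicating queue mentioned above can be sketched as follows. This is an illustrative, single-threaded stand-in, not the actual bors code (which also has to synchronize across async tasks): it simply refuses to enqueue a PR number that is already waiting for a mergeability check.

```rust
use std::collections::{HashSet, VecDeque};

// Illustrative: a FIFO queue of PR numbers that ignores duplicates,
// so repeated webhook events don't trigger redundant GitHub API calls.
struct DedupQueue {
    queued: HashSet<u64>,
    order: VecDeque<u64>,
}

impl DedupQueue {
    fn new() -> Self {
        Self { queued: HashSet::new(), order: VecDeque::new() }
    }

    // Enqueue a PR check request, unless one is already pending.
    fn push(&mut self, pr: u64) {
        if self.queued.insert(pr) {
            self.order.push_back(pr);
        }
    }

    // Take the next PR to check, allowing it to be re-queued later.
    fn pop(&mut self) -> Option<u64> {
        let pr = self.order.pop_front()?;
        self.queued.remove(&pr);
        Some(pr)
    }
}

fn main() {
    let mut q = DedupQueue::new();
    q.push(101);
    q.push(102);
    q.push(101); // duplicate request, ignored
    assert_eq!(q.pop(), Some(101));
    assert_eq!(q.pop(), Some(102));
    assert_eq!(q.pop(), None);
    println!("ok");
}
```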
After the groundwork was prepared, Sakibul could work on the merge queue itself, which required him to think about many tricky race conditions and edge cases to ensure that bors doesn't e.g. merge the wrong PR into the default branch or merge a PR multiple times. He covered these edge cases with many integration tests, to give us more confidence that the merge queue will work as we expect it to, and also prepared a script for creating simulated PRs on a test GitHub repository so that we can test bors "in the wild". And so far, it seems to be working very well!
After we finish the final piece of the merge logic (creating so-called "rollups") together with Sakibul, we will start using bors fully in the main Rust repository. Sakibul's work will thus be used to merge all rust-lang/rust pull requests. Exciting!
Apart from working on the merge queue, Sakibul made many other awesome contributions to the codebase, like refactoring the test suite or analyzing performance of SQL queries. In total, Sakibul sent around fifty pull requests that were already merged into bors! What can we say, other than: Awesome work Sakibul, thank you!
Improve bootstrap
- Contributor: Shourya Sharma
- Mentors: Jakub Beránek, Jieyou Xu, Onur Özkan
- Final report
bootstrap is the build system of Rust itself, which is responsible for building the compiler, standard library, and pretty much everything else that you can download through rustup. This project's goal was very open-ended: "improve bootstrap".
And Shourya did just that! He made meaningful contributions to several parts of bootstrap. First, he added much-needed documentation to several core bootstrap data structures and modules, which were quite opaque and hard to understand without any docs. Then he moved on to improving command execution, as each bootstrap invocation invokes hundreds of external binaries, and it was difficult to track them. Shourya finished a long-standing refactoring that routes almost all executed commands through a single place. This allowed him to implement command caching as well as command profiling, which shows us which commands are the slowest.
After that, Shourya moved on to refactoring config parsing. This was no easy task, because bootstrap has A LOT of config options; the single function that parses them had over a thousand lines of code (!). A set of complicated config precedence rules was frequently causing bugs when we had to modify that function. It took him several weeks to untangle this mess, but the result is worth it. The refactored function is much less brittle and easier to understand and modify, which is great for future maintenance.
The final area that Shourya improved was bootstrap tests. He made it possible to run them using bare cargo, which enables debugging them e.g. in an IDE, and mainly he found a way to run the tests in parallel, which makes contributing to bootstrap itself much more pleasant, as it reduced the time to execute the tests from a minute to under ten seconds. These changes required refactoring many bootstrap tests that were using global state, which was not compatible with parallel execution.
Overall, Shourya made more than 30 PRs to bootstrap since April! We are very thankful for all his contributions, as they made bootstrap much easier to maintain. Thank you!
Improve Wild linker test suites
- Contributor: Kei Akiyama
- Mentor: David Lattimore
- Final report
Wild is a very fast linker for Linux that's written in Rust. It can be used to build executables and shared objects.
Kei's project was to leverage the test suite of one of the other Linux linkers to help test the Wild linker. This goal was accomplished. Thanks to Kei's efforts, we now run the Mold test suite against Wild in our CI. This has helped to prevent regressions on at least a couple of occasions and has also helped to show places where Wild has room for improvement.
In addition to this core work, Kei also undertook numerous other changes to Wild during GSoC. Of particular note was the reworking of argument parsing to support --help, which we had wanted for some time. Kei also fixed a number of bugs and implemented various previously missing features. This work has helped to expand the range of projects that can use Wild to build executables.
Kei has continued to contribute to Wild even after the GSoC project finished and has now contributed over seventy PRs. We thank Kei for all the hard work and look forward to continued collaboration in the future!
Improving the Rustc Parallel Frontend: Parallel Macro Expansion
- Contributor: Lorrens Pantelis
- Mentors: Sparrow Li, Vadim Petrochenkov
- Final report
The Rust compiler has a (currently unstable) parallel compilation mode in which some compiler passes run in parallel. One major part of the compiler that is not yet affected by parallelization is name resolution. It has several components, but those selected for this GSoC project were import resolution and macro expansion (which are in fact intermingled into a single fixed-point algorithm). Besides the parallelization itself, another important point of the work was improving the correctness of import resolution by eliminating accidental order dependencies in it, as those also prevent parallelization.
We should note that this was a very ambitious project, and we knew from the beginning that it would likely be quite challenging to reach the end goal within the span of just a few months. And indeed, Lorrens did in fact run into several unexpected issues that showed us that the complexity of this work is well beyond a single GSoC project, so he didn't actually get to parallelizing the macro expansion algorithm. Nevertheless, he did a lot of important work to improve the name resolver and prepare it for being parallelized.
The first thing that Lorrens had to do was actually understand how Rust name resolution works and how it is implemented in the compiler. That is, to put it mildly, a very complex piece of logic, and it is affected by legacy burden in the form of backward compatibility lints, outdated naming conventions, and other technical debt. Even this learned knowledge itself is incredibly useful, as the set of people who understand Rust's name resolution today is very small, so it is important to grow it.
Using this knowledge, he made a lot of refactorings to separate significant mutability in name resolver data structures from "cache-like" mutability used for things like lazily loading otherwise immutable data from extern crates, which was needed to unblock parallelization work. He split various parts of the name resolver, got rid of unnecessary mutability and performed a bunch of other refactorings. He also had to come up with a very tricky data structure that allows providing conditional mutable access to some data.
These refactorings allowed him to implement something called "batched import resolution", which splits unresolved imports in the crate into "batches", where all imports in a single batch can be resolved independently and potentially in parallel, which is crucial for parallelizing name resolution. We have to resolve a few remaining language compatibility issues, after which the batched import resolution work will hopefully be merged.
Lorrens laid important groundwork for fixing potential correctness issues around name resolution and macro expansion, which unblocks further work on parallelizing these compiler passes. His work also helped unblock some library improvements that had been stuck for a long time. We are grateful for your hard work on improving tricky parts of Rust and its compiler, Lorrens. Thank you!
Make cargo-semver-checks faster
- Contributor: Joseph Chung
- Mentor: Predrag Gruevski
- Final report
cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. It is adding SemVer lints at an exponential pace: the number of lints has been doubling every year, and currently stands at 229. More lints mean more work for cargo-semver-checks to do, as well as more work for its test suite which runs over 250000 lint checks!
Joseph's contributions took three forms:
- Improving cargo-semver-checks runtime performance: on large crates, our query runtime went from ~8s to ~2s, a 4x improvement!
- Improving the test suite's performance, enabling us to iterate faster. Our test suite used to take ~7min and now finishes in ~1min, a 7x improvement!
- Improving our ability to profile query performance and inspect performance anomalies, both of which were proving a bottleneck for our ability to ship further improvements.
Joseph described all the clever optimization tricks leading to these results in his final report. To encourage you to check out the post, we'll highlight a particularly elegant optimization described there.
cargo-semver-checks relies on rustdoc JSON, an unstable component of Rust whose output format often has breaking changes. Since each release of cargo-semver-checks supports a range of Rust versions, it must also support a range of rustdoc JSON formats. Fortunately, each file carries a version number that tells us which version's serde types to use to deserialize the data.
Previously, we deserialized the JSON file twice: once with a serde type that only loaded the format_version: u32 field, and a second time with the appropriate serde type matching that format. This works fine, but many large crates generate rustdoc JSON files that are 500 MiB+ in size, requiring us to walk all that data twice. While serde is quite fast, there's nothing as fast as not doing the work twice in the first place!
So we used a trick: optimistically check whether the format_version field is the last field in the JSON file, which happens to be the case in practice (even though it is not guaranteed). Rather than parsing the JSON, we merely look for a , character in the last few dozen bytes, then look for a : after it, with format_version between them. If this succeeds, we've discovered the version number while avoiding a pass over hundreds of megabytes of data! If it fails for any reason, we just fall back to the original approach, having only wasted the effort of looking at 20-ish extra bytes.
Joseph did a lot of profiling and performance optimizations to make cargo-semver-checks faster for everyone, with awesome results. Thank you very much for your work!
Make Rustup Concurrent
- Contributor: Francisco Gouveia
- Mentor: rami3l
- Final report
Since the introduction of a global tokio runtime in #3367, the Rustup team has envisioned migrating the rustup codebase to async I/O. As an important part of that vision, this project's goal was to introduce proper concurrency to rustup. Francisco did that by attacking two aspects of the codebase at once:
- He created a new set of user interfaces for displaying concurrent progress.
- He implemented a new toolchain update checking & installation flow that is idiomatically concurrent.
As a warmup, Francisco made rustup check concurrent, resulting in a rather easy 3x performance boost in certain cases. Along the way, he also introduced a new indicatif-based progress bar for reporting progress of concurrent operations, which replaced the original hand-rolled solution.
After that, the focus of the project moved to the toolchain installation flow used in commands like rustup toolchain install and rustup update. In this part, Francisco developed two main improvements:
- The ability to download multiple components at once when setting up a toolchain, controlled by the RUSTUP_CONCURRENT_DOWNLOADS environment variable. Setting this variable to a value greater than 1 is particularly useful in certain internet environments where the speed of a single download connection could be restricted by QoS (Quality of Service) limits.
- The ability to interleave component network downloads and disk unpacking. For the moment, unpacking still happens sequentially, but disk and network I/O can finally be overlapped! This yields a net gain in toolchain installation time, as only the last component being downloaded will have noticeable unpacking delays. In our tests, this typically results in a reduction of 4-6 seconds (on fast connections, that's ~33% faster!) when setting up a toolchain with the default profile.
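The shape of the concurrent-download improvement can be illustrated with a tiny thread-based sketch. Real rustup uses async tokio tasks and actual network I/O; the component names and the download function here are stand-ins:

```rust
use std::thread;

// Stand-in for fetching one toolchain component over the network.
fn download(component: &str) -> String {
    format!("{} fetched", component)
}

fn main() {
    let components = ["rustc", "cargo", "rust-std"];

    // Kick off all downloads at once instead of one after another.
    let handles: Vec<_> = components
        .iter()
        .map(|c| {
            let c = c.to_string();
            thread::spawn(move || download(&c))
        })
        .collect();

    // Collect results in the original order.
    let results: Vec<String> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(results.len(), 3);
    println!("{}", results.join(", "));
}
```

With real network fetches, total wall-clock time approaches that of the slowest single download rather than the sum of all of them, which is where the speedup comes from.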
We have to say that these results are very impressive! While a few seconds shorter toolchain installation might not look so important at a first glance, rustup is ubiquitously used to install Rust toolchains on CI of tens of thousands of Rust projects, so this improvement (and also further improvements that it unlocks) will have an enormous effect across the Rust ecosystem. Many thanks to Francisco Gouveia's enthusiasm and active participation, without which this wouldn't have worked out!
Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices
- Contributor: Julien Robert
- Mentor: Jieyou Xu
- Final report
The snapshot-based UI test suite is a crucial part of the Rust compiler's test suite. It contains a lot of tests: over 19000 at the time of writing. The organization of this test suite is thus very important, for at least two reasons:
- We want to be able to find specific tests, identify related tests, and have some sort of logical grouping of related tests.
- We have to ensure that no directory contains so many entries that GitHub gives up rendering the directory.
Furthermore, having informative test names and having some context for each test is particularly important, as otherwise contributors would have to reverse-engineer test intent from git blame and friends.
Over the years, we have accumulated a lot of unorganized stray test files in the top level tests/ui directory, and have a lot of generically named issue-*.rs tests in the tests/ui/issues/ directory. The former makes it annoying to find more meaningful subdirectories, while the latter makes it completely non-obvious what each test is about.
Julien's project was about introducing some order into the chaos. And that was indeed achieved! Through Julien's efforts (in conjunction with efforts from other contributors), we now have:
- No more stray tests under the immediate tests/ui/ top-level directory; they are now organized into more meaningful subdirectories. This then allowed us to introduce a style check that prevents new stray tests from being added.
- A top-level document containing TL;DRs for each of the immediate subdirectories.
- Substantially fewer generically-named issue-*.rs tests under tests/ui/issues/.
Test organization (and more generally, test suite ergonomics) is an often under-appreciated aspect of maintaining complex codebases. Julien spent a lot of effort improving the test ergonomics of the Rust compiler, both in last year's GSoC (where he vastly improved our "run-make" test suite) and again this year, where he made our UI test suite more ergonomic. We appreciate your meticulous work, Julien! Thank you very much.
Modernising the libc Crate
- Contributor: Abdul Muiz
- Mentor: Trevor Gross
- Final report
libc is a crucial crate in the Rust ecosystem (on average, it has ~1.5 million daily downloads), providing bindings to system C APIs. This GSoC project had two goals: improve testing for what we currently have, and make progress toward a stable 1.0 release of libc.
Test generation is handled by the ctest crate, which creates unit tests that compare properties of Rust API to properties of the C interfaces it binds. Prior to the project, ctest used an obsolete Rust parser that had stopped receiving major updates about eight years ago, meaning libc could not easily use any syntax newer than that. Abdul completely rewrote ctest to use syn as its parser and make it much easier to add new tests, then went through and switched everything over to the more modern ctest. After this change, we were able to remove a number of hacks that had been needed to work with the old parser.
The other part of the project was to make progress toward the 1.0 release of libc. Abdul helped with this by going through and addressing a number of issues that need to be resolved before the release, many of which were made possible with all the ctest changes.
While there is still a lot of work left to do before libc can reach 1.0, Abdul's improvements will go a long way towards making that work easier, as they give us more confidence in the test suite, which is now much easier to modify and extend. Thank you very much for all your work!
Prepare stable_mir crate for publishing
- Contributor: Makai
- Mentor: Celina Val
- Final report
This project's goal was to prepare the Rust compiler's stable_mir crate (eventually renamed to rustc_public), which provides a way to interface with the Rust compiler for analyzing Rust code, for publication on crates.io. While the existing crate provided easier APIs for tool developers, it lacked proper versioning and was tightly coupled with compiler versions. The goal was to enable independent publication with semantic versioning.
The main technical work involved restructuring rustc_public and rustc_public_bridge (previously named rustc_smir) by inverting their dependency relationship. Makai resolved circular dependencies by temporarily merging the crates and gradually separating them with the new architecture. They also split the existing compiler interface to separate public APIs from internal compiler details.
Furthermore, Makai established infrastructure for dual maintenance: keeping an internal version in the Rust repository to track compiler changes while developing the publishable version in a dedicated repository. Makai automated a system to coordinate between versions, and developed custom tooling to validate compiler version compatibility and to run tests.
Makai successfully completed the core refactoring and infrastructure setup, making it possible to publish rustc_public independently with proper versioning support for the Rust tooling ecosystem! As a bonus, Makai contributed several bug fixes and implemented new APIs that had been requested by the community. Great job Makai!
Prototype an alternative architecture for cargo fix using cargo check
- Contributor: Glen Thalakottur
- Mentor: Ed Page
- Final report
The cargo fix command applies fixes suggested by lints, which makes it useful for cleaning up sloppy code, reducing the annoyance of toolchain upgrades when lints change and helping with edition migrations and new lint adoption. However, it has a number of issues. It can be slow, it only applies a subset of possible lints, and doesn't provide an easy way to select which lints to fix.
These problems are caused by its current architecture; it is implemented as a variant of cargo check that replaces rustc with cargo being run in a special mode that will call rustc in a loop, applying fixes until there are none. While this special rustc-proxy mode is running, a cross-process lock is held to force only one build target to be fixed at a time to avoid race conditions. This ensures correctness at the cost of performance and difficulty in making the rustc-proxy interactive.
Glen implemented a proof of concept of an alternative design called cargo-fixit. cargo fixit spawns cargo check in a loop, determining which build targets are safe to fix in a given pass, and then applying the suggestions. This puts the top-level program in charge of what fixes get applied, making it easier to coordinate. It also allows the locking to be removed and opens the door to an interactive mode.
Glen performed various benchmarks to test how the new approach performs. And in some benchmarks, cargo fixit was able to finish within a few hundred milliseconds, where before the same task took cargo fix almost a minute! As always, there are trade-offs; the new approach comes at the cost that fixes in packages lower in the dependency tree can cause later packages to be rebuilt multiple times, slowing things down, so there were also benchmarks where the old design was a bit faster. The initial results are still very promising and impressive!
Further work remains to be done on cargo-fixit to investigate how it could be better optimized and what its interface should look like before stabilization. We thank Glen for all the hard work on this project, and we hope that one day the new design will become the default in Cargo, bringing faster and more flexible fixing of lint suggestions to everyone!
Prototype Cargo Plumbing Commands
- Contributor: Vito Secona
- Mentors: Cassaundra, Ed Page
- Final report
The goal of this project was to move forward our Project Goal for creating low-level ("plumbing") Cargo subcommands to make it easier to reuse parts of Cargo by other tools.
Vito created a prototype of several plumbing commands in the cargo-plumbing crate. The idea was to better understand what the plumbing commands should look like, and what is needed from Cargo to implement them. Vito had to make compromises in some of these commands to avoid being blocked on changes to the current Cargo Rust APIs, and he helpfully documented those blockers. For example, instead of solely relying on the manifests that the user passed in, the plumbing commands re-read the manifests within each command, which prevents callers from editing them to get specific behavior out of Cargo, e.g. dropping all workspace members to allow resolving dependencies on a per-package basis.
Vito did a lot of work, as he implemented seven different plumbing subcommands:
- locate-manifest
- read-manifest
- read-lockfile
- lock-dependencies
- write-lockfile
- resolve-features
- plan-build
As future work, we would like to deal with some unresolved questions around how to integrate these plumbing commands within Cargo itself, and extend the set of plumbing commands.
We thank Vito for all his work on improving the flexibility of Cargo.
Conclusion
We would like to thank all contributors that have participated in Google Summer of Code 2025 with us! It was a blast, and we cannot wait to see which projects GSoC contributors will come up with in the next year. We would also like to thank Google for organizing the Google Summer of Code program and for allowing us to have so many projects this year. And last, but not least, we would like to thank all the Rust mentors who were tirelessly helping our contributors to complete their projects. Without you, Rust GSoC would not be possible.
18 Nov 2025 12:00am GMT
17 Nov 2025
Planet Mozilla
The Mozilla Blog: Firefox tab groups just got an upgrade, thanks to your feedback

Tab groups have become one of Firefox's most loved ways to stay organized - over 18 million people have used the feature since it launched earlier this year. Since then, we've been listening closely to feedback from the Mozilla Connect community to make this long-awaited feature even more helpful.
We've just concluded a round of highly requested tab groups updates that make it easier than ever to stay focused, organized, and productive. Check out what we've been up to, and if you haven't tried tab groups yet, here's a helpful starting guide.
Preview tab group contents on hover
Starting in Firefox 145, you can peek inside a group without expanding it. Whether you're checking a stash of tabs set aside for deep research or quickly scanning a group to find the right meeting notes doc, hover previews give you the context you need - instantly.
Keep the active tab visible in a collapsed group - and drag tabs into it
Since Firefox 142, when you collapse a group, the tab you're working in remains visible. It's a small but mighty improvement that reduces interruptions. And, starting in Firefox 143, you can drag a tab directly into a collapsed group without expanding it. It's a quick, intuitive way to stay organized while reducing on-screen clutter.
Each of these ideas came from your feedback on Mozilla Connect. We're grateful for your engagement, creativity, and patience as our team works to improve Tab Groups.
What's next for tab groups
We've got a big, healthy stash of great ideas and suggestions to explore, but we'd love to hear more from you on two areas of long-term interest:
- Improving the usefulness and ease of use of saved tab groups. We're curious how you're using them and how we can make the experience more helpful to you. What benefits do they bring to your workflow compared to bookmarks?
- Workspaces. Some of you have requested a way to separate contexts by creating workspaces - sets of tabs and tab groups that are entirely isolated from each other, yet remain available within a single browser window. We are curious about your workspace use cases and where context separation via window management or profiles doesn't meet your workflow needs. Is collaboration an important feature of the workspaces for you?
Have ideas and suggestions? Let us know in this Mozilla Connect thread!

The post Firefox tab groups just got an upgrade, thanks to your feedback appeared first on The Mozilla Blog.
17 Nov 2025 2:00pm GMT
The Rust Programming Language Blog: Launching the 2025 State of Rust Survey
It's time for the 2025 State of Rust Survey!
The Rust Project has been collecting valuable information about the Rust programming language community through our annual State of Rust Survey since 2016, which means that this year marks the tenth edition of the survey!
We invite you to take this year's survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. The results will allow us to more deeply understand the global Rust community and how it evolves over time.
Like last year, the 2025 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until December 17. Trends and key insights will be shared on blog.rust-lang.org as soon as possible.
We are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:
- English
- Chinese (Simplified)
- Chinese (Traditional)
- French
- German
- Japanese
- Ukrainian
- Russian
- Spanish
- Portuguese (Brazil)
Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the translations, we would be glad if you could send us a pull request to improve the quality of the translations!
Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.
This survey would not be possible without the time, resources, and attention of the Rust Survey Team, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):
- @jieyouxu
- @adriantombu
- @llogiq
- @Marcono1234
- @tanakakz
- @YohDeadFall
- @Kivooeo
- @avrong
- @igarai
- @weihanglo
- @tyranron
- @leandrobbraga
Thank you!
If you have any questions, please see our frequently asked questions.
We appreciate your participation!
Click here to read a summary of last year's survey findings.
By the way, the Rust Survey team is looking for new members. If you like working with data and coordinating people, and would like to help us out with managing various Rust surveys, please drop by our Zulip channel and say hi.
17 Nov 2025 12:00am GMT
14 Nov 2025
Planet Mozilla
Mozilla Thunderbird: VIDEO: An Android Retrospective

If you can believe it, Thunderbird for Android has been out for just over a year! In this episode of our Community Office Hours, Heather and Monica check back in with the mobile team after our chat with them back in January. Sr. Software Engineer Wolf Montwé and our new Manager of Mobile Apps, Jon Bott, look back at what the growing mobile team has been able to accomplish this last year, what we're still working on, and what's up ahead.
We'll be back next month, talking with members of the desktop team all about Exchange support landing in Thunderbird 145!
Thunderbird for Android: One Year Later
The biggest visual change to the app since last year is the new Account Drawer. The mobile team wants to help users easily tell their accounts apart and switch between them. While this is still a work in progress, we've started making these changes in Thunderbird 11.0. We know not everyone is excited about UI changes, but we hope most users like these initial changes!
Another major but hidden change involves updating our very old code, which came from K-9 Mail. Much of the K-9 code goes back to 2009! Having to work with old code explains why some fixes or new features that should be simple turn out to be complex and time-consuming. Changes end up affecting more components than we expect, which causes delivery timelines to stretch from a week to months.
We are also still working to proactively eliminate tech debt, which will make the code more reliable and secure, plus allow future improvements and feature additions to be done more quickly. Even though the team didn't eliminate as much tech debt as they planned, they feel the work they've done this year will help reduce even more next year.
Over this past year, the team has also realized Thunderbird for Android users have different needs from K-9 Mail users. Thunderbird desktop users want more features from the desktop app, and this is definitely a major goal we have for our future development. The current feature gap won't always be here!
Recently, the mobile team has started moving to a monthly release cadence, similar to Firefox and the monthly Thunderbird channel. Changing from bi-monthly to monthly releases reduces the risk of changing huge amounts of code all at once. The team can make more incremental changes, like the account drawer, in a smaller window. Regular, bite-size changes also allow us to have more conversations with the community. The development team benefits as well, because they can build better timelines and more accurately predict the amount of work needed to ship future releases.
A Growing Team and Community
Since we released the Android app, the mobile team and contributor community has grown! One of the unexpected benefits of growing the team and community has been improved documentation. Documentation makes things visible for our talented engineers and existing volunteers, and makes it easier for newcomers to join the project!
Our volunteers have made some incredible contributions to the app! Translators have not only bolstered popular languages like German and French, but have enabled previously unsupported languages. In addition to localization, community members have helped develop the app. Shamin-emon has taken on complicated changes, and has been very patient when some of his proposed changes were delayed. Arnt, another community member, debugged and patched an issue with utf-8 strings in IMAP. And Platform34 triaged numerous issues to give developers insights into reported bugs.
Finally, we're learning how to balance refactoring and improving an Android app, and at the same time building an iOS app from scratch! Both apps are important, but the team has had to think about what's most important in each app. Android development is focusing on prioritizing top bugs and splitting the work to fix them into bite size pieces. With iOS, the team can develop in small increments from the start. Fortunately, the growing team and engaged community is making this balancing act easier than it would have been a year ago.
Looking Forward
In the next year, what can Android users look forward to? At the top of the priority list is better architecture leading to a better user experience, along with view and Message List improvements, HTML signatures, and JMAP support. For the iOS app, the team is focused on getting basic functionality in place, such as reading and writing mail, attachments, and work on the JMAP and IMAP protocols.
VIDEO (Also on Peertube):
Listen to the Episode
The post VIDEO: An Android Retrospective appeared first on The Thunderbird Blog.
14 Nov 2025 6:00pm GMT
The Servo Blog: October in Servo: better for the web, better for embedders, better for you
Servo now supports several new web platform features:
- <source> in <video> and <audio> (@tharkum, #39717)
- CompressionStream and DecompressionStream (@kkoyung, #39658)
- fetchLater() (@TimvdLippe, #39547)
- Document.parseHTMLUnsafe() (@lukewarlow, #40246)
- the which property on UIEvent (@Taym95, #40109)
- the relatedTarget property on UIEvent (@TimvdLippe, #40182)
- self.name and .onmessageerror in dedicated workers (@yerke, #40156)
- name and areas properties on HTMLMapElement (@tharkum, #40133)
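Of the features above, CompressionStream and DecompressionStream are easy to try from script. The following is a minimal sketch of a gzip round trip, assuming a runtime that exposes the Compression Streams API plus Blob and Response as globals (modern browsers, or Node.js 18+); the function name is ours, not from the Servo post.

```javascript
// Round-trip a string through gzip using the Compression Streams API.
async function gzipRoundTrip(text) {
  // Compress: string -> ReadableStream -> gzip bytes.
  const compressed = new Blob([text])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  // Decompress and read the result back out as text.
  const decompressed = compressed.pipeThrough(new DecompressionStream("gzip"));
  return await new Response(decompressed).text();
}

gzipRoundTrip("hello from Servo").then((s) => console.log(s));
```

The same pattern works with the "deflate" and "deflate-raw" formats defined by the spec.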

servoshell for macOS now ships as native Apple Silicon binaries (@jschwe, #39981). Building servoshell for macOS x86-64 still works for now, but is no longer officially supported by automated testing in CI (see § For developers).
In servoshell for Android, you can now enable experimental mode with just two taps (@jdm, #40054), use the software keyboard (@jdm, #40009), deliver touch events to web content (@mrobinson, #40240), and dismiss the location field (@jdm, #40049). Pinch zoom is now fully supported in both Servo and servoshell, taking into account the locations of pinch inputs (@mrobinson, @atbrakhi, #40083) and allowing keyboard scrolling when zoomed in (@mrobinson, @atbrakhi, #40108).
AbortController and AbortSignal are now enabled by default (@jdm, @TimvdLippe, #40079, #39943), after implementing AbortSignal.timeout() (@Taym95, #40032) and fixing throwIfAborted() on AbortSignal (@Taym95, #40224). If this is the first time you've heard of them, you might be surprised how important they are for real-world web compat! Over 40% of Google Chrome page loads at least check if they are supported, and many popular websites including GitHub and Discord are broken without them.
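To illustrate why so many page loads touch these APIs, here is a small sketch in plain JavaScript (runnable in modern browsers or Node.js 18+); the feature-detection pattern at the top is the kind of check libraries commonly perform, not anything Servo-specific.

```javascript
// Libraries often feature-detect AbortSignal before relying on it,
// which is why so many page loads at least check for its presence.
const supported =
  typeof AbortController !== "undefined" &&
  typeof AbortSignal.timeout === "function";

const controller = new AbortController();
const signal = controller.signal;

// Abort with a reason; throwIfAborted() rethrows that exact reason.
controller.abort(new Error("user navigated away"));

let caught = null;
try {
  signal.throwIfAborted();
} catch (e) {
  caught = e.message;
}

console.log(supported, signal.aborted, caught);
```

In real code the signal would be passed to fetch(), addEventListener(), or a stream, so that aborting the controller cancels the underlying work.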
XPath is now enabled by default (@simonwuelker, #40212), after implementing '@attr/parent' queries (@simonwuelker, #39749), Copy > XPath in the DevTools Inspector (@simonwuelker, #39892), completely rewriting the parser (@simonwuelker, #39977), and landing several other fixes (@simonwuelker, #40103, #40105, #40161, #40167, #39751, #39764).
Servo now supports new KeyboardEvent({keyCode}) and ({charCode}) (@atbrakhi, #39590), which is enough to get Speedometer 3.0 and 3.1 working on macOS.

ImageData can now be sent over postMessage() and structuredClone() (@Gae24, #40084).
Layout engine
Our layout engine can now render text in synthetic bold (@minghuaw, @mrobinson, #39519, #39681, #39633, #39691, #39713), and now selects more appropriate fallback fonts for Kanji in Japanese text (@arayaryoma, #39608).
'initial-scale' now does the right thing in <meta name=viewport> (@atbrakhi, @shubhamg13, @mrobinson, #40055).
We've improved the way we handle 'border-radius' (@Loirooriol, #39571) and margin collapsing (@Loirooriol, #36322). While they're fairly unassuming fixes on the surface, both of them allowed us to find interop issues in the big incumbent engines (@Loirooriol, #39540, #36321) and help improve web standards (@noamr, @Loirooriol, csswg-drafts#12961, csswg-drafts#12218).
In other words, Servo is good for the web, even if you're not using it yet!
Embedding and ecosystem
Our HTML-compatible XPath implementation now lives in its own crate, and it's no longer limited to the Servo DOM (@simonwuelker, #39546). We don't have any specific plans to release this as a standalone library just yet, but please let us know if you have a use case that would benefit from this!
You can now take screenshots of webviews with WebView::take_screenshot (@mrobinson, @delan, #39583).
Historically Servo has struggled with situations causing 100% CPU usage or unnecessary work on every tick of the event loop, whenever a page is considered "active" or "animating" (#25305, #3406). We had since throttled animations (@mrobinson, #37169) and reflows (@mrobinson, @Loirooriol, #38431), but only to fixed rates of 120 Hz and 60 Hz respectively.
But starting this month, you can run Servo with vsync, thanks to the RefreshDriver trait (@coding-joedow, @mrobinson, #39072), which allows embedders to tell Servo when to start rendering each frame. The default driver continues to run at 120 Hz, but you can define and install your own with ServoBuilder::refresh_driver.
Breaking changes
Servo's embedding API has had a few breaking changes:
- Opts::wait_for_stable_image was removed; to wait for a stable image, call WebView::take_screenshot instead (@mrobinson, @delan, #39583).
- MouseButtonAction::Click was removed; use Down followed by Up. Click events need to be derived from mouse button downs and ups to ensure that they are fired correctly (@mrobinson, #39705).
- Scrolling is now derived from mouse wheel events. When you have mouse wheel input to forward to Servo, you should now call WebView::notify_input_event only, not notify_scroll_event (@mrobinson, @atbrakhi, #40269).
- WebView::set_pinch_zoom was renamed to pinch_zoom, to better reflect that pinch zoom is always relative (@mrobinson, @atbrakhi, #39868).
We've improved page zoom in our webview API (@atbrakhi, @mrobinson, @shubhamg13, #39738), which includes some breaking changes:
- WebView::set_zoom was renamed to set_page_zoom, and it now takes an absolute zoom value. This makes it idempotent, but it means if you want relative zoom, you'll have to multiply the zoom values yourself.
- Use the new WebView::page_zoom method to get the current zoom value.
- WebView::reset_zoom was removed; use set_page_zoom(1.0) instead.
Some breaking changes were also needed to give embedders a more powerful way to share input events with webviews (@mrobinson, #39720). Often both your app and the pages in your webviews may be interested in knowing when users press a key. Servo handles these situations by asking the embedder for all potentially useful input events, then echoing some of them back:
1. Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event.
2. Servo calls WebViewDelegate::notify_keyboard_event to tell the embedder about keyboard events that were neither canceled by scripts nor handled by Servo itself. The event details are included in the arguments.
Embedders had no way of knowing when non-keyboard input events, or keyboard events that were canceled or handled by Servo, have completed all of their effects in Servo. This was good enough for servoshell's overridable key bindings, but not for WebDriver, where commands like Perform Actions need to reliably wait for input events to be handled. To solve these problems, we've replaced notify_keyboard_event with notify_input_event_handled:
1. Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event. This now returns an InputEventId, allowing embedders to remember input events that they still care about for step 2.
2. Servo calls WebViewDelegate::notify_input_event_handled to tell the embedder about every input event, when Servo has finished handling it. The event details are not included in the arguments, but you can use the InputEventId to look up the details in the embedder.
Perf and stability
Servo now does zero unnecessary layout work when updating canvases and animated images, thanks to a new "UpdatedImageData" layout mode (@mrobinson, @mukilan, #38991).
We've fixed crashes when clicking on web content on Android (@mrobinson, #39771), and when running Servo on platforms where JIT is forbidden (@jschwe, @sagudev, #40071, #40130).
For developers
CI builds for pull requests should now take 70% less time, since they now run on self-hosted CI runners (@delan, #39900, #39915). Bencher builds for runtime benchmarking now run on our new dedicated servers, so our Speedometer and Dromaeo data should now be more accurate and less noisy (@delan, #39272).
We've now switched all of our macOS builds to run on arm64 (@sagudev, @jschwe, #38460, #39968). This helps back our macOS releases with thorough automated testing on the same architecture as our releases, but we can't run them on self-hosted CI runners yet, so they may be slower for the time being.
Work is underway to set up faster macOS arm64 runners on our own servers (@delan, ci-runners#64), funded by your donations. Speaking of which!
Donations
Thanks again for your generous support! We are now receiving 5753 USD/month (+1.7% over September) in recurring donations.
This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo. Keep an eye out for further CI improvements in the coming months, including faster macOS arm64 builds and ten-minute WPT builds.
Servo is also on thanks.dev, and already 28 GitHub users (same as September) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
Use of donations is decided transparently via the Technical Steering Committee's public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.
14 Nov 2025 12:00am GMT
Creating accounts via auto-config with EWS, server-side folder manipulation
Filter actions requiring full body content are not yet supported.