26 Nov 2025
Planet Mozilla
This Week In Rust: This Week in Rust 627
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
- Switching to Rust's own mangling scheme on nightly | Rust Blog
- Interview with Jan David Nose | Rust Blog
- This Development-cycle in Cargo: 1.92 | Inside Rust Blog
Foundation
Project/Tooling Updates
- SeaORM 2.0: Nested ActiveModel and Cascade Operations
- Symbolica 1.0: Symbolic mathematics in Rust
- APT Rust requirement raises questions
Observations/Thoughts
- Running real-time Rust
- A look at Rust from 2012
- Making the case that Cargo features could be improved to alleviate Rust compile times
- How Cloudflare uses Rust to serve (and break) millions of websites at 50+ million requests per second
- [audio] Netstack.FM episode 15 - Pingora with Edward and Noah from Cloudflare
- [video] Grind: Java Deserves Modern Tooling
Rust Walkthroughs
- Rust Unit Testing: File reading
- Practical Performance Lessons from Apache DataFusion
- Describing binary data with Deku
Miscellaneous
- Rust For Linux Kernel Co-Maintainer Formally Steps Down
- JetBrains supports the open source Rust projects Ratatui and Biome
- filtra.io | Toyota's "Tip Of The Spear" Is Choosing Rust
Crate of the Week
This week's crate is grapheme-utils, a library of functions to ergonomically work with Unicode graphemes.
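Grapheme handling matters because a single user-perceived character can span several `char`s. A minimal std-only sketch of the problem (this does not use grapheme-utils itself, whose API is not shown here):

```rust
fn main() {
    // "é" spelled as 'e' + U+0301 (combining acute accent):
    // one grapheme cluster, but two chars and three UTF-8 bytes.
    let s = "e\u{0301}";
    println!("chars: {}", s.chars().count()); // 2
    println!("bytes: {}", s.len());           // 3
    // A grapheme-aware crate (such as grapheme-utils or
    // unicode-segmentation) would report one user-perceived character.
}
```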
Thanks to rustkins for the self-suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
RFCs
Rust
Rustup
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- Rustikon 2026 | CFP closes 2025-11-24 | Warsaw, Poland | 2026-03-19 - 2026-03-20 | Event Website
- TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20
- RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
456 pull requests were merged in the last week
Compiler
- allow unnormalized types in drop elaboration
- avoid encoding non-constness or non-asyncness in metadata
- fix MaybeUninit codegen using GVN
- fix suggestion for the `cfg!` macro
- handle cycles when checking impl candidates for `doc(hidden)`
- inherent const impl
- recommend using a HashMap if a HashSet's second generic parameter doesn't implement BuildHasher
- reduce confusing `unreachable_code` lints
- replace OffsetOf by an actual sum of calls to intrinsic
- sess: default to v0 symbol mangling on nightly
- turn moves into copies after copy propagation
- warn against calls which mutate an interior mutable `const` item
Library
- add `bit_width` for unsigned `NonZero<T>`
- alloc: fix `Debug` implementation of `ExtractIf`
- make SIMD intrinsics available in const-contexts
- match `<OsString as Debug>::fmt` to that of str
- see if this is the time we can remove `layout::size_align`
- unwrap ret ty of `iter::ArrayChunks::into_remainder`
- v0 mangling for std on nightly
- hashbrown: add `HashTable` methods related to the raw bucket index
- hashbrown: allow providing the key at insertion time for EntryRef
Cargo
- docs(guide): When suggesting alt dev profile, link to related issue
- feat(generate-lockfile): Add unstable `--publish-time` flag
- feat(tree): Add more native completions
- fix(bindeps): do not propagate artifact dependency to proc macro or build deps
- fix(config-include): disallow glob and template syntax
- fix(package): exclude `target/package` from backups
- refactor(timings): separate data collection and presentation
- test(config-include): include always relative to including config
- enable `CARGO_CFG_DEBUG_ASSERTIONS` in build scripts based on profile
- feat: emit a warning when both `package.publish` and `--index` are specified
- test: re-enable test since not flaky anymore
Rustdoc
- rustdoc-json: add rlib path to ExternalCrate to enable robust crate resolution
- rustdoc: make mergeable crate info more usable
Clippy
- `explicit_deref_methods`: don't lint in `impl Deref(Mut)`
- add `large-error-ignored` config-knob
- fix `useless_asref` suggests wrongly when used in ctor
- fix wrongly unmangled macros for `transmute_ptr_to_ptr` and `transmute_bytes_to_str`
- taking a raw pointer on a union field is a safe operation
Rust-Analyzer
- add `unsafe(…)` attribute completion
- add pretty number for `add_explicit_enum_discriminant`
- add semantic tokens for deprecated items
- add deprecated semantic token for extern crate shorthand
- add assist to convert char literal
- allow inferring array sizes
- basic support for declarative attribute/derive macros
- completion `= $0` after keyval cfg predicate
- derive ParamEnv from GenericPredicates
- don't suggest duplicate `const` completions `raw`
- enhance `remove_parentheses` assist to handle return expressions
- extract function panics on more than one usage of variable in macro
- fix hit `incorrect_case` on `no_mangle` static items
- fix not applicable on `and` for `replace_method_eager_lazy`
- fix not fill guarded match arm for `add_missing_match_arms`
- fix trailing newline in `tool_path`
- fix field completion in irrefutable patterns
- fix formatting request blocking on `crate_def_map` query
- fix parameter info with missing arguments
- fix some inference of patterns
- include all target types with paths outside package root
- infer range patterns correctly
- make dyn inlay hints configurable
- make postfix completion handle all references correctly
- move visibility diagnostics for fields to correct location
- never remove parens from prefix ops with valueless return/break/continue
- parse cargo config files with origins
- remove some deep normalizations from infer
- rewrite method resolution to follow rustc more closely
- show no error when parameters match macro names
- implement precedence for `print_hir`
- improve assist qualified to top when on first segment
- infer range pattern fully
- integrate postcard support into proc-macro server CLI
- optimize `SmolStr::clone`: 4-5x speedup inline, 0.5x heap (slow down)
- perf: improve start up time
- perf: prime trait impls in cache priming
- perf: produce less progress reports
- perf: reduce allocations in `try_evaluate_obligations`
- print more macro information in `DefMap` dumps
- proc-macro-srv: reimplement token trees via immutable trees
- support multiple variant for `generate_from_impl_for_enum`
- use inferred type in "extract type as type alias" assist and display inferred type placeholder `_` in inlay hints
Rust Compiler Performance Triage
Only a handful of performance-related changes landed this week. The largest one was changing the default name mangling scheme in nightly to the v0 version, which produces slightly larger symbol names, so it had a small negative effect on binary sizes and compilation time.
Triage done by @kobzol. Revision range: 6159a440..b64df9d1
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.9% | [0.3%, 2.7%] | 48 |
| Regressions ❌ (secondary) | 0.9% | [0.2%, 2.1%] | 25 |
| Improvements ✅ (primary) | -0.5% | [-6.8%, -0.1%] | 33 |
| Improvements ✅ (secondary) | -0.5% | [-1.4%, -0.1%] | 53 |
| All ❌✅ (primary) | 0.4% | [-6.8%, 2.7%] | 81 |

1 Regression, 2 Improvements, 5 Mixed; 1 of them in rollups. 28 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- Make closure capturing have consistent and correct behaviour around patterns
- misc coercion cleanups and handle safety correctly
- Implement `TryFrom<char>` for `usize`
- Contracts: primitive ownership assertions: `owned` and `block`
- const validation: remove check for mutable refs in final value of const
No Items entered Final Comment Period this week for Compiler Team (MCPs only), Cargo, Rust RFCs, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
- RFC: Exhaustive traits. Traits that enable cross trait casting between trait objects.
- CMSE calling conventions
- `RUSTC_ALLOW_UNSTABLE_<feature>`: a `RUSTC_BOOTSTRAP` alternative
- Target Stages, an improvement of the incremental system
Upcoming Events
Rusty Events between 2025-11-26 - 2025-12-24 🦀
Virtual
- 2025-11-26 | Virtual (Girona, ES) | Rust Girona | Silicon Girona
- 2025-11-27 | Virtual (Buenos Aires, AR) | Rust en Español
- 2025-11-30 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-02 | Virtual (London, UK) | Women in Rust
- 2025-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
- 2025-12-03 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2025-12-04 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-05 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-06 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2025-12-07 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-09 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-10 | Virtual (Girona, ES) | Rust Girona
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2025-12-16 | Virtual (Washington, DC, US) | Rust DC
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-17 | Virtual (Girona, ES) | Rust Girona
- 2025-12-18 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-23 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
Asia
- 2025-12-08 | Tokyo, JP | Rust Global: Tokyo
- 2025-12-20 | Bangalore, IN | Rust Bangalore
Europe
- 2025-11-26 | Bern, CH | Rust Bern
- 2025-11-27 | Augsburg, DE | Rust Meetup Augsburg
- 2025-11-27 | Barcelona, ES | BcnRust
- 2025-11-27 | Berlin, DE | Rust Berlin
- 2025-11-27 | Copenhagen, DK | Copenhagen Rust Community
- 2025-11-27 | Edinburgh, UK | Rust and Friends
- 2025-11-28 | Prague, CZ | Rust Prague
- 2025-12-03 | Girona, ES | Rust Girona
- 2025-12-03 | Oxford, UK | Oxford ACCU/Rust Meetup.
- 2025-12-04 | Vienna, AT | Rust Vienna
- 2025-12-08 | Dortmund, DE | Rust Dortmund
- 2025-12-08 | Paris, FR | Rust Paris
- 2025-12-10 | München, DE | Rust Munich
- 2025-12-10 | Reading, UK | Reading Rust Workshop
- 2025-12-16 | Bergen, NO | Rust Bergen
- 2025-12-16 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
North America
- 2025-11-26 | Austin, TX, US | Rust ATX
- 2025-11-26 | Phoenix, AZ, US | Desert Rust
- 2025-11-27 | Mountain View, CA, US | Hacker Dojo
- 2025-11-29 | Boston, MA, US | Boston Rust Meetup
- 2025-12-02 | Chicago, IL, US | Chicago Rust Meetup
- 2025-12-04 | México City, MX | Rust MX
- 2025-12-04 | Saint Louis, MO, US | STL Rust
- 2025-12-05 | New York, NY, US | Rust NYC
- 2025-12-06 | Boston, MA, US | Boston Rust Meetup
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Lehi, UT, US | Utah Rust
- 2025-12-11 | Mountain View, CA, US | Hacker Dojo
- 2025-12-11 | San Diego, CA, US | San Diego Rust
- 2025-12-13 | Boston, MA, US | Boston Rust Meetup
- 2025-12-16 | San Francisco, CA, US | San Francisco Rust Study Group
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-20 | Boston, MA, US | Boston Rust Meetup
- 2025-12-24 | Austin, TX, US | Rust ATX
Oceania
- 2025-12-11 | Brisbane City, QL, AU | Rust Brisbane
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
Also: a program written in Rust had a bug, and while it caused downtime, there was no security issue and nobody's data was compromised.
Thanks to Michael Voelkl for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
26 Nov 2025 5:00am GMT
25 Nov 2025
The Mozilla Blog: Celebrating the contributors that power Mozilla Support
Every day, Firefox users around the world turn to Mozilla Support (SUMO) with a question, a hiccup or just a little curiosity. It's community-powered - contributors offer answers and support to make someone's day a little easier.
We celebrated this global community last month with Ask-A-Fox, a weeklong virtual event that brought together longtime contributors, newcomers and Mozilla staffers. The idea was simple: connect across time zones, trade tips and yes, answer questions.
Contributor appreciation, AMAs and an emoji hunt
For one lively week, contributors across Firefox and Thunderbird rallied together. Reply rates soared, response times dropped, and the forums buzzed with renewed energy. But the real story was the sense of connection.
There were live Ask Me Anything sessions with Mozilla's WebCompat, Web Performance, and Thunderbird teams. There was even a playful
+
emoji hunt through our Knowledge Base.
"That AMA was really interesting," said longtime Firefox contributor Paul. "I learned a lot and I recommend those that could not watch it live catch the recording as I am sure it will be very useful in helping users in SUMO."
Ask-A-Fox was a celebration of people: long-time contributors, brand-new faces and everyone in between. Here are just a few standout contributors:
- Firefox Desktop (including Enterprise): Paul, Denyshon, Jonz4SUSE, @next, jscher2000
- Firefox for Android: Paul, TyDraniu, GerardoPcp04, Mad_Maks, sjohnn
- Firefox for iOS: Paul, Simon.c.lord, TyDraniu, Mad_Maks, Mozilla-assistent
- Thunderbird (including Android): Davidsk, Sfhowes, Mozilla98, MattAuSupport, Christ1
Newcomers mozilla98, starretreat, sjohnn, Vexi, Mark, Mapenzi, cartdaniel437, hariiee1277, and thisisharsh7 also made a big impact.
New contributor Shirmaya John said, "I love helping people, and I'm passionate about computers, so assisting with bugs or other tech issues really makes my day. I'm excited to grow here!"
Contributor Vincent won our Staff Award for the highest number of replies during the week.
"Ask a Fox highlights the incredible collaborative spirit of our community. A reminder of what we can achieve when we unite around a shared goal," said Kiki Kelimutu, a senior community manager at Mozilla.
Firefox has been powered by community from the start
As Mozilla's communities program manager, I've seen firsthand how genuine connection fuels everything we do. Members of our community aren't just answering questions; they're building relationships, learning together, and showing up for one another with authenticity and care.
Mozilla is built by people who believe the internet should be open and accessible to all, and our community is the heartbeat of that vision. What started back in 2007 (and found its online home in 2010 at support.mozilla.org) has grown into a global network of contributors helping millions of Firefox users find answers, share solutions and get back on their Firefox journey.
Every question answered not only helps a user, it helps us build a better Firefox. By surfacing real issues and feedback, our community shapes the course of our products and keeps the web stronger for everyone.
Join the next Ask-A-Fox
Ask-A-Fox is a celebration of what makes Mozilla unique: our people.
As someone who's spent years building communities, I know that lasting engagement doesn't come from numbers or dashboards. It comes from treating contributors as individuals - people who bring their own stories, skills, and care to the table.
When Mozillians come together to share knowledge, laughter or even a few emojis, the result is more than faster replies. It's a connection.
Two more Ask-A-Fox events are already planned for next year, continuing the work of building communities that make the web more open and welcoming.
If you've ever wanted to make the web a little more human, come join us. Because every answer, every conversation, and every connection helps keep Firefox thriving.

Join us in shaping the web
Sign up here
25 Nov 2025 6:02pm GMT
The Rust Programming Language Blog: Interview with Jan David Nose
On the Content Team, we had our first whirlwind outing at RustConf 2025 in Seattle, Washington, USA. There we had a chance to speak with folks about interesting things happening in the Project and the wider community.
Jan David Nose, Infrastructure Team
In this interview, Xander Cesari sits down with Jan David Nose, then one of the full-time engineers on the Infrastructure Team, which maintains and develops the infrastructure upon which Rust is developed and deployed -- including CI/CD tooling and crates.io.
We released this video on an accelerated timeline, some weeks ago, in light of the recent software supply chain attacks, but the interview was conducted prior to the news of compromised packages in other languages and ecosystems.
Check out the interview here or click below.
Transcript
Xander Cesari: Hey, this is Xander Cesari with the Rust Project Content Team, recording on the last hour of the last day of RustConf 2025 here in Seattle. So it's been a long and amazing two days. And I'm sitting down here with a team member from the Rust Project Infra Team, the unsung heroes of the Rust language. Want to introduce yourself and kind of how you got involved?
Jan David Nose: Yeah, sure. I'm JD. Jan David is the full name, but especially in international contexts, I just go with JD. I've been working for the Rust Foundation for the past three years as a full-time employee and I essentially hit the jackpot to work full-time on open source and I've been in the Infra Team of the Rust Project for the whole time. For the past two years I've led the team together with Jake. So the Infra Team is kind of a thing that lets Rust happen and there's a lot of different pieces.
Xander Cesari: Could you give me an overview of the responsibility of the Infra Team?
Jan David Nose: Sure. I think on a high level, we think about this in terms of, we serve two different groups of people. On one side, we have users of the language, and on the other side, we really try to provide good tooling for the maintainers of the language.
Jan David Nose: Starting with the maintainer side, this is really everything about how Rust is built. From the moment someone makes a contribution or opens a PR, we maintain the continuous integration that makes sure that the PR actually works. There's a lot of bots and tooling helping out behind the scenes to kind of maintain a good status quo, a sane state. Lots of small things like triage tools on GitHub to set labels and ping people and these kinds of things. And that's kind of managed by the Infra Team at large.
Jan David Nose: And then on the user side, we have a lot of, or the two most important things are making sure users can actually download Rust. We don't develop crates.io, but we support the infrastructure to actually ship crates to users. All the downloads go through content delivery networks that we provide. The same for Rust releases. So if I don't do my job well, which has happened, there might be a global outage of crates.io and no one can download stuff. But those are kind of the two different buckets of services that we run and operate.
Xander Cesari: Gotcha. So on the maintainer side, the Rust organization on GitHub is a large organization with a lot of activity, a lot of code. There's obviously a lot of large code bases being developed on GitHub, but there are not that many languages the size of Rust being developed on GitHub. Are there unique challenges to developing a language and the tooling that's required versus developing other software projects?
Jan David Nose: I can think of a few things that have less to do with the language specifically, but with some of the architecture decisions that were made very early on in the life cycle of Rust. So one of the things that actually caused a lot of headache for mostly GitHub, and then when they complained to us, for us as well, is that for a long, long time, the index for crates.io was a Git repo on GitHub. As Rust started to grow, the activity on the repo became so big that it actually caused some issues, I would say, in a friendly way on GitHub, just in terms of how much resources that single repository was consuming. That then kind of started this work on a web-based, HTTP-based index to shift that away. That's certainly one area where we've seen how Rust has struggled a little bit with the platform, but also the platform provider struggled with us.
Jan David Nose: I think for Rust itself, especially when we look at CI, we really want to make sure that Rust works well on all of the targets and all the platforms we support. That means we have an extremely wide CI pipeline where, for every Tier 1 target, we want to run all the tests, we want to build the release artifacts, we want to upload all of that to S3. We want to do as much as we reasonably can for Tier 2 targets and, to a lesser extent, maybe even test some stuff on Tier 3. That has turned into a gigantic build pipeline. Marco gave a talk today on what we've done with CI over the last year. One of the numbers that came out of doing the research for this talk is that we accumulate over three million build minutes per month, which is about six years of CPU time every month.
Jan David Nose: Especially when it comes to open source projects, I think we're one of the biggest consumers of GitHub Actions in that sense. Not the biggest in total; there are definitely bigger commercial projects. But that's a unique challenge for us to manage because we want to provide as good a service as we can to the community and make sure that what we ship is high quality. That comes at a huge cost in terms of scaling. As Rust gets more popular and we want to target more and more platforms, this is like a problem that just continues to grow.
Jan David Nose: We'll probably never remove a lot of targets, so there's an interesting challenge to think about. If it's already big now, how does this look in 5 years, 10 years, 15 years, and how can we make sure we can maintain the level of quality we want to ship? When you build and run for a target in the CI pipeline, some of those Tier 1 targets you can just ask a cloud service provider to give you a VM running on that piece of hardware, but some of them are probably not things that you can just run in the cloud.
Xander Cesari: Is there some HIL (Hardware-In-the-Loop) lab somewhere?
Jan David Nose: So you're touching on a conversation that's happening pretty much as we speak. So far, as part of our target tier policy, there is a clause that says it needs to be able to run in CI. That has meant being very selective about only promoting things to Tier 1 that we can actually run and test. For all of this, we had a prerequisite that it runs on GitHub Actions. So far we've used very little hardware that is not natively supported or provided by GitHub.
Jan David Nose: But this is exactly the point with Rust increasing in popularity. We just got requests to support IBM platforms and RISC-V, and those are not natively supported on GitHub. That has kicked off an internal conversation about how we even support this. How can we as a project enable companies that can provide us hardware to test on? What are the implications of that?
Jan David Nose: On one side, there are interesting constraints and considerations. For example, you don't want your PRs to randomly fail because someone else's hardware is not available. We're already so resource-constrained on how many PRs we can merge each day that adding noise to that process would really slow down contributions to Rust. On the other side, there are security implications. Especially if we talk about promoting something to Tier 1 and we want to build release artifacts on that hardware, we need to make sure that those are actually secure and no one sneaks a back door into the Rust compiler target for RISC-V.
Jan David Nose: So there are interesting challenges for us, especially in the world we live in where supply chain security is a massive concern. We need to figure out how we can both support the growth of Rust and the growth of the language, the community, and the ecosystem at large while also making sure that the things we ship are reliable, secure, and performant. That is becoming an increasingly relevant and interesting piece to work on. So far we've gotten away with the platforms that GitHub supports, but it's really cool to see that this is starting to change and people approach us and are willing to provide hardware, provide sponsorship, and help us test on their platforms. But essentially we don't have a good answer for this yet. We're still trying to figure out what this means, what we need to take into consideration, and what our requirements are to use external hardware.
Xander Cesari: Yeah, everyone is so excited about Rust will run everywhere, but there's a maintenance cost there that is almost exponential in scope.
Jan David Nose: It's really interesting as well because there's a tension there. I think with IBM, for example, approaching us, it's an interesting example. Who has IBM platforms at home? The number of users for that platform is really small globally, but IBM also invests heavily in Rust, tries to make this happen, and is willing to provide the hardware.
Jan David Nose: For us, that leads to a set of questions. Is there a line? Is there a certain requirement? Is there a certain amount of usage that a platform would need for us to promote it? Or do we say we want to promote as much as we can to Tier 1? This is a conversation we haven't really had to have yet. It's only now starting to creep in as Rust is adopted more widely and companies pour serious money and resources into it. That's exciting to see.
Jan David Nose: In this specific case, companies approach the Infra Team to figure out how we can add their platforms to CI as a first step towards Tier 1 support. But it's also a broader discussion we need to have with larger parts of the Rust Project. For Tier 1 promotions, for example, the Compiler Team needs to sign off, Infra needs to sign off. Many more people need to be involved in this discussion of how we can support the growing needs of the ecosystem at large.
Xander Cesari: I get the feeling that's going to be a theme throughout this interview.
Jan David Nose: 100%.
Xander Cesari: So one other tool that's part of this pipeline that I totally didn't know about for a long time, and I think a talk at a different conference clued me into it, is Crater. It's a tool that attempts to run all of the Rust code it can find on the internet. Can you talk about what that tool does and how it integrates into the release process?
Jan David Nose: Whenever someone creates a pull request on GitHub to add a new feature or bug fix to the Rust compiler, they can start what's called a Crater run, or an experiment. Crater is effectively a large fleet of machines that tries to pull in as many crates as it can. Ideally, we would love to test all crates, but for a variety of reasons that's not possible. Some crates simply don't build reliably, so we maintain lists to exclude those. From the top of my head, I think we currently test against roughly 60% of crates.
Jan David Nose: The experiment takes the code from your pull request, builds the Rust compiler with it, and then uses that compiler to build all of these crates. It reports back whether there are any regressions related to the change you proposed. That is a very important tool for us to maintain backwards compatibility with new versions and new features in Rust. It lets us ask: does the ecosystem still compile if we add this feature to the compiler, and where do we run into issues? Then, and this is more on the Compiler Team side, there's a decision about how to proceed. Is the breakage acceptable? Do we need to adjust the feature? Having Crater is what makes that conversation possible because it gives us real data on the impact on the wider ecosystem.
Xander Cesari: I think that's so interesting because as more and more companies adopt Rust, they're asking whether the language is going to be stable and backward compatible. You hear about other programming languages that had a big version change that caused a lot of drama and code changes. The fact that if you have code on crates.io, the Compiler Team is probably already testing against it for backwards compatibility is pretty reassuring.
Jan David Nose: Yeah, the chances are high, I would say. Especially looking at the whole Python 2 to Python 3 migration, I think as an industry we've learned a lot from those big version jumps. I can't really speak for the Compiler Team because I'm not a member and I wasn't involved in the decision-making, but I feel this is one of the reasons why backwards compatibility is such a big deal in Rust's design. We want to make it as painless as possible to stay current, stay up to date, and make sure we don't accidentally break the language or create painful migration points where the entire ecosystem has to move at once.
Xander Cesari: Do you know if there are other organizations pulling in something like Crater and running it on their own internal crate repositories, maybe some of the big tech companies or other compiler developers or even other languages? Or is this really bespoke for the Rust compiler team?
Jan David Nose: I don't know of anyone who runs Crater itself as a tool. Crater is built on a sandboxing framework that we also use in other places. For example, docs.rs uses some of the same underlying infrastructure to build all of the documentation. We try to share as much as we can of the functionality that exists in Crater, but I'm not aware of anyone using Crater in the same way we do.
Xander Cesari: Gotcha. The other big part of your job is that the Infra Team works on supporting maintainers, but it also supports users and consumers of Rust who are pulling from crates.io. It sounds like crates.io is not directly within your team, but you support a lot of the backend there.
Jan David Nose: Yeah, exactly. crates.io has its own team, and that team maintains the web application and the APIs. The crates themselves, all the individual files that people download, are hosted within our infrastructure. The Infra Team maintains the content delivery network that sits in front of that. Every download of a crate goes through infrastructure that we maintain. We collaborate very closely with the crates.io team on this shared interface. They own the app and the API, and we make sure that the files get delivered to the end user.
Xander Cesari: So it sounds like there's a lot of verification of the files that get uploaded and checks every time someone pushes a new version to crates.io. That part all happens within crates.io as an application.
Jan David Nose: Cargo uses the crates.io API to upload the crate file. crates.io has a lot of internal logic to verify that it is valid and that everything looks correct. For us, as the Infra Team, we treat that as a black box. crates.io does its work, and if it is happy with the upload, it stores the file in S3. From that point onward, infrastructure makes sure that the file is accessible and can be downloaded so people can start using your crate.
Xander Cesari: In this theme of Rust being a bit of a victim of its own success, I assume all of the traffic graphs and download graphs are very much up and to the right.
Jan David Nose: On the Foundation side, one of our colleagues likes to check how long it takes for one billion downloads to happen on crates.io, and that number has been falling quickly. I don't remember what it was three years ago, but it has come down by orders of magnitude. In our download traffic we definitely see exponential growth. Our traffic tends to double year over year, and that trend has been pretty stable. It really seems like Rust is getting a lot of adoption in the ecosystem and people are using it for more and more things.
Xander Cesari: How has the Infra Team scaled with that? Are you staying ahead of it, or are there a lot of late nights?
Jan David Nose: There have definitely been late nights. In the three years I've been working in the Infra Team, every year has had a different theme that was essentially a fire to put out.
Jan David Nose: It changes because we fix one thing and then the next thing breaks. So far, luckily, those fires have been mostly sequential, not parallel. When I joined, bandwidth was the big topic. Over the last year, it has been more about CI. About three years ago, we hit this inflection point where traffic was doubling and the sponsorship capacity we had at the time was reaching its limits.
Jan David Nose: Two or three years ago, Fastly welcomed us into their Fast Forward program and has been sponsoring all of our bandwidth since then. That has mostly helped me sleep at night. It has been a very good relationship. They have been an amazing partner and have helped us at every step to remove the fear that we might hit limits. They are very active in the open source community at large; most famously they also sponsor PyPI and the Python ecosystem, compared to which we're a tiny fish in a very big pond. That gives us a lot of confidence that we can sustain this growth and keep providing crates and releases at the level of quality people expect.
Xander Cesari: In some ways, Rust did such a good job of making all of that infrastructure feel invisible. You just type Cargo commands into your terminal and it feels magical.
Jan David Nose: I'm really happy about that. It's an interesting aspect of running an infrastructure team in open source. If you look at the ten-year history since the first stable release, or even the fifteen years since Rust really started, infrastructure was volunteer-run for most of that time. I've been here for three years, and I was the first full-time infrastructure engineer. So for ten to twelve years, volunteers ran the infrastructure.
Jan David Nose: For them, it was crucial that things just worked, because you can't page volunteers in the middle of the night because a server caught fire or downloads stopped working. From the beginning, our infrastructure has been designed to be as simple and as reliable as possible. The same is true for our CDNs. I always feel a bit bad because Fastly is an amazing sponsor. Every time we meet them at conferences or they announce new features, they ask whether we want to use them or talk about how we use Fastly in production. And every time I have to say: we have the simplest configuration possible. We set some HTTP headers. That's pretty much it.
Jan David Nose: It's a very cool platform, but we use the smallest set of features because we need to maintain all of this with a very small team that is mostly volunteer-based. Our priority has always been to keep things simple and reliable and not chase every fancy new technology, so that the project stays sustainable.
Xander Cesari: Volunteer-based organizations seem to have to care about work-life balance, which is probably terrific, and there are lessons to be learned there.
Jan David Nose: Yeah, it's definitely a very interesting environment to work in. It has different rules than corporations or commercial teams. We have to think about how much work we can do in a given timeframe in a very different way, because it's unpredictable when volunteers have time, when they're around, and what is happening in their lives.
Jan David Nose: Over the last few years, we've tried to reduce the number of fires that can break out. And when they do happen, we try to shield volunteers from them and take that work on as full-time employees. That started with me three years ago. Last year Marco joined, which increased the capacity we have, because there is so much to do on the Infra side that even with me working full-time, we simply did not have enough people.
Xander Cesari: So you're two full-time and everything else is volunteer.
Jan David Nose: Exactly. The team is around eight people. Marco and I work full-time and are paid by the Rust Foundation to focus exclusively on infrastructure. Then we have a handful of volunteers who work on different things.
Jan David Nose: Because our field of responsibility is so wide, the Infra Team works more in silos than other teams might. We have people who care deeply about very specific parts of the infrastructure. Otherwise there is simply too much to know for any one person. It has been a really nice mix, and it's amazing to work with the people on the team.
Jan David Nose: Those of us who are privileged enough to work full-time on this, and who have the time and resources, try to bear the bigger burden and create a space that is fun for volunteers to join. We want them to work on exciting things where there is less risk of something catching fire, where it's easier to come in, do a piece of work, and then step away. If your personal life takes over for two weeks, that's okay, because someone is there to make sure the servers and the lights stay on.
Jan David Nose: A lot of that work lives more on the maintainer side: the GitHub apps, the bots that help with triage. It's less risky if something goes wrong there. On the user side, if you push the wrong DNS setting, as someone might have done, you can end up in a situation where for 30 minutes no one can download crates. And in this case, "no one" literally means no user worldwide. That's not an experience I want volunteers to have. It's extremely stressful, and it was ultimately one of the reasons I joined in the first place: there was a real feeling of burnout from carrying that responsibility.
Jan David Nose: It's easier to carry that as a full-timer. We have more time and more ways to manage the stress. I'm honestly extremely amazed by what the Infra Team was able to do as volunteers. It's unbelievable what they built and how far they pushed Rust to get to where we are now.
Xander Cesari: I think anyone who's managing web traffic in 2025 is talking about traffic skyrocketing due to bots and scrapers for AI or other purposes. Has that hit the Rust network as well?
Jan David Nose: Yeah, we've definitely seen that. It's handled by a slightly different team, but on the docs.rs side in particular we've seen crawlers hit us hard from time to time, and that has caused noticeable service degradation. We're painfully aware of the increase in traffic that comes in short but very intense bursts when crawlers go wild.
Jan David Nose: That introduces a new challenge for our infrastructure. We need to figure out how to react to that traffic and protect our services from becoming unavailable to real users who want to use docs.rs to look up something for their work. On the CDN side, our providers can usually handle the traffic. It is more often the application side where things hurt.
Jan David Nose: On the CDN side we also see people crawling crates.io, presumably to vacuum up the entire crates ecosystem into an LLM. Fortunately, over the last two years we've done a lot of work to make sure crates.io as an application is less affected by these traffic spikes. Downloads now bypass crates.io entirely and go straight to the CDN, so the API is not hit by these bursts. In the past, this would have looked like a DDoS attack, with so many requests from so many sources that we couldn't handle it.
Jan David Nose: We've done a lot of backend work to keep our stack reliable, but it's definitely something that has changed the game over the last year. We can clearly see that crawlers are much more active than before.
Xander Cesari: That makes sense. I'm sure Fastly is working on this as well. Their business has to adapt to be robust to this new internet.
Jan David Nose: Exactly. For example, one of the conversations we're having right now is about docs.rs. It's still hosted on AWS behind CloudFront, but we're talking about putting it behind Fastly because through Fastly we get features like bot protection that can help keep crawlers out.
Jan David Nose: This is a good example of how our conversations have changed in the last six months. At the start of the year I did not think this would be a topic we would be discussing. We were focused on other things. For docs.rs we have long-term plans to rebuild the infrastructure that powers it, and I expected us to spend our energy there. But with the changes in the industry and everyone trying to accumulate as much data as possible, our priorities have shifted. The problems we face and the order in which we tackle them have changed.
Xander Cesari: And I assume as one of the few paid members of a mostly volunteer team, you often end up working on the fires, not the interesting next feature that might be more fun.
Jan David Nose: That is true, although it sounds a bit negative to say I only get to work on fires. Sometimes it feels like that because, as with any technology stack, there is a lot of maintenance overhead. We definitely pay that price on the infrastructure side.
Jan David Nose: Marco, for example, spent time this year going through all the servers we run, cataloging them, and making sure they're patched and on the latest operating system version. We updated our Ubuntu machines to the latest LTS. It feels a bit like busy work: you just have to do it because it's important and necessary, but it's not the most exciting project.
Jan David Nose: On the other hand, when it comes to things like CDN configuration and figuring out how bot protection features work and whether they are relevant to us, that is also genuinely interesting work. It lets us play with new tools vendors provide, and we're working on challenges that the wider industry is facing. How do you deal with this new kind of traffic? What are the implications of banning bots? How high is the risk of blocking real users? Sometimes someone just misconfigures a curl script, and from the outside it looks like they're crawling our site.
Jan David Nose: So it's an interesting field to work in, figuring out how we can use new features and address new challenges. That keeps it exciting even for us full-timers who do more of the "boring" work. We get to adapt alongside how the world around us is changing. If there's one constant, it's change.
Xander Cesari: Another ripped-from-the-headlines change around this topic is software supply chain security, and specifically xz-utils and the conversation around open source security. How much has that changed the landscape you work in?
Jan David Nose: The xz-utils compromise was scary. I don't want to call it a wake-up call, because we've been aware that supply chain security is a big issue and this was not the first compromise. But the way it happened felt very unsettling. You saw an actor spend a year and a half building social trust in an open source project and then using that to introduce a backdoor.
Jan David Nose: Thinking about that in the context of Rust: every team in the project talks about how we need more maintainers, how there's too much workload on the people who are currently contributing, and how Rust's growth puts strain on the organization as a whole. We want to be an open and welcoming project, and right now we also need to bring new people in. If someone shows up and says, "I'm willing to help, please onboard me," and they stick around for a year and then do something malicious, we would be susceptible to that. I don't think this is unique to Rust. This is an inherent problem in open source.
Xander Cesari: Yeah, it's antithetical to the culture.
Jan David Nose: Exactly. So we're trying to think through how we, as a project and as an ecosystem, deal with persistent threat actors who have the time and resources to play a long game. Paying someone to work full-time on open source for a year is a very different threat model than what we used to worry about.
Jan David Nose: I used to joke that the biggest threat to crates.io was me accidentally pulling the plug on a CDN. I think that has changed. Today the bigger threat is someone managing to insert malicious code into our releases, our supply chain, or crates.io itself. They could find ways to interfere with our systems in ways we're simply not prepared for, where, as a largely volunteer organization, we might be too slow to react to a new kind of attack.
Jan David Nose: Looking back over the last three years, this shift became very noticeable, especially after the first year. Traffic was doubling, Rust usage was going up a lot, and there were news stories about Rust being used in the Windows kernel, in Android, and in parts of iOS. Suddenly Rust is everywhere. If you want to attack "everywhere," going after Rust becomes attractive. That definitely puts a target on our back and has changed the game.
Jan David Nose: I'm very glad the Rust Foundation has a dedicated security engineer who has done a lot of threat modeling and worked with us on infrastructure security. There's also a lot of work happening specifically around the crates ecosystem and preventing supply chain attacks through crates. Luckily, it's not something the Infra side has to solve alone. But it is getting a lot more attention, and I think it will be one of the big challenges for the future: how a mostly volunteer-run project keeps up with this looming threat.
Xander Cesari: And it is the industry at large. This is not a unique problem to the Rust package manager. All package registries, from Python to JavaScript to Nix, deal with this. Is there an industry-wide conversation about how to help each other out and share learnings?
Jan David Nose: Yeah, there's definitely a lot happening. I have to smile a bit because, with a lot of empathy but also a bit of relief, we sometimes share news when another package ecosystem gets compromised. It's a reminder that it's not just us; sometimes it's npm's turn.
Jan David Nose: We really try to stay aware of what's happening in the industry and in other ecosystems: what new threats or attack vectors are emerging, what others are struggling with. Sometimes that is security; sometimes it's usability. A year and a half ago, for example, npm had the "everything" package where someone declared every package on npm as a dependency, which blew up the index. We look at incidents like that and ask whether crates.io would struggle with something similar and whether we need to make changes.
Jan David Nose: On the security side we also follow closely what others are doing. In the packaging community, the different package managers are starting to come together more often to figure out which problems everyone shares. There is a bit of a joke that we're all just shipping files over the internet. Whether it's an npm package or a crate, ultimately it's a bunch of text files in a zip. So from an infrastructure perspective the problems are very similar.
Jan David Nose: These communities are now talking more about what problems PyPI has, what problems crates.io has, what is happening in the npm space. One thing every ecosystem has seen, even the very established ones, is a big increase in bandwidth needs, largely connected to the emergence of AI. PyPI, for example, publishes download charts, and it's striking. Python had steady growth, slightly exponential but manageable, for many years. Then a year or two ago you see a massive hockey stick. People discovered that PyPI was a great distribution system for their models. There were no file size limits at the time, so you could publish precompiled GPU models there.
Jan David Nose: That pattern shows up everywhere. It has kicked off a new era for packaging ecosystems to come together and ask: in a time where open source is underfunded and traffic needs keep growing, how can we act together to find solutions to these shared problems? crates.io is part of those conversations. It's interesting to see how we, as an industry, share very similar problems across ecosystems: Python, npm, Rust, and others.
Xander Cesari: With a smaller, more hobbyist-focused community, you can have relaxed rules about what goes into your package manager. Everyone knows the spirit of what you're trying to do and you can get away without a lot of hard rules and consequences. Is the Rust world going to have to think about much harder rules around package sizes, allowed files, and how you're allowed to distribute things?
Jan David Nose: Funnily enough, we're coming at this from the opposite direction. Compared to other ecosystems, we've always had fairly strict limits. A crate can be at most around ten megabytes in size. There are limits on what kinds of files you can put in there. Ironically, those limits have helped us keep traffic manageable in this period.
Jan David Nose: At the same time, there is a valid argument that these limits may not serve all Rust use cases. There are situations where you might want to include something precompiled in your crate because it is hard to compile locally, takes a very long time, or depends on obscure headers no one has. I don't think we've reached the final state of what the crates.io package format should look like.
Jan David Nose: That has interesting security implications. When we talk about precompiled binaries or payloads, we all have that little voice in our head every time we see a curl | sh command: can I trust this? The same is true if you download a crate that contains a precompiled blob you cannot easily inspect.
Jan David Nose: The Rust Foundation is doing a lot of work and research here. My colleague Adam, who works on the crates.io team, is working behind the scenes to answer some of these questions. For example: what kind of security testing can we do before we publish crates to make sure they are secure and don't contain malicious payloads? How do we surface this information? How do we tell a publisher that they included files that are not allowed? And from the user's perspective, when you visit crates.io, how can you judge how well maintained and how secure a crate is?
Jan David Nose: Those conversations are happening quite broadly in the ecosystem. On the Infra side we're far down the chain. Ultimately we integrate with whatever security scanning infrastructure crates.io builds. We don't have to do the security research ourselves, but we do have to support it.
Jan David Nose: There's still a lot that needs to happen. As awesome as Rust already is, and as much as I love using it, it's important to remember that we're still a very young ecosystem. Python is now very mature and stable, but it's more than 25 years old. Rust is about ten years old as a stable language. We still have a lot to learn and figure out.
Xander Cesari: Is the Rust ecosystem running into problems earlier than other languages because we're succeeding at being foundational software and Rust is used in places that are even more security-critical than other languages, so you have to hit these hard problems earlier than the Python world did?
Jan David Nose: I think that's true. Other ecosystems probably had more time to mature and answer these questions. We're operating on a more condensed timeline. There is also simply more happening now. Open source has been very successful; it's everywhere. That means there are more places where security is critical.
Jan David Nose: So this comes with the success of open source, with what is happening in the ecosystem at large, and with the industry we're in. It does mean we have less time to figure some things out. On the flip side, we also have less baggage. We have less technical debt and fifteen fewer years of accumulated history. That lets us be on the forefront in some areas, like how a package ecosystem can stay secure and what infrastructure a 21st century open source project needs.
Jan David Nose: Here I really want to call out the Rust Foundation. They actively support this work: hiring people like Marco and me to work full-time on infrastructure, having Walter and Adam focus heavily on security, and as an organization taking supply chain considerations very seriously. The Foundation also works with other ecosystems so we can learn and grow together and build a better industry.
Jan David Nose: Behind the scenes, colleagues constantly work to open doors for us as a relatively young language, so we can be part of those conversations and sit at the table with other ecosystems. That lets us learn from what others have already gone through and also help shape where things are going. Sustainability is a big part of that: how do we fund the project long term? How do we make sure we have the human resources and financial resources to run the infrastructure and support maintainers? I definitely underestimated how much of my job would be relationship management and budget planning, making sure credits last until new ones arrive.
Xander Cesari: Most open core business models give away the thing that doesn't cost much-the software-and charge for the thing that scales with use-the service. In Rust's case, it's all free, which is excellent for adoption, but it must require a very creative perspective on the business side.
Jan David Nose: Yeah, and that's where different forces pull in opposite directions. As an open source project, we want everyone to be able to use Rust for free. We want great user experience. When we talk about downloads, there are ways for us to make them much cheaper, but that might mean hosting everything in a single geographic location. Then everyone, including people in Australia, would have to download from, say, Europe, and their experience would get much worse.
Jan David Nose: Instead, we want to use services that are more expensive but provide a better experience for Rust users. There's a real tension there. On one side we want to do the best we can; on the other side we need to be realistic that this costs money.
Xander Cesari: I had been thinking of infrastructure as a binary: it either works or it doesn't. But you're right, it's a slider. You can pick how much money you want to spend and what quality of service you get. Are there new technologies coming, either for the Rust Infra Team or the packaging world in general, to help with these security problems? New sandboxing technologies or higher-level support?
Jan David Nose: A lot of people are working on this problem from different angles. Internally we've talked a lot about it, especially in the context of Crater. Crater pulls in all of those crates to build them and get feedback from the Rust compiler. That means if someone publishes malicious code, we will download it and build it.
Jan David Nose: In Rust this is a particular challenge because build scripts can essentially do anything on your machine. For us that means we need strong sandboxing. We've built our own sandboxing framework so every crate build runs in an isolated container, which prevents malicious code from escaping and messing with the host systems.
Jan David Nose: We feel that pain in Crater, but if we can solve it in a way that isn't exclusive to Crater-if it also protects user machines from the same vulnerabilities-that would be ideal. People like Walter on the Foundation side are actively working on that. I'm sure there are conversations in the Cargo and crates teams as well, because every team that deals with packages sees a different angle of the problem. We all have to come together to solve it, and there is a lot of interesting work happening in that area.
Xander Cesari: I hope help is coming.
Jan David Nose: I'm optimistic.
Xander Cesari: We have this exponential curve with traffic and everything else. It seems like at some point it has to taper off.
Jan David Nose: We'll see. Rust is a young language. I don't know when that growth will slow down. I think there's a good argument that it will continue for quite a while as adoption grows.
Jan David Nose: Being at a conference like RustConf, it's exciting to see how the mix of companies has changed over time. We had a talk from Rivian on how they use Rust in their cars. We've heard from other car manufacturers exploring it. Rust is getting into more and more applications that a few years ago would have been hard to imagine or where the language simply wasn't mature enough yet.
Jan David Nose: As that continues, I think we'll see new waves of growth that sustain the exponential curve we currently have, because we're moving into domains that are new for us. It's amazing to see who is talking about Rust and how they're using it, sometimes in areas like space that you wouldn't expect.
Jan David Nose: I'm very optimistic about Rust's future. With this increase in adoption, we'll see a lot of interesting lessons about how to use Rust and a lot of creative ideas from people building with it. With more corporate adoption, I also expect a new wave of investment into the ecosystem: companies paying people to work full-time on different parts of Rust, both in the ecosystem and in the core project. I'm very curious what the next ten years will look like, because I genuinely don't know.
Xander Cesari: The state of Rust right now does feel a bit like the dog that caught the car and now doesn't know what to do with it.
Jan David Nose: Yeah, I think that's a good analogy. Suddenly we're in a situation where we realize we haven't fully thought through every consequence of success. It's fascinating to see how the challenges change every year. We keep running into new growing pains where something that wasn't an issue a year ago suddenly becomes one because growth keeps going up.
Jan David Nose: We're constantly rebuilding parts of our infrastructure to keep up with that growth, and I don't see that stopping soon. As a user, that makes me very excited. With the language and the ecosystem growing at this pace, there are going to be very interesting things coming that I can't predict today.
Jan David Nose: For the project, it also means there are real challenges: financing the infrastructure we need, finding maintainers and contributors, and creating a healthy environment where people can work without burning out. There is a lot of work to be done, but it's an exciting place to be.
Xander Cesari: Well, thank you for all your work keeping those magic Cargo commands I can type into my terminal just working in the background. If there's any call to action from this interview, it's that if you're a company using Rust, maybe think about donating to keep the Infra Team working.
Jan David Nose: We always love new Rust Foundation members. Especially if you're a company, that's one of the best ways to support the work we do. Membership gives us a budget we can use either to fund people who work full-time on the project or to fill gaps in our infrastructure sponsorship where we don't get services for free and have to pay real money.
Jan David Nose: And if you're not a company, we're always looking for people to help out. The Infra Team has a lot of Rust-based bots and other areas where people can contribute relatively easily.
Xander Cesari: Small scoped bots that you can wrap your head around and help out with.
Jan David Nose: Exactly. It is a bit harder on the Infra side because we can't give people access to our cloud infrastructure. There are areas where it's simply not possible to contribute as a volunteer because you can't have access to the production systems. But there is still plenty of other work that can be done.
Jan David Nose: Like every other team in the project, we're a bit short-staffed. So when you're at conferences, come talk to me or Marco. We have work to do.
Xander Cesari: Well, thank you for doing the work that keeps Rust running.
Jan David Nose: I'm happy to.
Xander Cesari: Awesome. Thank you so much.
25 Nov 2025 12:00am GMT
24 Nov 2025
Firefox Nightly: Getting Better Every Day – These Weeks in Firefox: Issue 192
Highlights
- Collapsed tab group hover preview is going live in Firefox 145!
- Nicolas Chevobbe added a feature that collapses unreferenced CSS variable declarations in the Rules view (#1719461)
- Alexandre Poirot [:ochameau] added a setting to enable automatic pretty printing in the Debugger (#1994128)
- Improved performance on pages making heavy use of CSS variables
- Jared H added a "copy this profile" button to the app menu (bug 1992199)
Friends of the Firefox team
Resolved bugs (excluding employees)
Volunteers that fixed more than one bug
- Khalid AlHaddad
- Kyler Riggs [:kylr]
New contributors (🌟 = first patch)
- Alex Stout
- Khalid AlHaddad
- Jim Gong
- Mason Abbruzzese
- PhuongNam
- Thomas J Faughnan Jr
- Mingyuan Zhao [:MagentaManifold]
Project Updates
Add-ons / Web Extensions
WebExtensions Framework
- Fixed an issue that was preventing dynamic import from resolving moz-extension ES modules when called from content scripts attached to sandboxed sub frames - Bug 1988419
- Thanks to Yoshi Cheng-Hao Huang from the SpiderMonkey Team for investigating and fixing this issue affecting dynamic import usage from content scripts
Addon Manager & about:addons
- As a follow-up to the work to improve the extensions button panel's empty states, starting from Nightly 146 Firefox Desktop will show a message bar notice in both the extensions button panel and about:addons when Firefox is running in Troubleshoot Mode (also known as Safe Mode) and all add-ons are expected to be disabled, along with a "Learn more" link pointing the user to the SUMO page describing Troubleshoot Mode in more detail - Bug 1992983 / Bug 1994074 / Bug 1727828
DevTools
- gopi made the Rule view format grid-template-areas even when the value is invalid (#1940198)
- Emilio Cobos Álvarez fixed an issue where editing constructed rules in the shadow DOM would make them disappear (#1986702)
- Nicolas Chevobbe fixed a bug that would render erroneous data in the var() tooltip for variables defined in :host rule on shared stylesheet (#1995943)
- Julian Descottes improved inspector reload time when shadow DOM element was selected (#1986704)
- Hubert Boma Manilla fixed an issue where we could have duplicated inline preview when paused in the Debugger (#1994114)
- Nicolas Chevobbe [:nchevobbe] exposed devtools.inspector.showAllAnonymousContent in the settings panel (#1995333)
WebDriver
- Khalid added a dedicated switch_to_parent_frame method to the WebDriver Classic Python client, and renamed the existing switch_frame method to switch_to_frame for consistency with the WebDriver specification.
- Julian updated the network.getData command to return response bodies for requests using the data: scheme.
- Julian fixed a bug where different requests would reuse the same id, which could lead to unexpected behaviours when using commands targeting specific requests (e.g. network.provideResponse, network.getData etc…).
- Sasha updated the reset behaviour of the "emulation.setLocaleOverride" and "emulation.setTimezoneOverride" commands to align with recent spec changes. With this update, when calling these commands to reset the override for e.g. a browsing context, only that override is reset; if an override is also set for the user context containing that browsing context, that override will be applied instead.
Lint, Docs and Workflow
- ESLint
- We are working on rolling out automatically fixable JSDoc rules across the whole tree. The aim is to reduce the number of disabled rules in roll-outs and make it simpler to enable JSDoc rules in new areas.
- jsdoc/no-bad-blocks has now been enabled.
- jsdoc comments are required to have two stars at the start; this rule raises an issue if a comment looks like it should be a jsdoc comment (e.g. it contains an @ symbol) but starts with only one star.
- jsdoc/multiline-blocks has also been enabled.
- This is used mainly for layout consistency of multi-line comments, so that the text of the comment neither starts on the first line nor ends on the last line. This also helps with automatically fixing other rules.
- StyleLint
- More rules have been enabled - background-color tokens, space tokens, text-color tokens, box-shadow tokens
- A new rule has been added to prevent using browser/ css files in toolkit/
Migration Improvements
- We've disabled the IE migrator by default now, since IE (the separate browser, not the compatibility mode) stopped being supported by Microsoft in 2022. We will let this ride to release, and then begin the work of removing support entirely.
- To help users migrate their data off of Windows 10, we've revived the Backup effort, and have landed a number of fixes:
- Restores now preserve the user's default profile if it was default pre‑backup, and the prior profile is renamed to old-[profile name] for clarity. This prevents unexpected startup profiles after a restore and makes rollback obvious in Profile Manager.
- The restore file picker (restore modal and about:welcome restore) now opens at the detected backup location, cutting navigation friction and errors.
- The primary CTA label in about:preferences updates correctly ("Manage backup" → "Turn off backup") immediately after enabling, aligning UI state with functionality.
- The "Backup now" button is hidden until backup is enabled, avoiding a dead‑end action and guiding users through the correct setup sequence.
- Enterprise policy prefs were added for fxbackup, enabling admins on Windows/macOS/Linux to enforce/lock backup availability and behavior for managed users.
- Error and warning banners in about:preferences were updated to match spec for clearer state and failure messaging.
- The backup HTML archive support link now points to the correct documentation.
- Copy updates clarify what cookie data is included in backups, improving user expectations and privacy transparency.
New Tab Page
- We successfully train-hopped New Tab version 145.1.20251009.134757 to 100% of the release channel on October 20th!
- New Tab defaults and freshness: DiscoveryStream cache now expires when browser.newtabpage.activity-stream.discoverystream.sections.enabled changes, so toggling layouts updates content immediately. First‑run shows far fewer placeholders, improving perceived load. Startup correctness improves by keying the about:home startup cache on the newtab add-on version.
- Accessibility, keyboard, and RTL: Fixed a broken focus order where Settings jumped ahead of Weather. For Windows High Contrast Mode, story cards no longer disappear on hover and get clearer visuals. RTL locales now get intuitive reversed arrow-key navigation across story cards.
- Weather opt-in and reach: Opt-in flow now surfaces "Enable current location" and adds a "Detect my location" context-menu action; availability expands to more regions, reducing setup friction and increasing coverage.
- Visual polish and correctness: Standardized opacity plus hover/blur effects make story cards feel more responsive; made sure the search bar stays vertically centered while scrolling. Medium refined cards now show longer publisher names without affecting small cards.
- Wallpaper and language fixes: Missing custom wallpaper thumbnails now load reliably, and a friendly error state appears if Remote Settings wallpapers fail. The language switcher no longer lists add-on locales, restoring expected language selection.
Performance Tools (aka Firefox Profiler)
- Marker tooltips now have a 'filter' button to quickly filter the marker chart to similar markers:

- Link to the profile in the screenshot: https://share.firefox.dev/42kDTuf (and after filtering: https://share.firefox.dev/4gQHPsx)
- This is a resource usage profile of an xpcshell test job. To see them, select a test job in treeherder and press 'g'.
Profile Management
- Profiles is rolling out to all non-win10 users in 144, looking healthy so far
- Niklas refactored the BackupService to support using it to copy profiles (bug 1992203)
- Jared H added per-profile desktop shortcuts on Windows (bug 1958955), available via a toggle on the about:editprofile page
- Dave fixed an intermittent test crash in debug builds (bug 1994849) caused by a race between deleting a directory and attempting to open a lock file. nsProfileLock::LockWithFcntl now returns a warning instead of an error in this case.
Search and Navigation
- New Features
- We are working on enabling better search suggestions in the address bar (link to blog post).
- Mandy has rolled out Perplexity as a new engine to all users
- Google Lens is being rolled out to users in 144 with additional in-product demoing.
- Address Bar
- Daisuke has implemented a prototype for flight status suggestions @ 1990951 + 1994317
- Dale has been working on enabling the unified trust panel @ 1992940 + 1979713
- Dale introduced Option + Up / Down as a keyboard shortcut to open the unified search panel @ 1962200
- Moritz removed the code for "Add a keyword for this search" as it was deprecated functionality @ 1995002
- Search
- Mandy and Drew have been working on releasing the visual search + messaging @ 1995645
Storybook/Reusable Components/Acorn Design System
- <moz-message-bar> now supports arbitrary content with slot="message" elements
- Ideally this is still something short, like a message as opposed to inputs, etc
- <moz-message-bar><span slot="message" data-l10n-id="my-message"><a data-l10n-name="link"></a></span></moz-message-bar>
- Note: if you're using Lit, @click listeners etc. set on Fluent elements (data-l10n-name) won't work; you'll need to attach them to the data-l10n-id element or another parent
24 Nov 2025 8:26pm GMT
21 Nov 2025
Planet Mozilla
Niko Matsakis: Move Expressions
This post explores another proposal in the space of ergonomic ref-counting that I am calling move expressions. To my mind, these are an alternative to explicit capture clauses, one that addresses many (but not all) of the goals from that design with improved ergonomics and readability.
TL;DR
The idea itself is simple: within a closure (or future), we add the option to write move($expr). This is a value expression ("rvalue") that desugars into a temporary value that is moved into the closure. So
|| something(&move($expr))
is roughly equivalent to something like:
{
    let tmp = $expr;
    || something(&{tmp})
}
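For illustration, here is a compilable version of that desugaring in today's Rust (the function and variable names here are mine, not part of the proposal): the expression is evaluated eagerly into a temporary outside the closure, and the temporary is then moved in.

```rust
// A compilable sketch of the `move($expr)` desugaring in today's Rust.
// `name.to_string()` plays the role of `$expr`: it is evaluated once,
// eagerly, at closure creation time.
fn make_greeter(name: &str) -> impl Fn() -> String {
    let tmp = name.to_string(); // `let tmp = $expr;`
    move || format!("hello, {tmp}") // `tmp` is moved into the closure
}

fn main() {
    let greet = make_greeter("world");
    assert_eq!(greet(), "hello, world");
}
```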
How it would look in practice
Let's go back to one of our running examples, the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks. As a reminder, this is how the code looks today - note the let _some_value = ... lines for dealing with captures:
// task: listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
    do_something_else_with(_some_a, _some_b, _some_c)
});
Under this proposal it would look something like this:
tokio::task::spawn(async {
    do_something_else_with(
        move(self.some_a.clone()),
        move(self.some_b.clone()),
        move(self.some_c.clone()),
    )
});
There are times when you would want multiple clones. For example, if you want to move something into a FnMut closure that will then give away a copy on each call, it might look like
data_source_iter
    .inspect(|item| {
        inspect_item(item, move(tx.clone()).clone())
        //                 ----------------  -------
        //                        |             |
        //                 move a clone         |
        //                 into the closure     |
        //                                      |
        //                          clone the clone
        //                          on each iteration
    })
    .collect();

// some code that uses `tx` later...
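In today's Rust, the same pattern is spelled with an explicit clone before the closure. A minimal runnable sketch (the function and variable names are hypothetical, chosen just to illustrate the shape):

```rust
use std::sync::mpsc;

// Today's spelling of "move a clone into the FnMut closure, then clone
// the clone on each call": clone `tx` once up front, move that clone into
// the closure, and clone it again on every iteration to give a copy away.
fn fan_out(items: &[i32]) -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    let tx_for_closure = tx.clone(); // what `move(tx.clone())` would express
    items.iter().for_each(move |&item| {
        tx_for_closure.clone().send(item).unwrap(); // clone the clone
    });
    drop(tx); // close the original sender so the receiver's iterator ends
    rx.iter().collect()
}

fn main() {
    assert_eq!(fan_out(&[1, 2, 3]), vec![1, 2, 3]);
}
```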
Credit for this idea
This idea is not mine. It's been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it's come up before that. Most recently it was proposed by Zachary Harrold on Zulip, who has also created a prototype called soupa. Zachary's proposal, like earlier proposals I've heard, used the super keyword. Later on @simulacrum proposed using move, which to me is a major improvement, and that's the version I ran with here.
This proposal makes closures more "continuous"
The reason that I love the move variant of this proposal is that it makes closures more "continuous" and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach move closures at the end, as a convenient default:
A Rust closure captures the places you use in the "minimal way that it can" - so || vec.len() will capture a shared reference to the vec, || vec.push(22) will capture a mutable reference, and || drop(vec) will take ownership of the vector.

You can use move expressions to control exactly what is captured: so || move(vec).push(22) will move the vec into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:

|| {
    let vec = move(input.vec);       // take full ownership of vec
    let data = move(&cx.data);       // take a reference to data
    let output_tx = move(output_tx); // take ownership of the output channel
    process(&vec, &mut output_tx, data)
}

As a shorthand, you can write move || at the top of the closure, which changes the default so that the closure takes ownership of every captured variable. You can still mix-and-match with move expressions to get more control. So the previous closure might be written more concisely like so:

move || {
    process(&input.vec, &mut output_tx, move(&cx.data))
    //       ----------  ----------     --------
    //       |           |              |
    //       |           |              closure still
    //       |           |              captures a ref
    //       |           |              `&cx.data`
    //       |           |
    //       because of the `move` keyword on the closure,
    //       these two are captured "by move"
}
This proposal makes move "fit in" for me
It's a bit ironic that I like this, because it's doubling down on part of Rust's design that I was recently complaining about. In my earlier post on Explicit Capture Clauses I wrote that:
To be honest, I don't like the choice of
movebecause it's so operational. I think if I could go back, I would try to refashion our closures around two concepts
- Attached closures (what we now call
||) would always be tied to the enclosing stack frame. They'd always have a lifetime even if they don't capture anything.- Detached closures (what we now call
move ||) would capture by-value, likemovetoday.I think this would help to build up the intuition of "use
detach ||if you are going to return the closure from the current stack frame and use||otherwise".
move expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don't have "ref closures" and "move closures" - you just have closures that sometimes capture moves, and a "move" closure is just a shorthand for using move expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it's quite elegant.
Why not suffix?
One question is whether a move expression should be a prefix or a postfix operator. So e.g.
|| something(&$expr.move)
instead of &move($expr).
My feeling is that it's not a good fit for a postfix operator because it doesn't just take the final value of the expression and do something with it; it actually impacts when the entire expression is evaluated. Consider this example:
|| process(foo(bar()).move)
When does bar() get called? If you think about it, it has to be at closure creation time, but that's not very "obvious".
We reached a similar conclusion when we were considering .unsafe operators. I think there is a rule of thumb that things which delineate a "scope" of code ought to be prefix - though I suspect unsafe(expr) might actually be nice, and not just unsafe { expr }.
Edit: I added this section after-the-fact in response to questions.
Conclusion
I'm going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its simplicity and the way it generalizes Rust's existing design. I love that. To me, it joins the set of "yep, we should clearly do that" pieces in this puzzle:
- Add a Share trait (I've gone back to preferring the name share 😁)
- Add move expressions
These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in an earlier post:
"low-level enough for a Kernel, usable enough for a GUI"
but they are moving in the right direction.
21 Nov 2025 10:45am GMT
The Servo Blog: Servo Sponsorship Tiers
The Servo project is happy to announce the following new sponsorship tiers to encourage more donations to the project:
- Platinum: 10,000 USD/month
- Gold: 5,000 USD/month
- Silver: 1,000 USD/month
- Bronze: 100 USD/month
Organizations and individual sponsors donating in these tiers will be acknowledged on the servo.org homepage with their logo or name. Please note that such donations should come with no obligations to the project, i.e. they should be "no strings attached" donations. All the information about these new tiers is available at the Sponsorship page on this website.
Please contact us at join@servo.org if you are interested in sponsoring the project through one of these tiers.
Use of donations is decided transparently via the Technical Steering Committee's public funding request process, and active proposals are tracked in servo/project#187.
Last, but not least, we're excited to welcome our first bronze sponsor LambdaTest who has recently started donating to the Servo project. Thank you very much!
21 Nov 2025 12:00am GMT
20 Nov 2025
Planet Mozilla
Mozilla Localization (L10N): Localizer spotlight: Robb
About You
My profile in Pontoon is robbp, but I go by Robb. I'm based in Romania and have been contributing to Mozilla localization since 2018 - first between 2018 and 2020, and now again after a break. I work mainly on Firefox (desktop and mobile), Thunderbird, AMO, and SUMO. When I'm not volunteering for open-source projects, I work as a professional translator in Romanian, English, and Italian.
Getting Started
Q: How did you first get interested in localization? Do you remember how you got involved in Mozilla localization?
A: I've used Thunderbird for many years, and I never changed the welcome screen. I'd always see that invitation to contribute somehow.
Back in 2018, I was using freeware only - including Thunderbird - and I started feeling guilty that I wasn't giving back. I tried donating, but online payments seemed shady back then, and I thought a small, one-time donation wouldn't make a difference.
Around the same time, my mother kept asking questions like, "What is this trying to do on my phone? I think they're asking me something, but it's in English!" My generation learned English from TV, Cartoon Network, and software, but when the internet reached the older generation, I realized how big of a problem language barriers could be. I wasn't even aware that there was such a big wave of localizing everything seen on the internet. I was used to having it all in English (operating system, browser, e-mail client, etc.).
After translating for my mom for a year, I thought, why not volunteer to localize, too? Mozilla products were the first choice - Thunderbird was "in my face" all day, all night, telling me to go and localize. I literally just clicked the button on Thunderbird's welcome page - that's where it all started.
I had also tried contributing to other open-source projects, but Mozilla's Pontoon just felt more natural to me. The interface is very close to the CAT tools I am used to.
Your Localization Journey
Q: What do you do professionally? How does that experience influence your Mozilla work and motivate you to contribute to open-source localization?
A: I've been a professional translator since 2012. I work in English, Romanian, and Italian - so yes, I type all the time.
In Pontoon, I treat the work as any professional project. I check for quality, consistency, and tone - just like I would for a client.
I was never a writer. I love translating. That's why I became a translator (professionally). And here… I actually got more feedback here than in my professional translation projects. I think that's why I stayed for so long, that's why I came back.
It is a change of scenery when I don't localize professionally, a long way from the texts I usually deal with. This is where I unwind, where I translate for the joy of translation, where I find my translator freedom.
Q: At what moment did you realize that your work really mattered?
A: When my mom stopped asking me what buttons to click! Now she just uses her phone in Romanian. I can't help but smile when I see that. It makes me think I'm a tiny little part of that confidence she has now.
Community & Collaboration
Q: Since your return, Romanian coverage has risen from below 70% to above 90%. You translate, review suggestions, and comment on other contributors' work. What helps you stay consistent and motivated?
A: I set small goals - I like seeing the completion percentage climb. I celebrate every time I hit a milestone, even if it's just with a cup of coffee.
I didn't realize it was such a big deal until the localization team pointed it out. It's hard to see the bigger picture when you work in isolation. But it's the same motivation that got me started and brought me back - you just need to find what makes you hum.
Q: Do you conduct product testing after you localize the strings or do you test them by being an active user?
A: I'm an active user of both Firefox and Thunderbird - I use them daily and quite intensely. I also have Firefox Nightly installed in Romanian, and I like to explore it to see what's changed and where. But I'll admit, I'm not as thorough as I should be! Our locale manager gives me a heads-up about things to check which helps me stay on top of updates. I need to admit that the testing part is done by the team manager. He is actively monitoring everything that goes on in Pontoon and checks how strings in Pontoon land in the products and to the end users.
Q: How do you collaborate with other contributors and support new ones?
A: I'm more of an independent worker, but in Pontoon, I wanted to use the work that was already done by the "veterans" and see how I could fit in. We had email conversations over terms, their collaboration, their contributions, personal likes and dislikes etc. I think they actually did me a favor with the email conversations, given I am not active on any channels or social media and email was my only way of talking to them.
This year I started leaving comments in Pontoon - it's such an easy way to communicate directly on specific strings. Given I was limited to emails until now, I think comments will help me reach out to other members of the team and start collaborating with them, too.
I keep in touch with the Romanian managers by email or Telegram. One of them helps me with technical terms, he helped get the Firefox project to 100% before the deadline. He contacts me with information on how to use options (I didn't know about) in Pontoon and ideas on wording (after he tests and reviews strings). Collaboration doesn't always mean meetings; sometimes it's quiet cooperation over time.
Mentoring is a big word, but I'm willing for the willing. If someone reaches out, I'll always try to help.
Q: Have you noticed improvements in Pontoon since 2020? How does it compare to professional tools you use, and what features do you wish it had?
A: It's fast - and I love that.
There's no clutter - and that's a huge plus. Some of the "much-tooted" professional tools are overloaded with features and menus that slow you down instead of helping. Pontoon keeps things simple and focused.
I also appreciate being able to see translations in other languages. I often check the French and Italian versions, just to compare terms.
The comments section is another great feature - it makes collaboration quick and to the point, perfect for discussing terms or string-specific questions. Machine translation has also improved a lot across the board, and Pontoon is keeping pace.
As for things that could be better - I'd love to try the pre-translation feature, but I've noticed that some imported strings confirm the wrong suggestion out of several options. That's when a good translation-memory cleanup becomes necessary. It would be helpful if experienced contributors could trim the TM, removing obsolete or outdated terms so new contributors won't accidentally use them.
Pontoon sometimes lags when I move too quickly through strings - like when approving matches or applying term changes across projects. And, unlike professional CAT tools, it doesn't automatically detect repeated strings or propagate translations for identical text. That's a small but noticeable gap compared to professional tools.
Personal Reflections
Q: Professional translators often don't engage in open-source projects because their work is paid elsewhere. What could attract more translators - especially women - to contribute?
A: It's tricky. Translation is a profession, not a hobby, and people need to make a living.
But for me, working on open-source projects is something different - a way to learn new things, use different tools, and have a different mindset. Maybe if more translators saw it as a creative outlet instead of extra work, they'd give it a try.
Involvement in open source is a personal choice. First, one has to hear about it, understand it, and realize that the software they use for free is made by people - then decide they want to be part of that.
I don't think it's a women's thing. Many come and many go. Maybe it's just the thrill at the beginning. Some try, but maybe translation is not for them…
Q: What does contributing to Mozilla mean to you today?
A: It's my way of giving back - and of helping people like my mom, who just want to understand new technology without fear or confusion. That thought makes me smile every time I open Firefox or Thunderbird.
Q: Any final words…
A: I look forward to more blogs featuring fellow contributors and learning and being inspired from their personal stories.
20 Nov 2025 6:46pm GMT
The Mozilla Blog: Rewiring Mozilla: Doing for AI what we did for the web

AI isn't just another tech trend - it's at the heart of most apps, tools and technology we use today. It enables remarkable things: new ways to create and collaborate and communicate. But AI is also letting us down, filling the internet with slop, creating huge social and economic risks - and further concentrating power over how tech works in the hands of a few.
This leaves us with a choice: push the trajectory of AI in a direction that's good for humanity - or just let the slop pour out and the monopolies grow. For Mozilla, the choice is clear. We choose humanity.
Mozilla has always been focused on making the internet a better place. Which is why pushing AI in a different direction than it's currently headed is the core focus of our strategy right now. As AI becomes a fundamental component of everything digital - everything people build on the internet - it's imperative that we step in to shape where it goes.
This post is the first in a series that will lay out Mozilla's evolving strategy to do for AI what we did for the web.
What did we do for the web?
Twenty-five years ago, Microsoft Internet Explorer had 95% browser market share - controlling how most people saw the internet, and who could build what and on what terms. Mozilla was born to change this. Firefox challenged Microsoft's monopoly control of the web, and dropped Internet Explorer's market share to 55% in just a few short years.
The result was a very different internet. For most people, the internet was different because Firefox made it faster and richer - and blocked the annoying pop up ads that were pervasive at the time. It did even more for developers: Firefox was a rocketship for the growth of open standards and open source, decentralizing who controlled the technology used to build things on the internet. This ushered in the web 2.0 era.
How did Mozilla do this? By building a non-profit tech company around the values in the Mozilla Manifesto - values like privacy, openness and trust. And by gathering a global community of tens of thousands - a rebel alliance of sorts - to build an alternative to the big tech behemoth of the time.
What does success look like?
This is what we intend to do again: grow an alliance of people, communities, companies who envision - and want to build - a different future for AI.
What does 'different' look like? There are millions of good answers to this question. If your native tongue isn't a major internet language like English or Chinese, it might be AI that has nuance in the language you speak. If you are a developer or a startup, it might be having open source AI building blocks that are affordable, flexible and let you truly own what you create. And if you are, well, anyone, it's probably apps and services that become more useful and delightful as they add AI - and that are genuinely trustworthy and respectful of who we are as humans. The common threads: agency, diversity, choice.
Our task is to create a future for AI that is built around these values. We've started to rewire Mozilla to take on this task - and developed a new strategy focused just as much on AI as it is on the web. At the heart of this strategy is a double bottom line framework - a way to measure our progress against both mission and money:
| Double bottom line | In the world | In Mozilla |
| Mission | Empower people with tech that promotes agency and choice - make AI for and about people. | Build AI that puts humanity first. 100% of Mozilla orgs building AI that advances the Mozilla Manifesto. |
| Money | Decentralize the tech industry - and create a tech ecosystem where the 'people part' of AI can flourish. | Radically diversify our revenue. 20% yearly growth in non-search revenue. 3+ companies with $25M+ revenue. |
Mozilla has always had an implicit double bottom line. The strategy we developed this year makes this double bottom line explicit - and ties it back to making AI more open and trustworthy. Over the next three years, all of the organizations in Mozilla's portfolio will design their strategies - and measure their success - against this double bottom line.
What will we build?
As we've rewired Mozilla, we've not only laid out a new strategy - we have also brought in new leaders and expanded our portfolio of responsible tech companies. This puts us on a strong footing. The next step is the most important one: building new things - real technology and products and services that start to carve a different path for AI.
While it is still early days, all of the organizations across Mozilla are well underway with this piece of the puzzle. Each is focused on at least one of three areas of focus in our strategy:
| Open source AI - for developers |
Public interest AI - by and for communities |
Trusted AI experiences - for everyone |
| Focus: grow a decentralized open source AI ecosystem that matches the capabilities of Big AI - and that enables people everywhere to build with AI on their own terms. | Focus: work with communities everywhere to build technology that reflects their vision of how AI and tech should work, especially where the market won't build it for them. | Focus: create trusted AI-driven products that give people new ways to interact with the web - with user choice and openness as guiding principles. |
| Early examples: Mozilla.ai's Choice First Stack, a unified open-source stack that simplifies building and testing modern AI agents. Also, llamafile for local AI. | Early examples: the Mozilla Data Collective, home to Common Voice, which makes it possible to train and tune AI models in 300+ languages, accents and dialects. | Early examples: recent Firefox AI experiments, which will evolve into AI Window in early 2026 - offering an opt-in way to choose models and add AI features in a browser you trust. |
The classic versions of Firefox and Thunderbird are still at the heart of what Mozilla does. These remain our biggest areas of investment - and neither of these products will force you to use AI. At the same time, you will see much more from Mozilla on the AI front in coming years. And, you will see us invest in other double bottom line companies trying to point AI in a better direction.
We need to do this - together
These are the stakes: if we can't push AI in a better direction, the internet - a place where 6 billion of us now spend much of our lives - will get much much worse. If we want to shape the future of the web and the internet, we also need to shape the future of AI.
For Mozilla, whether or not to tackle this challenge isn't a question anymore. We need to do this. The question is: how? The high level strategy that I've laid out is our answer. It doesn't prescribe all the details - but it does give us a direction to point ourselves and our resources. Of course, we know there is still a HUGE amount to figure out as we build things - and we know that we can't do this alone.
Which means it's incredibly important to figure out: who can we walk beside? Who are our allies? There is a growing community of people who believe the internet is alive and well - and who are dedicating themselves to bending the future of AI to keep it that way. They may not all use the same words or be building exactly the same thing, but a rebel alliance of sorts is gathering. Mozilla sees itself as part of this alliance. Our plan is to work with as many of you as possible. And to help the alliance grow - and win - just as we did in the web era.
You can read the full strategy document here. Next up in this series: Building A LAMP Stack for AI. Followed by: A Double Bottom Line for Tech and The Mozilla Manifesto in the Era of AI.
The post Rewiring Mozilla: Doing for AI what we did for the web appeared first on The Mozilla Blog.
20 Nov 2025 3:00pm GMT
Mozilla Thunderbird: Thunderbird Pro November 2025 Update

Welcome back to the latest update on our progress with Thunderbird Pro, a set of additional subscription services designed to enhance the email client you know, while providing a powerful open-source alternative to many of the big tech offerings available today. These services include Appointment, an easy-to-use scheduling tool; Send, which offers end-to-end encrypted file sharing; and Thundermail, an email service from the Thunderbird team. If you'd like more information on the broader details of each service and the road to getting here, you can read our past series of updates here. Do you want to receive these and other updates and be the first to know when Thunderbird Pro is available? Be sure to sign up for the waitlist.
With that said, here's how progress has shaped up on Thunderbird Pro since the last update.
Current Progress
Thundermail
It took a lot of work to get here, but Thundermail accounts are now in production testing. Internal testing with our own team members has begun, ensuring everything is in place for support and onboarding of the Early Bird wave of users. On the visual side, we've implemented improved designs for the new Thundermail dashboard, where users can view and edit their settings, including adding custom domains and aliases.

The new Thunderbird Pro add-on now features support for Thundermail, which will allow future users who sign up through the add-on to automatically add their Thundermail account in Thunderbird. Work to boost infrastructure and security has also continued, and we've migrated our data hosting from the Americas to Germany and the EU where possible. We've also been improving our email delivery to reduce the chances of Thundermail messages landing in spam folders.

Appointment
The team has been busy with design work, getting Zoom and CalDAV better integrated, and addressing workflow, infrastructure, and bugs. Appointment received a major visual update in the past few months, which is being applied across all of Thunderbird Pro. While some of these updates have already been implemented, there's still lots of remodelling happening and under discussion - all in preparation for the Early Bird beta release.

Send
One of the main focuses for Send has been migrating it from its own add-on to the new Thunderbird Pro add-on, which will make using it in Thunderbird desktop much smoother. Progress continues on improving file safety through better reporting and prevention of illegal uploads. Our security review is now complete: an external assessor has validated the issues scheduled for fixing, and once those fixes are finalized, the report will be shared publicly with our community. Finally, we've refined the Send user experience by optimizing mobile performance, improving upload and download speeds, enhancing the first-time user flow, and much more.

Bringing it all together
Our new Thunderbird Pro website is now live, marking a major milestone in bringing the project to life. The website offers more details about Thunderbird Pro and serves as the first step for users to sign up, sign in and access their accounts.
Our initial subscription tier, the Early Bird Plan, priced at $9 per month, will include all three services: Thundermail, Send, and Appointment. Email hosting, file storage, and the security behind all of this come at a cost, and Thunderbird Pro will never be funded by selling user data, showing ads, or compromising its independence. This introductory rate directly supports Thunderbird Pro's early development and growth, positioning it for long-term sustainability. We will also be actively listening to your feedback and reviewing the pricing and plans we offer. Once the rough edges are smoothed out and we're ready to open the doors to everyone, we plan to introduce additional tiers to better meet the needs of all our users.
What's next
Thunderbird Pro is now awaiting its initial closed test run, which will include a core group of community contributors. This group will help conduct a broader test and identify critical issues before we gradually open Early Bird access to our waitlist subscribers in waves. While these services will still be under active development, this early release will, with your help, let us continue to test and refine them for all future users.
Be sure you sign up for our Early Bird waitlist at tb.pro and help us shape the future of Thunderbird Pro. See you soon!
The post Thunderbird Pro November 2025 Update appeared first on The Thunderbird Blog.
20 Nov 2025 12:00pm GMT
The Rust Programming Language Blog: Switching to Rust's own mangling scheme on nightly
TL;DR: Starting in nightly-2025-11-21, rustc will use its own "v0" mangling scheme by default on nightly versions, replacing the previous default, which re-used C++'s mangling scheme.
Context
When Rust is compiled into object files and binaries, each item (functions, statics, etc) must have a globally unique "symbol" identifying it.
In C, the symbol name of a function is just the name that the function was defined with, such as strcmp. This is straightforward and easy to understand, but requires that each item have a globally unique name that doesn't overlap with any symbols from libraries that it is linked against. If two items had the same symbol, then the linker would not know which one to use when resolving that symbol to an address in memory (of a function, say).
Languages like Rust and C++ define "symbol mangling schemes", leveraging information from the type system to give each item a unique symbol name. Without this, it would be possible to produce clashing symbols in a variety of ways - for example, every instantiation of a generic or templated function (or an overload in C++), which all have the same name in the surface language, would end up with clashing symbols; likewise, the same name in different modules, such as a::foo and b::foo, would produce clashing symbols.
Rust originally used a symbol mangling scheme based on the Itanium ABI's name mangling scheme used by C++ (sometimes). Over the years, it was extended in an inconsistent and ad-hoc way to support Rust features that the mangling scheme wasn't originally designed for. Rust's current legacy mangling scheme has a number of drawbacks:
- Information about generic parameter instantiations is lost during mangling
- It is internally inconsistent - some paths use an Itanium ABI-style encoding but some don't
- Symbol names can contain `.` characters, which aren't supported on all platforms
- Symbol names include an opaque hash which depends on compiler internals and can't be easily replicated by other compilers or tools
- There is no straightforward way to differentiate between Rust and C++ symbols
If you've ever tried to use Rust with a debugger or a profiler and found it hard to work with because you couldn't work out which functions were which, it's probably because information was being lost in the mangling scheme.
Rust's compiler team started working on our own mangling scheme back in 2018 with RFC 2603 (see the "v0 Symbol Format" chapter in the rustc book for our current documentation on the format). Our "v0" mangling scheme has multiple advantageous properties:
- An unambiguous encoding for everything that can end up in a binary's symbol table
- Information about generic parameters is encoded in a reversible way
- Mangled symbols are decodable such that it should be possible to identify concrete instances of generic functions
- It doesn't rely on compiler internals
- Symbols are restricted to only `A-Z`, `a-z`, `0-9` and `_`, helping ensure compatibility with tools on varied platforms
- It tries to stay efficient and avoid unnecessarily long names and computationally-expensive decoding
However, rustc is not the only tool that interacts with Rust symbol names: the aforementioned debuggers, profilers and other tools all need to be updated to understand Rust's v0 symbol mangling scheme so that Rust's users can continue to work with Rust binaries using all the tools they're used to without having to look at mangled symbols. Furthermore, all of those tools need to have new releases cut and then those releases need to be picked up by distros. This takes time!
Fortunately, the compiler team now believes that support for the v0 mangling scheme is sufficiently widespread that rustc can start using it by default.
Benefits
Rust backtraces, as well as debuggers, profilers and other tools that operate on compiled Rust code, will be able to output much more useful and readable names. This will especially help with async code, closures and generic functions.
It's easy to see the new mangling scheme in action; consider the following example:
With the legacy mangling scheme, all of the useful information about the generic instantiation of foo is lost in the symbol f::foo..
thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
0: std::panicking::begin_panic
at /rustc/d6c...582/library/std/src/panicking.rs:769:5
1: f::foo
2: f::main
3: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
..but with the v0 mangling scheme, the useful details of the generic instantiation are preserved with f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>:
thread 'main' panicked at f.rs:2:5:
explicit panic
stack backtrace:
0: std::panicking::begin_panic
at /rustc/d6c...582/library/std/src/panicking.rs:769:5
1: f::foo::<alloc::vec::Vec<(alloc::string::String, &[u8; 123])>>
2: f::main
3: <fn() as core::ops::function::FnOnce<()>>::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Possible drawbacks
Symbols using the v0 mangling scheme can be larger than symbols with the legacy mangling scheme, which can result in a slight increase in linking times and binary sizes if symbols aren't stripped (which they aren't by default). Fortunately this impact should be minor, especially with modern linkers like lld, which Rust will now default to on some targets.
Some old versions of tools/distros, or niche tools that the compiler team are unaware of, may not have had support for the v0 mangling scheme added. When using these tools, the only consequence is that users may encounter mangled symbols; rustfilt can be used to demangle Rust symbols when a tool cannot.
In any case, using the new mangling scheme can be disabled if any problem occurs: use the -Csymbol-mangling-version=legacy -Zunstable-options flag to revert to using the legacy mangling scheme.
Explicitly enabling the legacy mangling scheme requires nightly; it is not intended to be stabilised, so that support for it can eventually be removed.
Adding v0 support in your tools
If you maintain a tool that interacts with Rust symbols and does not support the v0 mangling scheme, there are Rust and C implementations of a v0 symbol demangler available in the rust-lang/rustc-demangle repository that can be integrated into your project.
Summary
rustc will use our "v0" mangling scheme on nightly for all targets starting in tomorrow's rustup nightly (nightly-2025-11-21).
Let us know if you encounter problems, by opening an issue on GitHub.
If that happens, you can use the legacy mangling scheme with the -Csymbol-mangling-version=legacy -Zunstable-options flag, either by adding it to the usual RUSTFLAGS environment variable or to a project's .cargo/config.toml configuration file, like so:
[build]
rustflags = ["-Csymbol-mangling-version=legacy", "-Zunstable-options"]
If you like the sound of the new symbol mangling version and would like to start using it on stable or beta channels of Rust, then you can similarly use the -Csymbol-mangling-version=v0 flag today via RUSTFLAGS or .cargo/config.toml:
[build]
rustflags = ["-Csymbol-mangling-version=v0"]
20 Nov 2025 12:00am GMT
19 Nov 2025
Planet Mozilla
Nick Fitzgerald: A Function Inliner for Wasmtime and Cranelift
Note: I cross-posted this to the Bytecode Alliance blog.
Function inlining is one of the most important compiler optimizations, not because of its direct effects, but because of the follow-up optimizations it unlocks. It may reveal, for example, that an otherwise-unknown function parameter value is bound to a constant argument, which makes a conditional branch unconditional, which in turn exposes that the function will always return the same value. Inlining is the catalyst of modern compiler optimization.
Wasmtime is a WebAssembly runtime that focuses on safety and fast Wasm execution. But despite that focus on speed, Wasmtime has historically chosen not to perform inlining in its optimizing compiler backend, Cranelift. There were two reasons for this surprising decision: first, Cranelift is a per-function compiler designed such that Wasmtime can compile all of a Wasm module's functions in parallel. Inlining is inter-procedural and requires synchronization between function compilations; that synchronization reduces parallelism. Second, Wasm modules are generally produced by an optimizing toolchain, like LLVM, that already did all the beneficial inlining. Any calls remaining in the module will not benefit from inlining - perhaps they are on slow paths marked [[unlikely]] or the callee is annotated with #[inline(never)]. But WebAssembly's component model changes this calculus.
With the component model, developers can compose multiple Wasm modules - each produced by different toolchains - into a single program. Those toolchains only had a local view of the call graph, limited to their own module, and they couldn't see cross-module or fused adapter function definitions. None of them, therefore, had an opportunity to inline calls to such functions. Only the Wasm runtime's compiler, which has the final, complete call graph and function definitions in hand, has that opportunity.
Therefore we implemented function inlining in Wasmtime and Cranelift. Its initial implementation landed in Wasmtime version 36; however, it remains off by default and is still baking. You can test it out via the -C inlining=y command-line flag or the wasmtime::Config::compiler_inlining method. The rest of this article describes function inlining in more detail, digs into the guts of our implementation and the rationale for its design choices, and finally looks at some early performance results.
Function Inlining
Function inlining is a compiler optimization where a call to a function f is replaced by a copy of f's body. This removes function call overheads (spilling caller-save registers, setting up the call frame, etc…) which can be beneficial on its own. But inlining's main benefits are indirect: it enables subsequent optimization of f's body in the context of the call site. That context is important - a parameter's previously unknown value might be bound to a constant argument and exposing that to the optimizer might cascade into a large code clean up.
Consider the following example, where function g calls function f:
fn f(x: u32) -> bool {
return x < u32::MAX / 2;
}
fn g() -> u32 {
let a = 42;
if f(a) {
return a;
} else {
return 0;
}
}
After inlining the call to f, function g looks something like this:
fn g() -> u32 {
let a = 42;
let x = a;
let f_result = x < u32::MAX / 2;
if f_result {
return a;
} else {
return 0;
}
}
Now the whole subexpression that defines f_result only depends on constant values, so the optimizer can replace that subexpression with its known value:
fn g() -> u32 {
let a = 42;
let f_result = true;
if f_result {
return a;
} else {
return 0;
}
}
This reveals that the if-else conditional will, in fact, unconditionally transfer control to the consequent, and g can be simplified into the following:
fn g() -> u32 {
let a = 42;
return a;
}
In isolation, inlining f was a marginal transformation. When considered holistically, however, it unlocked a plethora of subsequent simplifications that ultimately led to g returning a constant value rather than computing anything at run-time.
Implementation
Cranelift's unit of compilation is a single function, which Wasmtime leverages to compile each function in a Wasm module in parallel, speeding up compile times on multi-core systems. But inlining a function at a particular call site requires that function's definition, which implies parallelism-hurting synchronization or some other compromise, like additional read-only copies of function bodies. So this was the first goal of our implementation: to preserve as much parallelism as possible.
Additionally, although Cranelift is primarily developed for Wasmtime by Wasmtime's developers, it is independent from Wasmtime. It is a reusable library and is reused, for example, by the Rust project as an alternative backend for rustc. But a large part of inlining, in practice, is the heuristics for deciding when inlining a call is likely beneficial, and those heuristics can be domain-specific. Wasmtime generally wants to leave most calls out-of-line, inlining only cross-module calls, while rustc wants something much more aggressive to boil away its Iterator combinators and the like. So our second implementation goal was to separate how we inline a function call from the decision of whether to inline that call.
These goals led us to a layered design where Cranelift has an optional inlining pass, but the Cranelift embedder (e.g. Wasmtime) must provide a callback to it. The inlining pass invokes the callback for each call site, and the callback returns a command: either "leave the call as-is" or "here is a function body, replace the call with it". Cranelift is responsible for the inlining transformation and the embedder is responsible for deciding whether to inline a function call and, if so, getting that function's body (along with whatever synchronization that requires).
The mechanics of the inlining transformation - wiring arguments to parameters, renaming values, and copying instructions and basic blocks into the caller - are, well, mechanical. Cranelift makes extensive use of arenas for various entities in its IR, and we begin by appending the callee's arenas to the caller's arenas, renaming entity references from the callee's arena indices to their new indices in the caller's arenas as we do so. Next we copy the callee's block layout into the caller and replace the original call instruction with a jump to the caller's inlined version of the callee's entry block. Cranelift uses block parameters, rather than phi nodes, so the call arguments simply become jump arguments. Finally, we translate each instruction from the callee into the caller. This is done via a pre-order traversal to ensure that we process value definitions before value uses, simplifying instruction operand rewriting. The changes to Wasmtime's compilation orchestration are more interesting.
The following pseudocode describes Wasmtime's compilation orchestration before Cranelift gained an inlining pass and also when inlining is disabled:
// Compile each function in parallel.
let objects = parallel map for func in wasm.functions {
compile(func)
};
// Combine the functions into one region of executable memory, resolving
// relocations by mapping function references to PC-relative offsets.
return link(objects)
The naive way to update that process to use Cranelift's inlining pass might look something like this:
// Optionally perform some pre-inlining optimizations in parallel.
parallel for func in wasm.functions {
pre_optimize(func);
}
// Do inlining sequentially.
for func in wasm.functions {
func.inline(|f| if should_inline(f) {
Some(wasm.functions[f])
} else {
None
})
}
// And then proceed as before.
let objects = parallel map for func in wasm.functions {
compile(func)
};
return link(objects)
Inlining is performed sequentially, rather than in parallel, which is a bummer. But if we tried to make that loop parallel by logically running each function's inlining pass in its own thread, then a callee function we are inlining might or might not have had its transitive function calls inlined already depending on the whims of the scheduler. That leads to non-deterministic output, and our compilation must be deterministic, so it's a non-starter.1 But whether a function has already had transitive inlining done or not leads to another problem.
With this naive approach, we are either limited to one layer of inlining or else potentially duplicating inlining effort, repeatedly inlining e into f each time we inline f into g, h, and i. This is because f may come before or after g in our wasm.functions list. We would prefer it if f already contained e and was already optimized accordingly, so that every caller of f didn't have to redo that same work when inlining calls to f.
This suggests we should topologically sort our functions based on their call graph, so that we inline in a bottom-up manner, from leaf functions (those that do not call any others) towards root functions (those that are not called by any others, typically main and other top-level exported functions). Given a topological sort, we know that whenever we are inlining f into g either (a) f has already had its own inlining done or (b) f and g participate in a cycle. Case (a) is ideal: we aren't repeating any work because it's already been done. Case (b), when we find cycles, means that f and g are mutually recursive. We cannot fully inline recursive calls in general (just as you cannot fully unroll a loop in general) so we will simply avoid inlining these calls.2 So topological sort avoids repeating work, but our inlining phase is still sequential.
At the heart of our proposed topological sort is a call graph traversal that visits callees before callers. To parallelize inlining, you could imagine that, while traversing the call graph, we track how many still-uninlined callees each caller function has. Then we batch all functions whose associated counts are currently zero (i.e. they aren't waiting on anything else to be inlined first) into a layer and process them in parallel. Next, we decrement each of their callers' counts and collect the next layer of ready-to-go functions, continuing until all functions have been processed.
let call_graph = CallGraph::new(wasm.functions);
let counts = { f: call_graph.num_callees_of(f) for f in wasm.functions };
let layer = [ f for f in wasm.functions if counts[f] == 0 ];
while layer is not empty {
parallel for func in layer {
func.inline(...);
}
let next_layer = [];
for func in layer {
for caller in call_graph.callers_of(func) {
counts[caller] -= 1;
if counts[caller] == 0 {
next_layer.push(caller)
}
}
}
layer = next_layer;
}
This algorithm will leverage available parallelism, and it avoids repeating work via the same dependency-based scheduling that topological sorting did, but it has a flaw. It will not terminate when it encounters recursion cycles in the call graph. If function f calls function g which also calls f, for example, then it will not schedule either of them into a layer because they are both waiting for the other to be processed first. One way we can avoid this problem is by avoiding cycles.
If you partition a graph's nodes into disjoint sets, where each set contains every node reachable from every other node in that set, you get that graph's strongly-connected components (SCCs). If a node does not participate in a cycle, then it will be in its own singleton SCC. The members of a cycle, on the other hand, will all be grouped into the same SCC, since those nodes are all reachable from each other.
In the following example, the dotted boxes designate the graph's SCCs:
Ignoring edges between nodes within the same SCC, and only considering edges across SCCs, gives us the graph's condensation. The condensation is always acyclic, because the original graph's cycles are "hidden" within the SCCs.
Here is the condensation of the previous example:
We can adapt our parallel-inlining algorithm to operate on strongly-connected components, and now it will correctly terminate because we've removed all cycles. First, we find the call graph's SCCs and create the reverse (or transpose) condensation, where an edge a→b is flipped to b→a. We do this because we will query this graph to find the callers of a given function f, not the functions that f calls. I am not aware of an existing name for the reverse condensation, so, at Chris Fallin's brilliant suggestion, I have decided to call it an evaporation. From there, the algorithm largely remains as it was before, although we keep track of counts and layers by SCC rather than by function.
let call_graph = CallGraph::new(wasm.functions);
let components = StronglyConnectedComponents::new(call_graph);
let evaporation = Evaporation::new(components);
let counts = { c: evaporation.num_callees_of(c) for c in components };
let layer = [ c for c in components if counts[c] == 0 ];
while layer is not empty {
parallel for func in scc in layer {
func.inline(...);
}
let next_layer = [];
for scc in layer {
for caller_scc in evaporation.callers_of(scc) {
counts[caller_scc] -= 1;
if counts[caller_scc] == 0 {
next_layer.push(caller_scc);
}
}
}
layer = next_layer;
}
This is the algorithm we use in Wasmtime, modulo minor tweaks here and there to engineer some data structures and combine some loops. After parallel inlining, the rest of the compiler pipeline continues in parallel for each function, yielding unlinked machine code. Finally, we link all that together and resolve relocations, same as we did previously.
Heuristics are the only implementation detail left to discuss, but there isn't much to say that hasn't already been said. Wasmtime prefers not to inline calls within the same Wasm module, while cross-module calls are a strong hint that we should consider inlining. Beyond that, our heuristics are extremely naive at the moment, and only consider the code sizes of the caller and callee functions. There is a lot of room for improvement here, and we intend to make those improvements on-demand as people start playing with the inliner. For example, there are many things we don't consider in our heuristics today, but possibly should:
- Hints from WebAssembly's compilation-hints proposal
- The number of edges to a callee function in the call graph
- Whether any of a call's arguments are constants
- Whether the call is inside a loop or a block marked as "cold"
- Etc…
Some Initial Results
The speed-up you get (or don't get) from enabling inlining will vary from program to program. Here are a couple of synthetic benchmarks.
First, let's investigate the simplest case possible, a cross-module call of an empty function in a loop:
(component
;; Define one module, exporting an empty function `f`.
(core module $M
(func (export "f")
nop
)
)
;; Define another module, importing `f`, and exporting a function
;; that calls `f` in a loop.
(core module $N
(import "m" "f" (func $f))
(func (export "g") (param $counter i32)
(loop $loop
;; When counter is zero, return.
(if (i32.eq (local.get $counter) (i32.const 0))
(then (return)))
;; Do our cross-module call.
(call $f)
;; Decrement the counter and continue to the next iteration
;; of the loop.
(local.set $counter (i32.sub (local.get $counter)
(i32.const 1)))
(br $loop))
)
)
;; Instantiate and link our modules.
(core instance $m (instantiate $M))
(core instance $n (instantiate $N (with "m" (instance $m))))
;; Lift and export the looping function.
(func (export "g") (param "n" u32)
(canon lift (core func $n "g"))
)
)
We can inspect the machine code that this compiles down to via the wasmtime compile and wasmtime objdump commands. Let's focus only on the looping function. Without inlining, we see a loop around a call, as we would expect:
00000020 wasm[1]::function[1]:
;; Function prologue.
20: pushq %rbp
21: movq %rsp, %rbp
;; Check for stack overflow.
24: movq 8(%rdi), %r10
28: movq 0x10(%r10), %r10
2c: addq $0x30, %r10
30: cmpq %rsp, %r10
33: ja 0x89
;; Allocate this function's stack frame, save callee-save
;; registers, and shuffle some registers.
39: subq $0x20, %rsp
3d: movq %rbx, (%rsp)
41: movq %r14, 8(%rsp)
46: movq %r15, 0x10(%rsp)
4b: movq 0x40(%rdi), %rbx
4f: movq %rdi, %r15
52: movq %rdx, %r14
;; Begin loop.
;;
;; Test our counter for zero and break out if so.
55: testl %r14d, %r14d
58: je 0x72
;; Do our cross-module call.
5e: movq %r15, %rsi
61: movq %rbx, %rdi
64: callq 0
;; Decrement our counter.
69: subl $1, %r14d
;; Continue to the next iteration of the loop.
6d: jmp 0x55
;; Function epilogue: restore callee-save registers and
;; deallocate this function's stack frame.
72: movq (%rsp), %rbx
76: movq 8(%rsp), %r14
7b: movq 0x10(%rsp), %r15
80: addq $0x20, %rsp
84: movq %rbp, %rsp
87: popq %rbp
88: retq
;; Out-of-line traps.
89: ud2
╰─╼ trap: StackOverflow
When we enable inlining, then M::f gets inlined into N::g. Despite N::g becoming a leaf function, we will still push %rbp and all that in the prologue and pop it in the epilogue, because Wasmtime always enables frame pointers. But because it no longer needs to shuffle values into ABI argument registers or allocate any stack space, it doesn't need to do any explicit stack checks, and nearly all the rest of the code also goes away. All that is left is a loop decrementing a counter to zero:3
00000020 wasm[1]::function[1]:
;; Function prologue.
20: pushq %rbp
21: movq %rsp, %rbp
;; Loop.
24: testl %edx, %edx
26: je 0x34
2c: subl $1, %edx
2f: jmp 0x24
;; Function epilogue.
34: movq %rbp, %rsp
37: popq %rbp
38: retq
With this simplest of examples, we can just count the difference in number of instructions in each loop body:
- 12 without inlining (7 in N::g and 5 in M::f, which are 2 to push the frame pointer, 2 to pop it, and 1 to return)
- 4 with inlining
But we might as well verify that the inlined version really is faster via some quick-and-dirty benchmarking with hyperfine. This won't measure only Wasm execution time; it also measures spawning a whole Wasmtime process, loading code from disk, etc…, but it will work for our purposes if we crank up the number of iterations:
$ hyperfine \
"wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm" \
"wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm"
Benchmark 1: wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm
Time (mean ± σ): 138.2 ms ± 9.6 ms [User: 132.7 ms, System: 6.7 ms]
Range (min … max): 128.7 ms … 167.7 ms 19 runs
Benchmark 2: wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm
Time (mean ± σ): 37.5 ms ± 1.1 ms [User: 33.0 ms, System: 5.8 ms]
Range (min … max): 35.7 ms … 40.8 ms 77 runs
Summary
'wasmtime run --allow-precompiled -Cinlining=y --invoke 'g(100000000)' yes-inline.cwasm' ran
3.69 ± 0.28 times faster than 'wasmtime run --allow-precompiled -Cinlining=n --invoke 'g(100000000)' no-inline.cwasm'
Okay so if we measure Wasm doing almost nothing but empty function calls and then we measure again after removing function call overhead, we get a big speed up - it would be disappointing if we didn't! But maybe we can benchmark something a tiny bit more realistic.
A program that we commonly reach for when benchmarking is a small wrapper around the pulldown-cmark markdown library that parses the CommonMark specification (which is itself written in markdown) and renders that to HTML. This is Real World™ code operating on Real World™ inputs that matches Real World™ use cases people have for Wasm. That is, good benchmarking is incredibly difficult, but this program is nonetheless a pretty good candidate for inclusion in our corpus. There's just one hiccup: in order for our inliner to activate normally, we need a program using components and making cross-module calls, and this program doesn't do that. But we don't have a good corpus of such benchmarks yet because this kind of component composition is still relatively new, so let's keep using our pulldown-cmark program but measure our inliner's effects via a more circuitous route.
Wasmtime has tunables to enable the inlining of intra-module calls4 and rustc and LLVM have tunables for disabling inlining5. Therefore we can roughly estimate the speed ups our inliner might unlock on a similar, but extensively componentized and cross-module calling, program by:
- Disabling inlining when compiling the Rust source code to Wasm
- Compiling the resulting Wasm binary to native code with Wasmtime twice: once with inlining disabled, and once with intra-module call inlining enabled
- Comparing those two different compilations' execution speeds
Running this experiment with Sightglass, our internal benchmarking infrastructure and tooling, yields the following results:
execution :: instructions-retired :: pulldown-cmark.wasm
Δ = 7329995.35 ± 2.47 (confidence = 99%)
with-inlining is 1.26x to 1.26x faster than without-inlining!
[35729153 35729164.72 35729173] without-inlining
[28399156 28399169.37 28399179] with-inlining
Conclusion
Wasmtime and Cranelift now have a function inliner! Test it out via the -C inlining=y command-line flag or via the wasmtime::Config::compiler_inlining method. Let us know if you run into any bugs or whether you see any speed-ups when running Wasm components containing multiple core modules.
Thanks to Chris Fallin and Graydon Hoare for reading early drafts of this piece and providing valuable feedback. Any errors that remain are my own.
-
Deterministic compilation gives a number of benefits: testing is easier, debugging is easier, builds can be byte-for-byte reproducible, it is well-behaved in the face of incremental compilation and fine-grained caching, etc… ↩
-
For what it is worth, this still allows collapsing chains of mutually-recursive calls (a calls b calls c calls a) into a single, self-recursive call (abc calls abc). Our actual implementation does not do this in practice, preferring additional parallelism instead, but it could in theory. ↩
-
Cranelift cannot currently remove loops without side effects, and generally doesn't mess with control-flow at all in its mid-end. We've had various discussions about how we might best fit control-flow-y optimizations into Cranelift's mid-end architecture over the years, but it also isn't something that we've seen would be very beneficial for actual, Real World™ Wasm programs, given that (a) LLVM has already done much of this kind of thing when producing the Wasm, and (b) we do some branch-folding when lowering from our mid-level IR to our machine-specific IR. Maybe we will revisit this sometime in the future if it crops up more often after inlining. ↩
-
-C cranelift-wasmtime-inlining-intra-module=yes ↩
-
-C llvm-args=--inline-threshold=0, -C llvm-args=--inlinehint-threshold=0, and -Z inline-mir=no ↩
19 Nov 2025 8:00am GMT
This Week In Rust: This Week in Rust 626
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
- Launching the 2025 State of Rust Survey
- Google Summer of Code 2025 results
- Project goals update - October 2025
- Project goals update - September 2025
Newsletters
- Scientific Computing in Rust #12 (November 2025)
- Secure-by-design firmware development with Wasefire
- Rust Trends Issue #72: From Experimental to Enterprise: Rust's Production Moment
Project/Tooling Updates
Observations/Thoughts
- [audio] Netstack.FM Episode 14 - Roto And Cascade with Terts and Arya from NLnet Labs
- Improving the Incremental System in the Rust Compiler
- Truly First-Class Custom Smart Pointers
- Pinning is a kind of static borrow
- Rust in Android: move fast and fix things
- Match it again Sam
- Humanity is stained by the sins of C and no LLM can rewrite them away to Rust
- UV and Ruff: Turbocharging Python Development with Rust-Powered Tools
- A Function Inliner for Wasmtime and Cranelift
Rust Walkthroughs
- Rust Unit Tests: Assertion libraries
- Rust Unit Tests: Using a mocking library
- A Practical Guide to Transitioning to Memory-Safe Languages
- Building WebSocket Protocol in Apache Iggy using io_uring and Completion Based I/O Architecture
- Building serverless applications with Rust on AWS Lambda
- Disallow code usage with a custom
clippy.toml
Miscellaneous
- Absurd Rust? Never!
- [video] Linus Torvalds - Speaks up on the Rust Divide and saying NO
- October 2025 Rust Jobs Report
- Rust's Strategic Advantage
Crate of the Week
This week's crate is cargo cat, a Cargo subcommand that puts a random ASCII cat face on your terminal.
Thanks to Alejandra Gonzáles for the self-suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
RFCs
Rust
Rustup
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
- GuardianDB - Create and translate documentation to English
- GuardianDB - Increase test coverage (currently 13%)
- GuardianDB - Create cohesive usage examples
- GuardianDB - Backend Iroh IPFS Node
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- Rustikon 2026 | CFP closes 2025-11-24 | Warsaw, Poland | 2026-03-19 - 2026-03-20 | Event Website
- TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20
- RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
427 pull requests were merged in the last week
Compiler
- add new function_casts_as_integer lint
- miri: initial implementation of wildcard provenance for tree borrows
Library
- new format_args!() and fmt::Arguments implementation
- vec_recycle: implementation
- implement Read::read_array
- stabilize char_max_len
- stabilize duration_from_nanos_u128
- stabilize extern_system_varargs
- stabilize vec_into_raw_parts
- constify ManuallyDrop::take
- constify mem::take
- remove rustc_inherit_overflow_checks from position() in slice iterators
Cargo
- cli: add support for completing --config values in Bash
- tree: support long forms for --format variables
- config: fallback to non-canonical path for workspace-path-hash
- manifest: point out when a key belongs to config
- package: make all tar entries' timestamps the same
- do not lock the artifact-dir for check builds
- add unstable rustc-unicode flag
Rustdoc
- Fix invalid jump to def macro link generation
- don't ignore path distance for doc aliases
- don't pass RenderOptions to DocContext
- microoptimize render_item, move stuff out of common path
- quality of life changes
Clippy
- ok_expect: add autofix
- {unnecessary,panicking}_unwrap: lint field accesses
- equatable_if_let: don't suggest == in const context
- rc_buffer: don't touch the path to Rc/Arc in the suggestion
- incompatible_msrv: don't check the contents of any std macro
- add a doc_paragraphs_missing_punctuation lint
- fix single_range_in_vec_init false positive for explicit Range
- fix sliced_string_as_bytes false positive with a RangeFull
- fix website history interactions
- rework missing_docs_in_private_items
Rust-Analyzer
Rust Compiler Performance Triage
Positive week, most notably because of the new format_args!() and fmt::Arguments implementation from #148789. Another notable improvement came from moving some computations from one compiler stage to another to save memory and avoid unnecessary tree traversals in #148706.
Triage done by @panstromek. Revision range: 055d0d6a..6159a440
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 1.6% | [0.2%, 5.6%] | 11 |
| Regressions ❌ (secondary) | 0.3% | [0.1%, 1.1%] | 26 |
| Improvements ✅ (primary) | -0.8% | [-4.5%, -0.1%] | 161 |
| Improvements ✅ (secondary) | -1.4% | [-38.1%, -0.1%] | 168 |
| All ❌✅ (primary) | -0.6% | [-4.5%, 5.6%] | 172 |
2 Regressions, 4 Improvements, 10 Mixed; 4 of them in rollups. 48 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
- No New or Updated RFCs were created this week.
Upcoming Events
Rusty Events between 2025-11-19 - 2025-12-17 🦀
Virtual
- 2025-11-19 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-11-19 | Virtual (Girona, ES) | Rust Girona
- 2025-11-20 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-11-20 | Virtual (Berlin, DE) | Rust Berlin
- 2025-11-20 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2025-11-23 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-11-25 | Virtual (Boulder, CO, US) | Boulder Elixir
- 2025-11-25 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-11-25 | Virtual (London, UK) | Women in Rust
- 2025-11-26 | Virtual (Girona, ES) | Rust Girona | Silicon Girona
- 2025-11-27 | Virtual (Buenos Aires, AR) | Rust en Español
- 2025-11-30 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-02 | Virtual (London, UK) | Women in Rust
- 2025-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
- 2025-12-03 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2025-12-04 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-05 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-06 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2025-12-07 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-09 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-10 | Virtual (Girona, ES) | Rust Girona
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2025-12-16 | Virtual (Washington, DC, US) | Rust DC
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-17 | Virtual (Girona, ES) | Rust Girona
Asia
- 2025-11-20 | Tokyo, JP | Tokyo Rust Meetup
Europe
- 2025-11-19 | Ostrava, CZ | TechMeetup Ostrava
- 2025-11-20 | Aarhus, DK | Rust Aarhus
- 2025-11-20 | Amsterdam, NL | Rust Developers Amsterdam Group
- 2025-11-20 | Luzern, CH | Rust Luzern
- 2025-11-26 | Bern, CH | Rust Bern
- 2025-11-27 | Augsburg, DE | Rust Meetup Augsburg
- 2025-11-27 | Barcelona, ES | BcnRust
- 2025-11-27 | Edinburgh, UK | Rust and Friends
- 2025-11-28 | Prague, CZ | Rust Prague
- 2025-12-03 | Girona, ES | Rust Girona
- 2025-12-03 | Oxford, UK | Oxford ACCU/Rust Meetup.
- 2025-12-08 | Paris, FR | Rust Paris
- 2025-12-10 | München, DE | Rust Munich
- 2025-12-10 | Reading, UK | Reading Rust Workshop
- 2025-12-16 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
North America
- 2025-11-19 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-11-20 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-11-20 | Spokane, WA, US | Spokane Rust
- 2025-11-23 | Boston, MA, US | Boston Rust Meetup
- 2025-11-26 | Austin, TX, US | Rust ATX
- 2025-11-26 | Phoenix, AZ, US | Desert Rust
- 2025-11-27 | Mountain View, CA, US | Hacker Dojo
- 2025-11-29 | Boston, MA, US | Boston Rust Meetup
- 2025-12-02 | Chicago, IL, US | Chicago Rust Meetup
- 2025-12-04 | México City, MX | Rust MX
- 2025-12-04 | Saint Louis, MO, US | STL Rust
- 2025-12-05 | New York, NY, US | Rust NYC
- 2025-12-06 | Boston, MA, US | Boston Rust Meetup
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Lehi, UT, US | Utah Rust
- 2025-12-11 | San Diego, CA, US | San Diego Rust
- 2025-12-13 | Boston, MA, US | Boston Rust Meetup
- 2025-12-16 | San Francisco, CA, US | San Francisco Rust Study Group
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
Oceania
- 2025-12-11 | Brisbane City, QL, AU | Rust Brisbane
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
We adopted Rust for its security and are seeing a 1000x reduction in memory safety vulnerability density compared to Android's C and C++ code. But the biggest surprise was Rust's impact on software delivery. With Rust changes having a 4x lower rollback rate and spending 25% less time in code review, the safer path is now also the faster one.
- Jeff Vander Stoep on the Google Android blog
Thanks to binarycat for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
19 Nov 2025 5:00am GMT
The Rust Programming Language Blog: Project goals update — September 2025
The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.
Flagship goals
"Beyond the `&`"
| Progress | |
| Point of contact | |
| Champions |
compiler (Oliver Scherer), lang (TC) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Key Developments
- coordinating with # to ensure compatibility between the two features (allow custom pin projections to be the same as the ones for &pin mut T)
- identified connection to auto reborrowing
- https://github.com/rust-lang/rust-project-goals/issues/399
- https://github.com/rust-lang/rust/issues/145612
- held a design meeting
- very positive feedback from the language team
- approved lang experiment
- got a vibe check on design axioms
- created a new Zulip channel #t-lang/custom-refs for all new features needed to make custom references more similar to &T/&mut T, such as field projections, auto reborrowing and more
- created the tracking issue for #![feature(field_projections)]
Next Steps
- Get https://github.com/rust-lang/rust/pull/146307 reviewed & merged
Help Wanted
- When the PR for FRTs lands, try out the feature & provide feedback on FRTs
- if possible using the field-projection crate and provide feedback on projections
Internal Design Updates
Shared & Exclusive Projections
We want users to be able to have two different types of projections analogous to &T and &mut T. Each field can be projected independently and a single field can only be projected multiple times in a shared way. The current design uses two different traits to model this. The two traits are almost identical, except for their safety documentation.
We were wondering whether it is possible to unify them into a single trait and have coercions, similar to auto-reborrowing, that would allow the borrow checker to change the behavior depending on which type is projected.
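A minimal sketch of the two-trait approach, with all names invented for illustration (this is not the RFC's actual API), written against a concrete type rather than generic field machinery:

```rust
// Hypothetical sketch: two near-identical traits modelling shared vs.
// exclusive projection of a field, mirroring the &T / &mut T split.
struct Point {
    x: u32,
    y: u32,
}

trait ProjectSharedX<'a> {
    fn project_x(self) -> &'a u32;
}

trait ProjectExclusiveX<'a> {
    fn project_x(self) -> &'a mut u32;
}

impl<'a> ProjectSharedX<'a> for &'a Point {
    fn project_x(self) -> &'a u32 {
        &self.x
    }
}

impl<'a> ProjectExclusiveX<'a> for &'a mut Point {
    fn project_x(self) -> &'a mut u32 {
        &mut self.x
    }
}
```

The bodies are identical; only the reference type (and, in the real proposal, the safety contract) differs, which is what motivates asking whether one trait plus borrowck-driven coercions could replace both.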
Syntax
There are lots of different possibilities for which syntax we can choose; here are a couple of options: x->f/mut x->f, x.f/mut x.f, x.@f/x.mut @f, x.ref.@f/x.@f. Also many alternatives for the sigils used: x@f, x~f, x.@.f.
We have yet to decide on a direction we want to go in. If we are able to merge the two projection traits, we can also settle on a single syntax, which would be great.
Splitting Projections into Containers & Pointers
There are two categories of projections: Containers and Pointers:
- Containers are types like MaybeUninit<T>, Cell<T>, UnsafeCell<T>, ManuallyDrop<T>. They are repr(transparent) and apply themselves to each field, so MaybeUninit<MyStruct> has a field of type MaybeUninit<MyField> (if MyStruct has a field of type MyField).
- Pointers are types like &T, &mut T, cell::Ref[Mut]<'_, T>, *const T/*mut T, NonNull<T>. They support projecting Pointer<'_, Struct> to Pointer<'_, Field>.
In the current design, these two classes of projections are unified by just implementing Pointer<'_, Container<Struct>> -> Pointer<'_, Container<Field>> manually for the common use-cases (for example &mut MaybeUninit<Struct> -> &mut MaybeUninit<Field>). However, this means that things like &Cell<MaybeUninit<Struct>> don't have native projections unless we explicitly implement them.
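To make the container case concrete, here is how such a projection can be written by hand today with std's MaybeUninit and raw pointers (the Pair type and helper names are assumptions for illustration):

```rust
use std::mem::MaybeUninit;
use std::ptr::addr_of_mut;

// Hypothetical example type for a hand-written container projection:
// &mut MaybeUninit<Pair> -> &mut MaybeUninit<u32> for each field.
struct Pair {
    a: u32,
    b: u32,
}

fn project_a(p: &mut MaybeUninit<Pair>) -> &mut MaybeUninit<u32> {
    // SAFETY: addr_of_mut! computes the field address without reading the
    // (possibly uninitialized) value, and MaybeUninit<u32> has the same
    // layout as u32, so the cast is sound.
    unsafe { &mut *(addr_of_mut!((*p.as_mut_ptr()).a) as *mut MaybeUninit<u32>) }
}

fn project_b(p: &mut MaybeUninit<Pair>) -> &mut MaybeUninit<u32> {
    // SAFETY: same reasoning as project_a.
    unsafe { &mut *(addr_of_mut!((*p.as_mut_ptr()).b) as *mut MaybeUninit<u32>) }
}
```

Native projections would generate this kind of glue automatically, including through nestings like &Cell<MaybeUninit<Struct>> that the manual approach doesn't cover.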
We could try to go for a design that has two different ways to implement projections -- one for containers and one for pointers. But this has the following issues:
- there are two ways to implement projections, which means that some people will get confused about which one they should use.
- making projections through multiple container types work out of the box is great, however this means that when defining a new container type and making it available for projections, one needs to consider all other container types and swear coherence with them. If we instead have an explicit way to opt in to projections through multiple container types, the implementer of that trait only has to reason about the types involved in that operation.
- so to rephrase, the current design allows more container types that users actually use to be projected whereas the split design allows arbitrary nestings of container types to be projected while disallowing certain types to be considered container types.
- The same problem exists for allowing all container types to be projected by pointer types: if I define a new pointer type, I again need to reason about all container types and whether it's sound to project them.
We might be able to come up with a sensible definition of "container type" which then resolves these issues, but further investigation is required.
Projections for &Custom<U>
We want to be able to have both a blanket impl<T, F: Field<Base = T>> Project<F> for &T as well as allow people to have custom projections on &Custom<U>. The motivating example for custom projections is the Rust-for-Linux Mutex that wants these projections for safe RCU abstractions.
During the design meeting, it was suggested we could add a generic to Project that only the compiler is allowed to insert; this would allow disambiguation between the two impls. We have now found an alternative approach that requires less specific compiler magic:
- Add a new marker trait ProjectableBase that's implemented for all types by default.
- People can opt out of implementing it by writing impl !ProjectableBase for MyStruct; (needs negative impls for marker traits).
- We add where T: ProjectableBase to the impl Project for &T.
- The compiler needs to consider the negative impls in the overlap check for users to be able to write their own impl<U, F> Project<F> for &Custom<U> where ... (needs negative impl overlap reasoning)
We probably want negative impls for marker traits as well as improved overlap reasoning for different reasons too, so it is probably fine to depend on them here.
enum support
enum and union fields shouldn't be available for projections by default. Take &Cell<Enum> for example: if we project to a variant's field, someone else could overwrite the value with a different variant, invalidating our &Cell<Field>. This also needs a new trait, probably AlwaysActiveField (needs more name bikeshedding, but it's too early for that), that marks fields in structs and tuples.
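The hazard is easy to demonstrate in safe code today, since Cell already lets anyone with a shared reference replace the whole value:

```rust
use std::cell::Cell;

// Why &Cell<Enum> must not be projected to a variant's payload: a shared
// reference is enough to swap in a different variant at any time. If a
// hypothetical projected &Cell<u32> into the Some payload existed, this
// set() would leave it pointing at a stale variant.
fn clobber(r: &Cell<Option<u32>>) {
    r.set(None);
}
```

A CanProjectEnum-style trait would have to rule out exactly this: the discriminant must be frozen while any variant projection is live.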
To properly project an enum, we need:
- a new CanProjectEnum (TBB) trait that provides a way to read the discriminant that's currently inhabiting the value
  - it also needs to guarantee that the discriminant doesn't change while fields are being projected (this rules out implementing it for &Cell)
- a new match operator that will project all mentioned fields (for &Enum this already is the behavior of match)
Field Representing Types (FRTs)
While implementing https://github.com/rust-lang/rust/pull/146307 we identified the following problems/design decisions:
- an FRT is considered local to the orphan check when each container base type involved in the field path is local or a tuple (see the top comment on the PR for more info)
- FRTs cannot implement Drop
- the Field trait is not user-implementable
- types with fields that are dynamically sized don't have a statically known offset, which complicates the UnalignedField trait
I decided to simplify the first implementation of FRTs and restrict them to sized structs and tuples. It also doesn't support packed structs. Future PRs will add support for enums, unions and packed structs as well as dynamically sized types.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
"Flexible, fast(er) compilation"
| Progress | |
| Point of contact | |
| Champions |
cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) |
| Task owners |
1 detailed update available.
Recently we've been working through feedback on the multi-staged format of the RFC. We've also shared the RFC outside of our sync-call group, with people from a variety of project teams as well as potential users.
We're now receiving feedback that is much more detail-oriented, as opposed to being about the direction and scope of the RFC, which is a good indication that the overall strategy for shipping this RFC seems promising. We're continuing to address feedback to ensure the RFC is clear, consistent and technically feasible. David's feeling is that we've probably got another couple rounds of feedback from currently involved people and then we'll invite more people from various groups before publishing parts of the RFC formally.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
bjorn3, Folkert de Vries, [Trifecta Tech Foundation] |
| Progress | |
| Point of contact | |
| Task owners |
Help test the deadlock code in the issue list and try to reproduce the issue
1 detailed update available.
- Key developments: We have added more tests for deadlock issues, and the deadlock problems are now almost resolved. We are currently addressing issues related to reproducible builds, and some of these have already been resolved.
- Blockers: none
- Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
"Higher-level Rust"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions |
cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett) |
| Task owners |
1 detailed update available.
Key developments:
- Overall polish
- https://github.com/rust-lang/rust/pull/145751
- https://github.com/rust-lang/rust/pull/145754
- https://github.com/rust-lang/rust/pull/146106
- https://github.com/rust-lang/rust/pull/146137
- https://github.com/rust-lang/rust/pull/146211
- https://github.com/rust-lang/rust/pull/146340
- https://github.com/rust-lang/rust/pull/145568
- https://github.com/rust-lang/cargo/pull/15878
- https://github.com/rust-lang/cargo/pull/15886
- https://github.com/rust-lang/cargo/pull/15899
- https://github.com/rust-lang/cargo/pull/15914
- https://github.com/rust-lang/cargo/pull/15927
- https://github.com/rust-lang/cargo/pull/15939
- https://github.com/rust-lang/cargo/pull/15952
- https://github.com/rust-lang/cargo/pull/15972
- https://github.com/rust-lang/cargo/pull/15975
- rustfmt work
- https://github.com/rust-lang/rust/pull/145617
- https://github.com/rust-lang/rust/pull/145766
- Reference work
- https://github.com/rust-lang/reference/pull/1974
"Unblocking dormant traits"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Taylor Cramer, Taylor Cramer & others |
1 detailed update available.
Current status: there is an RFC for auto impl supertraits that has received some discussion and updates (thank you, Ding Xiang Fei!).
The major open questions currently are:
Syntax
The current RFC proposes:
trait Subtrait: Supertrait {
auto impl Supertrait {
// Supertrait items defined in terms of Subtrait items, if any
}
}
Additionally, there is an open question around the syntax of auto impl for unsafe supertraits. The current proposal is to require unsafe auto impl Supertrait.
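As a concrete instance of the proposed sugar, using made-up traits (this is hypothetical syntax from the RFC discussion and does not compile on any current toolchain):

```rust
// Proposed `auto impl` syntax only — not valid Rust today.
trait Animal {
    fn name(&self) -> String;
}

trait Dog: Animal {
    fn breed(&self) -> String;

    auto impl Animal {
        // Supertrait item defined in terms of the subtrait's items.
        fn name(&self) -> String {
            format!("a {} dog", self.breed())
        }
    }
}
```

Under the proposal, writing impl Dog for MyType would then also supply the Animal impl automatically, unless the impl opts out with extern impl Animal;.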
Whether to require impls to opt-out of auto impls
The current RFC proposes that
impl Supertrait for MyType {}
impl Subtrait for MyType {
// Required in order to manually write `Supertrait` for MyType.
extern impl Supertrait;
}
This makes it explicit via opt-out whether an auto impl is being applied. However, this is in conflict with the goal of allowing auto impls to be added to existing trait hierarchies. The RFC proposes to resolve this via a temporary attribute which triggers a warning. See my comment here.
Note that properly resolving whether or not to apply an auto impl requires coherence-like analysis.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Goals looking for help
No goals listed.
Other goal updates
| Progress | |
| Point of contact | |
| Champions |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions |
compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Just removed the duplicate posts, guessing from a script that had a bad day.
| Progress | |
| Point of contact | |
| Champions |
bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur) |
| Task owners |
Pete LeVasseur, Contributors from Ferrous Systems and others TBD, |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Key developments:
- libtest2
- libtest env variables were deprecated, reducing the API surface for custom test harnesses, https://github.com/rust-lang/rust/pull/145269
- libtest2 was updated to reflect deprecations
- https://github.com/assert-rs/libtest2/pull/105
- libtest2 is now mostly in shape for use
- json schema
- https://github.com/assert-rs/libtest2/pull/107
- https://github.com/assert-rs/libtest2/pull/108
- https://github.com/assert-rs/libtest2/pull/111
- https://github.com/assert-rs/libtest2/pull/120
- starting exploration of extension through custom messages, see https://github.com/assert-rs/libtest2/pull/122
New areas found for further exploration
- Fallible discovery
- Nested discovery
| Progress | |
| Point of contact | |
| Champions |
compiler (Manuel Drehwald), lang (TC) |
| Task owners |
Manuel Drehwald, LLVM offload/GPU contributors |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
(depending on the flag) |
| Progress | |
| Point of contact | |
| Champions |
lang (Josh Triplett), lang-docs (TC) |
| Task owners |
| Progress | |
| Point of contact |
|
| Champions |
cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact |
|
| Task owners |
|
1 detailed update available.
Key developments:
- https://github.com/crate-ci/cargo-plumbing/pull/53
- https://github.com/crate-ci/cargo-plumbing/pull/62
- https://github.com/crate-ci/cargo-plumbing/pull/68
- https://github.com/crate-ci/cargo-plumbing/pull/96
- Further schema discussions at https://github.com/crate-ci/cargo-plumbing/discussions/18
- Writing up https://github.com/crate-ci/cargo-plumbing/issues/82
Major obstacles
- Cargo, being designed for itself, doesn't allow working with arbitrary data, see https://github.com/crate-ci/cargo-plumbing/issues/82
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions |
compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett) |
| Task owners |
oli-obk |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Task owners |
[Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec) |
| Progress | |
| Point of contact | |
| Task owners |
vision team |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
It is now possible to run the system with two different machines on two different architectures; however, there is work to be done to make this more robust.
We have worked on ironing out the last bits and pieces for dequeuing benchmarks, as well as creating a new user interface to reflect multiple collectors doing work. Presently the work is mostly on polishing the UI and handling edge cases through manual testing.
Queue Work:
- https://github.com/rust-lang/rustc-perf/pull/2212
- https://github.com/rust-lang/rustc-perf/pull/2214
- https://github.com/rust-lang/rustc-perf/pull/2216
- https://github.com/rust-lang/rustc-perf/pull/2221
- https://github.com/rust-lang/rustc-perf/pull/2226
- https://github.com/rust-lang/rustc-perf/pull/2230
- https://github.com/rust-lang/rustc-perf/pull/2231
UI:
- https://github.com/rust-lang/rustc-perf/pull/2217
- https://github.com/rust-lang/rustc-perf/pull/2220
- https://github.com/rust-lang/rustc-perf/pull/2224
- https://github.com/rust-lang/rustc-perf/pull/2227
- https://github.com/rust-lang/rustc-perf/pull/2232
- https://github.com/rust-lang/rustc-perf/pull/2233
- https://github.com/rust-lang/rustc-perf/pull/2236
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
19 Nov 2025 12:00am GMT
The Rust Programming Language Blog: Project goals update — October 2025
The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.
Flagship goals
"Beyond the `&`"
| Progress | |
| Point of contact | |
| Champions | compiler (Oliver Scherer), lang (TC) |
| Task owners |
1 detailed update available.
Status update:
Regarding the TODO list in the next 6 months, here is the current status:
Introduce &pin mut|const place borrowing syntax
- [x] parsing: #135731, merged.
- [ ] lowering and borrowck: not started yet.
I've got some primitive ideas about borrowck, and I probably need to confirm with someone who is familiar with MIR/borrowck before starting to implement.
A pinned borrow consists of two MIR statements:
- a borrow statement that creates the mutable reference,
- and an ADT aggregate statement that puts the mutable reference into the `Pin` struct.
I may have to add a new borrow kind so that pinned borrows can be recognized, then traverse the dataflow graph to make sure that pinned places cannot be moved.
Pattern matching of &pin mut|const T types
In the past few months, I have struggled with the !Unpin approach (the original design sketch, Alternative A): trying to implement it, refactoring, and discussing on Zulip, while constantly confused; luckily, we have finally reached a new agreement on the Alternative B version.
- [ ] #139751 under review (reimplemented regarding Alternative B).
Support drop(&pin mut self) for structurally pinned types
- [ ] adding a new `Drop::pin_drop(&pin mut self)` method: draft PR #144537
Supporting both Drop::drop(&mut self) and Drop::drop(&pin mut self) would effectively introduce method overloading to Rust, which I think might need some more general mechanism (maybe a rustc attribute?). So instead, I'd like to implement this via a new method Drop::pin_drop(&pin mut self) first.
Introduce &pin pat pattern syntax
Not started yet (I'd prefer doing that when pattern matching of &pin mut|const T types is ready).
Support &pin mut|const T -> &|&mut T coercion (requires T: Unpin of &pin mut T -> &mut T)
Not started yet. (It's quite independent, probably someone else can help with it)
Support auto borrowing of &pin mut|const place in method calls with &pin mut|const self receivers
Seems to be handled by Autoreborrow traits?
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
There have been lots of internal developments since the last update:
- field representing types and chained projections have received a fundamental overhaul: disallowing field paths and requiring projections to decompose. Additionally, we explored how const generics could emulate FRTs.
- we discussed a potential solution for having only a single projection operator and trait, via a decay operation with special borrow-checker treatment.
- we were able to further simplify the project trait by moving the generic argument of the represented field to the project function. We also discovered that FRTs may not be fundamentally necessary for field projections -- however, they are still very useful in other applications and my gut feeling is that they are also right for field projections. So we will continue our experiment with them.
- we talked about making `Project::project` a safe function by introducing a new kind of type.
Next Steps:
- we're still planning to merge https://github.com/rust-lang/rust/pull/146307, after I have updated it with the new FRT logic and it has been reviewed
- once that PR lands, I plan to update the library experiment to use the experimental FRTs
- then the testing using that library can begin in the Linux kernel and other projects (this is where anyone interested in trying field projections can help out!)
4 detailed updates available.
Decomposing Projections
A chained projection operation should naturally decompose, so foo.@bar.@baz should be the same as writing (foo.@bar).@baz. Until now, the different parenthesization would have allowed different outcomes. This behavior is confusing and also makes many implementation details more complicated than they need to be.
Field Representing Types
Since projections now decompose, we have no need from a design perspective for multi-level FRTs. So field_of!(Foo, bar.baz) is no longer required to work. Thus we have decided to restrict FRTs to only a single field and get rid of the path. This simplifies the implementation in the compiler and also avoids certain difficult questions, such as the locality of FRTs (if we had a path, we would have to walk it, and the FRT would be local only if all structs in the path were local). Now, with only a single field, the FRT is local if the struct is.
We also discovered that it is a good idea to make FRTs inhabited (they still are ZSTs), since then it allows the following pattern to work:
fn project_free_standing<F: Field>(_: F, r: &F::Base) -> &F::Type { ... }
// can now call the function without turbofish:
let my_field = project_free_standing(field_of!(MyStruct, my_field), &my_struct);
FRTs via const Generics
We also spent some time thinking about const generics and FRTs on zulip:
- https://rust-lang.zulipchat.com/#narrow/channel/144729-t-types/topic/const.20generics.3A.20implementing.20field.20representing.20types/with/544617587
- https://rust-lang.zulipchat.com/#narrow/channel/144729-t-types/topic/field.20representing.20values.20.26.20.60Field.3Cconst.20F.3A.20.3F.3F.3F.3E.60.20trait/with/542855620
In short, this won't be happening any time soon. However, it could be a future implementation of the field_of! macro depending on how reflection through const generics evolves (but also only in the far-ish future).
Single Project Operator & Trait via Exclusive Decay
It would be great if we only had to add a single operator and trait and could obtain the same features as we have with two. The current reason for having two operators is to allow both shared and exclusive projections. We could get by with one if we had another operation that decays an exclusive reference (or a custom exclusive smart-pointer type) into a shared reference (or the custom shared version of the smart pointer). This decay operation would need borrow-checker support in order to allow simultaneous projections of one field exclusively and another field shared (and possibly multiple times).
This goes into a similar direction as the reborrowing project goal https://github.com/rust-lang/rust-project-goals/issues/399, however, it needs extra borrow checker support.
fn add(x: cell::RefMut<'_, i32>, step: i32) {
*x = *x + step;
}
struct Point {
x: i32,
y: i32,
}
fn example(p: cell::RefMut<'_, Point>) {
let y: cell::Ref<'_, i32> = coerce_shared!(p.@y);
let y2 = coerce_shared!(p.@y); // can project twice if both are coerced
add(p.@x, *y);
add(p.@x, *y2);
assert_eq!(*y, *y2); // can still use them afterwards
}
Problems:
- explicit syntax is annoying for these "coercions", but
- we cannot make this implicit:
- if this were an implicit operation, only the borrow checker would know when one had to coerce,
- this operation is allowed to change the type,
- this results in borrow check backfeeding into typecheck, which is not possible or at least extremely difficult
Syntax
Not much movement here, it depends on the question discussed in the previous section, since if we only have one operator, we could choose .@, -> or ~; if we have to have two, then we need additional syntax to differentiate them.
Simplifying the Project trait
There have been some developments in pin ergonomics https://github.com/rust-lang/rust/issues/130494: "alternative B" is now the main approach, which means that Pin<&mut T> has linear projections: its output type does not change depending on the concrete field (really the field itself, not only its type). So it falls into the general projection pattern Pin<&mut Struct> -> Pin<&mut Field>, which means that Pin doesn't need any where clauses when implementing Project.
Additionally we have found out that RCU also doesn't need where clauses, as we can also make its projections linear by introducing a MutexRef<'_, T> smart pointer that always allows projections and only has special behavior for T = Rcu<U>. Discussed on zulip after this message.
For this reason we can get rid of the generic argument to Project and mandate that all types that support projections support them for all fields. So the new Project trait looks like this:
// still need a common super trait for `Project` & `ProjectMut`
pub trait Projectable {
type Target: ?Sized;
}
pub unsafe trait Project: Projectable {
type Output<F: Field<Base = Self::Target>>;
unsafe fn project<F: Field<Base = Self::Target>>(
this: *const Self,
) -> Self::Output<F>;
}
Are FRTs even necessary?
With this change we can also think about getting rid of FRTs entirely. For example we could have the following Project trait:
pub unsafe trait Project: Projectable {
type Output<F>;
unsafe fn project<const OFFSET: usize, F>(
this: *const Self,
) -> Self::Output<F>;
}
There are other applications for FRTs that are very useful for Rust-for-Linux. For example, storing field information for intrusive data structures directly in that structure as a generic.
More concretely, in the kernel there are workqueues that allow you to run code in parallel to the currently running thread. In order to insert an item into a workqueue, an intrusive linked list is used. However, we need to be able to insert the same item into multiple lists. This is done by storing multiple instances of the Work struct. Its definition is:
pub struct Work<T, const ID: u64> { ... }
Where the ID generic must be unique inside of the struct.
struct MyDriver {
data: Arc<MyData>,
main_work: Work<Self, 0>,
aux_work: Work<Self, 1>,
// more fields ...
}
// Then you call a macro to implement the unsafe `HasWork` trait safely.
// It asserts that there is a field of type `Work<MyDriver, 0>` at the given field
// (and also exposes its offset).
impl_has_work!(impl HasWork<MyDriver, 0> for MyDriver { self.main_work });
impl_has_work!(impl HasWork<MyDriver, 1> for MyDriver { self.aux_work });
// Then you implement `WorkItem` twice:
impl WorkItem<0> for MyDriver {
type Pointer = Arc<Self>;
fn run(this: Self::Pointer) {
println!("doing the main work here");
}
}
impl WorkItem<1> for MyDriver {
type Pointer = Arc<Self>;
fn run(this: Self::Pointer) {
println!("doing the aux work here");
}
}
// And finally you can call `enqueue` on a `Queue`:
let my_driver = Arc::new(MyDriver::new());
let queue: &'static Queue = kernel::workqueue::system_highpri();
queue.enqueue::<_, 0>(my_driver.clone()).expect("my_driver is not yet enqueued for id 0");
// there are different queues
let queue = kernel::workqueue::system_long();
queue.enqueue::<_, 1>(my_driver.clone()).expect("my_driver is not yet enqueued for id 1");
// cannot insert multiple times:
assert!(queue.enqueue::<_, 1>(my_driver.clone()).is_err());
FRTs could be used instead of this id, making the definition be Work<F: Field> (also merging the T parameter).
struct MyDriver {
data: Arc<MyData>,
main_work: Work<field_of!(Self, main_work)>,
aux_work: Work<field_of!(Self, aux_work)>,
// more fields ...
}
impl WorkItem<field_of!(MyDriver, main_work)> for MyDriver {
type Pointer = Arc<Self>;
fn run(this: Self::Pointer) {
println!("doing the main work here");
}
}
impl WorkItem<field_of!(MyDriver, aux_work)> for MyDriver {
type Pointer = Arc<Self>;
fn run(this: Self::Pointer) {
println!("doing the aux work here");
}
}
let my_driver = Arc::new(MyDriver::new());
let queue: &'static Queue = kernel::workqueue::system_highpri();
queue
.enqueue(my_driver.clone(), field_of!(MyDriver, main_work))
// ^ using Gary's idea to avoid turbofish
.expect("my_driver is not yet enqueued for main_work");
let queue = kernel::workqueue::system_long();
queue
.enqueue(my_driver.clone(), field_of!(MyDriver, aux_work))
.expect("my_driver is not yet enqueued for aux_work");
assert!(queue.enqueue(my_driver.clone(), field_of!(MyDriver, aux_work)).is_err());
This makes it overall a lot more readable (by providing sensible names instead of magic numbers), and maintainable (we can add a new variant without worrying about which IDs are unused). It also avoids the unsafe HasWork trait and the need to write the impl_has_work! macro for each Work field.
I still think that having FRTs is going to be the right call for field projections as well, so I'm going to keep their experiment going. However, we should fully explore their necessity and rationale for a future RFC.
Making Project::project safe
In the current proposal the Project::project function is unsafe, because it takes a raw pointer as an argument. This is pretty unusual for an operator trait (it would be the first). Tyler Mandry thought about a way of making it safe by introducing "partial struct types". This new type is spelled Struct.F where F is an FRT of that struct. It's like Struct, but with the restriction that only the field represented by F can be accessed. So for example &Struct.F would point to Struct, but only allow one to read that single field. This way we could design the Project trait in a safe manner:
// governs conversion of `Self` to `Narrowed<F>` & replaces Projectable
pub unsafe trait NarrowPointee {
type Target;
type Narrowed<F: Field<Base = Self::Target>>;
}
pub trait Project: NarrowPointee {
type Output<F: Field<Base = Self::Target>>;
fn project<F: Field<Base = Self::Target>>(narrowed: Self::Narrowed<F>) -> Self::Output<F>;
}
The NarrowPointee trait allows a type to declare that it supports conversions of its Target type to Target.F. For example, we would implement it for RefMut like this:
unsafe impl<'a, T> NarrowPointee for RefMut<'a, T> {
type Target = T;
type Narrowed<F: Field<Base = T>> = RefMut<'a, T.F>;
}
Then we can make the narrowing a builtin operation in the compiler that gets prepended on the actual coercion operation.
However, this "partial struct type" has a fatal flaw that Oliver Scherer found (edit by oli: it was actually boxy who found it): it conflicts with mem::swap. If Struct.F has the same layout as Struct, then writing to such a variable will overwrite all bytes, thus also overwriting fields that aren't F. Even if we make an exception for these types and moves/copies, this wouldn't work, as a user today can rely on the fact that they write size_of::<T>() bytes to a *mut T and thus have a valid value of that type at that location. Tyler Mandry suggested we make it !Sized and even !MetaSized to prevent overwriting values of that type (maybe the Overwrite trait could come in handy here as well). But this might make "partial struct types" too weak to be truly useful. Additionally this poses many more questions that we haven't tackled yet.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
An initial implementation of a Reborrow trait for types with only lifetimes and exclusive-reference semantics is working, but it is not yet upstreamed nor in review. The CoerceShared implementation is not yet started.
Proper composable implementation will likely require a different tactic than the current one. Safety and validity checks are currently absent as well and will require more work.
"Flexible, fast(er) compilation"
| Progress | |
| Point of contact | |
| Champions | cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras) |
| Task owners |
1 detailed update available.
We've now opened our first batch of RFCs: rust-lang/rfcs#3873, rust-lang/rfcs#3874 and rust-lang/rfcs#3875
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | bjorn3, Folkert de Vries, [Trifecta Tech Foundation] |
| Progress | |
| Point of contact | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
"Higher-level Rust"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
3 detailed updates available.
I posted this blog post that proposes that we ought to name the trait Handle and define it as a trait where clone produces an "entangled" value -- i.e., a second handle to the same underlying value.
Before that, there's been a LOT of conversation that hasn't made its way onto this tracking issue. Trying to fix that! Here is a brief summary, in any case:
- It began with the first Rust Project Goals program in 2024H2, where Jonathan Kelley from Dioxus wrote a thoughtful blog post about a path to high-level Rust that eventually became a 2024H2 project goal towards ergonomic ref-counting.
- I wrote a series of blog posts about a trait I called `Claim`.
- Josh Triplett and I talked, and he opened RFC #3680, which proposed a `use` keyword and `use ||` closures. Reception, I would say, was mixed; yes, this is tackling a real problem, but there were lots of concerns about the approach. I summarized the key points here.
- Santiago Pastorino implemented experimental support for (a variant of) RFC #3680 as part of the 2025H1 project goal.
- I authored a 2025H2 project goal proposing that we create an alternative RFC focused on higher-level use-cases, which prompted Josh Triplett and me to have a long and fruitful conversation in which he convinced me that this was not the right approach.
- We had a lang-team design meeting on 2025-08-27 in which I presented this survey and summary of the work done thus far.
- And then at the RustConf 2025 Unconf we had a big group discussion on the topic that I found very fruitful, as well as various follow-up conversations with smaller groups. The name `Handle` arose from this, and I plan to be posting further thoughts as a result.
RFC #3680: https://github.com/rust-lang/rfcs/pull/3680
I wrote up a brief summary of my current thoughts on Zulip; I plan to move this content into a series of blog posts, but I figured it was worth laying it out here too for those watching this space:
09:11 (1) I don't think clones/handles are categorically different when it comes to how much you want to see them made explicit; some applications want them both to be explicit, some want them automatic, some will want a mix -- and possibly other kinds of categorizations.
09:11 (2) But I do think that if you are making everything explicit, it's useful to see the difference between a general purpose clone and a handle.
09:12 (3) I also think there are many classes of software where there is value in having everything explicit -- and that those classes are often the ones most in Rust's "sweet spot". So we should make sure that it's possible to have everything be explicit ergonomically.
09:12 (4) This does not imply that we can't make automatic clones/handles possible too -- it is just that we should treat both use cases (explicit and automatic) as first-class in importance.
09:13 (5) Right now I'm focused on the explicit case. I think this is what the use-use-everywhere was about, though I prefer a different proposal now -- basically just making handle and clone methods understood and specially handled by the compiler for optimization and desugaring purposes. There are pros and cons to that, obviously, and that's what I plan to write-up in more detail.
09:14 (6) On a related note, I think we also need explicit closure captures, which is a whole interesting design space. I don't personally find it "sufficient" for the "fully explicit" case but I could understand why others might think it is, and it's probably a good step to take.
09:15 (7) I go back and forth on profiles -- basically a fancy name for lint-groups based on application domain -- and whether I think we should go that direction, but I think that if we were going to go automatic, that's the way I would do it: i.e., the compiler will automatically insert calls to clone and handle, but it will lint when it does so; the lint can be deny-by-default at first, but applications could opt into allow for either or both.
I previously wanted allow-by-default but I've decided this is a silly hill to die on, and it's probably better to move in smaller increments.
Update:
There has been more discussion about the Handle trait on Zulip and elsewhere. Some of the notable comments:
- Downsides of the current name: it's a noun, which doesn't follow Rust naming conventions, and the verb `handle` is very generic and could mean many things.
- Alternative names proposed: `Entangle`/`entangle` (or `entangled`), `Share`/`share`, `Alias`/`alias`, or `Retain`/`retain`; or, if we want to go seriously hardcore on the science names, `Mitose`/`mitose` or `Fission`/`fission`.
- There has been some criticism pointing out that focusing on handles means that other types which might be "cheaply cloneable" don't qualify.
For now I will go on using the term Handle, but I agree with the critique that it should be a verb, and currently prefer Alias/alias as an alternative.
I'm continuing to work my way through the backlog of blog posts about the conversations from RustConf. The purpose of these blog posts is not just to socialize the ideas more broadly but also to help myself think through them. Here is the latest post:
https://smallcultfollowing.com/babysteps/blog/2025/10/13/ergonomic-explicit-handles/
The point of this post is to argue that, whatever else we do, Rust should have a way to create handles/clones (and closures that work with them) which is at once explicit and ergonomic.
To give a preview of my current thinking, I am working now on the next post which will discuss how we should add an explicit capture clause syntax. This is somewhat orthogonal but not really, in that an explicit syntax would make closures that clone more ergonomic (but only mildly). I don't have a proposal I fully like for this syntax though and there are a lot of interesting questions to work out. As a strawperson, though, you might imagine [this older proposal I wrote up](https://hackmd.io/Niko Matsakis/SyI0eMFXO?type=view), which would mean something like this:
let actor1 = async move(reply_tx.handle()) {
reply_tx.send(...);
};
let actor2 = async move(reply_tx.handle()) {
reply_tx.send(...);
};
This is an improvement on
let actor1 = {
let reply_tx = reply_tx.handle();
async move(reply_tx.handle()) {
reply_tx.send(...);
}
};
but only mildly.
The next post I intend to write would be a variant on "use, use everywhere" that recommends method call syntax and permitting the compiler to elide handle/clone calls, so that the example becomes
let actor1 = async move {
reply_tx.handle().send(...);
// -------- due to optimizations, this would capture the handle creation to happen only when future is *created*
};
This would mean that cloning of strings and things might benefit from the same behavior:
let actor1 = async move {
reply_tx.handle().send(some_id.clone());
// -------- the `some_id.clone()` would occur at future creation time
};
The rationale that got me here is minimizing perceived complexity and focusing on muscle memory (just add .clone() or .handle() to fix use-after-move errors, no matter when/where they occur). The cost, of course, is that (a) Handle/Clone become very special; and (b) it blurs the lines on when code execution occurs. Despite the .handle() occurring inside the future (resp. closure) body, it actually executes when the future (resp. closure) is created in this case (in other cases, such as a closure that implements Fn or FnMut and hence executes more than once, it might occur during each execution as well).
| Progress | |
| Point of contact | |
| Champions | cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett) |
| Task owners |
"Unblocking dormant traits"
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Taylor Cramer, Taylor Cramer & others |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners | Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts |
1 detailed update available.
This is the first update we're posting for the in-place init work. Overall, things are progressing well, with lively discussion happening on the newly minted t-lang/in-place-init Zulip channel. Here are the highlights since the lang team design meeting at the end of July:
- Zulip: we now have a dedicated zulip channel that includes all topics surrounding in-place initialization: #t-lang/in-place-init.
- Guaranteed value emplacement: Olivier FAURE shared a new version of C++ inspired emplacement in #t-lang/in-place-init > RFC Draft: Guaranteed Value Emplacement inspired by C++'s emplacement system.
- Rosetta code sample: to help guide the comparison of the various proposals, we've started collecting examples to compare against each other. The first one was contributed by Alice Ryhl and is: "How can we construct a
Box<Mutex<MyType>>in-place inside theBox". For more see #t-lang/in-place-init > Shared example: emplacing into `Box. - Evolution of the outptr proposal: Taylor Cramer's original outptr-based emplacement proposal used concrete types as part of her proposal. Since then there has been significant discussion about alternative ways to represent out-pointers, including: #t-lang/in-place-init > out-pointer type and MIR semantics consideration.
- Placing functions as a high-level notation: Yoshua Wuyts has begun reworking the "placing functions" proposal as a high-level sugar on top of one of the other proposals, instead of directly desugaring to
MaybeUninit. For more see: #t-lang/in-place-init > Placing functions as sugar for low-level emplacement. - Generic fallibility for the
Initproposal: following feedback from the lang team meeting, Alice Ryhl posted an update showing how theInittrait could be made generic over allTrytypes instead of being limited to justResult. For more see: #t-lang/in-place-init > Makingimpl Initgeneric overResult/Option/infallible. - Interactions between emplacement and effects: Yoshua Wuyts has begun documenting the expected interactions between placing functions and other function-transforming effects (e.g.
async,try,gen). For more see: #t-lang/in-place-init > placing functions and interactions with effects.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Since the last update we've fixed the hang in rayon in https://github.com/rust-lang/rust/pull/144991 and https://github.com/rust-lang/rust/pull/144732 which relied on https://github.com/rust-lang/rust/pull/143054 https://github.com/rust-lang/rust/pull/144955 https://github.com/rust-lang/rust/pull/144405 https://github.com/rust-lang/rust/pull/145706. This introduced some search graph bugs which we fixed in https://github.com/rust-lang/rust/pull/147061 https://github.com/rust-lang/rust/pull/147266.
We're mostly done with the opaque type support now. Doing so required a lot of quite involved changes:
- https://github.com/rust-lang/rust/pull/145244 non-defining uses in borrowck
- https://github.com/rust-lang/rust/pull/145925 non-defining uses in borrowck closure support
- https://github.com/rust-lang/rust/pull/145711 non-defining uses in hir typeck
- https://github.com/rust-lang/rust/pull/140375 eagerly compute sub_unification_table again
- https://github.com/rust-lang/rust/pull/146329 item bounds
- https://github.com/rust-lang/rust/pull/145993 function calls
- https://github.com/rust-lang/rust/pull/146885 method selection
- https://github.com/rust-lang/rust/pull/147249 fallback
We also fixed some additional self-contained issues and perf improvements: https://github.com/rust-lang/rust/pull/146725 https://github.com/rust-lang/rust/pull/147138 https://github.com/rust-lang/rust/pull/147152 https://github.com/rust-lang/rust/pull/145713 https://github.com/rust-lang/rust/pull/145951
We have also migrated rust-analyzer to entirely use the new solver instead of chalk. This required a large effort, mainly by Jack Huey, Chayim Refael Friedman and Shoyu Vanilla. That's some really impressive work on their end 🎉 See this list of merged PRs for an overview of what this required on the r-a side. Chayim Refael Friedman also landed some changes to the trait solver itself to simplify the integration: https://github.com/rust-lang/rust/pull/145377 https://github.com/rust-lang/rust/pull/146111 https://github.com/rust-lang/rust/pull/147723 https://github.com/rust-lang/rust/pull/146182.
We're still tracking the remaining issues in https://github.com/orgs/rust-lang/projects/61/views/1. Most of these issues are comparatively simple and I expect us to fix most of them over the next few months, getting us close to stabilization. We're currently doing another crater triage which may surface a few more issues.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
Here's another summary of the most interesting developments since the last update:
- reviews and updates have been done on the polonius alpha, and it has since landed
- the last 2 trivial diagnostics failures were fixed
- we've done perf runs, crater runs, completed gathering stats on crates.io for avg and outliers in CFG sizes, locals, loan and region counts, dataflow framework behavior on unexpected graph shapes and bitset invalidations
- I worked on dataflow for borrowck: single pass analyses on acyclic CFGs, dataflow analyses on SCCs for cyclic CFGs
- some more pieces of amanda's SCC rework have landed, with lcnr's help
- lcnr's opaque type rework, borrowcking of nested items, and so on, also fixed some issues we mentioned in previous updates with member constraints for computing when loans are going out of scope
- we also studied recent papers in flow-sensitive pointer analysis
- I also started the loans-in-scope algorithm rework, and also have reachability acceleration with the CFG SCCs
- the last 2 actual failures in the UI tests are soundness issues related to liveness of captured regions for opaque types: some regions that should be live are not. This was done to help with precise capture and to limit the impact of capturing unused regions that cannot actually be used in the hidden type. The unsoundness should not be observable with NLL, but the polonius alpha relies on liveness to propagate loans throughout the CFG: these dead regions prevent detecting some error-causing loan invalidations. The easiest fix would cause breakage in code that's now accepted. Niko, Jack and I have another possible solution, and I'm trying to implement it now.
Goals looking for help
Other goal updates
| Progress | |
| Point of contact | |
| Champions |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay) |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
1 detailed update available.
We had a design meeting on 2025-09-10, minutes available here, aiming at these questions:
There are a few concrete things I would like to get out of this meeting, listed sequentially in order of most to least important:
- Would you be comfortable stabilizing the initial ADTs-only extensions?
- This would be properly RFC'd before stabilization, this ask is just a "vibe check".
- Are you interested in seeing Per-Value Rejection for enums with undesirable variants?
- How do you feel about the idea of Lossy Conversion as an approach in general, what about specifically for the References and Raw Pointers extensions?
- How do you feel about the idea of dropping the One Equality ideal in general, what about specifically for `-0.0` vs `+0.0`, what about specifically for `NaN` values?
The vibe checks on the first one were as follows:
Vibe check
The main ask:
Would you be comfortable stabilizing the initial ADTs-only extensions?
(plus the other ones)
nikomatsakis
I am +1 on working incrementally and focusing first on ADTs. I am supportive of stabilization overall but I don't feel like we've "nailed" the way to talk or think about these things. So I guess my "vibe" is +1 but if this doc were turned into an RFC kind of "as is" I would probably wind up -1 on the RFC, I think more work is needed (in some sense, the question is, "what is the name of the opt-in trait and why is it named that"). This space is complex and I think we have to do better at helping people understand the fine-grained distinctions between runtime values, const-eval values, and type-safe values.
Niko: if we add some sort of derive of a trait name, how much value are we getting from the derive, what should the trait be named?
tmandry
I think we'll learn the most by stabilizing ADTs in a forward compatible way (including an opt-in) now. So +1 from me on the proposed design.
It's worth noting that this is a feature that interacts with many other features, and we will be considering extensions to the MVP for the foreseeable future. To some extent the lang team has committed to this already but we should know what we're signing ourselves up for.
scottmcm
scottmcm: concern over the private fields restriction (see question below), but otherwise for the top ask, yes happy to just do "simple" types (no floats, no cells, no references, etc).
TC
As Niko said, +1 on working incrementally, and I too am supportive overall.
As a vibe, per-value rejection seems fairly OK to me in that we decided to do value-based reasoning for other const checks. It occurs to me there's some parallel with that.
https://github.com/rust-lang/rust/pull/119044
As for the opt-in on types, I see the logic. I do have reservations about adding too many opt-ins to the language, and so I'm curious about whether this can be safely removed.
Regarding floats, I see the question on these as related to our decision about how to handle padding in structs. If it makes sense to normalize or otherwise treat `-0.0` and `+0.0` as the same, then it'd also make sense in my view to normalize or otherwise treat two structs with the same values but different padding (or where only one has initialized padding) as the same.
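The floating-point wrinkle is easy to reproduce on stable Rust. As a minimal sketch of why `-0.0`/`+0.0` and `NaN` strain the One Equality ideal:

```rust
fn main() {
    let neg = -0.0_f64;
    let pos = 0.0_f64;
    assert!(neg == pos); // `==` treats them as equal...
    assert_ne!(neg.to_bits(), pos.to_bits()); // ...but the bit patterns differ
    assert_eq!(1.0 / neg, f64::NEG_INFINITY); // and observable behavior diverges
    let nan = f64::NAN;
    assert!(nan != nan); // NaN is not even equal to itself
    println!("ok");
}
```

So "same value" and "same representation" genuinely come apart for floats, which is exactly what the normalization question is about.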
Champions: bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)
Task owners: Pete LeVasseur, contributors from Ferrous Systems and others TBD
2 detailed updates available.
After much discussion, we have decided to charter this team as a t-spec subteam. Pete LeVasseur and I are working to make that happen now.
PR with charters:
https://github.com/rust-lang/team/pull/2028
1 detailed update available.
Here's our first status update!
- We've been experimenting with a few different ways of emitting retags in codegen, as well as a few different forms that retags should take at this level. We think we've settled on a set of changes that's worth sending out to the community for feedback, likely as a pre-RFC. You can expect more engagement from us on this level in the next couple of weeks.
- We've used these changes to create an initial working prototype for BorrowSanitizer that supports finding Tree Borrows violations in tiny, single-threaded Rust programs. We're working on getting Miri's test suite ported over to confirm that everything is working correctly and that we've quashed any false positives or false negatives.
- This coming Monday, I'll be presenting on BorrowSanitizer and this project goal at the Workshop on Supporting Memory Safety in LLVM. Please reach out if you're attending and would like to chat more in person!
Task owners: Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby
1 detailed update available.
The work on this goal has led to many ongoing discussions on the current status of the Reference. Those discussions are still in progress.
Meanwhile, many people working on this goal have successfully written outlines or draft chapters, at various stages of completeness. There's a broken-out status report at https://github.com/rust-lang/project-goal-reference-expansion/issues/11 .
Champions: compiler (Manuel Drehwald), lang (TC)
Task owners: Manuel Drehwald, LLVM offload/GPU contributors
1 detailed update available.
A longer update on the changes over the fall. We had two GSoC contributors and a lot of smaller improvements to std::autodiff. The first two improvements were already mentioned as draft PRs in the previous update, but have since been merged. I also upstreamed more std::offload changes.
- Marcelo Domínguez refactored the autodiff frontend to be a proper rustc intrinsic, rather than just hacked into the frontend as I first implemented it. This already solved multiple open issues, reduced the code size, and made it generally easier to maintain going forward.
- Karan Janthe upstreamed a first implementation of "TypeTrees", which lowers rust type and layout information to Enzyme, our autodiff backend. This makes it more likely that you won't see compilation failures with the error message "Can not deduce type of ". We might refine in the future what information exactly we lower.
- Karan Janthe made sure that std::autodiff has support for f16 and f128 types.
- One more of my offload PRs landed. I also figured out why the LLVM-IR generated by the std::offload code needed some manual adjustments in the past. We were inconsistent when communicating with LLVM's offload module, about whether we'd want a magic, extra, dyn_ptr argument, that enables kernels to use some extra features. We don't use these features yet, but for consistency we now always generate and expect the extra pointer. The bugfix is currently under review, once it lands upstream, rustc is able to run code on GPUs (still with a little help of clang).
- Marcelo Domínguez refactored my offload frontend, again introducing a proper rustc intrinsic. That code will still need to go through review, but once it lands it will get us a lot closer to a usable frontend. He also started to generate type information for our offload backend to know how many bytes to copy to and from the devices. This is a very simplified version of our autodiff typetrees.
- At RustChinaConf, I was lucky to run into the Wild linker author David Lattimore, who helped me create a draft PR that can dlopen Enzyme at runtime. This means we could ship it via rustup for people interested in std::autodiff, and don't have to link it in at build time, which would increase binary size even for those users that are not interested in it. There are some open issues, so please reach out if you have time to get the PR ready!
- @sgasho spent a lot of time trying to get Rust into the Enzyme CI. Unfortunately that is a tricky process due to Enzyme's CI requirements, so it's not merged yet.
- I tried to simplify building std::autodiff by marking it as compatible with download-llvm-ci. Building LLVM from source was previously by far the slowest part of building rustc with autodiff, so this has large potential. Unfortunately the CI experiments revealed some issues around this setting. We think we know why Enzyme's CMake causes issues here and are working on a fix to make it more reliable.
- Osama Abdelkader and bjorn3 looked into automatically enabling fat-lto when autodiff is enabled. In the past, forgetting to enable fat-lto resulted in incorrect (zero) derivatives. The first approach unfortunately wasn't able to cover all cases, so we need to see whether we can handle it nicely. If that turns out to be too complicated, we will revert it and instead "just" provide a nice error message, rather than returning incorrect derivatives.
All in all, I spent a lot more time on infra (dlopen, CMake, download-llvm-ci, ...) than I'd like, but on the bright side there are only so many features left that I want to support here, so there is an end in sight. I am also about to give a tech talk at the upcoming LLVM dev meeting about safe GPU programming in Rust.
3 detailed updates available.
I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!).
- [merged] Sanitizers target modificators / https://github.com/rust-lang/rust/pull/138736
- [merged] Add assembly test for -Zreg-struct-return option / https://github.com/rust-lang/rust/pull/145382
- [merged] CI: rfl: move job forward to Linux v6.17-rc5 to remove temporary commits / https://github.com/rust-lang/rust/pull/146368
- `-Zharden-sls` / https://github.com/rust-lang/rust/pull/136597 - Waiting on review
- `#![register_tool]` / https://github.com/rust-lang/rust/issues/66079 - Waiting on https://github.com/rust-lang/rfcs/pull/3808
- `-Zno-jump-tables` / https://github.com/rust-lang/rust/pull/145974 - Active FCP, waiting on 2 check boxes
`-Cunsigned-char`
We've discussed adding an option analogous to `-funsigned-char` in GCC and Clang that would allow you to set whether `std::ffi::c_char` is represented by `i8` or `u8`. Right now, this is platform-specific and should map onto whatever `char` is in C on the same platform. However, Linux explicitly sets `char` to be unsigned, and our Rust code then conflicts with that. And in this case, the sign is significant.
Rust for Linux works around this with their `rust::ffi` module, but now that they've switched to the standard library's `CStr` type, they're running into it again with the `as_ptr` method.
Tyler mentioned https://docs.rs/ffi_11/latest/ffi_11/ which preserves the char / signed char / unsigned char distinction.
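For concreteness, a small sketch (not Rust-for-Linux code) of where the signedness shows up. `c_char` is `i8` on x86_64 Linux but `u8` on aarch64 Linux, so code written against one alias needs casts on the other:

```rust
use std::ffi::{c_char, CStr};

fn main() {
    let s = CStr::from_bytes_with_nul(b"hi\0").unwrap();
    // `as_ptr` returns `*const c_char`, whose signedness is platform-defined:
    // `i8` on x86_64 Linux, `u8` on aarch64 Linux. The proposed
    // -Cunsigned-char flag would let you pin this to match the C toolchain.
    let p: *const c_char = s.as_ptr();
    let first = unsafe { *p } as u8;
    println!("{}", first as char); // prints "h"
}
```

The mismatch only bites at FFI boundaries where the C side has committed to one signedness, as the kernel has with `-funsigned-char`.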
Grouping target modifier flags
The proposed unsigned-char option is essentially a target modifier. We have several more of these (e.g. `llvm-args`, `no-redzone`) in the Rust compiler, and Josh suggested we distinguish them somehow, e.g. by giving them the same prefix or possibly creating a new config option (right now we have `-C` and `-Z`; maybe we could add `-T` for target modifiers) so they're distinct from, e.g., the codegen options.
Josh started a Zulip thread here: https://rust-lang.zulipchat.com/#narrow/channel/131828-t-compiler/topic/Grouping.20target.20modifier.20options.3F/with/546524232
#![register_tool] / rust#66079 / RFC#3808
Tyler looked at the RFC. The Crubit team started using register_tool but then moved to using an attribute instead. He proposed we could do something similar here, although it would require a new feature and RFC.
The team was open to seeing how it would work.
Champions: lang (Josh Triplett), lang-docs (TC)
3 detailed updates available.
I've updated the top-level description to show everything we're tracking here (please let me know if anything's missing or incorrect!).
Deref/Receiver
- Ding Xiang Fei keeps updating the PR: https://github.com/rust-lang/rust/pull/146095
- They're also working on a document to explain the consequences of this split
Arbitrary Self Types
- https://github.com/rust-lang/rust/issues/44874
- Waiting on the `Deref`/`Receiver` work, no updates
derive(CoercePointee)
- https://github.com/rust-lang/rust/pull/133820
- Waiting on Arbitrary self types
Pass pointers to const in asm! blocks
- RFC: https://github.com/rust-lang/rfcs/pull/3848
- The Lang team went through the RFC with Alice Ryhl on 2025-10-08 and it's in FCP now
Field projections
- Benno Lossin opened a PR here: https://github.com/rust-lang/rust/pull/146307
- Being reviewed by the compiler folks
Providing \0 terminated file names with #[track_caller]
- The feature has been implemented and stabilized with `file_as_c_str` as the method name: https://github.com/rust-lang/rust/pull/145664
Supertrait auto impl RFC
- Ding Xiang Fei opened the RFC and works with the reviewers: https://github.com/rust-lang/rfcs/pull/3851
Other
- Miguel Ojeda spoke to Linus about rustfmt and they came to an agreement.
Layout of core::any::TypeId
Danilo asked about the layout of TypeId -- specifically its size and whether they can rely on it because they want to store it in a C struct. The struct's size is currently 16 bytes, but that's an implementation detail.
As a vibe check, Josh Triplett and Tyler Mandry were open to guaranteeing that it's going to be at most 16 bytes, but they wanted to reserve the option to reduce the size at some point. The next step is to have the full Lang and Libs teams discuss the proposal.
Danilo will open a PR to get that discussion started.
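The size in question is easy to check. A minimal probe (note the `16` is today's implementation detail, not a guarantee):

```rust
use std::any::TypeId;
use std::mem::size_of;

fn main() {
    // 16 bytes on current compilers; the vibe-checked proposal would only
    // guarantee an upper bound, leaving room to shrink the type later.
    println!("TypeId is {} bytes", size_of::<TypeId>());
    assert!(size_of::<TypeId>() <= 16);
}
```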
rustfmt
Miguel brought up the "trailing empty comment" workaround for the formatting issue that made the rounds in the Linux kernel community a few weeks ago. The kernel style places each import on its own line:
```rust
use crate::{
    fmt,
    page::AsPageIter,
};
```
rustfmt compresses this to:
```rust
use crate::{fmt, page::AsPageIter};
```
The workaround is to put an empty trailing comment at the end
```rust
use crate::{
    fmt,
    page::AsPageIter, //
};
```
This was deemed acceptable (for the time being) and merged into the mainline kernel: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4a9cb2eecc78fa9d388481762dd798fa770e1971
Miguel is in contact with rustfmt to support this behaviour without a workaround.
// PANIC: ... comments / clippy#15895
This is a proposal to add a lint that would require a PANIC comment (modeled after the SAFETY comment) to explain the circumstances during which the code will or won't panic.
Alejandra González was open to the suggestion and Henry Barker stepped up to implement it.
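As a sketch of what such a comment might look like, modeled on the existing SAFETY convention (the exact format the lint will require is still open):

```rust
/// Returns the first byte of `v`.
fn head(v: &[u8]) -> u8 {
    // PANIC: indexing panics on an empty slice; callers are expected to
    // check `!v.is_empty()` first.
    v[0]
}

fn main() {
    println!("{}", head(b"abc")); // prints 97
}
```

Like `// SAFETY:` comments for `unsafe` blocks, the point is to make the panic condition auditable at the call site rather than discovered at runtime.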
Deref/Receiver
During the experimentation work, Ding ran into an issue with overlapping impls (that was present even with #[unstable_feature_bound(..)]). We ran out of time but we'll discuss this offline and return to it at the next meeting.
Champions: cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)
1 detailed update available.
Cargo tracking issue: https://github.com/rust-lang/cargo/issues/15844. The first implementation was https://github.com/rust-lang/cargo/pull/15845 in August, which added build.analysis.enabled = true to unconditionally generate timing HTML. Further implementation tasks are listed in https://github.com/rust-lang/cargo/issues/15844#issuecomment-3192779748.
There hasn't been any progress yet in September.
Champions: compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)
Task owners: oli-obk
1 detailed update available.
I implemented an initial MVP supporting only tuples and primitives (though those are just opaque things you can't interact with further), and getting offsets for the tuple fields as well as the size of the tuple: https://github.com/rust-lang/rust/pull/146923
There are two designs of how to expose this from a libs perspective, but after a sync meeting with scottmcm yesterday we came to the conclusion that neither is objectively better at this stage so we're just going to go with the nice end-user UX version for now. For details see the PR description.
Once the MVP lands, I will mentor various interested contributors who will keep adding fields to the Type struct and variants to the TypeKind enum.
The next major step is restricting what information you can get from structs outside of the current module or crate. We want to honor visibility, so an initial step would be to just never show private fields, but we want to explore allowing private fields to be shown either just within the current module or via some opt-in marker trait.
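To ground what "offsets for the tuple fields as well as the size of the tuple" means, here's a plain std-only illustration of tuple layout (not the new reflection API, which is still in review):

```rust
use std::mem::{align_of, size_of};

fn main() {
    // In a (u8, u32) tuple the u32 needs 4-byte alignment, so the 8-byte
    // layout contains 3 bytes of padding somewhere (field order is
    // unspecified for the default repr). Reflection would expose the
    // actual per-field offsets the compiler chose.
    assert_eq!(align_of::<(u8, u32)>(), 4);
    assert_eq!(size_of::<(u8, u32)>(), 8);
    println!("ok");
}
```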
1 detailed update available.
Status update October 6, 2025
The build-dir was split out of target-dir as part of https://github.com/rust-lang/cargo/issues/14125 and scheduled for stabilization in Rust 1.91.0. 🎉
Before re-organizing the build-dir layout we wanted to improve the existing layout tests to make sure we do not make any unexpected changes. This testing harness improvement was merged in https://github.com/rust-lang/cargo/pull/15874.
The initial build-dir layout reorganization PR has been posted https://github.com/rust-lang/cargo/pull/15947 and discussion/reviews are under way.
Task owners: [Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec)
Task owners: vision team
1 detailed update available.
Update:
Niko and I gave a talk at RustConf 2025 (and I represented that talk at RustChinaConf 2025) where we gave an update on this (and some intermediate insights).
We have started to seriously plan the shape of the final doc. We have some "blind spots" that we'd like to cover before finishing up, but overall we're feeling close to the finish line on interviews.
1 detailed update available.
We moved forward with the implementation, and the new job queue system is now being tested in production on a single test pull request. Most things seem to be working, but there are a few things to iron out and some profiling to be done. I expect that within a few weeks we could be ready to switch to the new system fully in production.
Champions: compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)
1 detailed update available.
Sized hierarchy
The focus right now is on the "non-const" parts of the proposal, as the "const" parts are blocked on the new trait solver (https://github.com/rust-lang/rust-project-goals/issues/113). Now that the types team FCP https://github.com/rust-lang/rust/pull/144064 has completed, work can proceed to land the implementation PRs. David Wood plans to split the RFC to separate out the "non-const" parts of the proposal so it can move independently, which will enable extern types.
To that end, there are three interesting T-lang design questions to be considered.
Naming of the traits
The RFC currently proposes the following names: `Sized`, `MetaSized`, `PointeeSized`.
However, these names do not follow the "best practice" of naming the trait after the capability that it provides. As champion, Niko is recommending we shift to the following names:
- `Sized` -- should rightly be called `SizeOf`, but oh well, not worth changing.
- `SizeOfVal` -- named after the method `size_of_val` that you get access to.
- `Pointee` -- the only thing you can do is point at it.
The last trait name is already used by the (unstable) std::ptr::Pointee trait. We do not want to have these literally be the same trait because that trait adds a Metadata associated type which would be backwards incompatible; if existing code uses T::Metadata to mean <T as SomeOtherTrait>::Metadata, it could introduce ambiguity if now T: Pointee due to defaults. My proposal is to rename std::ptr::Pointee to std::ptr::PointeeMetadata for now, since that trait is unstable and the design remains under some discussion. The two traits could either be merged eventually or remain separate.
Note that PointeeMetadata would be implemented automatically by the compiler for anything that implements Pointee.
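As a rough mental model only (these are names from the discussion, not a real std API), the proposed hierarchy can be sketched as ordinary supertraits, where each capability level implies the one below it:

```rust
// Hypothetical sketch of the capability ladder; not the actual compiler
// machinery, which implements these bounds automatically.
trait Pointee {}                     // can be pointed at
trait SizeOfVal: Pointee {}          // `size_of_val` works on values of this type
trait StaticallySized: SizeOfVal {}  // stands in for today's `Sized`

struct Thing;
impl Pointee for Thing {}
impl SizeOfVal for Thing {}
impl StaticallySized for Thing {}

fn needs_size_of_val<T: SizeOfVal + ?Sized>(_t: &T) {}

fn main() {
    needs_size_of_val(&Thing); // Thing satisfies the whole chain
    println!("ok");
}
```

An extern type would sit at the `Pointee` level only, which is why the default bounds matter so much below.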
Syntax opt-in
The RFC proposes that an explicit bound like `T: MetaSized` disables the default `T: Sized` bound. However, this gives no signal that this trait bound is "special" or different from any other trait bound. Naming conventions can help here, signalling to users that these are special traits, but that leads to constraints on naming and may not scale as we consider using this mechanism to relax other defaults, as proposed in my recent blog post. One idea is to use some form of syntax, so that `T: MetaSized` is just a regular bound, but (for example) `T: =MetaSized` indicates that this bound "disables" the default `Sized` bound. This gives users some signal that something special is going on. This `=` syntax is borrowed from semver constraints, although it's not a precise match (it does not mean that `T: Sized` doesn't hold, after all). Other proposals would be some other sigil (`T: ?MetaSized`, but it means "opt out of the traits above you"; `T: #MetaSized`, ...) or a keyword (no concrete idea yet).
To help us get a feel for it, I'll use T: =Foo throughout this post.
Implicit trait supertrait bounds, edition interaction
In Rust 2024, a trait is implicitly `?Sized`, which gets mapped to `=SizeOfVal`:
```rust
trait Marker {} // cannot be implemented by extern types
```
This is not desirable but changing it would be backwards incompatible if traits have default methods that take advantage of this bound:
```rust
trait NotQuiteMarker {
    fn dummy(&self) {
        // implicitly relies on `Self` supporting `size_of_val`
        let s = std::mem::size_of_val(self);
        let _ = s;
    }
}
```
We need to decide how to handle this. Options are:
- Just change it, breakage will be small (have to test that).
- Default to `=SizeOfVal` but let users explicitly write `=Pointee` if they want that. Bad because all traits will be incompatible with extern types.
- Default to `=SizeOfVal` only if defaulted methods are present. Bad because it's a backwards incompatible change to add a defaulted method now.
- Default to `=Pointee` but add `where Self: =SizeOfVal` implicitly to defaulted methods. Now it's not backwards incompatible to add a new defaulted method, but it is backwards incompatible to change an existing method to have a default.
If we go with one of the latter options, Niko proposes that we should relax this in the next Edition (Rust 2026?) so that the default becomes Pointee (or maybe not even that, if we can).
Relaxing associated type bounds
Under the RFC, existing ?Sized bounds would be equivalent to =SizeOfVal. This is mostly fine but will cause problems in (at least) two specific cases: closure bounds and the Deref trait. For closures, we can adjust the bound since the associated type is unstable and due to the peculiarities of our Fn() -> T syntax. Failure to adjust the Deref bound in particular would prohibit the use of Rc<E> where E is an extern type, etc.
For deref bounds, David Wood is preparing a PR that simply changes the bound in a backwards incompatible way to assess breakage on crater. There is some chance the breakage will be small.
If the breakage proves problematic, or if we find other traits that need to be relaxed in a similar fashion, we do have the option of:
- In Rust 2024, `T: Deref` becomes equivalent to `T: Deref<Target: SizeOfVal>` unless written like `T: Deref<Target: =Pointee>`. We add that annotation throughout stdlib.
- In Rust 202X, we change the default, so that `T: Deref` does not add any special bounds, and existing Rust 2024 `T: Deref` is rewritten to `T: Deref<Target: SizeOfVal>` as needed.
Other notes
One topic that came up in discussion is that we may eventually wish to add a level "below" Pointee, perhaps Value, that signifies WebAssembly external values which cannot be pointed at. That is not currently under consideration but should be backwards compatible.
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
| Progress | |
| Point of contact | |
| Champions | |
| Task owners |
19 Nov 2025 12:00am GMT
18 Nov 2025
Planet Mozilla
Mozilla Thunderbird: Thunderbird Adds Native Microsoft Exchange Email Support

If your organization uses Microsoft Exchange-based email, you'll be happy to hear that Thunderbird's latest monthly release, version 145, now officially supports native access via the Exchange Web Services (EWS) protocol. With EWS now built directly into Thunderbird, a third-party add-on is no longer required for email functionality. Calendar and address book support for Exchange accounts remains on the roadmap, but email integration is here and ready to use!
What changes for Thunderbird users
Until now, Thunderbird users in Exchange hosted environments often relied on IMAP/POP protocols or third-party extensions. With full native Exchange support for email, Thunderbird now works more seamlessly in Exchange environments, including full folder listings, message synchronization, folder management both locally and on the server, attachment handling, and more. This simplifies life for users who depend on Exchange for email but prefer Thunderbird as their client.
How to get started
For many people switching from Outlook to Thunderbird, the most common setup involves Microsoft-hosted Exchange accounts such as Microsoft 365 or Office 365. Thunderbird now uses Microsoft's standard sign-in process (OAuth2) and automatically detects your account settings, so you can start using your email right away without any extra setup.
If this applies to you, setup is straightforward:
- Create a new account in Thunderbird 145 or newer.
- In the new Account Hub, select Exchange (or Exchange Web Services in legacy setup).
- Let Thunderbird handle the rest!

Important note: If you see something different, or need more details or advice, please see our support page and wiki page. Also, some authentication configurations are not supported yet and you may need to wait for a further update that expands compatibility; please refer to the table below for more details.
What functionality is supported now and what's coming soon
As mentioned earlier, EWS support in version 145 currently enables email functionality only. Calendar and address book integration are in active development and will be added in future releases. The chart below provides an at-a-glance view of what's supported today.
| Feature area | Supported now | Not yet supported |
| --- | --- | --- |
| Email - account setup & folder access | Creating accounts via auto-config with EWS, server-side folder manipulation | - |
| Email - message operations | Viewing messages, sending, replying/forwarding, moving/copying/deleting | - |
| Email - attachments | Attachments can be saved and displayed, with detach/delete support | - |
| Search & filtering | Search subject and body, quick filtering | Filter actions requiring full body content |
| Accounts hosted on Microsoft 365 | Domains using the standard Microsoft OAuth2 endpoint | Domains requiring custom OAuth2 application and tenant IDs will be supported in the future |
| Accounts hosted on-premise | Password-based Basic authentication | Password-based NTLM authentication and OAuth2 for on-premise servers are on the roadmap |
| Calendar support | - | Not yet implemented - calendar syncing is on the roadmap |
| Address book / contacts support | - | Not yet implemented - address book support is on the roadmap |
| Microsoft Graph support | - | Not yet implemented - Microsoft Graph integration will be added in the future |
Exchange Web Services and Microsoft Graph
While many people and organizations still rely on Exchange Web Services (EWS), Microsoft has begun gradually phasing it out in favor of a newer, more modern interface called Microsoft Graph. Microsoft has stated that EWS will continue to be supported for the foreseeable future, but over time, Microsoft Graph will become the primary way to connect to Microsoft 365 services.
Because EWS remains widely used today, we wanted to ensure full support for it first to ensure compatibility for existing users. At the same time, we're actively working to add support for Microsoft Graph, so Thunderbird will be ready as Microsoft transitions to its new standard.
Looking ahead
While Exchange email is available now, calendar and address book integration is on the way, bringing Thunderbird closer to being a complete solution for Exchange users. For many people, having reliable email access is the most important step, but if you depend on calendar and contact synchronization, we're working hard to bring this to Thunderbird in the near future, making Thunderbird a strong alternative to Outlook.
Keep an eye on future releases for additional support and integrations, but in the meantime, enjoy a smoother Exchange email experience within your favorite email client!
If you want to know more about Exchange support in Thunderbird, please refer to the dedicated page on support.mozilla.org. Organization admins can also find out more on the Mozilla wiki page. To follow ongoing and future work in this area, please refer to the relevant meta-bug on Bugzilla.
The post Thunderbird Adds Native Microsoft Exchange Email Support appeared first on The Thunderbird Blog.
18 Nov 2025 3:15pm GMT
The Rust Programming Language Blog: Google Summer of Code 2025 results
As we have announced previously this year, the Rust Project participated in Google Summer of Code (GSoC) for the second time. Almost twenty contributors have been working very hard on their projects for several months. Same as last year, the projects had various durations, so some of them have ended in September, while the last ones have been concluded in the middle of November. Now that the final reports of all projects have been submitted, we are happy to announce that 18 out of 19 projects have been successful! We had a very large number of projects this year, so we consider this number of successfully finished projects to be a great result.
We had awesome interactions with our GSoC contributors over the summer, and through a video call, we also had a chance to see each other and discuss the accepted GSoC projects. Our contributors have learned a lot of new things and collaborated with us on making Rust better for everyone, and we are very grateful for all their contributions! Some of them have even continued contributing after their project has ended, and we hope to keep working with them in the future, to further improve open-source Rust software. We would like to thank all our Rust GSoC 2025 contributors. You did a great job!
Same as last year, Google Summer of Code 2025 was overall a success for the Rust Project, this time with more than double the number of projects. We think that GSoC is a great way of introducing new contributors to our community, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our GSoC project idea list and our guide for new contributors.
Below you can find a brief summary of our GSoC 2025 projects. You can find more information about the original goals of the projects here. For easier navigation, here is an index of the project descriptions in alphabetical order:
- ABI/Layout handling for the automatic differentiation feature by Marcelo Domínguez
- Add safety contracts by Dawid Lachowicz
- Bootstrap of rustc with rustc_codegen_gcc by Michał Kostrubiec
- Cargo: Build script delegation by Naman Garg
- Distributed and resource-efficient verification by Jiping Zhou
- Enable Witness Generation in cargo-semver-checks by Talyn Veugelers
- Extend behavioural testing of std::arch intrinsics by Madhav Madhusoodanan
- Implement merge functionality in bors by Sakibul Islam
- Improve bootstrap by Shourya Sharma
- Improve Wild linker test suites by Kei Akiyama
- Improving the Rustc Parallel Frontend: Parallel Macro Expansion by Lorrens Pantelis
- Make cargo-semver-checks faster by Joseph Chung
- Make Rustup Concurrent by Francisco Gouveia
- Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices by Julien Robert
- Modernising the libc Crate by Abdul Muiz
- Prepare stable_mir crate for publishing by Makai
- Prototype an alternative architecture for cargo fix using cargo check by Glen Thalakottur
- Prototype Cargo Plumbing Commands by Vito Secona
And now strap in, as there is a ton of great content to read about here!
ABI/Layout handling for the automatic differentiation feature
- Contributor: Marcelo Domínguez
- Mentors: Manuel Drehwald, Oli Scherer
- Final report
The std::autodiff module allows computing gradients and derivatives in the calculus sense. It provides two autodiff macros that can be applied to user-written functions; these automatically generate modified versions of those functions that also compute the requested gradients and derivatives. This functionality is especially useful in scientific computing and in the implementation of machine-learning models.
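To give a flavor of what such a derivative-generating transform computes, here is a tiny forward-mode sketch using dual numbers. This is not the std::autodiff API; the Dual type and function names are purely illustrative.

```rust
// Forward-mode automatic differentiation via dual numbers (illustrative).
// A Dual carries a value and the derivative of that value with respect
// to the chosen input; arithmetic propagates both at once.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Dual {
    val: f64,
    dx: f64,
}

fn square(x: Dual) -> Dual {
    // d(x * x) = 2 * x * dx, by the product rule
    Dual { val: x.val * x.val, dx: 2.0 * x.val * x.dx }
}

// Seed dx = 1.0 to differentiate with respect to `at`.
fn derivative_of_square(at: f64) -> f64 {
    square(Dual { val: at, dx: 1.0 }).dx
}
```

Where std::autodiff differs is that the compiler (via an LLVM plugin) generates the derivative body for you from the original annotated function, rather than you writing the Dual arithmetic by hand.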
Our autodiff frontend was facing two challenges.
- First, we would generate a new function through our macro expansion, however, we would not have a suitable function body for it yet. Our autodiff implementation relies on an LLVM plugin to generate the function body. However, this plugin only gets called towards the end of the compilation pipeline. Earlier optimization passes, either on the LLVM or the Rust side, could look at the placeholder body and either "optimize" or even delete the function since it has no clear purpose yet.
- Second, the flexibility of our macros was causing issues, since it allows requesting derivative computations on a per-argument basis. However, when we start to lower Rust arguments to our compiler backends like LLVM, we do not always have a 1:1 match of Rust arguments to LLVM arguments. As a simple example, an array with two double values might be passed as two individual double values on LLVM level, whereas an array with three doubles might be passed via a pointer.
Marcelo helped rewrite our autodiff macros to not generate hacky placeholder function bodies, but instead introduced a proper autodiff intrinsic. This is the proper way for us to declare that an implementation of this function is not available yet and will be provided later in the compilation pipeline. As a consequence, our generated functions were not deleted or incorrectly optimized anymore. The intrinsic PR also allowed removing some previous hacks and therefore reduced the total lines of code in the Rust compiler by over 500! You can find more details in this PR.
Beyond autodiff work, Marcelo also initiated work on GPU offloading intrinsics, and helped with multiple bugs in our argument handling. We would like to thank Marcelo for all his great work!
Add safety contracts
- Contributor: Dawid Lachowicz
- Mentor: Michael Tautschnig
- Final report
The Rust Project has an ambitious goal to instrument the Rust standard library with safety contracts, moving from informal comments that specify safety requirements of unsafe functions to executable Rust code. This transformation represents a significant step toward making Rust's safety guarantees more explicit and verifiable. To prioritize which functions should receive contracts first, there is a verification contest ongoing.
Given that Rust contracts are still in their early stages, Dawid's project was intentionally open-ended in scope and direction. This flexibility allowed Dawid to identify and tackle several key areas that would add substantial value to the contracts ecosystem. His contributions were in the following three main areas:
- Pragmatic Contracts Integration: Refactoring contract HIR lowering to ensure that no contract code is executed when contract checks are disabled. This has a major impact, as it ensures that contracts incur no runtime cost when contract checks are disabled.
- Variable Reference Capability: Adding the ability to refer to variables from preconditions within postconditions. This fundamental enhancement to the contracts system has been fully implemented and merged into the compiler. It provides developers with much more expressive power when writing contracts, allowing them to establish relationships between input and output states.
- Separation Logic Integration: The bulk of Dawid's project involved identifying, understanding, and planning the introduction of owned and block ownership predicates for separation-logic-style reasoning in contracts for unsafe Rust code. This work required extensive research and collaboration with experts in the field. Dawid engaged in multiple discussions with authors of Rust validation tools and Miri developers, both in person and through Zulip discussion threads. The culmination of this research is captured in a comprehensive MCP (Major Change Proposal) that Dawid created.
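The "refer to variables from preconditions within postconditions" capability can be sketched in plain Rust by capturing the pre-state by hand. This is not the unstable contracts attribute syntax; the function below is a hypothetical stand-in for what such a contract expresses.

```rust
// A hand-rolled contract: the postcondition refers to a value captured
// before the body ran (the "old" state), relating input and output states.
fn pop_checked(v: &mut Vec<i32>) -> Option<i32> {
    let old_len = v.len(); // precondition-time value, still visible "after"
    let out = v.pop();
    // Postcondition: the length shrank by exactly one iff a value was popped.
    assert_eq!(v.len(), old_len - usize::from(out.is_some()));
    out
}
```

The contracts feature lets you state this relationship declaratively in an attribute instead of threading the old value through the function body yourself.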
Dawid's work represents crucial foundational progress for Rust's safety contracts initiative. By successfully implementing variable reference capabilities and laying the groundwork for separation logic integration, he has positioned the contracts feature for significant future development. His research and design work will undoubtedly influence the direction of this important safety feature as it continues to mature. Thank you very much!
Bootstrap of rustc with rustc_codegen_gcc
- Contributor: Michał Kostrubiec
- Mentor: antoyo
- Final report
The goal of this project was to improve the Rust GCC codegen backend (rustc_codegen_gcc), so that it would be able to compile the "stage 2"1 Rust compiler (rustc) itself again.
You might remember that Michał already participated in GSoC last year, where he was working on his own .NET Rust codegen backend, and he did an incredible amount of work. This year, his progress was somehow even faster. Even before the official GSoC implementation period started (!), he essentially completed his original project goal and managed to build rustc with GCC. This was no small feat, as he had to investigate and fix several miscompilations that occurred when functions marked with #[inline(always)] were called recursively or when the compiled program was trying to work with 128-bit integers. You can read more about this initial work at his blog.
After that, he immediately started working on stretch goals of his project. The first one was to get a "stage-3" rustc build working, for which he had to vastly improve the memory consumption of the codegen backend.
Once that was done, he moved on to yet another goal, which was to build rustc for a platform not supported by LLVM. He made progress on this for DEC Alpha and m68k. He also attempted to compile rustc on AArch64, which led to him finding an ABI bug. Ultimately, he managed to build a rustc for m68k (with a few workarounds that we will need to fix in the future). That is a very nice first step towards porting Rust to new platforms unsupported by LLVM, and is important for initiatives such as Rust for Linux.
Michał had to spend a lot of time staring into assembly code and investigating arcane ABI problems. In order to make this easier for everyone, he implemented support for fuzzing and automatically checking ABI mismatches in the GCC codegen backend. You can read more about his testing and fuzzing efforts here.
We were really impressed with what Michał was able to achieve, and we really appreciated working with him this summer. Thank you for all your work, Michał!
Cargo: Build script delegation
- Contributor: Naman Garg
- Mentor: Ed Page
- Final report
Cargo build scripts come at a compile-time cost, because even to run cargo check, they must be built as if you ran cargo build, so that they can be executed during compilation. Even though we try to identify ways to reduce the need to write build scripts in the first place, that may not always be doable. However, if we could shift build scripts from being defined in every package that needs them, into a few core build script packages, we could both reduce the compile-time overhead, and also improve their auditability and transparency. You can find more information about this idea here.
The first step required to delegate build scripts to packages is to be able to run multiple build scripts per crate, so that is what Naman was primarily working on. He introduced a new unstable multiple-build-scripts feature to Cargo, implemented support for parsing an array of build scripts in Cargo.toml, and extended Cargo so that it can now execute multiple build scripts while building a single crate. He also added a set of tests to ensure that this feature will work as we expect it to.
Then he worked on ensuring that the execution of build scripts is performed in a deterministic order, and that crates can access the output of each build script separately. For example, if you have the following configuration:
```toml
[package]
build = ["windows-manifest.rs", "release-info.rs"]
```

then the corresponding crate is able to access the OUT_DIRs of both build scripts using env!("windows-manifest_OUT_DIR") and env!("release-info_OUT_DIR").
As future work, we would like to implement the ability to pass parameters to build scripts through metadata specified in Cargo.toml and then implement the actual build script delegation to external build scripts using artifact-dependencies.
We would like to thank Naman for helping improve Cargo and laying the groundwork for a feature that could have compile-time benefits across the Rust ecosystem!
Distributed and resource-efficient verification
- Contributor: Jiping Zhou
- Mentor: Michael Tautschnig
- Final report
The goal of this project was to address critical scalability challenges of formally verifying Rust's standard library by developing a distributed verification system that intelligently manages computational resources and minimizes redundant work. The Rust standard library verification project faces significant computational overhead when verifying large codebases, as traditional approaches re-verify unchanged code components. With Rust's standard library containing thousands of functions and continuous development cycles, this inefficiency becomes a major bottleneck for practical formal verification adoption.
Jiping implemented a distributed verification system with several key innovations:
- Intelligent Change Detection: The system uses hash-based analysis to identify which parts of the codebase have actually changed, allowing verification to focus only on modified components and their dependencies.
- Multi-Tool Orchestration: The project coordinates multiple verification backends including Kani model checker, with careful version pinning and compatibility management.
- Distributed Architecture: The verification workload is distributed across multiple compute nodes, with intelligent scheduling that considers both computational requirements and dependency graphs.
- Real-time Visualization: Jiping built a comprehensive web interface that provides live verification status, interactive charts, and detailed proof results. You can check it out here!
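The change-detection idea in the first bullet can be sketched as caching a hash of each item's source and re-verifying only on mismatch. This is purely illustrative, not the actual tool's code; DefaultHasher stands in for whatever digest the real system uses.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Hash-based change detection (sketch): only items whose source hash
// differs from the cached hash need to be re-verified.
fn needs_reverify(cache: &mut HashMap<String, u64>, name: &str, src: &str) -> bool {
    let mut h = DefaultHasher::new();
    src.hash(&mut h);
    let digest = h.finish();
    match cache.insert(name.to_string(), digest) {
        Some(old) if old == digest => false, // unchanged: skip verification
        _ => true,                           // new or changed: re-verify
    }
}
```

In the real system the "dependencies" part matters too: a changed item also invalidates everything that depends on it, so the cache lookup runs over a dependency graph rather than item by item.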
You can find the created distributed verification tool in this repository. Jiping's work established a foundation for scalable formal verification that can adapt to the growing complexity of Rust's ecosystem, while maintaining verification quality and completeness, which will go a long way towards ensuring that Rust's standard library remains safe and sound. Thank you for your great work!
Enable Witness Generation in cargo-semver-checks
- Contributor: Talyn Veugelers
- Mentor: Predrag Gruevski
- Final report
cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. Talyn's project aimed to lay the groundwork for it to tackle our most vexing limitation: the inability to catch SemVer breakage due to type changes.
Imagine a crate makes the following change to its public API:
```rust
// baseline version
pub fn example(value: i64) {}

// new version
pub fn example(value: String) {}
```
This is clearly a major breaking change, right? And yet cargo-semver-checks with its hundreds of lints is still unable to flag this. While this case seems trivial, it's just the tip of an enormous iceberg. Instead of changing i64 to String, what if the change was from i64 to impl Into<i64>, or worse, into some monstrosity like:

```rust
// an illustrative stand-in; the original example was more elaborate
pub fn example(value: impl Clone + PartialOrd + Into<i64>) {}
```

Figuring out whether this change is breaking requires checking whether the original i64 parameter type can "fit" into that monstrosity of an impl Trait type. But reimplementing a Rust type checker and trait solver inside cargo-semver-checks is out of the question! Instead, we turn to a technique created for a previous study of SemVer breakage on crates.io: we generate a "witness" program that will fail to compile if, and only if, there's a breaking change between the two versions.
The witness program is a separate crate that can be made to depend on either the old or the new version of the crate being scanned. If our example function comes from a crate called upstream, its witness program would look something like:
```rust
// take the same parameter type as the baseline version
pub fn witness(value: i64) {
    upstream::example(value)
}
```
This example is cherry-picked to be easy to understand. Witness programs are rarely this straightforward!
Attempting to cargo check the witness while plugging in the new version of upstream forces the Rust compiler to decide whether i64 matches the new impl Trait parameter. If cargo check passes without errors, there's no breaking change here. But if there's a compilation error, then this is concrete, incontrovertible evidence of breakage!
Over the past 22+ weeks, Talyn worked tirelessly to move this from an idea to a working proof of concept. For every problem we foresaw needing to solve, ten more emerged along the way. Talyn did a lot of design work to figure out an approach that would be able to deal with crates coming from various sources (crates.io, a path on disk, a git revision), would support multiple rustdoc JSON formats for all the hundreds of existing lints, and do so in a fashion that doesn't get in the way of adding hundreds more lints in the future.
Even the above list of daunting challenges fails to do justice to the complexity of this project. Talyn created a witness generation prototype that lays the groundwork for robust checking of type-related SemVer breakages in the future. The success of this work is key to the cargo-semver-checks roadmap for 2026 and beyond. We would like to thank Talyn for their work, and we hope to continue working with them on improving witness generation in the future.
Extend behavioural testing of std::arch intrinsics
- Contributor: Madhav Madhusoodanan
- Mentor: Amanieu d'Antras
- Final report
The std::arch module contains target-specific intrinsics (low-level functions that typically correspond to single machine instructions) which are intended to be used by other libraries. These are intended to match the equivalent intrinsics available as vendor-specific extensions in C.
The intrinsics are tested with three approaches. We test that:
- The signatures of the intrinsics match the one specified by the architecture.
- The intrinsics generate the correct instruction.
- The intrinsics have the correct runtime behavior.
These behavior tests are implemented in the intrinsics-test crate. Initially, this test framework only covered the AArch64 and AArch32 targets, where it was very useful in finding bugs in the implementation of the intrinsics. Madhav's project was about refactoring and improving this framework to make it easier (or really, possible) to extend it to other CPU architectures.
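The runtime-behavior approach can be sketched as running an intrinsic and comparing its output against a plain scalar "reference model" of the same operation. This only illustrates the testing pattern; it is not the intrinsics-test framework itself.

```rust
// Behavior test sketch: compare an x86 SIMD intrinsic against ordinary
// scalar arithmetic with the same semantics, lane by lane.
#[cfg(target_arch = "x86_64")]
fn intrinsic_matches_model() -> bool {
    use std::arch::x86_64::*;
    let a = [1i32, 2, 3, 4];
    let b = [10i32, 20, 30, 40];
    // Reference model: plain scalar addition.
    let expected: Vec<i32> = a.iter().zip(&b).map(|(x, y)| x + y).collect();
    // SSE2 is part of the x86_64 baseline, so calling these here is sound.
    let got: [i32; 4] = unsafe {
        let va = _mm_loadu_si128(a.as_ptr() as *const __m128i);
        let vb = _mm_loadu_si128(b.as_ptr() as *const __m128i);
        let mut out = [0i32; 4];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, _mm_add_epi32(va, vb));
        out
    };
    got.to_vec() == expected
}

#[cfg(not(target_arch = "x86_64"))]
fn intrinsic_matches_model() -> bool {
    true // nothing to check in this sketch on other architectures
}
```

The real framework generates such comparisons for thousands of intrinsics and runs them under emulation where needed, which is why sharing the architecture-independent scaffolding matters so much.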
First, Madhav split the codebase into a module with shared (architecturally independent) code and a module with ARM-specific logic. Then he implemented support for testing intrinsics for the x86 architecture, which is Rust's most widely used target. In doing so, he allowed us to discover real bugs in the implementation of some intrinsics, which is a great result! Madhav also did a lot of work in optimizing how the test suite is compiled and executed, to reduce CI time needed to run tests, and he laid the groundwork for supporting even more architectures, specifically LoongArch and WebAssembly.
We would like to thank Madhav for all his work on helping us make sure that Rust intrinsics are safe and correct!
Implement merge functionality in bors
- Contributor: Sakibul Islam
- Mentor: Jakub Beránek
- Final report
The main Rust repository uses a pull request merge queue bot that we call bors. Its original Python implementation had a lot of issues and was difficult to maintain. The goal of this GSoC project was thus to implement the primary merge queue functionality in our Rust rewrite of this bot.
Sakibul first examined the original Python codebase to figure out what it was doing, and then he implemented several bot commands that allow contributors to approve PRs, set their priority, delegate approval rights, temporarily close the merge tree, and many others. He also implemented an asynchronous background process that checks whether a given pull request is mergeable or not (this process is relatively involved, due to how GitHub works), which required implementing a specialized synchronized queue for deduplicating mergeability check requests to avoid overloading the GitHub API. Furthermore, Sakibul also reimplemented (a nicer version of) the merge queue status webpage that can be used to track which pull requests are currently being tested on CI, which ones are approved, etc.
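A deduplicating queue of the kind described can be sketched like this (an assumption about the shape, not the actual bors code): each PR number waits in the queue at most once, so repeated webhook events don't trigger redundant GitHub API calls.

```rust
use std::collections::{HashSet, VecDeque};

// Sketch of a queue that deduplicates pending mergeability-check
// requests by PR number while preserving arrival order.
struct DedupQueue {
    queue: VecDeque<u64>,
    pending: HashSet<u64>,
}

impl DedupQueue {
    fn new() -> Self {
        Self { queue: VecDeque::new(), pending: HashSet::new() }
    }

    /// Enqueue a PR for re-checking; returns false if it is already waiting.
    fn push(&mut self, pr: u64) -> bool {
        if self.pending.insert(pr) {
            self.queue.push_back(pr);
            true
        } else {
            false
        }
    }

    /// Take the next PR to check, allowing it to be enqueued again later.
    fn pop(&mut self) -> Option<u64> {
        let pr = self.queue.pop_front()?;
        self.pending.remove(&pr);
        Some(pr)
    }
}
```

In the real bot this sits behind synchronization so that the background mergeability-check process and webhook handlers can share it safely.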
After the groundwork was prepared, Sakibul could work on the merge queue itself, which required him to think about many tricky race conditions and edge cases to ensure that bors doesn't e.g. merge the wrong PR into the default branch or merge a PR multiple times. He covered these edge cases with many integration tests, to give us more confidence that the merge queue will work as we expect it to, and also prepared a script for creating simulated PRs on a test GitHub repository so that we can test bors "in the wild". And so far, it seems to be working very well!
After we finish the final piece of the merge logic (creating so-called "rollups") together with Sakibul, we will start using bors fully in the main Rust repository. Sakibul's work will thus be used to merge all rust-lang/rust pull requests. Exciting!
Apart from working on the merge queue, Sakibul made many other awesome contributions to the codebase, like refactoring the test suite or analyzing performance of SQL queries. In total, Sakibul sent around fifty pull requests that were already merged into bors! What can we say, other than: Awesome work Sakibul, thank you!
Improve bootstrap
- Contributor: Shourya Sharma
- Mentors: Jakub Beránek, Jieyou Xu, Onur Özkan
- Final report
bootstrap is the build system of Rust itself, which is responsible for building the compiler, standard library, and pretty much everything else that you can download through rustup. This project's goal was very open-ended: "improve bootstrap".
And Shourya did just that! He made meaningful contributions to several parts of bootstrap. First, he added much-needed documentation to several core bootstrap data structures and modules, which were quite opaque and hard to understand without any docs. Then he moved to improving command execution, as each bootstrap invocation invokes hundreds of external binaries, and it was difficult to track them. Shourya finished a long-standing refactoring that routes almost all executed commands through a single place. This allowed him to also implement command caching and also command profiling, which shows us which commands are the slowest.
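Routing all executed commands through a single place is what makes caching possible in the first place. A minimal sketch of the idea, keyed by the full command line (hypothetical types, not bootstrap's actual API):

```rust
use std::collections::HashMap;

// Sketch: cache the output of external commands so repeated identical
// invocations during one build are executed only once.
struct CommandCache {
    cache: HashMap<String, String>,
}

impl CommandCache {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    /// Run `exec` for this command line unless its output is already cached.
    fn run(&mut self, cmdline: &str, exec: impl FnOnce() -> String) -> String {
        if let Some(out) = self.cache.get(cmdline) {
            return out.clone(); // cache hit: skip re-running the command
        }
        let out = exec();
        self.cache.insert(cmdline.to_string(), out.clone());
        out
    }
}
```

The same choke point also makes profiling straightforward: timing each `exec` call reveals which external commands dominate bootstrap's wall-clock time.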
After that, Shourya moved on to refactoring config parsing. This was no easy task, because bootstrap has A LOT of config options; the single function that parses them had over a thousand lines of code (!). A set of complicated config precedence rules was frequently causing bugs when we had to modify that function. It took him several weeks to untangle this mess, but the result is worth it. The refactored function is much less brittle and easier to understand and modify, which is great for future maintenance.
The final area that Shourya improved was bootstrap tests. He made it possible to run them using bare cargo, which enables debugging them e.g. in an IDE. Most importantly, he found a way to run the tests in parallel, which makes contributing to bootstrap itself much more pleasant, as it reduced the time to execute the tests from a minute to under ten seconds. These changes required refactoring many bootstrap tests that were using global state, which was not compatible with parallel execution.
Overall, Shourya made more than 30 PRs to bootstrap since April! We are very thankful for all his contributions, as they made bootstrap much easier to maintain. Thank you!
Improve Wild linker test suites
- Contributor: Kei Akiyama
- Mentor: David Lattimore
- Final report
Wild is a very fast linker for Linux that's written in Rust. It can be used to build executables and shared objects.
Kei's project was to leverage the test suite of one of the other Linux linkers to help test the Wild linker. This goal was accomplished. Thanks to Kei's efforts, we now run the Mold test suite against Wild in our CI. This has helped to prevent regressions on at least a couple of occasions and has also helped to show places where Wild has room for improvement.
In addition to this core work, Kei also undertook numerous other changes to Wild during GSoC. Of particular note was the reworking of argument parsing to support --help, which we had wanted for some time. Kei also fixed a number of bugs and implemented various previously missing features. This work has helped to expand the range of projects that can use Wild to build executables.
Kei has continued to contribute to Wild even after the GSoC project finished and has now contributed over seventy PRs. We thank Kei for all the hard work and look forward to continued collaboration in the future!
Improving the Rustc Parallel Frontend: Parallel Macro Expansion
- Contributor: Lorrens Pantelis
- Mentors: Sparrow Li, Vadim Petrochenkov
- Final report
The Rust compiler has a (currently unstable) parallel compilation mode in which some compiler passes run in parallel. One major part of the compiler that is not yet affected by parallelization is name resolution. It has several components, but those selected for this GSoC project were import resolution and macro expansion (which are in fact intermingled into a single fixed-point algorithm). Besides the parallelization itself, another important point of the work was improving the correctness of import resolution by eliminating accidental order dependencies in it, as those also prevent parallelization.
We should note that this was a very ambitious project, and we knew from the beginning that it would likely be quite challenging to reach the end goal within the span of just a few months. And indeed, Lorrens did in fact run into several unexpected issues that showed us that the complexity of this work is well beyond a single GSoC project, so he didn't actually get to parallelizing the macro expansion algorithm. Nevertheless, he did a lot of important work to improve the name resolver and prepare it for being parallelized.
The first thing that Lorrens had to do was actually understand how Rust name resolution works and how it is implemented in the compiler. That is, to put it mildly, a very complex piece of logic, affected by legacy burden in the form of backward-compatibility lints, outdated naming conventions, and other technical debt. Even this learned knowledge itself is incredibly useful, as the set of people who understand Rust's name resolution today is very small, so it is important to grow it.
Using this knowledge, he made a lot of refactorings to separate significant mutability in name resolver data structures from "cache-like" mutability used for things like lazily loading otherwise immutable data from extern crates, which was needed to unblock parallelization work. He split various parts of the name resolver, got rid of unnecessary mutability and performed a bunch of other refactorings. He also had to come up with a very tricky data structure that allows providing conditional mutable access to some data.
These refactorings allowed him to implement something called "batched import resolution", which splits unresolved imports in the crate into "batches", where all imports in a single batch can be resolved independently and potentially in parallel, which is crucial for parallelizing name resolution. We have to resolve a few remaining language compatibility issues, after which the batched import resolution work will hopefully be merged.
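The batching idea can be sketched as grouping items into rounds where everything within one round is mutually independent and could therefore be resolved in parallel. The function below is purely illustrative, assuming each item's dependencies point at earlier items:

```rust
// Split items into "batches": round r contains every item whose longest
// dependency chain has length r, so items within a round are independent.
// `deps[i]` lists the indices that must resolve before item i.
fn batches(deps: &[Vec<usize>]) -> Vec<Vec<usize>> {
    let mut level = vec![0usize; deps.len()];
    for i in 0..deps.len() {
        for &d in &deps[i] {
            // An item sits one round after its latest dependency.
            level[i] = level[i].max(level[d] + 1);
        }
    }
    let mut out: Vec<Vec<usize>> = Vec::new();
    for (i, &l) in level.iter().enumerate() {
        if out.len() <= l {
            out.resize(l + 1, Vec::new());
        }
        out[l].push(i);
    }
    out
}
```

The hard part in the compiler is not this grouping but the fact that import resolution and macro expansion feed each other, so the set of "items" and their dependencies keeps growing between rounds.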
Lorrens laid an important groundwork for fixing potential correctness issues around name resolution and macro expansion, which unblocks further work on parallelizing these compiler passes, which is exciting. His work also helped unblock some library improvements that were stuck for a long time. We are grateful for your hard work on improving tricky parts of Rust and its compiler, Lorrens. Thank you!
Make cargo-semver-checks faster
- Contributor: Joseph Chung
- Mentor: Predrag Gruevski
- Final report
cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. It is adding SemVer lints at an exponential pace: the number of lints has been doubling every year, and currently stands at 229. More lints mean more work for cargo-semver-checks to do, as well as more work for its test suite which runs over 250000 lint checks!
Joseph's contributions took three forms:
- Improving cargo-semver-checks runtime performance: on large crates, our query runtime went from ~8s to ~2s, a 4x improvement!
- Improving the test suite's performance, enabling us to iterate faster: our test suite used to take ~7min and now finishes in ~1min, a 7x improvement!
- Improving our ability to profile query performance and inspect performance anomalies, both of which were proving a bottleneck for our ability to ship further improvements.
Joseph described all the clever optimization tricks leading to these results in his final report. To encourage you to check out the post, we'll highlight a particularly elegant optimization described there.
cargo-semver-checks relies on rustdoc JSON, an unstable component of Rust whose output format often has breaking changes. Since each release of cargo-semver-checks supports a range of Rust versions, it must also support a range of rustdoc JSON formats. Fortunately, each file carries a version number that tells us which version's serde types to use to deserialize the data.
Previously, we used to deserialize the JSON file twice: once with a serde type that only loaded the format_version: u32 field, and a second time with the appropriate serde type that matches the format. This works fine, but many large crates generate rustdoc JSON files that are 500 MiB+ in size, requiring us to walk all that data twice. While serde is quite fast, there's nothing as fast as not doing the work twice in the first place!
So we used a trick: optimistically check if the format_version field is the last field in the JSON file, which happens to be the case every time (even though it is not guaranteed). Rather than parsing JSON, we merely look for a , character in the last few dozen bytes, then look for : after the , character, and for format_version between them. If this is successful, we've discovered the version number while avoiding going through hundreds of MB of data! If we failed for any reason, we just fall back to the original approach having only wasted the effort of looking at 20ish extra bytes.
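That optimistic tail scan can be sketched as follows (an approximation, not the exact cargo-semver-checks code): look for the format_version key near the end of the buffer, and return None to signal falling back to the full parse.

```rust
// Optimistically extract `"format_version": N` from the tail of a rustdoc
// JSON buffer without parsing the whole file. Returns None on any surprise,
// in which case the caller falls back to a real deserialization pass.
fn tail_format_version(json: &[u8]) -> Option<u32> {
    // Only inspect the last few dozen bytes.
    let tail_start = json.len().saturating_sub(64);
    let tail = std::str::from_utf8(&json[tail_start..]).ok()?;
    let idx = tail.rfind("\"format_version\"")?;
    let rest = &tail[idx + "\"format_version\"".len()..];
    let rest = rest.trim_start().strip_prefix(':')?.trim_start();
    let digits: String = rest.chars().take_while(|c| c.is_ascii_digit()).collect();
    digits.parse().ok()
}
```

Because the fallback is always available, this shortcut can afford to be pessimistic about anything unexpected; a miss only costs a glance at ~64 bytes.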
Joseph did a lot of profiling and performance optimizations to make cargo-semver-checks faster for everyone, with awesome results. Thank you very much for your work!
Make Rustup Concurrent
- Contributor: Francisco Gouveia
- Mentor: rami3l
- Final report
As a very important part of the Rustup team's vision of migrating the rustup codebase to using async IO since the introduction of the global tokio runtime in #3367, this project's goal was to introduce proper concurrency to rustup. Francisco did that by attacking two aspects of the codebase at once:
- He created a new set of user interfaces for displaying concurrent progress.
- He implemented a new toolchain update checking & installation flow that is idiomatically concurrent.
As a warmup, Francisco made rustup check concurrent, resulting in a rather easy 3x performance boost in certain cases. Along the way, he also introduced a new indicatif-based progress bar for reporting progress of concurrent operations, which replaced the original hand-rolled solution.
After that, the focus of the project has moved on to the toolchain installation flow used in commands like rustup toolchain install and rustup update. In this part, Francisco developed two main improvements:
- The possibility of downloading multiple components at once when setting up a toolchain, controlled by the RUSTUP_CONCURRENT_DOWNLOADS environment variable. Setting this variable to a value greater than 1 is particularly useful in certain internet environments where the speed of a single download connection could be restricted by QoS (Quality of Service) limits.
- The ability to interleave component network downloads and disk unpacking. For the moment, unpacking will still happen sequentially, but disk and net I/O can finally be overlapped! This introduces a net gain in toolchain installation time, as only the last component being downloaded will have noticeable unpacking delays. In our tests, this typically results in a reduction of 4-6 seconds (on fast connections, that's ~33% faster!) when setting up a toolchain with the default profile.
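The download/unpack interleaving can be sketched with a thread and a channel: a downloader streams completed components while the main thread unpacks them sequentially, so network and disk work overlap. Everything here (names, the pretend work) is a toy stand-in, not rustup's implementation.

```rust
use std::sync::mpsc;
use std::thread;

// Overlap "downloads" with sequential "unpacking": the downloader thread
// sends each finished component through a channel; the receiver unpacks
// them one at a time while later downloads continue in the background.
fn install(components: Vec<&'static str>) -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    let downloader = thread::spawn(move || {
        for c in components {
            // ... network download would happen here ...
            tx.send(c).unwrap();
        }
        // tx is dropped here, which ends the receiver's loop below.
    });
    let mut unpacked = Vec::new();
    for c in rx {
        // Unpacking stays sequential, but overlaps with pending downloads.
        unpacked.push(format!("unpacked {c}"));
    }
    downloader.join().unwrap();
    unpacked
}
```

The channel gives the ordering guarantee for free: components are unpacked in the order their downloads complete, and only the final component's unpack has no download to hide behind.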
We have to say that these results are very impressive! While a few seconds shorter toolchain installation might not look so important at a first glance, rustup is ubiquitously used to install Rust toolchains on CI of tens of thousands of Rust projects, so this improvement (and also further improvements that it unlocks) will have an enormous effect across the Rust ecosystem. Many thanks to Francisco Gouveia's enthusiasm and active participation, without which this wouldn't have worked out!
Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices
- Contributor: Julien Robert
- Mentor: Jieyou Xu
- Final report
The snapshot-based UI test suite is a crucial part of the Rust compiler's test suite. It contains a lot of tests: over 19000 at the time of writing. The organization of this test suite is thus very important, for at least two reasons:
- We want to be able to find specific tests, identify related tests, and have some sort of logical grouping of related tests.
- We have to ensure that no directory contains so many entries that GitHub gives up rendering the directory.
Furthermore, having informative test names and having some context for each test is particularly important, as otherwise contributors would have to reverse-engineer test intent from git blame and friends.
Over the years, we have accumulated a lot of unorganized stray test files in the top level tests/ui directory, and have a lot of generically named issue-*.rs tests in the tests/ui/issues/ directory. The former makes it annoying to find more meaningful subdirectories, while the latter makes it completely non-obvious what each test is about.
Julien's project was about introducing some order into the chaos. And that was indeed achieved! Through Julien's efforts (in conjunction with efforts from other contributors), we now have:
- No more stray tests under the immediate tests/ui/ top-level directory; they are now organized into more meaningful subdirectories. We were then able to introduce a style check to prevent new stray tests from being added.
- A top-level document containing TL;DRs for each of the immediate subdirectories.
- Substantially fewer generically-named issue-*.rs tests under tests/ui/issues/.
Test organization (and more generally, test suite ergonomics) is an often under-appreciated aspect of maintaining complex codebases. Julien spent a lot of effort improving test ergonomics of the Rust compiler, both in last year's GSoC (where he vastly improved our "run-make" test suite) and again this year, where he made our UI test suite more ergonomic. We greatly appreciate your meticulous work, Julien! Thank you very much.
Modernising the libc Crate
- Contributor: Abdul Muiz
- Mentor: Trevor Gross
- Final report
libc is a crucial crate in the Rust ecosystem (on average, it has ~1.5 million daily downloads), providing bindings to system C APIs. This GSoC project had two goals: improve testing for what we currently have, and make progress toward a stable 1.0 release of libc.
Test generation is handled by the ctest crate, which creates unit tests that compare properties of the Rust API to properties of the C interfaces it binds. Prior to the project, ctest used an obsolete Rust parser that had stopped receiving major updates about eight years ago, meaning libc could not easily use any syntax newer than that. Abdul completely rewrote ctest to use syn as its parser and make it much easier to add new tests, then went through and switched everything over to the more modern ctest. After this change, we were able to remove a number of hacks that had been needed to work with the old parser.
The other part of the project was to make progress toward the 1.0 release of libc. Abdul helped with this by going through and addressing a number of issues that need to be resolved before the release, many of which were made possible with all the ctest changes.
While there is still a lot of work left to do before libc can reach 1.0, Abdul's improvements will go a long way towards making that work easier, as they give us more confidence in the test suite, which is now much easier to modify and extend. Thank you very much for all your work!
Prepare stable_mir crate for publishing
- Contributor: Makai
- Mentor: Celina Val
- Final report
This project's goal was to prepare the Rust compiler's stable_mir crate (eventually renamed to rustc_public), which provides a way to interface with the Rust compiler for analyzing Rust code, for publication on crates.io. While the existing crate provided easier APIs for tool developers, it lacked proper versioning and was tightly coupled with compiler versions. The goal was to enable independent publication with semantic versioning.
The main technical work involved restructuring rustc_public and rustc_public_bridge (previously named rustc_smir) by inverting their dependency relationship. Makai resolved circular dependencies by temporarily merging the crates and gradually separating them with the new architecture. They also split the existing compiler interface to separate public APIs from internal compiler details.
Furthermore, Makai established infrastructure for dual maintenance: keeping an internal version in the Rust repository to track compiler changes while developing the publishable version in a dedicated repository. Makai automated a system to coordinate between versions, and developed custom tooling to validate compiler version compatibility and to run tests.
Makai successfully completed the core refactoring and infrastructure setup, making it possible to publish rustc_public independently with proper versioning support for the Rust tooling ecosystem! As a bonus, Makai contributed several bug fixes and implemented new APIs that had been requested by the community. Great job Makai!
Prototype an alternative architecture for cargo fix using cargo check
- Contributor: Glen Thalakottur
- Mentor: Ed Page
- Final report
The cargo fix command applies fixes suggested by lints, which makes it useful for cleaning up sloppy code, reducing the annoyance of toolchain upgrades when lints change, and helping with edition migrations and new lint adoption. However, it has a number of issues: it can be slow, it only applies a subset of possible lints, and it doesn't provide an easy way to select which lints to fix.
These problems are caused by its current architecture: it is implemented as a variant of cargo check that replaces rustc with cargo running in a special mode, which calls rustc in a loop and applies fixes until none remain. While this special rustc-proxy mode is running, a cross-process lock is held so that only one build target is fixed at a time, to avoid race conditions. This ensures correctness, but at the cost of performance, and it makes the rustc-proxy difficult to make interactive.
Glen implemented a proof of concept of an alternative design called cargo-fixit. cargo fixit spawns cargo check in a loop, determining which build targets are safe to fix in a given pass, and then applying the suggestions. This puts the top-level program in charge of what fixes get applied, making it easier to coordinate. It also allows the locking to be removed and opens the door to an interactive mode.
Glen performed various benchmarks to test how the new approach performs. And in some benchmarks, cargo fixit was able to finish within a few hundred milliseconds, where before the same task took cargo fix almost a minute! As always, there are trade-offs; the new approach comes at the cost that fixes in packages lower in the dependency tree can cause later packages to be rebuilt multiple times, slowing things down, so there were also benchmarks where the old design was a bit faster. The initial results are still very promising and impressive!
Further work remains to be done on cargo-fixit to investigate how it could be optimized further and what its interface should look like before stabilization. We thank Glen for all the hard work on this project, and we hope that one day the new design will become the default in Cargo, bringing faster and more flexible fixing of lint suggestions to everyone!
Prototype Cargo Plumbing Commands
- Contributor: Vito Secona
- Mentors: Cassaundra, Ed Page
- Final report
The goal of this project was to move forward our Project Goal for creating low-level ("plumbing") Cargo subcommands to make it easier to reuse parts of Cargo by other tools.
Vito created a prototype of several plumbing commands in the cargo-plumbing crate. The idea was to better understand what the plumbing commands should look like, and what is needed from Cargo to implement them. Vito had to make compromises in some of these commands to avoid being blocked on changes to the current Cargo Rust APIs, and he helpfully documented those blockers. For example, instead of solely relying on the manifests that the user passed in, the plumbing commands re-read the manifests within each command, which prevents callers from editing them to get specific behavior out of Cargo, e.g. dropping all workspace members to allow resolving dependencies on a per-package basis.
Vito did a lot of work, as he implemented seven different plumbing subcommands:
- locate-manifest
- read-manifest
- read-lockfile
- lock-dependencies
- write-lockfile
- resolve-features
- plan-build
As future work, we would like to deal with some unresolved questions around how to integrate these plumbing commands within Cargo itself, and extend the set of plumbing commands.
We thank Vito for all his work on improving the flexibility of Cargo.
Conclusion
We would like to thank all contributors that have participated in Google Summer of Code 2025 with us! It was a blast, and we cannot wait to see which projects GSoC contributors will come up with in the next year. We would also like to thank Google for organizing the Google Summer of Code program and for allowing us to have so many projects this year. And last, but not least, we would like to thank all the Rust mentors who were tirelessly helping our contributors to complete their projects. Without you, Rust GSoC would not be possible.
18 Nov 2025 12:00am GMT