18 Nov 2025
Planet Mozilla
Mozilla Thunderbird: Thunderbird Adds Native Microsoft Exchange Email Support

If your organization uses Microsoft Exchange-based email, you'll be happy to hear that Thunderbird's latest monthly Release, version 145, now officially supports native access via the Exchange Web Services (EWS) protocol. With EWS now built directly into Thunderbird, a third-party add-on is no longer required for email functionality. Calendar and address book support for Exchange accounts remains on the roadmap, but email integration is here and ready to use!
What changes for Thunderbird users
Until now, Thunderbird users in Exchange-hosted environments often relied on IMAP/POP protocols or third-party extensions. With full native Exchange support for email, Thunderbird now works more seamlessly in Exchange environments, including full folder listings, message synchronization, folder management both locally and on the server, attachment handling, and more. This simplifies life for users who depend on Exchange for email but prefer Thunderbird as their client.
How to get started
For many people switching from Outlook to Thunderbird, the most common setup involves Microsoft-hosted Exchange accounts such as Microsoft 365 or Office 365. Thunderbird now uses Microsoft's standard sign-in process (OAuth2) and automatically detects your account settings, so you can start using your email right away without any extra setup.
If this applies to you, setup is straightforward:
- Create a new account in Thunderbird 145 or newer.
- In the new Account Hub, select Exchange (or Exchange Web Services in legacy setup).
- Let Thunderbird handle the rest!

Important note: If you see something different, or need more details or advice, please see our support page and wiki page. Also, some authentication configurations are not supported yet, and you may need to wait for a further update that expands compatibility; please refer to the table below for more details.
What functionality is supported now and what's coming soon
As mentioned earlier, EWS support in version 145 currently enables email functionality only. Calendar and address book integration are in active development and will be added in future releases. The chart below provides an at-a-glance view of what's supported today.
| Feature area | Supported now | Not yet supported |
| --- | --- | --- |
| Email - account setup & folder access | Creating accounts via auto-config with EWS, server-side folder manipulation | - |
| Email - message operations | Viewing messages, sending, replying/forwarding, moving/copying/deleting | - |
| Email - attachments | Attachments can be saved and displayed, with detach/delete support | - |
| Search & filtering | Search subject and body, quick filtering | Filter actions requiring full body content are not yet supported |
| Accounts hosted on Microsoft 365 | Domains using the standard Microsoft OAuth2 endpoint | Domains requiring custom OAuth2 application and tenant IDs will be supported in the future |
| Accounts hosted on-premise | Password-based Basic authentication | Password-based NTLM authentication and OAuth2 for on-premise servers are on the roadmap |
| Calendar support | - | Not yet implemented - calendar syncing is on the roadmap |
| Address book / contacts support | - | Not yet implemented - address book support is on the roadmap |
| Microsoft Graph support | - | Not yet implemented - Microsoft Graph integration will be added in the future |
Exchange Web Services and Microsoft Graph
While many people and organizations still rely on Exchange Web Services (EWS), Microsoft has begun gradually phasing it out in favor of a newer, more modern interface called Microsoft Graph. Microsoft has stated that EWS will continue to be supported for the foreseeable future, but over time, Microsoft Graph will become the primary way to connect to Microsoft 365 services.
Because EWS remains widely used today, we wanted to provide full support for it first, ensuring compatibility for existing users. At the same time, we're actively working to add support for Microsoft Graph, so Thunderbird will be ready as Microsoft transitions to its new standard.
Looking ahead
While Exchange email is available now, calendar and address book integration is on the way, bringing Thunderbird closer to being a complete solution for Exchange users. For many people, having reliable email access is the most important step, but if you depend on calendar and contact synchronization, we're working hard to bring this to Thunderbird in the near future, making Thunderbird a strong alternative to Outlook.
Keep an eye on future releases for additional support and integrations, but in the meantime, enjoy a smoother Exchange email experience within your favorite email client!
If you want to know more about Exchange support in Thunderbird, please refer to the dedicated page on support.mozilla.org. Organization admins can also find out more on the Mozilla wiki page. To follow ongoing and future work in this area, please refer to the relevant meta-bug on Bugzilla.
The post Thunderbird Adds Native Microsoft Exchange Email Support appeared first on The Thunderbird Blog.
18 Nov 2025 3:15pm GMT
The Rust Programming Language Blog: Google Summer of Code 2025 results
As we have announced previously this year, the Rust Project participated in Google Summer of Code (GSoC) for the second time. Almost twenty contributors have been working very hard on their projects for several months. Same as last year, the projects had various durations, so some of them ended in September, while the last ones concluded in the middle of November. Now that the final reports of all projects have been submitted, we are happy to announce that 18 out of 19 projects have been successful! We had a very large number of projects this year, so we consider this number of successfully finished projects to be a great result.
We had awesome interactions with our GSoC contributors over the summer, and through a video call, we also had a chance to see each other and discuss the accepted GSoC projects. Our contributors have learned a lot of new things and collaborated with us on making Rust better for everyone, and we are very grateful for all their contributions! Some of them have even continued contributing after their projects ended, and we hope to keep working with them in the future to further improve open-source Rust software. We would like to thank all our Rust GSoC 2025 contributors. You did a great job!
Same as last year, Google Summer of Code 2025 was overall a success for the Rust Project, this time with more than double the number of projects. We think that GSoC is a great way of introducing new contributors to our community, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our GSoC project idea list and our guide for new contributors.
Below you can find a brief summary of our GSoC 2025 projects. You can find more information about the original goals of the projects here. For easier navigation, here is an index of the project descriptions in alphabetical order:
- ABI/Layout handling for the automatic differentiation feature by Marcelo Domínguez
- Add safety contracts by Dawid Lachowicz
- Bootstrap of rustc with rustc_codegen_gcc by Michał Kostrubiec
- Cargo: Build script delegation by Naman Garg
- Distributed and resource-efficient verification by Jiping Zhou
- Enable Witness Generation in cargo-semver-checks by Talyn Veugelers
- Extend behavioural testing of std::arch intrinsics by Madhav Madhusoodanan
- Implement merge functionality in bors by Sakibul Islam
- Improve bootstrap by Shourya Sharma
- Improve Wild linker test suites by Kei Akiyama
- Improving the Rustc Parallel Frontend: Parallel Macro Expansion by Lorrens Pantelis
- Make cargo-semver-checks faster by Joseph Chung
- Make Rustup Concurrent by Francisco Gouveia
- Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices by Julien Robert
- Modernising the libc Crate by Abdul Muiz
- Prepare stable_mir crate for publishing by Makai
- Prototype an alternative architecture for cargo fix using cargo check by Glen Thalakottur
- Prototype Cargo Plumbing Commands by Vito Secona
And now strap in, as there is a ton of great content to read about here!
ABI/Layout handling for the automatic differentiation feature
- Contributor: Marcelo Domínguez
- Mentors: Manuel Drehwald, Oli Scherer
- Final report
The std::autodiff module allows computing gradients and derivatives in the calculus sense. It provides two autodiff macros, which can be applied to user-written functions and automatically generate modified versions of those functions, which also compute the requested gradients and derivatives. This functionality is very useful especially in the context of scientific computing and implementation of machine-learning models.
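As a rough, illustrative sketch of how such a macro is used on nightly Rust (the exact activity annotations and the signature of the generated derivative function are governed by the std::autodiff API; treat the comments as approximate):

```rust
#![feature(autodiff)]
use std::autodiff::autodiff_reverse;

// Ask the compiler to generate a reverse-mode derivative function `d_square`
// next to `square`; `Active` marks the argument and the return value as
// participating in differentiation.
#[autodiff_reverse(d_square, Active, Active)]
fn square(x: f64) -> f64 {
    x * x
}

// The body of `d_square` is not written by hand: it is filled in late in the
// compilation pipeline by the LLVM plugin, and roughly computes `square(x)`
// together with the derivative 2 * x.
```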
Our autodiff frontend was facing two challenges.
- First, we would generate a new function through our macro expansion, however, we would not have a suitable function body for it yet. Our autodiff implementation relies on an LLVM plugin to generate the function body. However, this plugin only gets called towards the end of the compilation pipeline. Earlier optimization passes, either on the LLVM or the Rust side, could look at the placeholder body and either "optimize" or even delete the function since it has no clear purpose yet.
- Second, the flexibility of our macros was causing issues, since it allows requesting derivative computations on a per-argument basis. However, when we start to lower Rust arguments to our compiler backends like LLVM, we do not always have a 1:1 match of Rust arguments to LLVM arguments. As a simple example, an array with two double values might be passed as two individual double values on LLVM level, whereas an array with three doubles might be passed via a pointer.
Marcelo helped rewrite our autodiff macros to not generate hacky placeholder function bodies, but instead introduced a proper autodiff intrinsic. This is the proper way for us to declare that an implementation of this function is not available yet and will be provided later in the compilation pipeline. As a consequence, our generated functions were not deleted or incorrectly optimized anymore. The intrinsic PR also allowed removing some previous hacks and therefore reduced the total lines of code in the Rust compiler by over 500! You can find more details in this PR.
Beyond autodiff work, Marcelo also initiated work on GPU offloading intrinsics, and helped with multiple bugs in our argument handling. We would like to thank Marcelo for all his great work!
Add safety contracts
- Contributor: Dawid Lachowicz
- Mentor: Michael Tautschnig
- Final report
The Rust Project has an ambitious goal to instrument the Rust standard library with safety contracts, moving from informal comments that specify safety requirements of unsafe functions to executable Rust code. This transformation represents a significant step toward making Rust's safety guarantees more explicit and verifiable. To prioritize which functions should receive contracts first, there is a verification contest ongoing.
Given that Rust contracts are still in their early stages, Dawid's project was intentionally open-ended in scope and direction. This flexibility allowed Dawid to identify and tackle several key areas that would add substantial value to the contracts ecosystem. His contributions were in the following three main areas:
-
Pragmatic Contracts Integration: Refactoring contract HIR lowering to ensure no contract code is executed when contract-checks are disabled. This has major impact as it ensures that contracts do not have runtime cost when contract checks are disabled.
-
Variable Reference Capability: Adding the ability to refer to variables from preconditions within postconditions. This fundamental enhancement to the contracts system has been fully implemented and merged into the compiler. This feature provides developers with much more expressive power when writing contracts, allowing them to establish relationships between input and output states.
-
Separation Logic Integration: The bulk of Dawid's project involved identifying, understanding, and planning the introduction of owned and block ownership predicates for separation-logic style reasoning in contracts for unsafe Rust code. This work required extensive research and collaboration with experts in the field. Dawid engaged in multiple discussions with authors of Rust validation tools and Miri developers, both in person and through Zulip discussion threads. The culmination of this research is captured in a comprehensive MCP (Major Change Proposal) that Dawid created.
Dawid's work represents crucial foundational progress for Rust's safety contracts initiative. By successfully implementing variable reference capabilities and laying the groundwork for separation logic integration, he has positioned the contracts feature for significant future development. His research and design work will undoubtedly influence the direction of this important safety feature as it continues to mature. Thank you very much!
Bootstrap of rustc with rustc_codegen_gcc
- Contributor: Michał Kostrubiec
- Mentor: antoyo
- Final report
The goal of this project was to improve the Rust GCC codegen backend (rustc_codegen_gcc), so that it would be able to compile the "stage 2" Rust compiler (rustc) itself again.
You might remember that Michał already participated in GSoC last year, where he was working on his own .NET Rust codegen backend, and he did an incredible amount of work. This year, his progress was somehow even faster. Even before the official GSoC implementation period started (!), he essentially completed his original project goal and managed to build rustc with GCC. This was no small feat, as he had to investigate and fix several miscompilations that occurred when functions marked with #[inline(always)] were called recursively or when the compiled program was trying to work with 128-bit integers. You can read more about this initial work at his blog.
After that, he immediately started working on stretch goals of his project. The first one was to get a "stage-3" rustc build working, for which he had to vastly improve the memory consumption of the codegen backend.
Once that was done, he moved on to yet another goal, which was to build rustc for a platform not supported by LLVM. He made progress on this for DEC Alpha and m68k. He also attempted to compile rustc on AArch64, which led to him finding an ABI bug. Ultimately, he managed to build a rustc for m68k (with a few workarounds that we will need to fix in the future). That is a very nice first step to porting Rust to new platforms unsupported by LLVM, and is important for initiatives such as Rust for Linux.
Michał had to spend a lot of time staring into assembly code and investigating arcane ABI problems. In order to make this easier for everyone, he implemented support for fuzzing and automatically checking ABI mismatches in the GCC codegen backend. You can read more about his testing and fuzzing efforts here.
We were really impressed with what Michał was able to achieve, and we really appreciated working with him this summer. Thank you for all your work, Michał!
Cargo: Build script delegation
- Contributor: Naman Garg
- Mentor: Ed Page
- Final report
Cargo build scripts come at a compile-time cost, because even to run cargo check, they must be built as if you ran cargo build, so that they can be executed during compilation. Even though we try to identify ways to reduce the need to write build scripts in the first place, that may not always be doable. However, if we could shift build scripts from being defined in every package that needs them, into a few core build script packages, we could both reduce the compile-time overhead, and also improve their auditability and transparency. You can find more information about this idea here.
The first step required to delegate build scripts to packages is to be able to run multiple build scripts per crate, so that is what Naman was primarily working on. He introduced a new unstable multiple-build-scripts feature to Cargo, implemented support for parsing an array of build scripts in Cargo.toml, and extended Cargo so that it can now execute multiple build scripts while building a single crate. He also added a set of tests to ensure that this feature will work as we expect it to.
Then he worked on ensuring that the execution of build scripts is performed in a deterministic order, and that crates can access the output of each build script separately. For example, if you have the following configuration:
[package]
build = ["windows-manifest.rs", "release-info.rs"]
then the corresponding crate is able to access the OUT_DIRs of both build scripts using env!("windows-manifest_OUT_DIR") and env!("release-info_OUT_DIR").
As future work, we would like to implement the ability to pass parameters to build scripts through metadata specified in Cargo.toml and then implement the actual build script delegation to external build scripts using artifact-dependencies.
We would like to thank Naman for helping improve Cargo and laying the groundwork for a feature that could have compile-time benefits across the Rust ecosystem!
Distributed and resource-efficient verification
- Contributor: Jiping Zhou
- Mentor: Michael Tautschnig
- Final report
The goal of this project was to address critical scalability challenges of formally verifying Rust's standard library by developing a distributed verification system that intelligently manages computational resources and minimizes redundant work. The Rust standard library verification project faces significant computational overhead when verifying large codebases, as traditional approaches re-verify unchanged code components. With Rust's standard library containing thousands of functions and continuous development cycles, this inefficiency becomes a major bottleneck for practical formal verification adoption.
Jiping implemented a distributed verification system with several key innovations:
- Intelligent Change Detection: The system uses hash-based analysis to identify which parts of the codebase have actually changed, allowing verification to focus only on modified components and their dependencies.
- Multi-Tool Orchestration: The project coordinates multiple verification backends including Kani model checker, with careful version pinning and compatibility management.
- Distributed Architecture: The verification workload is distributed across multiple compute nodes, with intelligent scheduling that considers both computational requirements and dependency graphs.
- Real-time Visualization: Jiping built a comprehensive web interface that provides live verification status, interactive charts, and detailed proof results. You can check it out here!
You can find the created distributed verification tool in this repository. Jiping's work established a foundation for scalable formal verification that can adapt to the growing complexity of Rust's ecosystem, while maintaining verification quality and completeness, which will go a long way towards ensuring that Rust's standard library remains safe and sound. Thank you for your great work!
Enable Witness Generation in cargo-semver-checks
- Contributor: Talyn Veugelers
- Mentor: Predrag Gruevski
- Final report
cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. Talyn's project aimed to lay the groundwork for it to tackle our most vexing limitation: the inability to catch SemVer breakage due to type changes.
Imagine a crate changes the parameter type of a public function from i64 to String between its baseline version and its new version.
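A minimal sketch of that kind of change (the function name here is hypothetical; only the i64-to-String parameter swap matters):

```rust
// baseline version
pub fn process(input: i64) { /* ... */ }

// new version
pub fn process(input: String) { /* ... */ }
```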
This is clearly a major breaking change, right? And yet cargo-semver-checks, with its hundreds of lints, is still unable to flag this. While this case seems trivial, it's just the tip of an enormous iceberg. Instead of changing i64 to String, what if the change was from i64 to impl Into<i64>, or worse, to some deeply nested impl Trait monstrosity?
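For a flavor of what such a monstrosity might look like, here is one hypothetical, purely illustrative parameter type:

```rust
// A hypothetical example only - nested `impl Trait` bounds that the original
// `i64` argument may or may not still satisfy:
pub fn process(
    input: impl TryInto<i64, Error = impl std::fmt::Debug> + Clone + Send + 'static,
) { /* ... */ }
```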
Figuring out whether this change is breaking requires checking whether the original i64 parameter type can "fit" into that monstrosity of an impl Trait type. But reimplementing a Rust type checker and trait solver inside cargo-semver-checks is out of the question! Instead, we turn to a technique created for a previous study of SemVer breakage on crates.io: we generate a "witness" program that will fail to compile if, and only if, there's a breaking change between the two versions.
The witness program is a separate crate that can be made to depend on either the old or the new version of the crate being scanned. If our example function comes from a crate called upstream, its witness program takes the same parameter type as the baseline version and passes that value straight through to the upstream function.
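A minimal sketch of what such a generated witness might look like (the function names here are hypothetical, not the exact code cargo-semver-checks generates):

```rust
// Take the same parameter type as the baseline version of `upstream::process`
// and forward it. If the new version of `upstream` changes the parameter type
// so that `i64` no longer fits, this crate stops compiling.
pub fn witness(value: i64) {
    upstream::process(value)
}
```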
This example is cherry-picked to be easy to understand. Witness programs are rarely this straightforward!
Attempting to cargo check the witness while plugging in the new version of upstream forces the Rust compiler to decide whether i64 matches the new impl Trait parameter. If cargo check passes without errors, there's no breaking change here. But if there's a compilation error, then this is concrete, incontrovertible evidence of breakage!
Over the past 22+ weeks, Talyn worked tirelessly to move this from an idea to a working proof of concept. For every problem we foresaw needing to solve, ten more emerged along the way. Talyn did a lot of design work to figure out an approach that would be able to deal with crates coming from various sources (crates.io, a path on disk, a git revision), would support multiple rustdoc JSON formats for all the hundreds of existing lints, and do so in a fashion that doesn't get in the way of adding hundreds more lints in the future.
Even the above list of daunting challenges fails to do justice to the complexity of this project. Talyn created a witness generation prototype that lays the groundwork for robust checking of type-related SemVer breakages in the future. The success of this work is key to the cargo-semver-checks roadmap for 2026 and beyond. We would like to thank Talyn for their work, and we hope to continue working with them on improving witness generation in the future.
Extend behavioural testing of std::arch intrinsics
- Contributor: Madhav Madhusoodanan
- Mentor: Amanieu d'Antras
- Final report
The std::arch module contains target-specific intrinsics (low-level functions that typically correspond to single machine instructions) which are intended to be used by other libraries. These are intended to match the equivalent intrinsics available as vendor-specific extensions in C.
The intrinsics are tested with three approaches. We test that:
- The signatures of the intrinsics match the one specified by the architecture.
- The intrinsics generate the correct instruction.
- The intrinsics have the correct runtime behavior.
These behavior tests are implemented in the intrinsics-test crate. Initially, this test framework only covered the AArch64 and AArch32 targets, where it was very useful in finding bugs in the implementation of the intrinsics. Madhav's project was about refactoring and improving this framework to make it easier (or really, possible) to extend it to other CPU architectures.
First, Madhav split the codebase into a module with shared (architecturally independent) code and a module with ARM-specific logic. Then he implemented support for testing intrinsics for the x86 architecture, which is Rust's most widely used target. In doing so, he allowed us to discover real bugs in the implementation of some intrinsics, which is a great result! Madhav also did a lot of work in optimizing how the test suite is compiled and executed, to reduce CI time needed to run tests, and he laid the groundwork for supporting even more architectures, specifically LoongArch and WebAssembly.
We would like to thank Madhav for all his work on helping us make sure that Rust intrinsics are safe and correct!
Implement merge functionality in bors
- Contributor: Sakibul Islam
- Mentor: Jakub Beránek
- Final report
The main Rust repository uses a pull request merge queue bot that we call bors. Its current Python implementation has a lot of issues and was difficult to maintain. The goal of this GSoC project was thus to implement the primary merge queue functionality in our Rust rewrite of this bot.
Sakibul first examined the original Python codebase to figure out what it was doing, and then he implemented several bot commands that allow contributors to approve PRs, set their priority, delegate approval rights, temporarily close the merge tree, and many others. He also implemented an asynchronous background process that checks whether a given pull request is mergeable or not (this process is relatively involved, due to how GitHub works), which required implementing a specialized synchronized queue for deduplicating mergeability check requests to avoid overloading the GitHub API. Furthermore, Sakibul also reimplemented (a nicer version of) the merge queue status webpage that can be used to track which pull requests are currently being tested on CI, which ones are approved, etc.
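To illustrate the deduplication idea, here is a minimal sketch of such a queue (illustrative only, not the actual bors implementation, which also has to synchronize access across async tasks):

```rust
use std::collections::{HashSet, VecDeque};

/// A queue of pending mergeability checks that ignores duplicates: if a PR is
/// already waiting to be checked, enqueueing it again is a no-op.
#[derive(Default)]
struct DedupQueue {
    queued: HashSet<u64>,
    order: VecDeque<u64>,
}

impl DedupQueue {
    fn enqueue(&mut self, pr_number: u64) {
        // `insert` returns false if the PR number was already present.
        if self.queued.insert(pr_number) {
            self.order.push_back(pr_number);
        }
    }

    fn dequeue(&mut self) -> Option<u64> {
        let pr = self.order.pop_front()?;
        self.queued.remove(&pr);
        Some(pr)
    }
}
```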
After the groundwork was prepared, Sakibul could work on the merge queue itself, which required him to think about many tricky race conditions and edge cases to ensure that bors doesn't e.g. merge the wrong PR into the default branch or merge a PR multiple times. He covered these edge cases with many integration tests, to give us more confidence that the merge queue will work as we expect it to, and also prepared a script for creating simulated PRs on a test GitHub repository so that we can test bors "in the wild". And so far, it seems to be working very well!
After we finish the final piece of the merge logic (creating so-called "rollups") together with Sakibul, we will start using bors fully in the main Rust repository. Sakibul's work will thus be used to merge all rust-lang/rust pull requests. Exciting!
Apart from working on the merge queue, Sakibul made many other awesome contributions to the codebase, like refactoring the test suite or analyzing performance of SQL queries. In total, Sakibul sent around fifty pull requests that were already merged into bors! What can we say, other than: Awesome work Sakibul, thank you!
Improve bootstrap
- Contributor: Shourya Sharma
- Mentors: Jakub Beránek, Jieyou Xu, Onur Özkan
- Final report
bootstrap is the build system of Rust itself, which is responsible for building the compiler, standard library, and pretty much everything else that you can download through rustup. This project's goal was very open-ended: "improve bootstrap".
And Shourya did just that! He made meaningful contributions to several parts of bootstrap. First, he added much-needed documentation to several core bootstrap data structures and modules, which were quite opaque and hard to understand without any docs. Then he moved to improving command execution, as each bootstrap invocation invokes hundreds of external binaries, and it was difficult to track them. Shourya finished a long-standing refactoring that routes almost all executed commands through a single place. This also allowed him to implement command caching and command profiling, which shows us which commands are the slowest.
After that, Shourya moved on to refactoring config parsing. This was no easy task, because bootstrap has A LOT of config options; the single function that parses them had over a thousand lines of code (!). A set of complicated config precedence rules was frequently causing bugs when we had to modify that function. It took him several weeks to untangle this mess, but the result is worth it. The refactored function is much less brittle and easier to understand and modify, which is great for future maintenance.
The final area that Shourya improved was bootstrap tests. He made it possible to run them using bare cargo, which enables debugging them e.g. in an IDE, which is very useful, and he also found a way to run the tests in parallel, which makes contributing to bootstrap itself much more pleasant, as it reduced the time to execute the tests from a minute to under ten seconds. These changes required refactoring many bootstrap tests that were using global state, which was not compatible with parallel execution.
Overall, Shourya made more than 30 PRs to bootstrap since April! We are very thankful for all his contributions, as they made bootstrap much easier to maintain. Thank you!
Improve Wild linker test suites
- Contributor: Kei Akiyama
- Mentor: David Lattimore
- Final report
Wild is a very fast linker for Linux that's written in Rust. It can be used to build executables and shared objects.
Kei's project was to leverage the test suite of one of the other Linux linkers to help test the Wild linker. This goal was accomplished. Thanks to Kei's efforts, we now run the Mold test suite against Wild in our CI. This has helped to prevent regressions on at least a couple of occasions and has also helped to show places where Wild has room for improvement.
In addition to this core work, Kei also undertook numerous other changes to Wild during GSoC. Of particular note was the reworking of argument parsing to support --help, which we had wanted for some time. Kei also fixed a number of bugs and implemented various previously missing features. This work has helped to expand the range of projects that can use Wild to build executables.
Kei has continued to contribute to Wild even after the GSoC project finished and has now contributed over seventy PRs. We thank Kei for all the hard work and look forward to continued collaboration in the future!
Improving the Rustc Parallel Frontend: Parallel Macro Expansion
- Contributor: Lorrens Pantelis
- Mentors: Sparrow Li, Vadim Petrochenkov
- Final report
The Rust compiler has a (currently unstable) parallel compilation mode in which some compiler passes run in parallel. One major part of the compiler that is not yet affected by parallelization is name resolution. It has several components, but those selected for this GSoC project were import resolution and macro expansion (which are in fact intermingled into a single fixed-point algorithm). Besides the parallelization itself, another important point of the work was improving the correctness of import resolution by eliminating accidental order dependencies in it, as those also prevent parallelization.
We should note that this was a very ambitious project, and we knew from the beginning that it would likely be quite challenging to reach the end goal within the span of just a few months. And indeed, Lorrens did in fact run into several unexpected issues that showed us that the complexity of this work is well beyond a single GSoC project, so he didn't actually get to parallelizing the macro expansion algorithm. Nevertheless, he did a lot of important work to improve the name resolver and prepare it for being parallelized.
The first thing that Lorrens had to do was actually understand how Rust name resolution works and how it is implemented in the compiler. That is, to put it mildly, a very complex piece of logic, and it is affected by legacy burden in the form of backward compatibility lints, outdated naming conventions, and other technical debt. Even this learned knowledge itself is incredibly useful, as the set of people who understand Rust's name resolution today is very small, so it is important to grow it.
Using this knowledge, he made a lot of refactorings to separate significant mutability in name resolver data structures from "cache-like" mutability used for things like lazily loading otherwise immutable data from extern crates, which was needed to unblock parallelization work. He split various parts of the name resolver, got rid of unnecessary mutability and performed a bunch of other refactorings. He also had to come up with a very tricky data structure that allows providing conditional mutable access to some data.
These refactorings allowed him to implement something called "batched import resolution", which splits unresolved imports in the crate into "batches", where all imports in a single batch can be resolved independently and potentially in parallel, which is crucial for parallelizing name resolution. We have to resolve a few remaining language compatibility issues, after which the batched import resolution work will hopefully be merged.
Lorrens laid an important groundwork for fixing potential correctness issues around name resolution and macro expansion, which unblocks further work on parallelizing these compiler passes, which is exciting. His work also helped unblock some library improvements that were stuck for a long time. We are grateful for your hard work on improving tricky parts of Rust and its compiler, Lorrens. Thank you!
Make cargo-semver-checks faster
- Contributor: Joseph Chung
- Mentor: Predrag Gruevski
- Final report
cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. It is adding SemVer lints at an exponential pace: the number of lints has been doubling every year, and currently stands at 229. More lints mean more work for cargo-semver-checks to do, as well as more work for its test suite which runs over 250000 lint checks!
Joseph's contributions took three forms:
- Improving cargo-semver-checks runtime performance - on large crates, our query runtime went from ~8s to ~2s, a 4x improvement!
- Improving the test suite's performance, enabling us to iterate faster. Our test suite used to take ~7min and now finishes in ~1min, a 7x improvement!
- Improving our ability to profile query performance and inspect performance anomalies, both of which were proving a bottleneck for our ability to ship further improvements.
Joseph described all the clever optimization tricks leading to these results in his final report. To encourage you to check out the post, we'll highlight a particularly elegant optimization described there.
cargo-semver-checks relies on rustdoc JSON, an unstable component of Rust whose output format often has breaking changes. Since each release of cargo-semver-checks supports a range of Rust versions, it must also support a range of rustdoc JSON formats. Fortunately, each file carries a version number that tells us which version's serde types to use to deserialize the data.
Previously, we used to deserialize the JSON file twice: once with a serde type that only loaded the format_version: u32 field, and a second time with the appropriate serde type that matches the format. This works fine, but many large crates generate rustdoc JSON files that are 500 MiB+ in size, requiring us to walk all that data twice. While serde is quite fast, there's nothing as fast as not doing the work twice in the first place!
So we used a trick: optimistically check if the format_version field is the last field in the JSON file, which happens to be the case every time (even though it is not guaranteed). Rather than parsing JSON, we merely look for a , character in the last few dozen bytes, then look for : after the , character, and for format_version between them. If this is successful, we've discovered the version number while avoiding going through hundreds of MB of data! If this fails for any reason, we just fall back to the original approach, having only wasted the effort of looking at 20-ish extra bytes.
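A minimal sketch of that tail-sniffing idea (a hypothetical helper, not the actual cargo-semver-checks code):

```rust
/// Try to cheaply recover `format_version` by looking only at the tail of the
/// rustdoc JSON, assuming (but not requiring) that it is the last field, e.g.
/// `..., "format_version": 54}`. Returns `None` if the guess fails, in which
/// case the caller falls back to a full deserialization pass.
fn sniff_format_version(json: &str) -> Option<u32> {
    let start = json.len().saturating_sub(64);
    let tail = json.get(start..)?; // `None` if we would split a UTF-8 character
    let after_comma = &tail[tail.rfind(',')? + 1..];
    let (key, value) = after_comma.split_once(':')?;
    if key.trim().trim_matches('"') != "format_version" {
        return None;
    }
    value.trim().trim_end_matches('}').trim().parse().ok()
}
```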
Joseph did a lot of profiling and performance optimizations to make cargo-semver-checks faster for everyone, with awesome results. Thank you very much for your work!
Make Rustup Concurrent
- Contributor: Francisco Gouveia
- Mentor: rami3l
- Final report
As an important part of the Rustup team's vision of migrating the rustup codebase to async IO, which began with the introduction of the global tokio runtime in #3367, this project's goal was to introduce proper concurrency to rustup. Francisco did that by attacking two aspects of the codebase at once:
- He created a new set of user interfaces for displaying concurrent progress.
- He implemented a new toolchain update checking & installation flow that is idiomatically concurrent.
As a warmup, Francisco made rustup check concurrent, resulting in a rather easy 3x performance boost in certain cases. Along the way, he also introduced a new indicatif-based progress bar for reporting progress of concurrent operations, which replaced the original hand-rolled solution.
After that, the focus of the project has moved on to the toolchain installation flow used in commands like rustup toolchain install and rustup update. In this part, Francisco developed two main improvements:
- The possibility of downloading multiple components at once when setting up a toolchain, controlled by the RUSTUP_CONCURRENT_DOWNLOADS environment variable. Setting this variable to a value greater than 1 is particularly useful in certain internet environments where the speed of a single download connection could be restricted by QoS (Quality of Service) limits.
- The ability to interleave component network downloads and disk unpacking. For the moment, unpacking will still happen sequentially, but disk and net I/O can finally be overlapped! This introduces a net gain in toolchain installation time, as only the last component being downloaded will have noticeable unpacking delays. In our tests, this typically results in a reduction of 4-6 seconds (on fast connections, that's ~33% faster!) when setting up a toolchain with the default profile.
We have to say that these results are very impressive! While a few seconds shorter toolchain installation might not look so important at a first glance, rustup is ubiquitously used to install Rust toolchains on CI of tens of thousands of Rust projects, so this improvement (and also further improvements that it unlocks) will have an enormous effect across the Rust ecosystem. Many thanks to Francisco Gouveia's enthusiasm and active participation, without which this wouldn't have worked out!
Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices
- Contributor: Julien Robert
- Mentor: Jieyou Xu
- Final report
The snapshot-based UI test suite is a crucial part of the Rust compiler's test suite. It contains a lot of tests: over 19000 at the time of writing. The organization of this test suite is thus very important, for at least two reasons:
- We want to be able to find specific tests, identify related tests, and have some sort of logical grouping of related tests.
- We have to ensure that no directory contains so many entries that GitHub gives up rendering the directory.
Furthermore, having informative test names and having some context for each test is particularly important, as otherwise contributors would have to reverse-engineer test intent from git blame and friends.
Over the years, we have accumulated a lot of unorganized stray test files in the top level tests/ui directory, and have a lot of generically named issue-*.rs tests in the tests/ui/issues/ directory. The former makes it annoying to find more meaningful subdirectories, while the latter makes it completely non-obvious what each test is about.
Julien's project was about introducing some order into the chaos. And that was indeed achieved! Through Julien's efforts (in conjunction with efforts from other contributors), we now have:
- No more stray tests under the immediate tests/ui/ top-level directory; they are now organized into more meaningful subdirectories. We were able to then introduce a style check to prevent new stray tests from being added.
- A top-level document contains TL;DRs for each of the immediate subdirectories.
- Substantially fewer generically-named issue-*.rs tests under tests/ui/issues/.
Test organization (and more generally, test suite ergonomics) is an often under-appreciated aspect of maintaining complex codebases. Julien spent a lot of effort improving test ergonomics of the Rust compiler, both in last year's GSoC (where he vastly improved our "run-make" test suite), and then again this year, where he made our UI test suite more ergonomic. We truly appreciate your meticulous work, Julien! Thank you very much.
Modernising the libc Crate
- Contributor: Abdul Muiz
- Mentor: Trevor Gross
- Final report
libc is a crucial crate in the Rust ecosystem (on average, it has ~1.5 million daily downloads), providing bindings to system C APIs. This GSoC project had two goals: improve testing for what we currently have, and make progress toward a stable 1.0 release of libc.
Test generation is handled by the ctest crate, which creates unit tests that compare properties of Rust API to properties of the C interfaces it binds. Prior to the project, ctest used an obsolete Rust parser that had stopped receiving major updates about eight years ago, meaning libc could not easily use any syntax newer than that. Abdul completely rewrote ctest to use syn as its parser and make it much easier to add new tests, then went through and switched everything over to the more modern ctest. After this change, we were able to remove a number of hacks that had been needed to work with the old parser.
The other part of the project was to make progress toward the 1.0 release of libc. Abdul helped with this by going through and addressing a number of issues that need to be resolved before the release, many of which were made possible with all the ctest changes.
While there is still a lot of work left to do before libc can reach 1.0, Abdul's improvements will go a long way towards making that work easier, as they give us more confidence in the test suite, which is now much easier to modify and extend. Thank you very much for all your work!
Prepare stable_mir crate for publishing
- Contributor: Makai
- Mentor: Celina Val
- Final report
This project's goal was to prepare the Rust compiler's stable_mir crate (eventually renamed to rustc_public), which provides a way to interface with the Rust compiler for analyzing Rust code, for publication on crates.io. While the existing crate provided easier APIs for tool developers, it lacked proper versioning and was tightly coupled with compiler versions. The goal was to enable independent publication with semantic versioning.
The main technical work involved restructuring rustc_public and rustc_public_bridge (previously named rustc_smir) by inverting their dependency relationship. Makai resolved circular dependencies by temporarily merging the crates and gradually separating them with the new architecture. They also split the existing compiler interface to separate public APIs from internal compiler details.
Furthermore, Makai established infrastructure for dual maintenance: keeping an internal version in the Rust repository to track compiler changes while developing the publishable version in a dedicated repository. Makai automated a system to coordinate between versions, and developed custom tooling to validate compiler version compatibility and to run tests.
Makai successfully completed the core refactoring and infrastructure setup, making it possible to publish rustc_public independently with proper versioning support for the Rust tooling ecosystem! As a bonus, Makai contributed several bug fixes and implemented new APIs that had been requested by the community. Great job Makai!
Prototype an alternative architecture for cargo fix using cargo check
- Contributor: Glen Thalakottur
- Mentor: Ed Page
- Final report
The cargo fix command applies fixes suggested by lints, which makes it useful for cleaning up sloppy code, reducing the annoyance of toolchain upgrades when lints change and helping with edition migrations and new lint adoption. However, it has a number of issues. It can be slow, it only applies a subset of possible lints, and doesn't provide an easy way to select which lints to fix.
These problems are caused by its current architecture; it is implemented as a variant of cargo check that replaces rustc with cargo being run in a special mode that will call rustc in a loop, applying fixes until there are none. While this special rustc-proxy mode is running, a cross-process lock is held to force only one build target to be fixed at a time to avoid race conditions. This ensures correctness at the cost of performance and difficulty in making the rustc-proxy interactive.
Glen implemented a proof of concept of an alternative design called cargo-fixit. cargo fixit spawns cargo check in a loop, determining which build targets are safe to fix in a given pass, and then applying the suggestions. This puts the top-level program in charge of what fixes get applied, making it easier to coordinate. It also allows the locking to be removed and opens the door to an interactive mode.
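A very rough, illustrative sketch of that outer loop (the helper names are hypothetical; this is not the actual cargo-fixit code):

```rust
use std::process::Command;

/// Illustrative control flow only: run `cargo check` with JSON diagnostics,
/// apply whatever machine-applicable fixes are safe in this pass, and repeat
/// until a pass produces no more suggestions.
fn fixit_loop() -> std::io::Result<()> {
    loop {
        let check = Command::new("cargo")
            .args(["check", "--message-format=json"])
            .output()?;
        let suggestions = collect_suggestions(&check.stdout);
        if suggestions.is_empty() {
            break;
        }
        apply_to_safe_targets(&suggestions);
    }
    Ok(())
}

/// Hypothetical helper: parse rustc's JSON diagnostics and keep only the
/// machine-applicable suggestions.
fn collect_suggestions(_check_stdout: &[u8]) -> Vec<String> {
    Vec::new()
}

/// Hypothetical helper: rewrite source files, but only for build targets that
/// are safe to fix in this pass (later targets get rechecked on the next loop).
fn apply_to_safe_targets(_suggestions: &[String]) {}
```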
Glen performed various benchmarks to test how the new approach performs. And in some benchmarks, cargo fixit was able to finish within a few hundred milliseconds, where before the same task took cargo fix almost a minute! As always, there are trade-offs; the new approach comes at the cost that fixes in packages lower in the dependency tree can cause later packages to be rebuilt multiple times, slowing things down, so there were also benchmarks where the old design was a bit faster. The initial results are still very promising and impressive!
Further work remains to be done on cargo-fixit to investigate how it could be optimized better and what its interface should look like before being stabilized. We thank Glen for all the hard work on this project, and we hope that one day the new design will become used by default in Cargo, to bring faster and more flexible fixing of lint suggestions to everyone!
Prototype Cargo Plumbing Commands
- Contributor: Vito Secona
- Mentors: Cassaundra, Ed Page
- Final report
The goal of this project was to move forward our Project Goal for creating low-level ("plumbing") Cargo subcommands to make it easier to reuse parts of Cargo by other tools.
Vito created a prototype of several plumbing commands in the cargo-plumbing crate. The idea was to better understand what the plumbing commands should look like, and what is needed from Cargo to implement them. Vito had to make compromises in some of these commands to not be blocked on making changes to the current Cargo Rust APIs, and he helpfully documented those blockers. For example, instead of solely relying on the manifests that the user passed in, the plumbing commands will re-read the manifests within each command, preventing callers from being able to edit them to get specific behavior out of Cargo, e.g. dropping all workspace members to allow resolving dependencies on a per-package basis.
Vito did a lot of work, as he implemented seven different plumbing subcommands:
- locate-manifest
- read-manifest
- read-lockfile
- lock-dependencies
- write-lockfile
- resolve-features
- plan-build
As future work, we would like to deal with some unresolved questions around how to integrate these plumbing commands within Cargo itself, and extend the set of plumbing commands.
We thank Vito for all his work on improving the flexibility of Cargo.
Conclusion
We would like to thank all contributors that have participated in Google Summer of Code 2025 with us! It was a blast, and we cannot wait to see which projects GSoC contributors will come up with in the next year. We would also like to thank Google for organizing the Google Summer of Code program and for allowing us to have so many projects this year. And last, but not least, we would like to thank all the Rust mentors who were tirelessly helping our contributors to complete their projects. Without you, Rust GSoC would not be possible.
18 Nov 2025 12:00am GMT
17 Nov 2025
Planet Mozilla
The Mozilla Blog: Firefox tab groups just got an upgrade, thanks to your feedback

Tab groups have become one of Firefox's most loved ways to stay organized - over 18 million people have used the feature since it launched earlier this year. Since then, we've been listening closely to feedback from the Mozilla Connect community to make this long-awaited feature even more helpful.
We've just concluded a round of highly requested tab groups updates that make it easier than ever to stay focused, organized, and productive. Check out what we've been up to, and if you haven't tried tab groups yet, here's a helpful starting guide.
Preview tab group contents on hover
Starting in Firefox 145, you can peek inside a group without expanding it. Whether you're checking a stash of tabs set aside for deep research or quickly scanning a group to find the right meeting notes doc, hover previews give you the context you need - instantly.
Keep the active tab visible in a collapsed group - and drag tabs into it
Since Firefox 142, when you collapse a group, the tab you're working in remains visible. It's a small but mighty improvement that reduces interruptions. And, starting in Firefox 143, you can drag a tab directly into a collapsed group without expanding it. It's a quick, intuitive way to stay organized while reducing on-screen clutter.
Each of these ideas came from your feedback on Mozilla Connect. We're grateful for your engagement, creativity, and patience as our team works to improve Tab Groups.
What's next for tab groups
We've got a big, healthy stash of great ideas and suggestions to explore, but we'd love to hear more from you on two areas of long-term interest:
- Improving the usefulness and ease of use of saved tab groups. We're curious how you're using them and how we can make the experience more helpful to you. What benefits do they bring to your workflow compared to bookmarks?
- Workspaces. Some of you have requested a way to separate contexts by creating workspaces - sets of tabs and tab groups that are entirely isolated from each other, yet remain available within a single browser window. We are curious about your workspace use cases and where context separation via window management or profiles doesn't meet your workflow needs. Is collaboration an important feature of the workspaces for you?
Have ideas and suggestions? Let us know in this Mozilla Connect thread!

The post Firefox tab groups just got an upgrade, thanks to your feedback appeared first on The Mozilla Blog.
17 Nov 2025 2:00pm GMT
The Rust Programming Language Blog: Launching the 2025 State of Rust Survey
It's time for the 2025 State of Rust Survey!
The Rust Project has been collecting valuable information about the Rust programming language community through our annual State of Rust Survey since 2016. Which means that this year marks the tenth edition of this survey!
We invite you to take this year's survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. The results will allow us to more deeply understand the global Rust community and how it evolves over time.
Like last year, the 2025 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until December 17. Trends and key insights will be shared on blog.rust-lang.org as soon as possible.
We are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:
- English
- Chinese (Simplified)
- Chinese (Traditional)
- French
- German
- Japanese
- Ukrainian
- Russian
- Spanish
- Portuguese (Brazil)
Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the translations, we would be glad if you could send us a pull request to improve the quality of the translations!
Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.
This survey would not be possible without the time, resources, and attention of the Rust Survey Team, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):
- @jieyouxu
- @adriantombu
- @llogiq
- @Marcono1234
- @tanakakz
- @YohDeadFall
- @Kivooeo
- @avrong
- @igarai
- @weihanglo
- @tyranron
- @leandrobbraga
Thank you!
If you have any questions, please see our frequently asked questions.
We appreciate your participation!
Click here to read a summary of last year's survey findings.
By the way, the Rust Survey team is looking for new members. If you like working with data and coordinating people, and would like to help us out with managing various Rust surveys, please drop by our Zulip channel and say hi.
17 Nov 2025 12:00am GMT
14 Nov 2025
Planet Mozilla
Mozilla Thunderbird: VIDEO: An Android Retrospective

If you can believe it, Thunderbird for Android has been out for just over a year! In this episode of our Community Office Hours, Heather and Monica check back in with the mobile team after our chat with them back in January. Sr. Software Engineer Wolf Montwé and our new Manager of Mobile Apps, Jon Bott look back at what the growing mobile team has been able to accomplish this last year, what we're still working on, and what's up ahead.
We'll be back next month, talking with members of the desktop team all about Exchange support landing in Thunderbird 145!
Thunderbird for Android: One Year Later
The biggest visual change to the app since last year is the new Account Drawer. The mobile team wants to help users easily tell their accounts apart and switch between them. While this is still a work in progress, we've started making these changes in Thunderbird 11.0. We know not everyone is excited about UI changes, but we hope most users like these initial changes!
Another major but hidden change involves updating our very old code, which came from K-9 Mail. Much of the K-9 code goes back to 2009! Having to work with old code explains why some fixes or new features, which should be simple, turn out to be complex and time consuming. Changes end up affecting more components than we expect, which causes delivery timelines to stretch from a week to months.
We are also still working to proactively eliminate tech debt, which will make the code more reliable and secure, plus allow future improvements and feature additions to be done more quickly. Even though the team didn't eliminate as much tech debt as they planned, they feel the work they've done this year will help reduce even more next year.
Over this past year, the team has also realized Thunderbird for Android users have different needs from K-9 Mail users. Thunderbird desktop users want more features from the desktop app, and this is definitely a major goal we have for our future development. The current feature gap won't always be here!
Recently, the mobile team has started moving to a monthly release cadence, similar to Firefox and the monthly Thunderbird channel. Changing from bi-monthly to monthly reduces the risks of changing huge amounts of code all at once. The team can make more incremental changes, like the account drawer, in a smaller window. Regular, "bite size" changes allow us to have more conversation with the community. The development team also benefits because they can make better timelines and can more accurately predict the amount of work needed to ship future releases.
A Growing Team and Community
Since we released the Android app, the mobile team and contributor community has grown! One of the unexpected benefits of growing the team and community has been improved documentation. Documentation makes things visible for our talented engineers and existing volunteers, and makes it easier for newcomers to join the project!
Our volunteers have made some incredible contributions to the app! Translators have not only bolstered popular languages like German and French, but have enabled previously unsupported languages. In addition to localization, community members have helped develop the app. Shamin-emon has taken on complicated changes, and has been very patient when some of his proposed changes were delayed. Arnt, another community member, debugged and patched an issue with utf-8 strings in IMAP. And Platform34 triaged numerous issues to give developers insights into reported bugs.
Finally, we're learning how to balance refactoring and improving an Android app, and at the same time building an iOS app from scratch! Both apps are important, but the team has had to think about what's most important in each app. Android development is focusing on prioritizing top bugs and splitting the work to fix them into bite size pieces. With iOS, the team can develop in small increments from the start. Fortunately, the growing team and engaged community is making this balancing act easier than it would have been a year ago.
Looking Forward
In the next year, what can Android users look forward to? At the top of the priority list is better architecture leading to a better user experience, along with view and Message List improvements, HTML signatures, and JMAP support. For the iOS app, the team is focused on getting basic functionality in place, such as reading and writing mail, attachments, and work on the JMAP and IMAP protocols.
VIDEO (Also on Peertube):
Listen to the Episode
The post VIDEO: An Android Retrospective appeared first on The Thunderbird Blog.
14 Nov 2025 6:00pm GMT
The Servo Blog: October in Servo: better for the web, better for embedders, better for you
Servo now supports several new web platform features:
- <source> in <video> and <audio> (@tharkum, #39717)
- CompressionStream and DecompressionStream (@kkoyung, #39658)
- fetchLater() (@TimvdLippe, #39547)
- Document.parseHTMLUnsafe() (@lukewarlow, #40246)
- the which property on UIEvent (@Taym95, #40109)
- the relatedTarget property on UIEvent (@TimvdLippe, #40182)
- self.name and .onmessageerror in dedicated workers (@yerke, #40156)
- name and areas properties on HTMLMapElement (@tharkum, #40133)

servoshell for macOS now ships as native Apple Silicon binaries (@jschwe, #39981). Building servoshell for macOS x86-64 still works for now, but is no longer officially supported by automated testing in CI (see § For developers).
In servoshell for Android, you can now enable experimental mode with just two taps (@jdm, #40054), use the software keyboard (@jdm, #40009), deliver touch events to web content (@mrobinson, #40240), and dismiss the location field (@jdm, #40049). Pinch zoom is now fully supported in both Servo and servoshell, taking into account the locations of pinch inputs (@mrobinson, @atbrakhi, #40083) and allowing keyboard scrolling when zoomed in (@mrobinson, @atbrakhi, #40108).
AbortController and AbortSignal are now enabled by default (@jdm, @TimvdLippe, #40079, #39943), after implementing AbortSignal.timeout() (@Taym95, #40032) and fixing throwIfAborted() on AbortSignal (@Taym95, #40224). If this is the first time you've heard of them, you might be surprised how important they are for real-world web compat! Over 40% of Google Chrome page loads at least check if they are supported, and many popular websites including GitHub and Discord are broken without them.
XPath is now enabled by default (@simonwuelker, #40212), after implementing '@attr/parent' queries (@simonwuelker, #39749), Copy > XPath in the DevTools Inspector (@simonwuelker, #39892), completely rewriting the parser (@simonwuelker, #39977), and landing several other fixes (@simonwuelker, #40103, #40105, #40161, #40167, #39751, #39764).
Servo now supports new KeyboardEvent({keyCode}) and ({charCode}) (@atbrakhi, #39590), which is enough to get Speedometer 3.0 and 3.1 working on macOS.

ImageData can now be sent over postMessage() and structuredClone() (@Gae24, #40084).
Layout engine
Our layout engine can now render text in synthetic bold (@minghuaw, @mrobinson, #39519, #39681, #39633, #39691, #39713), and now selects more appropriate fallback fonts for Kanji in Japanese text (@arayaryoma, #39608).
'initial-scale' now does the right thing in <meta name=viewport> (@atbrakhi, @shubhamg13, @mrobinson, #40055).
We've improved the way we handle 'border-radius' (@Loirooriol, #39571) and margin collapsing (@Loirooriol, #36322). While they're fairly unassuming fixes on the surface, both of them allowed us to find interop issues in the big incumbent engines (@Loirooriol, #39540, #36321) and help improve web standards (@noamr, @Loirooriol, csswg-drafts#12961, csswg-drafts#12218).
In other words, Servo is good for the web, even if you're not using it yet!
Embedding and ecosystem
Our HTML-compatible XPath implementation now lives in its own crate, and it's no longer limited to the Servo DOM (@simonwuelker, #39546). We don't have any specific plans to release this as a standalone library just yet, but please let us know if you have a use case that would benefit from this!
You can now take screenshots of webviews with WebView::take_screenshot (@mrobinson, @delan, #39583).
Historically Servo has struggled with situations causing 100% CPU usage or unnecessary work on every tick of the event loop, whenever a page is considered "active" or "animating" (#25305, #3406). We had since throttled animations (@mrobinson, #37169) and reflows (@mrobinson, @Loirooriol, #38431), but only to fixed rates of 120 Hz and 60 Hz respectively.
But starting this month, you can run Servo with vsync, thanks to the RefreshDriver trait (@coding-joedow, @mrobinson, #39072), which allows embedders to tell Servo when to start rendering each frame. The default driver continues to run at 120 Hz, but you can define and install your own with ServoBuilder::refresh_driver.
Breaking changes
Servo's embedding API has had a few breaking changes:
- Opts::wait_for_stable_image was removed; to wait for a stable image, call WebView::take_screenshot instead (@mrobinson, @delan, #39583).
- MouseButtonAction::Click was removed; use Down followed by Up. Click events need to be derived from mouse button downs and ups to ensure that they are fired correctly (@mrobinson, #39705).
- Scrolling is now derived from mouse wheel events. When you have mouse wheel input to forward to Servo, you should now call WebView::notify_input_event only, not notify_scroll_event (@mrobinson, @atbrakhi, #40269).
- WebView::set_pinch_zoom was renamed to pinch_zoom, to better reflect that pinch zoom is always relative (@mrobinson, @atbrakhi, #39868).
We've improved page zoom in our webview API (@atbrakhi, @mrobinson, @shubhamg13, #39738), which includes some breaking changes:
- WebView::set_zoom was renamed to set_page_zoom, and it now takes an absolute zoom value. This makes it idempotent, but it means if you want relative zoom, you'll have to multiply the zoom values yourself.
- Use the new WebView::page_zoom method to get the current zoom value.
- WebView::reset_zoom was removed; use set_page_zoom(1.0) instead.
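For instance, a relative zoom step under the new API might look something like the sketch below. The method names come from the notes above; the exact receiver and numeric types are assumptions on our part.
fn zoom_in_ten_percent(webview: &WebView) {
    // set_page_zoom now takes an absolute value, so a relative step
    // reads the current zoom first and multiplies it.
    let current = webview.page_zoom();
    webview.set_page_zoom(current * 1.1);
}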
Some breaking changes were also needed to give embedders a more powerful way to share input events with webviews (@mrobinson, #39720). Often both your app and the pages in your webviews may be interested in knowing when users press a key. Servo handles these situations by asking the embedder for all potentially useful input events, then echoing some of them back:
- Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event.
- Servo calls WebViewDelegate::notify_keyboard_event to tell the embedder about keyboard events that were neither canceled by scripts nor handled by Servo itself. The event details are included in the arguments.
Embedders had no way of knowing when non-keyboard input events, or keyboard events that were canceled or handled by Servo, have completed all of their effects in Servo. This was good enough for servoshell's overridable key bindings, but not for WebDriver, where commands like Perform Actions need to reliably wait for input events to be handled. To solve these problems, we've replaced notify_keyboard_event with notify_input_event_handled:
- Embedder calls WebView::notify_input_event to tell Servo about an input event, then web content (and Servo) can handle the event. This now returns an InputEventId, allowing embedders to remember input events that they still care about for step 2.
- Servo calls WebViewDelegate::notify_input_event_handled to tell the embedder about every input event, when Servo has finished handling it. The event details are not included in the arguments, but you can use the InputEventId to look up the details in the embedder.
Perf and stability
Servo now does zero unnecessary layout work when updating canvases and animated images, thanks to a new "UpdatedImageData" layout mode (@mrobinson, @mukilan, #38991).
We've fixed crashes when clicking on web content on Android (@mrobinson, #39771), and when running Servo on platforms where JIT is forbidden (@jschwe, @sagudev, #40071, #40130).
For developers
CI builds for pull requests should now take 70% less time, since they now run on self-hosted CI runners (@delan, #39900, #39915). Bencher builds for runtime benchmarking now run on our new dedicated servers, so our Speedometer and Dromaeo data should now be more accurate and less noisy (@delan, #39272).
We've now switched all of our macOS builds to run on arm64 (@sagudev, @jschwe, #38460, #39968). This helps back our macOS releases with thorough automated testing on the same architecture as our releases, but we can't run them on self-hosted CI runners yet, so they may be slower for the time being.
Work is underway to set up faster macOS arm64 runners on our own servers (@delan, ci-runners#64), funded by your donations. Speaking of which!
Donations
Thanks again for your generous support! We are now receiving 5753 USD/month (+1.7% over September) in recurring donations.
This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo. Keep an eye out for further CI improvements in the coming months, including faster macOS arm64 builds and ten-minute WPT builds.
Servo is also on thanks.dev, and already 28 GitHub users (same as September) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
Use of donations is decided transparently via the Technical Steering Committee's public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.
14 Nov 2025 12:00am GMT
13 Nov 2025
Planet Mozilla
The Mozilla Blog: The writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online

Here at Mozilla, we are the first to admit the internet isn't perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong - their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can't get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.
We caught up with Jacque Aye, the author behind "Diary of a Sad Black Woman." She talks about blogging culture, writing fiction for "perpetually sighing adults" and Lily Allen's new album.
What is an internet deep dive that you can't wait to jump back into?
Right now, I'm deep diving into Lily Allen's newest album! Not for the gossip, although there's plenty of that to dive into, but for the psychology behind it all. I appreciate creatives who share so vulnerably but in nuanced and honest ways. Sharing experiences is what makes us feel human, I think. The way she outlined falling in love, losing herself, struggling with insecurities, and feeling numb was so relatable to me. Now, would I share as many details? Probably not. But I do feel her.
What was the first online community you engaged with?
Blogger. I was definitely a Blogger baby, and I used to share my thoughts and outfits there, the same way I currently share on Substack. I sometimes miss those times and my little oversharing community. Most people didn't really have personal brands then, so everything felt more authentic, anonymous and free.
What is the one tab you always regret closing?
Substack! I always find the coolest articles, save the tab, then completely forget I meant to read it, ahhhh.
What can you not stop talking about on the internet right now?
I post about my books online to an obsessive and almost alarming degree, ha. I've been going on and on about my weird, whimsical, and woeful novels, and people seem to resonate with that. I describe my work as Lemony Snicket meets a Boots Riley movie, but for perpetually sighing adults. I also never, ever shut up about my feelings. You can even read my diary online. For free. On Substack.
If you could create your own corner of the internet, what would it look like?
I feel super lucky to have my own little corner of the internet! In my corner, we love wearing cute outfits, listening to sad girl music, watching Tim Burton movies, and reading about flawed women going through absurd trials.
What articles and/or videos are you waiting to read/watch right now?
I can't wait to settle in and watch Knights of Guinevere! It looks so, so good, and I adore the creator.
What is your favorite corner of the internet?
This will seem so random, but right now, besides Substack, I'm really loving Threads. People are so vulnerable on there, and so willing to share personal stories and ask for help and advice. I love any space where I can express the full range of my feelings… and also share my books and outfits, ha.
How do you imagine the next version of the internet supporting creators who lead with emotion and care?
I really hope the next version of the internet reverts back to the days of Blogger and Tumblr. Where people could design their spaces how they see fit, integrate music and spew their hearts out without all the judgment.
Jacque Aye is an author and writes "Diary of a Sad Black Woman" on Substack. As a woman who suffers from depression and social anxiety, she's made it her mission to candidly share her experiences with the hopes of helping others dealing with the same. This extends into her fiction work, where she pens tales about woeful women trying their best, with a surrealist, magical touch. Inspired by authors like Haruki Murakami, Sayaka Murata, and Lemony Snicket, Jacque's stories are dark, magical, and humorous with a hint… well, a bunch… of absurdity.
The post The writer behind 'Diary of a Sad Black Woman' on making space for feelings online appeared first on The Mozilla Blog.
13 Nov 2025 6:26pm GMT
The Mozilla Blog: Introducing AI, the Firefox way: A look at what we’re working on and how you can help shape it

We recently shared how we are approaching AI in Firefox - with user choice and openness as our guiding principles. That's because we believe AI should be built like the internet - open, accessible, and driven by choice - so that users and the developers helping to build it can use it as they wish, help shape it and truly benefit from it.
In Firefox, you'll never be locked into one ecosystem or have AI forced into your browsing experience. You decide when, how or whether to use it at all. You've already seen this approach in action through some of our latest features like the AI chatbot in the sidebar for desktop or Shake to Summarize on iOS.
Now, we're excited to invite you to help shape the work on our next innovation: an AI Window. It's a new, intelligent and user-controlled space we're building in Firefox that lets you chat with an AI assistant and get help while you browse, all on your terms. Completely opt-in, you have full control, and if you try it and find it's not for you, you can choose to switch it off.
As always, we're building in the open - and we want to build this with you. Starting today, you can sign up to receive updates on our AI Window and be among the first to try it and give us feedback.

AI Window: Built for choice & control
Join the waitlist
We're building a better browser, not an agenda
We see a lot of promise in AI browser features making your online experience smoother, more helpful, and free from the everyday disruptions that break your flow. But browsers made by AI companies ask you to make a hard choice - either use AI all the time or don't use it at all.
We're focused on making the best browser, which means recognizing that everyone has different needs. For some, AI is part of everyday life. For others, it's useful only occasionally. And many are simply curious about what it can offer, but unsure where to start.
Regardless of your choice, with Firefox, you're in control.
You can continue using Firefox as you always have for the most customizable experience, or switch from classic to Private Window for the most private browsing experience. And now, with AI Window, you have the option to opt in to our most intelligent and personalized experience yet - providing you with new ways to interact with the web.
Why is investing in AI important for Firefox?
With AI becoming a more widely adopted interface to the web, the principles of transparency, accountability, and respect for user agency are critical to keeping it free, open, and accessible to all. As an independent browser, we are well positioned to uphold these principles.
While others are building AI experiences that keep you locked in a conversational loop, we see a different path - one where AI serves as a trusted companion, enhancing your browsing experience and guiding you outward to the broader web.
We believe standing still while technology moves forward doesn't benefit the web or humanity. That's why we see it as our responsibility to shape how AI integrates into the web - in ways that protect and give people more choice, not less.
Help us shape the future of the web
Our success has always been driven by our community of users and developers, and we'll continue to rely on you as we explore how AI can serve the web - without ever losing focus on our commitment to build what matters most to our users: a Firefox that remains fast, secure and private.
Join us by contributing to open-source projects and sharing your ideas on Mozilla Connect.
The post Introducing AI, the Firefox way: A look at what we're working on and how you can help shape it appeared first on The Mozilla Blog.
13 Nov 2025 2:00pm GMT
12 Nov 2025
Planet Mozilla
Mozilla Privacy Blog: Behind the Manifesto: The Survivors of the Open Web
Welcome to the blog series "Behind the Manifesto," where we unpack core issues that are critical to Mozilla's mission. The Mozilla Manifesto represents Mozilla's commitment to advancing an open, global internet. This blog series digs deeper on our vision for the web and the people who use it, and how these goals are advanced in policymaking and technology.
The internet wasn't always a set of corporate apps and walled gardens. In its early days, it was a place of experimentation - a digital commons where anyone could publish, connect, and build without asking permission. That openness depended on invisible layers of technology that allowed the web to function as a true public space. Layers such as browser engines, open standards, and shared protocols are the scaffolding that made the internet free, creative, and interoperable.
In 2013, there were five major browser engines. Now, only three remain: Apple's WebKit, Google's Blink, and Mozilla's Gecko (which powers Firefox). In a world of giants, Gecko fights not for dominance, but for an internet that is open and accessible to all.
In an era of consolidation, a thriving and competitive browser engine ecosystem is critical. But sadly, browser engines are subject to the same trends towards concentration. As we've lost competitors, we lose more than a piece of code. We lose choice, perspectives, and ideas about how the web works.
So, how do we drive competition in browser engines and more widely across the web? How do we promote policies that protect people and encourage meaningful choice? How do we contend with AI as both a disruptor and an impetus for innovation? Can competition interventions protect the open web? What's the impact of landmark antitrust cases for consumers and the future technology landscape?
These aren't new questions for Mozilla. They're the same questions that have shaped our mission for more than 20 years, and the ones we continue to ask today. Our recent Mozilla Meetup in Washington D.C., a panel-style event and happy hour, brought these debates to the forefront.
On October 8th, we convened leading minds in tech policy to explore the future of competition and its role in saving the open web. Before a standing-room-only audience, the panelists discussed browser competition, leading antitrust legislation, landmark cases currently under review, and AI's impact. Their insights underscored a critical point: the same questions about access, agency and choice that defined parts of the early internet are just as pressing in today's digital ecosystem, shaping our continued pursuit of an open and diverse web. Below are a few takeaways.
On today's competition landscape:
Luke Hogg, Director, Technology Policy, Foundation for American Innovation:
"Antitrust is back. One of the emerging lessons of the last year in antitrust cases and competition policy is that with these big questions being answered, the results do tend to be bipartisan. Antitrust is a cross-partisan issue."
On the United States v. Google LLC search case:
Kush Amlani, Director, Global Competition & Regulation, Mozilla:
"One of our key concerns was ensuring that search competition didn't come at the expense of browser competition. And the payments to independent browsers were not banned, and that was obviously granted by the judge…What's next is really how the remedies are implemented, and how effective they are. And the devil is going to be in the detail, in terms of how useful is this data? How much can third parties benefit from syndicating search results?"
Alissa Cooper, Executive Director, Knight-Georgetown Institute:
"The search case is set up as being pro-divestiture or anti-divestiture, but it's really about what is going to work. Divestiture aligns with what was requested. If you leave Chrome under Google, you have to build in surveillance and monitoring in the market to make sure their behavior aligns. If you divest, it becomes independent and can operate on its own without the need for monitoring. In the end, do you think that would be an effective remedy to open the market to reentry? Or do you think there is another option?"
On the impact of AI:
Amba Kak, Co-Executive Director, AI Now Institute:
"AI has upended the market and changed technology, but it's also true Big Tech, in many ways, has been training for this very disruption for the last ten years.
In the early 2010s, key resources - data, compute, talent - were already concentrated within a few players due to regulatory inaction. It's important to understand that this trajectory of AI aligning with the incentives of Big Tech isn't an accident, it's by design."
On the timing of this fight for the open web:
Alissa Cooper, Executive Director, Knight-Georgetown Institute:
"The difference now [as opposed to previous fights for the web] is that we have a lot of experience. We know what the open world and open web look like. In some ways, this is an advantage. The difference now is the unbelievable amount of corporate power involved. There needs to be a field where new businesses can enter. Without it, we are fighting the last war."
This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla's policy priorities.
The post Behind the Manifesto: The Survivors of the Open Web appeared first on Open Policy & Advocacy.
12 Nov 2025 9:42pm GMT
The Mozilla Blog: Mozilla joins the Digital Public Goods Alliance, championing open source to drive global progress
Today, Mozilla is thrilled to join the Digital Public Goods Alliance (DPGA) as its newest member. The DPGA is a UN-backed initiative that seeks to advance open technologies and ensure that technology is put to use in the public interest and serves everyone, everywhere - like Mozilla's Common Voice, which has been recognized as a Digital Public Good (DPG). This announcement comes on the heels of a big year of digital policy-making globally, where Mozilla has been at the forefront in advocating for open source AI across Europe, North America and the UK.
The DPGA is a multi-stakeholder initiative with a mission to accelerate the attainment of the Sustainable Development Goals (SDGs) "by facilitating the discovery, development, use of and investment in digital public goods." Digital public goods means open-source technology, open data, open and transparent AI models, open standards and open content that adhere to privacy, the do no harm principle, and other best practices.
This is deeply aligned with Mozilla's mission. It creates a natural opportunity for collaboration and shared advocacy in the open ecosystem, with allies and like-minded builders from across the globe. As part of the DPGA's Annual Roadmap for 2025, Mozilla will focus on three work streams:
- Promoting DPGs in the Open Source Ecosystem: Mozilla has long championed open-source, public-interest technology as an alternative to profit-driven development. Through global advocacy, policy engagement, and research, we highlight the societal and economic value of open-source, especially in AI. Through our work in the DPGA, we'll continue pushing for better enabling conditions and funding opportunities for open source, public interest technology.
- DPGs and Digital Commons: Mozilla develops and maintains a range of open source projects through our various entities. These include Common Voice, a digital public good with over 33,000 hours of multilingual voice data, and applications like the Firefox web browser and Thunderbird email client. Mozilla also supports open-source AI through our product work, including by Mozilla.ai, and our venture fund, Mozilla Ventures.
- Funding Open Source & Public Interest Technology: Grounded by our own open source roots, Mozilla will continue to fund open source technologies that help to untangle thorny sociotechnical issues. We've fueled a broad and impactful portfolio of technical projects. Beginning in the Fall of 2025, we will introduce our latest grantmaking program: an incubator that will help community-driven projects find "product-community fit" in order to attain long-term sustainability.
We hope to use our membership to share research, tooling, and perspectives with a like-minded audience and partner with the DPGA's diverse community of builders and allies.
"Open source AI and open data aren't just about tech," said Mark Surman, president of Mozilla. "They're about access to technology and progress for people everywhere. As a double bottom line, mission-driven enterprise, Mozilla is proud to be part of the DPGA and excited to work toward our joint mission of advancing open-source, trustworthy technology that puts people first."
To learn more about DPGA, visit https://digitalpublicgoods.net.
The post Mozilla joins the Digital Public Goods Alliance, championing open source to drive global progress appeared first on The Mozilla Blog.
12 Nov 2025 5:09pm GMT
This Week In Rust: This Week in Rust 625
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
Newsletters
Project/Tooling Updates
- channels-console - real-time monitoring, metrics and logs for Rust channels
- qstr: Cache-efficient, stack-allocated string types
- Announcing Magika 1.0: now faster, smarter, and rebuilt in Rust
- Tokuin 0.1.2: Load Testing LLMs from the Terminal
- semver-query: semantic versioning data query tool
- SeaORM 2.0: Strongly-Typed Column
- LLMs: nanoGPT model in Rust - arrowspace v0.22.0 released
- InterpN: Fast Interpolation
- Tako 0.5.0 road to v1.0.0
Observations/Thoughts
- Just call clone (or alias)
- Engineering a Rust optimization quiz
- Rust vs. Python: Finding the right balance between speed and simplicity
- [video] A Quick Start to Rust Lang
- [video] Rust & JavaScript - Jakob Meier - Rust Zürisee November 2024
- [audio] Netstack.FM Episode 13 - Inside Ping Proxies with Joseph Dye
Rust Walkthroughs
Miscellaneous
Crate of the Week
This week's crate is automesh, a crate for high-performance automatic mesh generation in Rust.
Thanks to Michael R. Buche for the self-suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
RFCs
Rust
Rustup
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
409 pull requests were merged in the last week
Compiler
- add LLVM realtime sanitizer
- don't completely reset HeadUsages
- use annotate-snippets by default on nightly
- implement SIMD funnel shifts in const-eval/Miri
- recover [T: N] as [T; N]
Library
- add Allocator proxy impls for Box, Rc, and Arc
- add extend_front to VecDeque with specialization like extend
- add alignment parameter to simd_masked_{load,store}
- constify ControlFlow methods with unstable bounds
- constify ControlFlow methods without unstable bounds
- constify result unwrap unchecked
- optimize path components iteration on platforms that don't have prefixes
- stabilize as_array in [_] and *const [_]; stabilise as_mut_array in [_] and *mut [_]
- stabilize vec_deque_pop_if
- stabilize s390x vector target feature and is_s390x_feature_detected! macro
- stop specializing on Copy
Cargo
- cli: Refer to commands, not subcommands
- completions: don't wrap completion item help in parenthesis
- add native completions for --package on various commands
Rustdoc
- search: remove broken index special case
- properly highlight shebang, frontmatter & weak keywords in source code pages and code blocks
Clippy
- perf: manual_is_power_of_two: perform the is_integer_literal check first
- consider type conversion that won't overflow
- don't flag cfg(test) as multiple inherent impl
- fix match_single_binding suggesting wrongly inside tuple
- fix missing_asserts_for_indexing changing assert_eq to assert
- fix missing_inline_in_public_items failing to fulfill expect in --test build
- fix mod_module_files false positive for tests in workspaces
- fix nonminimal_bool wrongly unmangled terms
- fix useless_let_if_seq false negative when if is in the last expr of block
Rust-Analyzer
- support rename after adding loop label
- add block on postfix .const completion
- fix panicking while resolving callable sigs for AsyncFnMut
- handle guards in replace_if_let_with_match
- handle method calls in apply_demorgan
- parse impl ! {}
- move safe computation out of unsafe block
- perf: only populate public items in dependency symbol index
- perf: reduce memory usage of symbol index
Rust Compiler Performance Triage
Mostly quiet week, with the majority of changes coming from the standard library work towards removal of Copy specialization (#135634).
Triage done by @simulacrum. Revision range: 35ebdf9b..055d0d6a
3 Regressions, 1 Improvement, 7 Mixed; 3 of them in rollups. 37 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- Warn against calls which mutate an interior mutable const-item
- parser/lexer: bump to Unicode 17, use faster unicode-ident
- const-eval: fix and re-enable pointer fragment support
- Replace OffsetOf by an actual sum of calls to intrinsic.
- Stabilize asm_cfg
- Stabilize -Zremap-path-scope
- error out when repr(align) exceeds COFF limit
- target tier 3 support for hexagon-unknown-qurt
- Proposal for a dedicated test suite for the parallel frontend
- Proposal for Adapt Stack Protector for Rust
- Give integer literals a sign instead of relying on negation expressions
No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
Upcoming Events
Rusty Events between 2025-11-12 - 2025-12-10 🦀
Virtual
- 2025-11-12 | Virtual (Boulder, CO, US) | Boulder Elixir
- 2025-11-12 | Virtual (Girona, ES) | Rust Girona | Silicon Girona
- 2025-11-13 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2025-11-16 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-11-18 | Virtual (Washington, DC, US) | Rust DC
- 2025-11-19 | Virtual (Girona, ES) | Rust Girona | Silicon Girona
- 2025-11-19 | Virtual (Vancouver, BC, CA) | Vancouver Rust
- 2025-11-20 | Virtual (Berlin, DE) | Rust Berlin
- 2025-11-20 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2025-11-23 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-11-25 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-11-25 | Virtual (London, UK) | Women in Rust
- 2025-11-26 | Virtual (Girona, ES) | Rust Girona | Silicon Girona
- 2025-11-30 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-02 | Virtual (London, GB) | Women in Rust
- 2025-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
- 2025-12-03 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2025-12-04 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-09 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
Africa
- 2025-11-18 | Johannesburg, ZA | Johannesburg Rust Meetup
Asia
- 2025-11-15 | Bangalore, IN | Rust Bangalore
Europe
- 2025-11-12 | Cambridge, UK | Cambridge Rust Meetup
- 2025-11-12 | Reading, UK | Reading Rust Workshop
- 2025-11-13 | Geneva, CH | Rust Geneva
- 2025-11-13 | London, UK | London Rust Project Group
- 2025-11-13 | London, UK | Rust London User Group
- 2025-11-13 | Paris, FR | Rust Paris
- 2025-11-14 | Stockholm, SE | Stockholm Rust
- 2025-11-18 | Bergen, NO | Rust Bergen
- 2025-11-18 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
- 2025-11-19 | Ostrava, CZ | TechMeetup Ostrava
- 2025-11-20 | Aarhus, DK | Rust Aarhus
- 2025-11-20 | Amsterdam, NL | Rust Developers Amsterdam Group
- 2025-11-20 | Luzern, CH | Rust Luzern
- 2025-11-26 | Bergen, NO | Hubbel kodeklubb
- 2025-11-26 | Bern, CH | Rust Bern
- 2025-11-27 | Barcelona, ES | BcnRust
- 2025-11-27 | Edinburgh, UK | Rust and Friends
- 2025-11-28 | Prague, CZ | Rust Prague
- 2025-12-03 | Girona, ES | Rust Girona | Silicon Girona
- 2025-12-03 | Oxford, UK | Oxford ACCU/Rust Meetup.
- 2025-12-10 | München, DE | Rust Munich
- 2025-12-10 | Reading, UK | Reading Rust Workshop
North America
- 2025-11-13 | Lehi, UT, US | Utah Rust
- 2025-11-13 | New York, NY, US | Rust NYC
- 2025-11-13 | Portland, OR, US | PDXRust
- 2025-11-13 | San Diego, CA, US | San Diego Rust
- 2025-11-16 | Boston, MA, US | Boston Rust Meetup
- 2025-11-18 | San Francisco, CA, US | San Francisco Rust Study Group
- 2025-11-20 | Seattle, WA, US | Seattle Rust User Group
- 2025-11-20 | Spokane, WA, US | Spokane Rust
- 2025-11-23 | Boston, MA, US | Boston Rust Meetup
- 2025-11-26 | Austin, TX, US | Rust ATX
- 2025-11-27 | Mountain View, CA, US | Hacker Dojo
- 2025-11-29 | Boston, MA, US | Boston Rust Meetup
- 2025-12-02 | Chicago, IL, US | Chicago Rust Meetup
- 2025-12-04 | Saint Louis, MO, US | STL Rust
- 2025-12-05 | New York, NY, US | Rust NYC
- 2025-12-06 | Boston, MA, US | Boston Rust Meetup
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
Making your unsafe very tiny is sort of like putting caution markings on the lethally strong robot arm with no proximity sensors, rather than on the door into the protective cage.
- Stephan Sokolow on lobste.rs
Thanks to llogiq for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by: * nellshamrell * llogiq * ericseppanen * extrawurst * U007D * mariannegoldin * bdillo * opeolluwa * bnchi * KannanPalani57 * tzilist
Email list hosting is sponsored by The Rust Foundation
12 Nov 2025 5:00am GMT
11 Nov 2025
Planet Mozilla
Firefox Developer Experience: Firefox WebDriver Newsletter 145
WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we've done as part of the Firefox 145 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.
In Firefox 145, a new contributor landed two patches in our codebase. Thanks to Khalid AlHaddad for the following fixes:
- Renamed the add_cookie test fixture to add_document_cookie to make our cookie tests easier to understand.
- Added logic to clean up leftover cookies added by our tests.
WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!
WebDriver BiDi
- Implemented the emulation.setUserAgentOverride command, which allows overriding the user-agent string used by the browser either for a set of contexts, user contexts, or globally.
- Implemented the browsingContext.downloadEnd event, which is emitted when a download finishes (whether it is successful or canceled).
- Updated the destination property of the network.beforeRequestSent event to document for top-level navigations.
- Updated the browsingContext download events to reuse the same navigation id as the previous browsingContext.navigationStarted event.
- Fixed a bug for network data collection, where non-ASCII characters in response bodies were not properly encoded.
- Fixed a bug with the network.getData command, which would fail for requests with an empty response body.
- Fixed a bug where some network events could be flagged as blocked even if they were not.
11 Nov 2025 1:50pm GMT
10 Nov 2025
Planet Mozilla
Niko Matsakis: Just call clone (or alias)
Continuing my series on ergonomic ref-counting, I want to explore another idea, one that I'm calling "just call clone (or alias)". This proposal specializes the clone and alias methods so that, in a new edition, the compiler will (1) remove redundant or unnecessary calls (with a lint); and (2) automatically capture clones or aliases in move closures where needed.
The goal of this proposal is to simplify the user's mental model: whenever you see an error like "use of moved value", the fix is always the same: just call clone (or alias, if applicable). This model is aiming for the balance of "low-level enough for a Kernel, usable enough for a GUI" that I described earlier. It's also making a statement, which is that the key property we want to preserve is that you can always find where new aliases might be created - but that it's ok if the fine-grained details around exactly when the alias is created is a bit subtle.
The proposal in a nutshell
Part 1: Closure desugaring that is aware of clones and aliases
Consider this move future:
fn spawn_services(cx: &Context) {
tokio::task::spawn(async move {
// ---- move future
manage_io(cx.io_system.alias(), cx.request_name.clone());
// -------------------- -----------------------
});
...
}
Because this is a move future, this takes ownership of cx.io_system and cx.request_name. Because cx is a borrowed reference, this will be an error unless those values are Copy (which they presumably are not). Under this proposal, capturing aliases or clones in a move closure/future would result in capturing an alias or clone of the place. So this future would be desugared like so (using explicit capture clause strawman notation):
fn spawn_services(cx: &Context) {
tokio::task::spawn(
async move(cx.io_system.alias(), cx.request_name.clone()) {
// -------------------- -----------------------
// capture alias/clone respectively
manage_io(cx.io_system.alias(), cx.request_name.clone());
}
);
...
}
Part 2: Last-use transformation
Now, this result is inefficient - there are now two aliases/clones. So the next part of the proposal is that the compiler would, in newer Rust editions, apply a new transformation called the last-use transformation. This transformation would identify calls to alias or clone that are not needed to satisfy the borrow checker and remove them. This code would therefore become:
fn spawn_services(cx: &Context) {
tokio::task::spawn(
async move(cx.io_system.alias(), cx.request_name.clone()) {
manage_io(cx.io_system, cx.request_name);
// ------------ ---------------
// converted to moves
}
);
...
}
The last-use transformation would apply beyond closures. Given an example like this one, which clones id even though id is never used later:
fn send_process_identifier_request(id: String) {
let request = Request::ProcessIdentifier(id.clone());
// ----------
// unnecessary
send_request(request)
}
the user would get a warning like so:
warning: unnecessary `clone` call will be converted to a move
--> src/main.rs:7:40
|
8 | let request = Request::ProcessIdentifier(id.clone());
| ^^^^^^^^^^ unnecessary call to `clone`
|
= help: the compiler automatically removes calls to `clone` and `alias` when not
required to satisfy the borrow checker
help: change `id.clone()` to `id` for greater clarity
|
8 - let request = Request::ProcessIdentifier(id.clone());
8 + let request = Request::ProcessIdentifier(id);
|
and the code would be transformed so that it simply does a move:
fn send_process_identifier_request(id: String) {
let request = Request::ProcessIdentifier(id);
// --
// transformed
send_request(request)
}
Mental model: just call "clone" (or "alias")
The goal of this proposal is that, when you get an error about a use of moved value, or moving borrowed content, the fix is always the same: you just call clone (or alias). It doesn't matter whether that error occurs in the regular function body or in a closure or in a future, the compiler will insert the clones/aliases needed to ensure future users of that same place have access to it (and no more than that).
I believe this will be helpful for new users. Early in their Rust journey new users are often sprinkling calls to clone as well as sigils like & in more-or-less at random as they try to develop a firm mental model - this is where the "keep calm and call clone" joke comes from. This approach breaks down around closures and futures today. Under this proposal, it will work, but users will also benefit from warnings indicating unnecessary clones, which I think will help them to understand where clone is really needed.
Experienced users can trust the compiler to get it right
But the real question is how this works for experienced users. I've been thinking about this a lot! I think this approach fits pretty squarely in the classic Bjarne Stroustrup definition of a zero-cost abstraction:
"What you don't use, you don't pay for. And further: What you do use, you couldn't hand code any better."
The first half is clearly satisfied. If you don't call clone or alias, this proposal has no impact on your life.
The key point is the second half: earlier versions of this proposal were more simplistic, and would sometimes result in redundant or unnecessary clones and aliases. Upon reflection, I decided that this was a non-starter. The only way this proposal works is if experienced users know there is no performance advantage to using the more explicit form. This is precisely what we have with, say, iterators, and I think it works out very well. I believe this proposal hits that mark, but I'd like to hear if there are things I'm overlooking.
The last-use transformation codifies a widespread intuition, that clone is never necessary
I think most users would expect that changing message.clone() to just message is fine, as long as the code keeps compiling. But in fact nothing requires that to be the case. Under this proposal, APIs that make clone significant in unusual ways would be more annoying to use in the new Rust edition and I expect ultimately wind up getting changed so that "significant clones" have another name. I think this is a good thing.
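To make that concrete, here is a hedged sketch - the Token type and its logging Clone impl are invented for illustration - of a "significant" clone that would behave differently in the new edition:
struct Token(u32);

impl Clone for Token {
    fn clone(&self) -> Self {
        // A side-effecting Clone impl is what makes this clone "significant".
        println!("cloning token {}", self.0);
        Token(self.0)
    }
}

fn send(_token: Token) { /* ... */ }

fn example(token: Token) {
    // `token` is never used after this call, so in the new edition the
    // last-use transformation would turn `token.clone()` into a move and
    // the log line above would never run.
    send(token.clone());
}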
Frequently asked questions
I think I've covered the key points. Let me dive into some of the details here with a FAQ.
Can you summarize all of these posts you've been writing? It's a lot to digest!
I get it, I've been throwing a lot of things out there. Let me begin by recapping the motivation as I see it:
- I believe our goal should be to focus first on a design that is "low-level enough for a Kernel, usable enough for a GUI".
- The key part here is the word enough. We need to make sure that low-level details are exposed, but only those that truly matter. And we need to make sure that it's ergonomic to use, but it doesn't have to be as nice as TypeScript (though that would be great).
- Rust's current approach to Clone fails both groups of users:
  - calls to clone are not explicit enough for kernels and low-level software: when you see something.clone(), you don't know whether it is creating a new alias or an entirely distinct value, and you don't have any clue what it will cost at runtime. There's a reason much of the community recommends writing Arc::clone(&something) instead.
  - calls to clone, particularly in closures, are a major ergonomic pain point; this has been a clear consensus since we first started talking about this issue.
I then proposed a set of three changes to address these issues, authored in individual blog posts:
- First, we introduce the Alias trait (originally called Handle). The Alias trait introduces a new method alias that is equivalent to clone but indicates that this will be creating a second alias of the same underlying value.
- Second, we introduce explicit capture clauses, which lighten the syntactic load of capturing a clone or alias, make it possible to declare up-front the full set of values captured by a closure/future, and will support other kinds of handy transformations (e.g., capturing the result of as_ref or to_string).
- Finally, we introduce the just call clone proposal described in this post. This modifies closure desugaring to recognize clones/aliases and also applies the last-use transformation to replace calls to clone/alias with moves where possible.
What would it feel like if we did all those things?
Let's look at the impact of each set of changes by walking through the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks:
let some_value = Arc::new(something);
// task 1
let _some_value = some_value.clone();
tokio::task::spawn(async move {
do_something_with(_some_value);
});
// task 2: listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
do_something_else_with(_some_a, _some_b, _some_c)
});
As the original blog post put it:
Working on this codebase was demoralizing. We could think of no better way to architect things - we needed listeners for basically everything that filtered their updates based on the state of the app. You could say "lol get gud," but the engineers on this team were the sharpest people I've ever worked with. Cloudflare is all-in on Rust. They're willing to throw money at codebases like this. Nuclear fusion won't be solved with Rust if this is how sharing state works.
Applying the Alias trait and explicit capture clauses makes for a modest improvement. You can now clearly see that the calls to clone are alias calls, and you don't have the awkward _some_value and _some_a variables. However, the code is still pretty verbose:
let some_value = Arc::new(something);
// task 1
tokio::task::spawn(async move(some_value.alias()) {
do_something_with(some_value);
});
// task 2: listen for dns connections
tokio::task::spawn(async move(
self.some_a.alias(),
self.some_b.alias(),
self.some_c.alias(),
) {
do_something_else_with(self.some_a, self.some_b, self.some_c)
});
Applying the Just Call Clone proposal removes a lot of boilerplate and, I think, captures the intent of the code very well. It also retains quite a bit of explicitness, in that searching for calls to alias reveals all the places that aliases will be created. However, it does introduce a bit of subtlety, since (e.g.) the call to self.some_a.alias() will actually occur when the future is created and not when it is awaited:
let some_value = Arc::new(something);
// task 1
tokio::task::spawn(async move {
do_something_with(some_value.alias());
});
// task 2: listen for dns connections
tokio::task::spawn(async move {
do_something_else_with(
self.some_a.alias(),
self.some_b.alias(),
self.some_c.alias(),
)
});
I'm worried that the execution order of calls to alias will be too subtle. How is this "explicit enough for low-level code"?
There is no question that Just Call Clone makes closure/future desugaring more subtle. Looking at task 1:
tokio::task::spawn(async move {
do_something_with(some_value.alias());
});
this gets desugared to a call to alias when the future is created (not when it is awaited). Using the explicit form:
tokio::task::spawn(async move(some_value.alias()) {
do_something_with(some_value)
});
I can definitely imagine people getting confused at first - "but that call to alias looks like it's inside the future (or closure), how come it's occurring earlier?"
Yet, the code really seems to preserve what is most important: when I search the codebase for calls to alias, I will find that an alias is created for this task. And for the vast majority of real-world examples, the distinction of whether an alias is created when the task is spawned versus when it executes doesn't matter. Look at this code: the important thing is that do_something_with is called with an alias of some_value, so some_value will stay alive as long as do_something_with is executing. It doesn't really matter how the "plumbing" worked.
What about futures that conditionally alias a value?
Yeah, good point, those kind of examples have more room for confusion. Like look at this:
tokio::task::spawn(async move {
if false {
do_something_with(some_value.alias());
}
});
In this example, there is code that uses some_value with an alias, but only under if false. So what happens? I would assume that indeed the future will capture an alias of some_value, in just the same way that this future will move some_value, even though the relevant code is dead:
tokio::task::spawn(async move {
if false {
do_something_with(some_value);
}
});
Can you give more details about the closure desugaring you imagine?
Yep! I am thinking of something like this:
- If there is an explicit capture clause, use that.
- Else:
  - For non-move closures/futures, no changes, so:
    - Categorize usage of each place and pick the "weakest option" that is available:
      - by ref
      - by mut ref
      - moves
  - For move closures/futures, we would change the rules:
    - Categorize usage of each place P and decide whether to capture that place…
      - by clone, if there is at least one call P.clone() or P.alias() and all other usage of P requires only a shared ref (reads)
      - by move, if there are no calls to P.clone() or P.alias(), or if there are usages of P that require ownership or a mutable reference
    - Capture by clone/alias when a place a.b.c is only used via shared references, and at least one of those is a clone or alias.
      - For the purposes of this, accessing a "prefix place" a or a "suffix place" a.b.c.d is also considered an access to a.b.c.
Examples that show some edge cases:
if consume {
    x.foo();
}
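For instance, here is a hedged sketch (the variable and the log function are illustrative) of how the by-clone versus by-move rule above plays out when a place is both cloned and mutated:
fn log(values: Vec<i32>) { println!("{values:?}"); }

let mut counter = vec![1, 2, 3];
let f = move || {
    log(counter.clone()); // a clone call...
    counter.push(4);      // ...but this usage needs `&mut counter`,
                          // so the closure captures `counter` by move
};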
Why not do something similar for non-move closures?
In the relevant cases, non-move closures will already just capture by shared reference. This means that later attempts to use that variable will generally succeed:
let f = async {
// ----- NOT async move
self.some_a.alias()
};
do_something_else(self.some_a.alias());
// ----------- later use succeeds
f.await;
This future does not need to take ownership of self.some_a to create an alias, so it will just capture a reference to self.some_a. That means that later uses of self.some_a can still compile, no problem. If this had been a move closure, however, that code above would currently not compile.
There is an edge case where you might get an error, which is when you are moving:
let f = async {
self.some_a.alias()
};
do_something_else(self.some_a);
// ----------- move!
f.await;
In that case, you can make this an async move closure and/or use an explicit capture clause:
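A minimal sketch of that fix, using the strawman explicit capture clause notation from earlier in this series:
let f = async move(self.some_a.alias()) {
    //             ------------------ capture an alias up front
    self.some_a.alias()
};
do_something_else(self.some_a);
//                ----------- the move now succeeds
f.await;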
Can you give more details about the last-use transformation you imagine?
Yep! We would during codegen identify candidate calls to Clone::clone or Alias::alias. After borrow check has executed, we would examine each of the callsites and check the borrow check information to decide:
- Will this place be accessed later?
- Will some reference potentially referencing this place be accessed later?
If the answer to both questions is no, then we will replace the call with a move of the original place.
Here are some examples:
fn borrow(message: Message) -> String {
let method = message.method.to_string();
send_message(message.clone());
// ---------------
// would be transformed to
// just `message`
method
}
fn borrow(message: Message) -> String {
send_message(message.clone());
// ---------------
// cannot be transformed
// since `message.method` is
// referenced later
message.method.to_string()
}
fn borrow(message: Message) -> String {
let r = &message;
send_message(message.clone());
// ---------------
// cannot be transformed
// since `r` may reference
// `message` and is used later.
r.method.to_string()
}
Why are you calling it the last-use transformation and not optimization?
In the past, I've talked about the last-use transformation as an optimization - but I'm changing terminology here. This is because, typically, an optimization is supposed to be unobservable to users except through measurements of execution time (or through UB), and that is clearly not the case here. The transformation would be a mechanical transformation performed by the compiler in a deterministic fashion.
Would the transformation "see through" references?
I think yes, but in a limited way. In other words I would expect
Clone::clone(&foo)
and
let p = &foo;
Clone::clone(p)
to be transformed in the same way (replaced with foo), and the same would apply to more levels of intermediate usage. This would kind of "fall out" from the MIR-based optimization technique I imagine. It doesn't have to be this way; we could be more particular about the syntax that people wrote, but I think that would be surprising.
On the other hand, you could still fool it, e.g. like so:
fn identity<T>(x: &T) -> &T { x }
identity(&foo).clone()
Would the transformation apply across function boundaries?
The way I imagine it, no. The transformation would be local to a function body. This means that one could write a force_clone function like the one below, which "hides" the clone so that it will never be transformed away (this is an important capability for edition transformations!):
fn pipe<Msg: Clone>(message: Msg) -> Msg {
    log(message.clone()); // <-- keep this one
    force_clone(&message)
}

fn force_clone<Msg: Clone>(message: &Msg) -> Msg {
    // Here, the input is `&Msg`, so the clone is necessary
    // to produce a `Msg`.
    message.clone()
}
Won't the last-use transformation change behavior by making destructors run earlier?
Potentially, yes! Consider this example, written using explicit capture clause notation and written assuming we add an Alias trait:
async fn process_and_stuff(tx: mpsc::Sender<Message>) {
    tokio::spawn({
        async move(tx.alias()) {
            //     ---------- alias here
            process(tx).await
        }
    });
    do_something_unrelated().await;
}
The precise timing of when Sender values are dropped can be important - once all senders have been dropped, the Receiver will start returning None when you call recv. Before that, it will block waiting for more messages, since those tx handles could still be used.
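As a concrete illustration of that behavior, here is a small runnable sketch (it assumes the tokio runtime and a plain String message, neither of which is part of the example above):
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(8);

    tokio::spawn(async move {
        tx.send("hello".to_string()).await.unwrap();
        // The only Sender is dropped here, when the task ends.
    });

    // `recv` yields any buffered messages first...
    assert_eq!(rx.recv().await.as_deref(), Some("hello"));
    // ...and returns `None` only once every Sender has been dropped.
    assert_eq!(rx.recv().await, None);
}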
So, in process_and_stuff, when will the sender aliases be fully dropped? The answer depends on whether we do the last-use transformation or not:
- Without the transformation, there are two aliases: the original tx and the one being held by the future. So the receiver will only start returning None when do_something_unrelated has finished and the task has completed.
- With the transformation, the call to tx.alias() is removed, and so there is only one alias - tx, which is moved into the future and dropped once the spawned task completes. This could well be earlier than in the previous code, which had to wait until both process_and_stuff and the new task completed.
Most of the time, running destructors earlier is a good thing. That means lower peak memory usage, faster responsiveness. But in extreme cases it could lead to bugs - a typical example is a Mutex<()> where the guard is being used to protect some external resource.
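To make that Mutex<()> example concrete, here is a minimal sketch (the config file and its path are made up): the guard carries no data and exists only so that its lifetime brackets access to an external resource, which makes its drop point semantically meaningful:
use std::sync::Mutex;

static CONFIG_LOCK: Mutex<()> = Mutex::new(());

fn rewrite_config() {
    let _guard = CONFIG_LOCK.lock().unwrap();
    // Holding `_guard` is what makes this write exclusive; if a destructor
    // ran earlier than the programmer expected, the write below would no
    // longer be protected.
    std::fs::write("app.conf", b"updated = true").unwrap();
}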
How can we change when code runs? Doesn't that break stability?
This is what editions are for! We have in fact done a very similar transformation before, in Rust 2021. RFC 2229 changed destructor timing around closures and it was, by and large, a non-event.
The desire for edition compatibility is in fact one of the reasons I want to make this a last-use transformation and not some kind of optimization. There is no UB in any of these examples; it's just that understanding what Rust code does around clones/aliases is a bit more complex than it used to be, because the compiler will automatically transform those calls. The fact that this transformation is local to a function means we can decide on a call-by-call basis whether it should follow the older edition rules (where the clone/alias will always occur) or the newer rules (where it may be transformed into a move).
Does that mean that the last-use transformation would change with Polonius or other borrow checker improvements?
In theory, yes, improvements to borrow-checker precision like Polonius could mean that we identify more opportunities to apply the last-use transformation. This is something we can phase in over an edition. It's a bit of a pain, but I think we can live with it - and I'm unconvinced it will be important in practice. For example, when thinking about the improvements I expect under Polonius, I was not able to come up with a realistic example that would be impacted.
Isn't it weird to do this after borrow check?
This last-use transformation is guaranteed not to produce code that would fail the borrow check. However, it can affect the correctness of unsafe code:
let p: *const T = &*some_place;
let q: T = some_place.clone();
//         ---------- assuming `some_place` is
//                    not used later, becomes a move
unsafe {
    do_something(p);
    //           -
    //           This now refers to a stack slot
    //           whose value is uninitialized.
}
Note though that, in this case, there would be a lint identifying that the call to some_place.clone() will be transformed to just some_place. We could also detect simple examples like this one and report a stronger deny-by-default lint, as we often do when we see guaranteed UB.
Shouldn't we use a keyword for this?
When I originally had this idea, I called it "use-use-everywhere" and, instead of writing x.clone() or x.alias(), I imagined writing x.use. This made sense to me because a keyword seemed like a stronger signal that this was impacting closure desugaring. However, I've changed my mind for a few reasons.
First, Santiago Pastorino gave strong pushback that x.use was going to be a stumbling block for new learners. They now have to see this keyword and try to understand what it means - in contrast, if they see method calls, they will likely not even notice something strange is going on.
The second reason, though, was TC, who argued in the lang-team meeting that all the arguments for why it should be ergonomic to alias a ref-counted value in a closure applied equally well to clone, depending on the needs of your application. I completely agree. As I mentioned earlier, this also addresses the concern I've heard with the Alias trait, which is that there are things you want to ergonomically clone but which don't correspond to "aliases". True.
In general I think that clone (and alias) are fundamental enough to how Rust is used that it's ok to special case them. Perhaps we'll identify other similar methods in the future, or generalize this mechanism, but for now I think we can focus on these two cases.
What about "deferred ref-counting"?
One point that I've raised from time to time is that I would like a solution that gives the compiler more room to optimize ref-counting to avoid incrementing ref-counts in cases where it is obvious that those ref-counts are not needed. An example might be a function like this:
fn use_data(rc: Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}
This function requires ownership of an alias to a ref-counted value, but it doesn't actually do anything with it except read from it. A caller like this one…
use_data(source.alias())
…doesn't really need to increment the reference count, since the caller will be holding a reference the entire time. I often write code like this using a &:
fn use_data(rc: &Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}
so that the caller can do use_data(&source) - this then allows the callee to write rc.alias() in the case that it wants to take ownership.
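For example, here is a sketch of such a callee (it reuses the Data type from the example above and the proposed alias() method, which is roughly what Rc::clone does today):
use std::rc::Rc;

fn use_data(rc: &Rc<Data>, stash: &mut Vec<Rc<Data>>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
    // Only here do we actually need ownership, so only here do we pay
    // for a ref-count increment.
    stash.push(rc.alias());
}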
I've basically decided to punt on addressing this problem. I think folks who are very performance sensitive can use &Arc and the rest of us can sometimes have an extra ref-count increment, but either way, the semantics for users are clear enough and (frankly) good enough.
- Surprisingly to me, clippy::pedantic doesn't have a dedicated lint for unnecessary clones. This particular example does get a lint, but it's a lint about taking an argument by value and then not consuming it. If you rewrite the example to create id locally, clippy does not complain. ↩︎
10 Nov 2025 6:55pm GMT
The Mozilla Blog: Firefox expands fingerprint protections: advancing towards a more private web
With Firefox 145, we're rolling out major privacy upgrades that take on browser fingerprinting - a pervasive and hidden tracking technique that lets websites identify you even when cookies are blocked or you're in private browsing. These protections build on Mozilla's long-term goal of building a healthier, transparent and privacy-preserving web ecosystem.
Fingerprinting builds a secret digital ID of you by collecting subtle details of your setup - ranging from your time zone to your operating system settings - that together create a "fingerprint" identifiable across websites and across browser sessions. Having a unique fingerprint means fingerprinters can continuously identify you invisibly, allowing bad actors to track you without your knowledge or consent. Online fingerprinting is able to track you for months, even when you use any browser's private browsing mode.
Protecting people's privacy has always been core to Firefox. Since 2020, Firefox's built-in Enhanced Tracking Protection (ETP) has blocked known trackers and other invasive practices, while features like Total Cookie Protection and now expanded fingerprinting defenses demonstrate a broader goal: prioritizing your online freedom through innovative privacy-by-design. Since 2021, Firefox has been incrementally enhancing anti-fingerprinting protections targeting the most common pieces of information collected for suspected fingerprinting uses.
Today, we are excited to announce the completion of the second phase of defenses against fingerprinters that linger across all your browsing but aren't in the known tracker lists. With these fingerprinting protections, the number of Firefox users trackable by fingerprinters is reduced by half.
How we built stronger defenses
Drawing from a global analysis of how real people's browsers can be fingerprinted, Mozilla has developed new, unique and powerful defenses against real-world fingerprinting techniques. Firefox is the first browser with this level of insight into fingerprinting and the most effective deployed defenses to reduce it. Like Total Cookie Protection, one of our most innovative privacy features, these new defenses are debuting in Private Browsing Mode and ETP Strict mode initially, while we work to enable them by default.

How Firefox protects you
These fingerprinting protections work on multiple layers, building on Firefox's already robust privacy features. For example, Firefox has long blocked known tracking and fingerprinting scripts as part of its Enhanced Tracking Protection.
Beyond blocking trackers, Firefox also limits the information it makes available to websites - a privacy-by-design approach - that preemptively shrinks your fingerprint. Browsers provide a way for websites to ask for information that enables legitimate website features, e.g. your graphics hardware information, which allows sites to optimize games for your computer. But trackers can also ask for that information, for no other reason than to help build a fingerprint of your browser and track you across the web.
Since 2021, Firefox has been incrementally advancing fingerprinting protections, covering the most pervasive fingerprinting techniques. These include things like how your graphics card draws images, which fonts your computer has, and even tiny differences in how it performs math. The first phase plugged the biggest and most-common leaks of fingerprinting information.
Recent Firefox releases have tackled the next-largest leaks of user information used by online fingerprinters. This ranges from strengthening the font protections to preventing websites from getting to know your hardware details like the number of cores your processor has, the number of simultaneous fingers your touchscreen supports, and the dimensions of your dock or taskbar. The full list of detailed protections is available in our documentation.
Our research shows these improvements cut the percentage of users seen as unique by almost half.

Firefox's new protections are a balance of disrupting fingerprinters while maintaining web usability. More aggressive fingerprinting blocking might sound better, but it is guaranteed to break legitimate website features. For instance, calendar, scheduling, and conferencing tools legitimately need your real time zone. Firefox's approach is to target the leakiest fingerprinting vectors (the tricks and scripts used by trackers) while preserving functionality many sites need to work normally. The end result is a set of layered defenses that significantly reduce tracking without degrading your browsing experience. More details are available about the specific behaviors, as well as how to recognize a problem on a site and disable protections for that site alone, so you always stay in control. The goal: strong privacy protections that don't get in your way.
What's next for your privacy
If you open a Private Browsing window or use ETP Strict mode, Firefox is already working behind the scenes to make you harder to track. The latest phase of Firefox's fingerprinting protections marks an important milestone in our mission to deliver smart privacy protections that work automatically - no extra extensions or configuration needed. As we head into the future, Firefox remains committed to fighting for your privacy, so you get to enjoy the web on your terms. Upgrade to the latest Firefox and take back control of your privacy.

Take control of your internet
Download Firefox
The post Firefox expands fingerprint protections: advancing towards a more private web appeared first on The Mozilla Blog.
10 Nov 2025 2:00pm GMT
The Rust Programming Language Blog: Announcing Rust 1.91.1
The Rust team has published a new point release of Rust, 1.91.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, getting Rust 1.91.1 is as easy as:
rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website.
What's in 1.91.1
Rust 1.91.1 includes fixes for two regressions introduced in the 1.91.0 release.
Linker and runtime errors on Wasm
Most targets supported by Rust identify symbols by their name, but Wasm identifies them with a symbol name and a Wasm module name. The #[link(wasm_import_module)] attribute lets you customize the Wasm module name an extern block refers to:
#[link(wasm_import_module = "hello")]
extern "C" {
    fn world();
}
Rust 1.91.0 introduced a regression in the attribute, which could cause linker failures during compilation ("import module mismatch" errors) or the wrong function being used at runtime (leading to undefined behavior, including crashes and silent data corruption). This happened when the same symbol name was imported from two different Wasm modules across multiple Rust crates.
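A sketch of that situation (the module and symbol names are made up, and the two modules stand in for the two crates described above):
mod crate_a {
    #[link(wasm_import_module = "module_a")]
    extern "C" {
        pub fn shared_symbol();
    }
}

mod crate_b {
    #[link(wasm_import_module = "module_b")]
    extern "C" {
        pub fn shared_symbol();
    }
}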
Rust 1.91.1 fixes the regression. More details are available in issue #148347.
Cargo target directory locking broken on illumos
Cargo relies on locking the target/ directory during a build to prevent concurrent invocations of Cargo from interfering with each other. Not all filesystems support locking (most notably some networked ones): if the OS returns the Unsupported error when attempting to lock, Cargo assumes locking is not supported and proceeds without it.
Cargo 1.91.0 switched from custom code interacting with the OS APIs to the File::lock standard library method (recently stabilized in Rust 1.89.0). Due to an oversight, that method always returned Unsupported on the illumos target, causing Cargo to never lock the build directory on illumos regardless of whether the filesystem supported it.
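The fallback pattern Cargo relies on looks roughly like this sketch (not Cargo's actual code; the function name is made up):
use std::fs::File;
use std::io::ErrorKind;

// Try to take an exclusive lock on the lock file, returning `false`
// (and letting the build proceed without a lock) if the filesystem
// reports that locking is unsupported.
fn try_lock_build_dir(lock_file: &File) -> std::io::Result<bool> {
    match lock_file.lock() {
        Ok(()) => Ok(true),
        Err(e) if e.kind() == ErrorKind::Unsupported => Ok(false),
        Err(e) => Err(e),
    }
}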
Rust 1.91.1 fixes the oversight in the standard library by enabling the File::lock family of functions on illumos, indirectly fixing the Cargo regression.
Contributors to 1.91.1
Many people came together to create Rust 1.91.1. We couldn't have done it without all of you. Thanks!
10 Nov 2025 12:00am GMT
07 Nov 2025
Planet Mozilla
The Mozilla Blog: Introducing early access for Firefox Support for Organizations

Increasingly, businesses, schools, and government institutions deploy Firefox at scale for security, resilience, and data sovereignty. Organizations have fine-grained administrative and orchestration control of the browser's behavior using policies with Firefox and the Extended Support Release (ESR). Today, we're opening early access to Firefox Support for Organizations, a new program that begins operation in January 2026.
What Firefox Support for Organizations offers
Support for Organizations is a dedicated offering for teams who need private issue triage and escalation, defined response times, custom development options, and close collaboration with Mozilla's engineering and product teams.
- Private support channel: Access a dedicated support system where you can open private help tickets directly with expert support engineers. Issues are triaged by severity level, with defined response times and clear escalation paths to ensure timely resolution.
- Discounts on custom development: Paid support customers get discounts on custom development work for integration projects, compatibility testing, or environment-specific needs. With custom development as a paid add-on to support plans, Firefox can adapt with your infrastructure and third-party updates.
- Strategic collaboration: Gain early insight into upcoming development and help shape the Firefox Enterprise roadmap through direct collaboration with Mozilla's team.
Support for Organizations adds a new layer of help for teams and businesses that need confidential, reliable, and customized levels of support. All Firefox users will continue to have full access to existing public resources including documentation, the knowledge base, and community forums, and we'll keep improving those for everyone in the future. Support plans will help us better serve users who rely on Firefox for business-critical and sensitive operations.
Get in touch for early access
If these levels of support are interesting for your organization, get in touch using our inquiry form and we'll get back to you with more information.

Firefox Support for Organizations
Get early access
The post Introducing early access for Firefox Support for Organizations appeared first on The Mozilla Blog.
07 Nov 2025 12:00pm GMT