03 Dec 2025
Planet Mozilla
J.C. Jones: Reflecting on 10 years of Let’s Encrypt
My friend Christophe Brocas has just published a retrospective on the ten years since we unveiled the ACME protocol to the world. He interviewed me and some colleagues for the piece, and I recommend it! There are even some nice comments on HackerNews, which always makes me smile.
It's been fun to think back on the early days that marked such a dramatic inflection point in my career. In early 2014 I was still working on selling turn-key PKI systems based on my SAIFE framework, though the company had been dealt quite a blow by the 2013 U.S. Federal Government shutdown. Having just constructed a certificate authority that would go on to be added to relevant trust lists, the freshness of that experience became a key part of my recruitment into what became Let's Encrypt.
Joining Mozilla in Q4 2014 (basically 3 weeks after this blog post), my new manager Richard Barnes introduced me immediately to Josh Aas and the secret "build a free CA" project. It was to be a side project for me, alongside coming up to speed on NSS. But this was a very fun side project: Given 38U in one datacenter and 62U in a second, design a network that exceeds WebTrust requirements, is usable and maintainable by a small team, and build a functional CA out of it in six months.
Naturally, it actually took thirteen months.
But we pulled it off. We aggressively kept everything as simple as we could; the one bit of deliberate complexity was structuring Boulder, the CA software, as microservices in order to have strong network security partitions.
A considerable amount has been written about what happened then. There's also a recording of me talking a bit about it shortly after.
But thinking back ten years now, to that day on 3 December 2015 when I, sick in bed and operating dose-to-dose on fever reducers, had the privilege of running the commands that opened the public beta… what a ride.
While I've done things since, I can't imagine anything in my career topping helping to launch Let's Encrypt.
03 Dec 2025 7:00am GMT
This Week In Rust: This Week in Rust 628
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
Foundation
Newsletters
Project/Tooling Updates
- Wasmi 1.0 - WebAssembly Interpreter Stable At Last
- hyper-util Composable Pools
- Fall Updates: Standard Library Support with vexide 0.8.0!
- 3DCF/doc2dataset v0.1.0 - Rust document-to-dataset pipeline for RAG & LLM fine-tuning
- PGM-Extra: High-performance learned index structures for Rust
Observations/Thoughts
- In defense of lock poisoning in Rust
- How CRDTs and Rust are revolutionizing distributed systems and real-time applications
- KCL part 1: units
- New rust lint: function_casts_as_integer
- [audio] Netstack.FM episode 16 - WebRTC and Sans IO with Martin Algesten
- [audio] Canonical with Jon Seager - Rust in Production Podcast
Rust Walkthroughs
- The Impatient Programmer's Guide to Bevy and Rust: Chapter 3 - Let The Data Flow
- Cross-Compiling Rust for Raspberry Pi and making CI
- Rootless pings in Rust
- Mutation testing for librsvg
- [video] impl Rust: One Billion Row Challenge
Miscellaneous
- The Rust Africa Hackathon 2026
- Ferrous Systems Achieves IEC 61508 (SIL 2) Certification for Rust Core Library Subset
Crate of the Week
This week's crate is corosensei, a crate that allows you to write stackful coroutines on stable Rust.
Thanks to Christiaan for the suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- Rustikon 2026 | CFP closes 2025-11-24 | Warsaw, Poland | 2026-03-19 - 2026-03-20 | Event Website
- TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20
- RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
509 pull requests were merged in the last week
Compiler
- add Box::clone_from_ref and similar under feature(clone_from_ref)
- add Command::get_env_clear
- add a diagnostic attribute for special casing const bound errors for non-const impls
- collapse constness query match logic
Library
- add impl TrustedLen on BTree{Map,Set} iterators
- constify from_fn, try_from_fn, try_map, map
- implement Iterator::{exactly_one, collect_array}
- implement clamp_magnitude method for primitive floats & signed integers
- in BTreeMap::eq, do not compare the elements if the sizes are different
- num: implement uint_gather_scatter_bits feature for unsigned integers
- offload intrinsic
- optimize slice::Iter::next_chunk
- stabilize asm_cfg
- stabilize maybe_uninit_slice
- stabilize maybe_uninit_write_slice
- stabilize unchecked_neg and unchecked_shifts
Cargo
- clean: clean host builds with new layout
- completion: put host-tuple before actual tuples
- completions: include "all" in cargo tree --target candidates
- config-include: remove support of single string shorthand
- lints: show lint error number
- clean: add --workspace support
- do not lock the artifact-dir for check builds + fix uplifting
- properly validate crate names in cargo install
Rustdoc
- fix bad intra-doc-link preprocessing
- fix invalid link generation for type alias methods
- fix rustdoc search says "Consider searching for "null" instead." #149324
Clippy
- manual_ilog2: new lint
- equatable_if_let: don't lint if pattern or initializer come from expansion
- add ptr_offset_by_literal lint
- clippy lints page improvements and cleanups
- fix implicit_hasher wrongly unmangled macros
- fix large_stack_frames false positive on compiler generated targets
- fix display of dropdown menu "buttons"
- fix: zero_repeat_side_effects misses curlies
- new lint: decimal_bitwise_operands
- stop inserting redundant parentheses around desugared match expressions
Rust-Analyzer
- add multiple generate for enum: generate is, as, try_into
- build releases with static CRT for -windows-msvc targets
- completions: fix completions disregarding snippet capabilities
- feature: set enclosing_range field on SCIP output
- fix Display scope inlay hints after closing brace for more types of blocks
- fix syntax_editor duplicated changed element
- fix complete after extern, add crate completion
- fix not complete after inner-attr in source-file
- fix not complete type alias in pattern
- fix skipiter not applicable in autoderef
- do not try to connect via postcard to proc-macro-srv
- don't run cache priming when disabled in settings
- fix proc-macro-srv passing invalid extra none group to proc-macros
- fix proc-macro-srv protocol read implementation
- pass the correct per-token (not global) edition when expanding macro_rules
- rewrite dyn trait lowering to follow rustc
- support multiple enable in #[target_feature]
- use per-token, not global, edition in the parser
- use root hygiene for speculative resolution
- perf: use one query per crate for lang items, not one per lang item
- proc-macro-srv: fix <TokenStream as Display>::fmt impl producing trailing whitespace
- proc-macro-srv: fix <TokenStream as Display>::fmt impl rendering puncts as u8
- proc-macro-srv: fix unnecessary subtree wrapping in protocol
- re-introduce attribute rewrite
Rust Compiler Performance Triage
A fairly quiet week overall, despite a slightly higher than usual amount of merged PRs.
Triage done by @simulacrum. Revision range: b64df9d1..eca9d93f
3 Regressions, 1 Improvement, 4 Mixed; 3 of them in rollups. 43 artifact comparisons made in total.
See the [full report](https://github.com/rust-lang/rustc-perf/blob/master/triage/2025/2025-12-02.md) for details.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs entered Final Comment Period this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- don't normalize where-clauses when checking well-formedness
- Stabilize const_mul_add
- Do not propagate unnecessary closure constraints
- Not linting irrefutable_let_patterns on let chains
- Make closure capturing have consistent and correct behaviour around patterns
Compiler Team (MCPs only)
- Use annotate-snippets as the default emitter
- Promote powerpc64-unknown-linux-musl to tier 2 with host tools
- Proposal for a dedicated test suite for the parallel frontend
- Promote tier 3 riscv32 ESP-IDF targets to tier 2
- Proposal for Adapt Stack Protector for Rust
- Give integer literals a sign instead of relying on negation expressions
- Also enable ICE file dumps on stable
- New Tier-3 target proposal: loongarch64-linux-android
Rust RFCs
- Adding a crates.io Security tab
Cargo
- feat: stabilize -Zconfig-include
No Items entered Final Comment Period this week for Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
Upcoming Events
Rusty Events between 2025-12-03 - 2025-12-31 🦀
Virtual
- 2025-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
- 2025-12-03 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2025-12-04 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-05 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-06 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2025-12-07 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-09 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-10 | Virtual (Girona, ES) | Rust Girona
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2025-12-16 | Virtual (Washington, DC, US) | Rust DC
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-17 | Virtual (Girona, ES) | Rust Girona
- 2025-12-18 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-23 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-25 | Virtual (Nürnberg, DE) | Rust Nuremberg
Asia
- 2025-12-08 | Tokyo, JP | Rust Global: Tokyo
- 2025-12-20 | Bangalore, IN | Rust Bangalore
Europe
- 2025-12-03 | Girona, ES | Rust Girona
- 2025-12-03 | Oxford, UK | Oxford ACCU/Rust Meetup.
- 2025-12-04 | Vienna, AT | Rust Vienna
- 2025-12-06 | Stockholm, SE | Stockholm Rust
- 2025-12-08 | Dortmund, DE | Rust Dortmund
- 2025-12-08 | Paris, FR | Rust Paris
- 2025-12-10 | London, UK | Rust London User Group
- 2025-12-10 | München, DE | Rust Munich
- 2025-12-10 | Reading, UK | Reading Rust Workshop
- 2025-12-15 | Trondheim, NO | Rust Trondheim
- 2025-12-16 | Bergen, NO | Rust Bergen
- 2025-12-16 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
- 2025-12-19 | Lyon, FR | Rust Lyon
North America
- 2025-12-04 | México City, MX | Rust MX
- 2025-12-04 | Saint Louis, MO, US | STL Rust
- 2025-12-05 | New York, NY, US | Rust NYC
- 2025-12-06 | Boston, MA, US | Boston Rust Meetup
- 2025-12-10 | Chicago, IL, US | Chicago Rust Meetup
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Lehi, UT, US | Utah Rust
- 2025-12-11 | Mountain View, CA, US | Hacker Dojo
- 2025-12-11 | San Diego, CA, US | San Diego Rust
- 2025-12-13 | Boston, MA, US | Boston Rust Meetup
- 2025-12-16 | San Francisco, CA, US | San Francisco Rust Study Group
- 2025-12-17 | Austin, TX, US | Rust ATX
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-17 | Spokane, WA, US | Spokane Rust
- 2025-12-20 | Boston, MA, US | Boston Rust Meetup
Oceania
- 2025-12-11 | Brisbane City, QL, AU | Rust Brisbane
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
[...] just returning an error is not error handling, it is just user space unwinding.
Thanks to Aleksander Krauze for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
03 Dec 2025 5:00am GMT
The Rust Programming Language Blog: crates.io: Malicious crates evm-units and uniswap-utils
Summary
On December 2nd, the crates.io team was notified by Olivia Brown from the Socket Threat Research Team of two malicious crates which were downloading a payload that was likely attempting to steal cryptocurrency.
These crates were:
- evm-units - 13 versions published in April 2025, downloaded 7257 times
- uniswap-utils - 14 versions published in April 2025, downloaded 7441 times, used evm-units as a dependency
Actions taken
The user in question, ablerust, was immediately disabled, and the crates in question were deleted from crates.io shortly after. We have retained the malicious crate files for further analysis.
The deletions were performed at 22:01 UTC on December 2nd.
Analysis
Socket has published their analysis in a blog post.
These crates had no dependent downstream crates on crates.io.
Thanks
Our thanks to Olivia Brown from the Socket Threat Research Team for reporting the crates. We also want to thank Carol Nichols from the crates.io team and Walter Pearce and Adam Harvey from the Rust Foundation for aiding in the response.
03 Dec 2025 12:00am GMT
The Rust Programming Language Blog: Lessons learned from the Rust Vision Doc process
Starting earlier this year, a group of us set on a crazy quest: to author a "Rust vision doc". As we described it in the original project goal proposal:
The Rust Vision Doc will summarize the state of Rust adoption -- where is Rust adding value? what works well? what doesn't? -- based on conversations with individual Rust users from different communities, major Rust projects, and companies large and small that are adopting Rust.
Over the course of this year, the Vision Doc group has gathered up a lot of data. We began with a broad-based survey that got about 4200 responses. After that, we conducted over 70 interviews, each one about 45 minutes, with as broad a set of Rust users as we could find.[1]
This is the first of a series of blog posts covering what we learned throughout that process and what recommendations we have to offer as a result. This first post is going to go broad. We'll discuss the process we used and where we think it could be improved going forward. We'll talk about some of the big themes we heard -- some that were surprising and others that were, well, not surprising at all. Finally, we'll close with some recommendations for how the project might do more work like this in the future.
The questions we were trying to answer
One of the first things we did in starting out with the vision doc was to meet with a User Research expert, Holly Ellis, who gave us a quick tutorial on how User Research works.[2] Working with her, we laid out a set of research questions that we wanted to answer. Our first cut was very broad, covering three themes:
- Rust the technology:
- "How does Rust fit into the overall language landscape? What is Rust's mission?"
- "What brings people to Rust and why do they choose to use it for a particular problem...?"
- "What would help Rust to succeed in these domains...?" (e.g., network systems, embedded)
- "How can we scale Rust to industry-wide adoption? And how can we ensure that, as we do so, we continue to have a happy, joyful open-source community?"
- Rust the global project:
- "How can we improve the experience of using Rust for people across the globe?"
- "How can we improve the experience of contributing to and maintaining Rust for people across the globe?"
- Rust the open-source project:
- "How can we tap into the knowledge, experience, and enthusiasm of a growing Rust userbase to improve Rust?"
- "How can we ensure that individual or volunteer Rust maintainers are well-supported?"
- "What is the right model for Foundation-project interaction?"
Step 1: Broad-based survey
Before embarking on individual interviews, we wanted to get a broad snapshot of Rust usage. We also wanted to find a base of people that we could talk to. We created a survey that asked a few short "demographic" questions -- e.g., where does the respondent live, what domains do they work on, how would they rate their experience -- and some open-ended questions about their journey to Rust, what kind of projects they feel are a good fit for Rust, what they found challenging when learning, etc. It also asked for (optional) contact information.
We got a LOT of responses -- over 4200! Analyzing this much data is not easy, and we were very grateful to Kapiche, who offered us free use of their tool to work through the data. ❤
The survey is useful in two ways. First, it's an interesting data-set in its own right, although you have to be aware of selection bias. Second, the survey also gave us something that we can use to cross-validate some of what we heard in 1:1 interviews and to look for themes we might otherwise have missed. And of course it gave us additional names of people we can talk to (though most respondents didn't leave contact information).
Step 2: Interviewing individuals
The next step after the survey was to get out there and talk to people. We sourced people from a lot of places: the survey and personal contacts, of course, but we also sat down with people at conferences and went to meetups. We even went to a Python meetup in an effort to find people who were a bit outside the usual "Rust circle".
When interviewing people, the basic insight of User Experience research is that you don't necessarily ask people the exact questions you want to answer. That is likely to get them speculating and giving you the answer that they think they "ought" to say. Instead, you come at it sideways. You ask them factual, non-leading questions. In other words, you certainly don't say, "Do you agree the borrow checker is really hard?" And you probably don't even say, "What is the biggest pain point you had with Rust?" Instead, you might say, "What was the last time you felt confused by an error message?" And then go from there, "Is this a typical example? If not, what's another case where you felt confused?"
To be honest, these sorts of "extremely non-leading questions" are kind of difficult to do. But they can uncover some surprising results.
We got answers -- but not all the answers we wanted
4200 survey responses and 70 interviews later, we got a lot of information -- but we still don't feel like we have the answers to some of the biggest questions. Given the kinds of questions we asked, we got a pretty good view on the kinds of things people love about Rust and what it offers relative to other languages. We got a sense for the broad areas that people find challenging. We also learned a few things about how the Rust project interacts with others and how things vary across the globe.
What we really don't have is enough data to say "if you do X, Y, and Z, that will really unblock Rust adoption in this domain". We just didn't get into enough technical detail, for example, to give guidance on which features ought to be prioritized, or to help answer specific design questions that the lang or libs team may consider.
One big lesson: there are only 24 hours in a day
One of the things we learned was that you need to stay focused. There were so many questions we wanted to ask, but only so much time in which to do so. Ultimately, we wound up narrowing our scope in several ways:
- we focused primarily on the individual developer experience, and only had minimal discussion with companies as a whole;
- we dove fairly deep into one area (the Safety Critical domain) but didn't go as deep into the details of other domains;
- we focused primarily on Rust adoption, and in particular did not even attempt to answer the questions about "Rust the open-source project".
Another big lesson: haters gonna... stay quiet?
One thing we found surprisingly difficult was finding people to interview who didn't like Rust. 49% of survey respondents, for example, rated their Rust comfort as 4 or 5 out of 5, and only 18.5% said 1 or 2. And of those, only a handful gave contact information.
It turns out that people who think Rust isn't worth using mostly don't read the Rust blog or want to talk about that with a bunch of Rust fanatics.[3] This is a shame, of course, as likely those folks have a lot to teach us about the boundaries of where Rust adds value. We are currently doing some targeted outreach in an attempt to grow our scope here, so stay tuned; we may get more data.
One fun fact: enums are Rust's underappreciated superpower
We will do a deeper dive into the things people say that they like about Rust later (hint: performance and reliability both make the cut). One interesting thing we found was the number of people who talked specifically about Rust enums, which allow you to package up the state of your program along with the data it has available in that state. Enums are a concept that Rust adapted from functional languages like OCaml and Haskell and brought into the systems programming setting.
"The usage of Enum is a new concept for me. And I like this concept. It's not a class and it's not just a boolean, limited to false or true. It has different states." -- New Rust developer
"Tagged unions. I don't think I've seriously used another production language which has that. Whenever I go back to a different language I really miss that as a way of accurately modeling the domain." -- Embedded developer
Where do we go from here? Create a user research team
When we set out to write the vision doc, we imagined that it would take the form of an RFC. We imagined that RFC identifying key focus areas for Rust and making other kinds of recommendations. Now that we've been through it, we don't think we have the data we need to write that kind of RFC (and we're also not sure if that's the right kind of RFC to write). But we did learn a lot and we are convinced of the importance of this kind of work.
Therefore, our plan is to do the following. First, we're going to write-up a series of blog posts diving into what we learned about our research questions along with other kinds of questions that we encountered as we went.
Second, we plan to author an RFC proposing a dedicated user research team for the Rust org. The role of this team would be to gather data of all forms (interviews, surveys, etc) and make it available to the Rust project. And whenever they can, they would help to connect Rust customers directly with people extending and improving Rust.
The vision doc process was in many ways our first foray into this kind of research, and it taught us a few things:
- First, we have to go broad and deep. For this first round, we focused on high-level questions about people's experiences with Rust, and we didn't get deep into technical blockers. This gives us a good overview but limits the depth of recommendations we can make.
- Second, to answer specific questions we need to do specific research. One of our hypotheses was that we could use UX interviews to help decide thorny questions that come up in RFCs -- e.g., the notorious debate between await x and x.await from yesteryear. What we learned is "sort of". The broad interviews we conducted did give us information about what kinds of things are important to people (e.g., convenience vs reliability, and so forth), and we'll cover some of that in upcoming write-ups. But to shed light on specific questions (e.g., "will x.await be confused for a field access?") will really require more specific research. This may be interviews but it could also be other kinds of tests. These are all things though that a user research team could help with.
- Third, we should find ways to "open the data" and publish results incrementally. We conducted all of our interviews with a strong guarantee of privacy and we expect to delete the information we've gathered once this project wraps up. Our goal was to ensure people could talk in an unfiltered way. This should always be an option we offer people -- but that level of privacy has a cost, which is that we are not able to share the raw data, even widely across the Rust teams, and (worse) people have to wait for us to do analysis before they can learn anything. This won't work for a long-running team. At the same time, even for seemingly innocuous conversations, posting full transcripts of conversations openly on the internet may not be the best option, so we need to find a sensible compromise.
1. "As wide a variety of Rust users as we could find" -- the last part is important. One of the weaknesses of this work is that we wanted to hear from more Rust skeptics than we did. ↩
2. Thanks Holly! We are ever in your debt. ↩
3. Shocking, I know. But, actually, it is a little -- most programmers love telling you how much they hate everything you do, in my experience? ↩
03 Dec 2025 12:00am GMT
02 Dec 2025
Planet Mozilla
The Mozilla Blog: Fast Company names Firefox as a ‘Brand That Matters’

Fast Company has named Firefox to its 2025 "Brands That Matter" list, recognizing companies that go beyond acquiring customers to build meaningful relevance and cultural impact. For us, that honor reflects a simple truth about Mozilla's mission: We build Firefox to give people agency and choice every time they go online.
In 2025 this brand promise showed up clearly in the features we shipped. One standout was Shake to Summarize, our new AI-powered feature that helps you cut through information overload. With a quick shake or tap, Firefox creates a clean summary of a webpage so you can navigate information with more ease. TIME Magazine gave Shake to Summarize a special mention in its Best Inventions of 2025 list, highlighting how it turns the browser into a helpful assistant instead of a passive window.
Security and privacy remained a constant focus too. Firefox continued to patch high severity vulnerabilities quickly while reinforcing protections that limit tracking and keep more of your data in your hands.
This year also reminded us that Firefox is much more than a browser. It is a global community. Volunteers, contributors, and supporters helped shape everything from accessibility improvements to support forums to the evolving tab grouping experience. Their work shows up in the small details that make browsing calmer and more intuitive. When people choose Firefox, they join a network of individuals who want the internet to feel more open and more human.
Being named a Brand That Matters is a milestone, but it's also an ongoing commitment to delivering on our brand promise. As you head into a new year and think about how you want your digital life to feel, you can pick a browser that reflects your values. Choose Firefox, and choose an internet built around your agency and your choices.

Take control of your internet
Download Firefox
The post Fast Company names Firefox as a 'Brand That Matters' appeared first on The Mozilla Blog.
02 Dec 2025 8:34pm GMT
The Mozilla Blog: Data War goes digital: Firefox’s card game is now online
Last month, Firefox turned 21, marking two decades of building a web that reflects creativity, independence and trust. At TwitchCon, we celebrated by launching billionaires into space and launching a new card game, Data War.
Billionaire Blast Off started with a simple premise: send billionaires into space and have fun doing it. With Data War, we created a fun and often chaotic game where you compete to win a one-way ticket to space for a data-hungry billionaire. We were thrilled that so many people at TwitchCon had a blast playing it.
You can download your own physical deck of Data War here, and now the chaos comes to your browser. The digital version of Data War is now free to play right online.

Jump in, stack your deck and blast off with our Data War digital game
Play now
From convention floor to your screen
During TwitchCon, visitors packed tables to duel it out, shouting "Data Grab!" and swapping decks mid-game as billionaires blasted off into orbit. The new online version brings that same energy to everyone.
TwitchCon attendees playing Data War
"If you laugh at something, you have power over it," said Dave Rowley, Executive Creative Director at Mondo Robot, Firefox's partner behind Data War. "We took the approach of applying absurdity as a cathartic device wherever we could. That allowed us to balance the realities of billionaires profiting off your data with a sort of reductive sarcasm, creating an outlet for frustration that lets you reclaim some control through a genuinely fun and accessible play experience."
How it started and evolved
Rowley worked with the Firefox team to design Data War to be instantly learnable and endlessly chaotic. Think War meets Exploding Kittens: data is currency, billionaires are unpredictable and Firefox shows up to remind players they are the ones who really are in control.
To bring Firefox's perspective into the game's creation, the team invited people who actually build Firefox to get involved. One of them was Philip Luk, Firefox's Director of Engineering, who playtested early versions of the physical game with his teenage kids. Their feedback helped shape Data War into something more dynamic than classic War.
"The game aims to spotlight how big tech companies and their billionaire owners profit from our data ," said Philip Luk, Firefox Director of Engineering. "My kids and I contributed ideas for new cards that add different strategic twists, make it more than just flipping cards - it's about reacting, laughing, and watching the chaos unfold."
"Playtesting with my teens helped us see where we could make it more unpredictable and fun," Luk added. "Those moments of surprise are what made the game engaging."
After TwitchCon, the team set out to create a digital version of Data War. A lighter, browser-based game that keeps the spirit of chaos and humor but can be played in quick bursts anytime.
"We wanted to design a digital version of Data War that anyone could play whenever they needed a quick break," said Benton Persons, Marketing Partnerships and Activations Lead at Mozilla. "That's what Firefox is all about, taking the stress out of being online because you're in control of your experience. And really, who doesn't want to launch little billionaires into space between meetings?"
Built for fun, powered by values
Every match is a reminder of the absurd things billionaires and Big Tech do to profit from your personal data.

But it's also a reminder that players are the ones in control and ultimately launch those billionaires into space.

"Our goal was simple: make Data War just as chaotic and fun in your browser as it is on a table. So we streamlined the rules and added digital-only moments like animations, fast turns and story hits, like the Data Grab minigame and Billionaire Blast Off win sequences, that keep every round feeling fresh, even when you're playing solo," said Rowley. "When the table erupts in laughter, that's when you know you've won."
Play now
You can play Data War Digital right now at: https://billionaireblastoff.firefox.com/datawar
Take it offline and download your own version of the physical deck to play with friends, because launching billionaires into space is even better together.
The post Data War goes digital: Firefox's card game is now online appeared first on The Mozilla Blog.
02 Dec 2025 4:00pm GMT
Mozilla Thunderbird: State of the Thunder 14: The 2026 Mobile Roadmap

Welcome back to the latest State of the Thunder. In the last Community Office Hours, Heather and Monica sat down with members of the mobile team in a retrospective to celebrate the first year of the Thunderbird for Android app. In this recording, however, Alessandro is leading viewers through the upcoming mobile roadmap, both for Android and iOS.
Looking ahead for Android
Key Priorities
Next year's top priority is rearchitecture and core maintenance. The underlying code behind the Thunderbird for Android app, which was built on top of K-9 Mail, is 15 years old. That's ancient in software terms. This work will make the app more stable and reduce the odds of developers breaking the app through their changes. This is a broad initiative with lots of elements. This includes bringing consistency across apps, including UI. For several reasons, we won't be continuing with Material UI. Instead, we'll be using our own homegrown Bolt UI.
Another feature the mobile team would like to prioritize is continuity with Thunderbird Pro. Since the exact delivery dates for these services have not yet been defined, setting priorities is difficult, but the team has confirmed the Thundermail integration will come first. Integrating Send, our end-to-end encryption file share, will be trickier when it comes to mobile. However, this may ultimately enable encrypted sync for user account settings as a future feature.
The team also plans to modernize the Message List and Message View, in addition to ensuring they work well. Since users probably spend most of their time on these screens, this is key to get right. We want an experience that compares favorably with other mobile mail apps.
Additional Goals
Several features and feature explorations fill out the rest of the Android roadmap. These include HTML signatures that can be synced from the desktop app. The team will also explore JMAP support, Exchange support, and calendar support. It's been a while since the Android app has added a new protocol, but Thundermail includes support for JMAP, and the desktop app monthly release now includes Exchange support. It's important for users to have a similar experience across the apps. For calendar explorations, we'll determine whether it's better to integrate with the native Android calendar or build a calendar section into the app.
Prioritization
Our urgent priorities for next year are the Rearchitecture and Core Maintenance, and the Message View improvements. If we complete both of these goals with our growing yet still small team, we'll consider that a realistic success!
Our plans for iOS
The mobile team also includes our iOS developers, and we have some broad goals for iOS development next year. iOS is, Alessandro notes, a locked-in, opinionated platform, and we want to make future iOS app users comfortable using Thunderbird on their chosen platform. Any iOS roadmap also needs to balance developer and community satisfaction. Prioritizing IMAP as the first supported protocol reflects this, as most users still rely on it. Once that's completed, we can begin work on JMAP to help lead the way for other clients to adopt it. This is the same principle behind adding Exchange support to our apps. While it may be a proprietary protocol, adding it opens up Thunderbird to many people who want to use it but currently can't.
Watch the Video (also on PeerTube)
Listen to the Podcast
The post State of the Thunder 14: The 2026 Mobile Roadmap appeared first on The Thunderbird Blog.
02 Dec 2025 3:04pm GMT
26 Nov 2025
Planet Mozilla
This Week In Rust: This Week in Rust 627
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
- Switching to Rust's own mangling scheme on nightly | Rust Blog
- Interview with Jan David Nose | Rust Blog
- This Development-cycle in Cargo: 1.92 | Inside Rust Blog
Foundation
Project/Tooling Updates
- SeaORM 2.0: Nested ActiveModel and Cascade Operations
- Symbolica 1.0: Symbolic mathematics in Rust
- APT Rust requirement raises questions
Observations/Thoughts
- Running real-time Rust
- A look at Rust from 2012
- Making the case that Cargo features could be improved to alleviate Rust compile times
- How Cloudflare uses Rust to serve (and break) millions of websites at 50+ million requests per second
- [audio] Netstack.FM episode 15 - Pingora with Edward and Noah from Cloudflare
- [video] Grind: Java Deserves Modern Tooling*
Rust Walkthroughs
- Rust Unit Testing: File reading
- Practical Performance Lessons from Apache DataFusion
- Describing binary data with Deku
Miscellaneous
- Rust For Linux Kernel Co-Maintainer Formally Steps Down
- JetBrains supports the open source Rust projects Ratatui and Biome
- filtra.io | Toyota's "Tip Of The Spear" Is Choosing Rust
Crate of the Week
This week's crate is grapheme-utils, a library of functions to ergonomically work with Unicode graphemes.
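To see why grapheme-aware helpers are useful at all, here is a minimal sketch using only the standard library (grapheme-utils' own API is not shown in this issue, so none of its functions are used here): Rust's `str::chars` iterates Unicode scalar values, not the user-perceived characters (grapheme clusters) that such libraries operate on.

```rust
// Why grapheme-aware helpers matter: `str::chars` yields Unicode scalar
// values, not user-perceived characters. "e" followed by U+0301
// (combining acute accent) renders as a single grapheme "é", yet it
// counts as two chars and occupies three UTF-8 bytes.
fn main() {
    let s = "e\u{301}"; // "é" built from two scalar values
    assert_eq!(s.chars().count(), 2); // two code points...
    assert_eq!(s.len(), 3);           // ...three UTF-8 bytes
    // A grapheme-cluster iterator (what crates in this space provide)
    // would report exactly one grapheme for this string.
    println!("chars = {}, bytes = {}", s.chars().count(), s.len());
}
```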
Thanks to rustkins for the self-suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
- No calls for testing were issued this week by Rust, Cargo, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
RFCs
Rust
Rustup
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- Rustikon 2026 | CFP closes 2025-11-24 | Warsaw, Poland | 2026-03-19 - 2026-03-20 | Event Website
- TokioConf 2026 | CFP closes 2025-12-08 | Portland, Oregon, USA | 2026-04-20
- RustWeek 2026 | CFP closes 2025-12-31 | Utrecht, The Netherlands | 2026-05-19 - 2026-05-20
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
456 pull requests were merged in the last week
Compiler
- allow unnormalized types in drop elaboration
- avoid encoding non-constness or non-asyncness in metadata
- fix MaybeUninit codegen using GVN
- fix suggestion for the `cfg!` macro
- handle cycles when checking impl candidates for `doc(hidden)`
- inherent const impl
- recommend using a HashMap if a HashSet's second generic parameter doesn't implement BuildHasher
- reduce confusing `unreachable_code` lints
- replace OffsetOf by an actual sum of calls to intrinsic
- sess: default to v0 symbol mangling on nightly
- turn moves into copies after copy propagation
- warn against calls which mutate an interior mutable `const`-item
Library
- add `bit_width` for unsigned `NonZero<T>`
- alloc: fix `Debug` implementation of `ExtractIf`
- make SIMD intrinsics available in `const`-contexts
- match `<OsString as Debug>::fmt` to that of str
- see if this is the time we can remove `layout::size_align`
- unwrap ret ty of `iter::ArrayChunks::into_remainder`
- v0 mangling for std on nightly
- hashbrown: add `HashTable` methods related to the raw bucket index
- hashbrown: allow providing the key at insertion time for EntryRef
Cargo
- docs(guide): When suggesting alt dev profile, link to related issue
- feat(generate-lockfile): Add unstable `--publish-time` flag
- feat(tree): Add more native completions
- fix(bindeps): do not propagate artifact dependency to proc macro or build deps
- fix(config-include): disallow glob and template syntax
- fix(package): exclude target/package from backups
- refactor(timings): separate data collection and presentation
- test(config-include): include always relative to including config
- enable `CARGO_CFG_DEBUG_ASSERTIONS` in build scripts based on profile
- feat: emit a warning when both `package.publish` and `--index` are specified
- test: re-enable test since not flaky anymore
Rustdoc
- rustdoc-json: add rlib path to ExternalCrate to enable robust crate resolution
- rustdoc: make mergeable crate info more usable
Clippy
- `explicit_deref_methods`: don't lint in `impl Deref(Mut)`
- add `large-error-ignored` config-knob
- fix `useless_asref` suggests wrongly when used in ctor
- fix wrongly unmangled macros for `transmute_ptr_to_ptr` and `transmute_bytes_to_str`
- taking a raw pointer on a union field is a safe operation
Rust-Analyzer
- add `unsafe(…)` attribute completion
- add pretty number for `add_explicit_enum_discriminant`
- add semantic tokens for deprecated items
- add deprecated semantic token for extern crate shorthand
- add assist to convert char literal
- allow inferring array sizes
- basic support for declarative attribute/derive macros
- completion `= $0` after keyval cfg predicate
- derive ParamEnv from GenericPredicates
- don't suggest duplicate `const` completions `raw`
- enhance `remove_parentheses` assist to handle return expressions
- extract function panics on more than one usage of variable in macro
- fix hit `incorrect_case` on `no_mangle` static items
- fix not applicable on `and` for `replace_method_eager_lazy`
- fix not fill guarded match arm for `add_missing_match_arms`
- fix trailing newline in `tool_path`
- fix field completion in irrefutable patterns
- fix formatting request blocking on `crate_def_map` query
- fix parameter info with missing arguments
- fix some inference of patterns
- include all target types with paths outside package root
- infer range patterns correctly
- make dyn inlay hints configurable
- make postfix completion handle all references correctly
- move visibility diagnostics for fields to correct location
- never remove parens from prefix ops with valueless return/break/continue
- parse cargo config files with origins
- remove some deep normalizations from infer
- rewrite method resolution to follow rustc more closely
- show no error when parameters match macro names
- implement precedence for `print_hir`
- improve assist qualified to top when on first segment
- infer range pattern fully
- integrate postcard support into proc-macro server CLI
- optimize `SmolStr::clone`: 4-5x speedup inline, 0.5x heap (slow down)
- perf: improve start up time
- perf: prime trait impls in cache priming
- perf: produce less progress reports
- perf: reduce allocations in `try_evaluate_obligations`
- print more macro information in `DefMap` dumps
- proc-macro-srv: reimplement token trees via immutable trees
- support multiple variant for `generate_from_impl_for_enum`
- use inferred type in "extract type as type alias" assist and display inferred type placeholder `_` inlay hints
Rust Compiler Performance Triage
Only a handful of performance-related changes landed this week. The largest one was changing the default name mangling scheme in nightly to the v0 version, which produces slightly larger symbol names, so it had a small negative effect on binary sizes and compilation time.
Triage done by @kobzol. Revision range: 6159a440..b64df9d1
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.9% | [0.3%, 2.7%] | 48 |
| Regressions ❌ (secondary) | 0.9% | [0.2%, 2.1%] | 25 |
| Improvements ✅ (primary) | -0.5% | [-6.8%, -0.1%] | 33 |
| Improvements ✅ (secondary) | -0.5% | [-1.4%, -0.1%] | 53 |
| All ❌✅ (primary) | 0.4% | [-6.8%, 2.7%] | 81 |

1 Regression, 2 Improvements, 5 Mixed; 1 of them in rollups. 28 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- Make closure capturing have consistent and correct behaviour around patterns
- misc coercion cleanups and handle safety correctly
- Implement `TryFrom<char>` for `usize`
- Contracts: primitive ownership assertions: `owned` and `block`
- const validation: remove check for mutable refs in final value of const
No Items entered Final Comment Period this week for Compiler Team (MCPs only), Cargo, Rust RFCs, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
- RFC: Exhaustive traits. Traits that enable cross trait casting between trait objects.
- CMSE calling conventions
- `RUSTC_ALLOW_UNSTABLE_<feature>`: a `RUSTC_BOOTSTRAP` alternative
- Target Stages, an improvement of the incremental system
Upcoming Events
Rusty Events between 2025-11-26 - 2025-12-24 🦀
Virtual
- 2025-11-26 | Virtual (Girona, ES) | Rust Girona | Silicon Girona
- 2025-11-27 | Virtual (Buenos Aires, AR) | Rust en Español
- 2025-11-30 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-02 | Virtual (London, UK) | Women in Rust
- 2025-12-03 | Virtual (Buffalo, NY, US) | Buffalo Rust Meetup
- 2025-12-03 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2025-12-04 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-05 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-06 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2025-12-07 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2025-12-09 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2025-12-10 | Virtual (Girona, ES) | Rust Girona
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2025-12-16 | Virtual (Washington, DC, US) | Rust DC
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-17 | Virtual (Girona, ES) | Rust Girona
- 2025-12-18 | Virtual (Berlin, DE) | Rust Berlin
- 2025-12-23 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
Asia
- 2025-12-08 | Tokyo, JP | Rust Global: Tokyo
- 2025-12-20 | Bangalore, IN | Rust Bangalore
Europe
- 2025-11-26 | Bern, CH | Rust Bern
- 2025-11-27 | Augsburg, DE | Rust Meetup Augsburg
- 2025-11-27 | Barcelona, ES | BcnRust
- 2025-11-27 | Berlin, DE | Rust Berlin
- 2025-11-27 | Copenhagen, DK | Copenhagen Rust Community
- 2025-11-27 | Edinburgh, UK | Rust and Friends
- 2025-11-28 | Prague, CZ | Rust Prague
- 2025-12-03 | Girona, ES | Rust Girona
- 2025-12-03 | Oxford, UK | Oxford ACCU/Rust Meetup.
- 2025-12-04 | Vienna, AT | Rust Vienna
- 2025-12-08 | Dortmund, DE | Rust Dortmund
- 2025-12-08 | Paris, FR | Rust Paris
- 2025-12-10 | München, DE | Rust Munich
- 2025-12-10 | Reading, UK | Reading Rust Workshop
- 2025-12-16 | Bergen, NO | Rust Bergen
- 2025-12-16 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
North America
- 2025-11-26 | Austin, TX, US | Rust ATX
- 2025-11-26 | Phoenix, AZ, US | Desert Rust
- 2025-11-27 | Mountain View, CA, US | Hacker Dojo
- 2025-11-29 | Boston, MA, US | Boston Rust Meetup
- 2025-12-02 | Chicago, IL, US | Chicago Rust Meetup
- 2025-12-04 | México City, MX | Rust MX
- 2025-12-04 | Saint Louis, MO, US | STL Rust
- 2025-12-05 | New York, NY, US | Rust NYC
- 2025-12-06 | Boston, MA, US | Boston Rust Meetup
- 2025-12-11 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2025-12-11 | Lehi, UT, US | Utah Rust
- 2025-12-11 | Mountain View, CA, US | Hacker Dojo
- 2025-12-11 | San Diego, CA, US | San Diego Rust
- 2025-12-13 | Boston, MA, US | Boston Rust Meetup
- 2025-12-16 | San Francisco, CA, US | San Francisco Rust Study Group
- 2025-12-17 | Hybrid (Vancouver, BC, CA) | Vancouver Rust
- 2025-12-20 | Boston, MA, US | Boston Rust Meetup
- 2025-12-24 | Austin, TX, US | Rust ATX
Oceania
- 2025-12-11 | Brisbane City, QL, AU | Rust Brisbane
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
Also: a program written in Rust had a bug, and while it caused downtime, there was no security issue and nobody's data was compromised.
Thanks to Michael Voelkl for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
26 Nov 2025 5:00am GMT
25 Nov 2025
Planet Mozilla
The Mozilla Blog: Celebrating the contributors that power Mozilla Support
Every day, Firefox users around the world turn to Mozilla Support (SUMO) with a question, a hiccup or just a little curiosity. It's community-powered - contributors offer answers and support to make someone's day a little easier.
We celebrated this global community last month with Ask-A-Fox, a weeklong virtual event that brought together longtime contributors, newcomers and Mozilla staffers. The idea was simple: connect across time zones, trade tips and yes, answer questions.
Contributor appreciation, AMAs and an emoji hunt
For one lively week, contributors across Firefox and Thunderbird rallied together. Reply rates soared, response times dropped, and the forums buzzed with renewed energy. But the real story was the sense of connection.
There were live Ask Me Anything sessions with Mozilla's WebCompat, Web Performance, and Thunderbird teams. There was even a playful emoji hunt through our Knowledge Base.
"That AMA was really interesting," said longtime Firefox contributor Paul. "I learned a lot and I recommend those that could not watch it live catch the recording as I am sure it will be very useful in helping users in SUMO."
Ask-A-Fox was a celebration of people: long-time contributors, brand-new faces and everyone in between. Here are just a few standout contributors:
- Firefox Desktop (including Enterprise): Paul, Denyshon, Jonz4SUSE, @next, jscher2000
- Firefox for Android: Paul, TyDraniu, GerardoPcp04, Mad_Maks, sjohnn
- Firefox for iOS: Paul, Simon.c.lord, TyDraniu, Mad_Maks, Mozilla-assistent
- Thunderbird (including Android): Davidsk, Sfhowes, Mozilla98, MattAuSupport, Christ1
Newcomers mozilla98, starretreat, sjohnn, Vexi, Mark, Mapenzi, cartdaniel437, hariiee1277, and thisisharsh7 also made a big impact.
New contributor Shirmaya John said, "I love helping people, and I'm passionate about computers, so assisting with bugs or other tech issues really makes my day. I'm excited to grow here!"
Contributor Vincent won our Staff Award for the highest number of replies during the week.
"Ask a Fox highlights the incredible collaborative spirit of our community. A reminder of what we can achieve when we unite around a shared goal," said Kiki Kelimutu, a senior community manager at Mozilla.
Firefox has been powered by community from the start
As Mozilla's communities program manager, I've seen firsthand how genuine connection fuels everything we do. Members of our community aren't just answering questions; they're building relationships, learning together, and showing up for one another with authenticity and care.
Mozilla is built by people who believe the internet should be open and accessible to all, and our community is the heartbeat of that vision. What started back in 2007 (and found its online home in 2010 at support.mozilla.org) has grown into a global network of contributors helping millions of Firefox users find answers, share solutions and get back on their Firefox journey.
Every question answered not only helps a user, it helps us build a better Firefox. By surfacing real issues and feedback, our community shapes the course of our products and keeps the web stronger for everyone.
Join the next Ask-A-Fox
Ask-A-Fox is a celebration of what makes Mozilla unique: our people.
As someone who's spent years building communities, I know that lasting engagement doesn't come from numbers or dashboards. It comes from treating contributors as individuals - people who bring their own stories, skills, and care to the table.
When Mozillians come together to share knowledge, laughter or even a few emojis, the result is more than faster replies. It's a connection.
Two more Ask-A-Fox events are already planned for next year, continuing the work of building communities that make the web more open and welcoming.
If you've ever wanted to make the web a little more human, come join us. Because every answer, every conversation, and every connection helps keep Firefox thriving.

Join us in shaping the web
Sign up here
The post Celebrating the contributors that power Mozilla Support appeared first on The Mozilla Blog.
25 Nov 2025 6:02pm GMT
The Rust Programming Language Blog: Interview with Jan David Nose
On the Content Team, we had our first whirlwind outing at RustConf 2025 in Seattle, Washington, USA. There we had a chance to speak with folks about interesting things happening in the Project and the wider community.
Jan David Nose, Infrastructure Team
In this interview, Xander Cesari sits down with Jan David Nose, then one of the full-time engineers on the Infrastructure Team, which maintains and develops the infrastructure upon which Rust is developed and deployed -- including CI/CD tooling and crates.io.
We released this video on an accelerated timeline, some weeks ago, in light of the recent software supply chain attacks, but the interview was conducted prior to the news of compromised packages in other languages and ecosystems.
Check out the interview here or click below.
Transcript
Xander Cesari: Hey, this is Xander Cesari with the Rust Project Content Team, recording on the last hour of the last day of RustConf 2025 here in Seattle. So it's been a long and amazing two days. And I'm sitting down here with a team member from the Rust Project Infra Team, the unsung heroes of the Rust language. Want to introduce yourself and kind of how you got involved?
Jan David Nose: Yeah, sure. I'm JD. Jan David is the full name, but especially in international contexts, I just go with JD. I've been working for the Rust Foundation for the past three years as a full-time employee and I essentially hit the jackpot to work full-time on open source and I've been in the Infra Team of the Rust Project for the whole time. For the past two years I've led the team together with Jake. So the Infra Team is kind of a thing that lets Rust happen and there's a lot of different pieces.
Xander Cesari: Could you give me an overview of the responsibility of the Infra Team?
Jan David Nose: Sure. I think on a high level, we think about this in terms of, we serve two different groups of people. On one side, we have users of the language, and on the other side, we really try to provide good tooling for the maintainers of the language.
Jan David Nose: Starting with the maintainer side, this is really everything about how Rust is built. From the moment someone makes a contribution or opens a PR, we maintain the continuous integration that makes sure that the PR actually works. There's a lot of bots and tooling helping out behind the scenes to kind of maintain a good status quo, a sane state. Lots of small things like triage tools on GitHub to set labels and ping people and these kinds of things. And that's kind of managed by the Infra Team at large.
Jan David Nose: And then on the user side, we have a lot of, or the two most important things are making sure users can actually download Rust. We don't develop crates.io, but we support the infrastructure to actually ship crates to users. All the downloads go through content delivery networks that we provide. The same for Rust releases. So if I don't do my job well, which has happened, there might be a global outage of crates.io and no one can download stuff. But those are kind of the two different buckets of services that we run and operate.
Xander Cesari: Gotcha. So on the maintainer side, the Rust organization on GitHub is a large organization with a lot of activity, a lot of code. There's obviously a lot of large code bases being developed on GitHub, but there are not that many languages the size of Rust being developed on GitHub. Are there unique challenges to developing a language and the tooling that's required versus developing other software projects?
Jan David Nose: I can think of a few things that have less to do with the language specifically, but with some of the architecture decisions that were made very early on in the life cycle of Rust. So one of the things that actually caused a lot of headache for mostly GitHub, and then when they complained to us, for us as well, is that for a long, long time, the index for crates.io was a Git repo on GitHub. As Rust started to grow, the activity on the repo became so big that it actually caused some issues, I would say, in a friendly way on GitHub, just in terms of how much resources that single repository was consuming. That then kind of started this work on a web-based, HTTP-based index to shift that away. That's certainly one area where we've seen how Rust has struggled a little bit with the platform, but also the platform provider struggled with us.
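The HTTP-based index JD mentions keeps the same per-crate file layout the git index used, but lets clients fetch one small metadata file per crate instead of cloning a huge repository. A sketch of the published name-sharding rule (worth double-checking against the Cargo registry index documentation; the helper name here is my own):

```rust
// Sketch of the crates.io index sharding rule: metadata for each crate
// lives at a path derived from its lowercased name, so a sparse-index
// client can fetch exactly one small file per crate over HTTPS.
fn index_path(name: &str) -> String {
    let name = name.to_lowercase();
    match name.len() {
        0 => panic!("crate names are non-empty"),
        1 => format!("1/{name}"),
        2 => format!("2/{name}"),
        3 => format!("3/{}/{}", &name[..1], name),
        // 4+ characters: first two chars / next two chars / full name
        _ => format!("{}/{}/{}", &name[..2], &name[2..4], name),
    }
}

fn main() {
    assert_eq!(index_path("a"), "1/a");
    assert_eq!(index_path("cc"), "2/cc");
    assert_eq!(index_path("syn"), "3/s/syn");
    assert_eq!(index_path("serde"), "se/rd/serde");
    println!("serde -> {}", index_path("serde"));
}
```

The sharding exists so no single directory (or git tree object) holds hundreds of thousands of entries, which is part of what strained the original repository-based index.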
Jan David Nose: I think for Rust itself, especially when we look at CI, we really want to make sure that Rust works well on all of the targets and all the platforms we support. That means we have an extremely wide CI pipeline where, for every Tier 1 target, we want to run all the tests, we want to build the release artifacts, we want to upload all of that to S3. We want to do as much as we reasonably can for Tier 2 targets and, to a lesser extent, maybe even test some stuff on Tier 3. That has turned into a gigantic build pipeline. Marco gave a talk today on what we've done with CI over the last year. One of the numbers that came out of doing the research for this talk is that we accumulate over three million build minutes per month, which is about six years of CPU time every month.
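The "six years of CPU time" figure is easy to sanity-check: a month of wall-clock time holds far fewer minutes than three million, so the fleet must run many jobs in parallel. A quick sketch of the arithmetic:

```rust
// Sanity-check: 3,000,000 build minutes per month, expressed as years
// of single-CPU time (using a 365-day year).
fn main() {
    let build_minutes_per_month = 3_000_000.0_f64;
    let minutes_per_year = 60.0 * 24.0 * 365.0; // 525,600 minutes
    let cpu_years = build_minutes_per_month / minutes_per_year;
    println!("{cpu_years:.1} CPU-years per month"); // ~5.7, i.e. "about six years"
}
```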
Jan David Nose: Especially when it comes to open source projects, I think we're one of the biggest consumers of GitHub Actions in that sense. Not the biggest in total; there are definitely bigger commercial projects. But that's a unique challenge for us to manage because we want to provide as good a service as we can to the community and make sure that what we ship is high quality. That comes at a huge cost in terms of scaling. As Rust gets more popular and we want to target more and more platforms, this is like a problem that just continues to grow.
Jan David Nose: We'll probably never remove a lot of targets, so there's an interesting challenge to think about. If it's already big now, how does this look in 5 years, 10 years, 15 years, and how can we make sure we can maintain the level of quality we want to ship? When you build and run for a target in the CI pipeline, some of those Tier 1 targets you can just ask a cloud service provider to give you a VM running on that piece of hardware, but some of them are probably not things that you can just run in the cloud.
Xander Cesari: Is there some HIL (Hardware-In-the-Loop) lab somewhere?
Jan David Nose: So you're touching on a conversation that's happening pretty much as we speak. So far, as part of our target tier policy, there is a clause that says it needs to be able to run in CI. That has meant being very selective about only promoting things to Tier 1 that we can actually run and test. For all of this, we had a prerequisite that it runs on GitHub Actions. So far we've used very little hardware that is not natively supported or provided by GitHub.
Jan David Nose: But this is exactly the point with Rust increasing in popularity. We just got requests to support IBM platforms and RISC-V, and those are not natively supported on GitHub. That has kicked off an internal conversation about how we even support this. How can we as a project enable companies that can provide us hardware to test on? What are the implications of that?
Jan David Nose: On one side, there are interesting constraints and considerations. For example, you don't want your PRs to randomly fail because someone else's hardware is not available. We're already so resource-constrained on how many PRs we can merge each day that adding noise to that process would really slow down contributions to Rust. On the other side, there are security implications. Especially if we talk about promoting something to Tier 1 and we want to build release artifacts on that hardware, we need to make sure that those are actually secure and no one sneaks a back door into the Rust compiler target for RISC-V.
Jan David Nose: So there are interesting challenges for us, especially in the world we live in where supply chain security is a massive concern. We need to figure out how we can both support the growth of Rust and the growth of the language, the community, and the ecosystem at large while also making sure that the things we ship are reliable, secure, and performant. That is becoming an increasingly relevant and interesting piece to work on. So far we've gotten away with the platforms that GitHub supports, but it's really cool to see that this is starting to change and people approach us and are willing to provide hardware, provide sponsorship, and help us test on their platforms. But essentially we don't have a good answer for this yet. We're still trying to figure out what this means, what we need to take into consideration, and what our requirements are to use external hardware.
Xander Cesari: Yeah, everyone is so excited about Rust will run everywhere, but there's a maintenance cost there that is almost exponential in scope.
Jan David Nose: It's really interesting as well because there's a tension there. IBM approaching us is an interesting example. Who has IBM platforms at home? The number of users for that platform is really small globally, but IBM also invests heavily in Rust, is trying to make this happen, and is willing to provide the hardware.
Jan David Nose: For us, that leads to a set of questions. Is there a line? Is there a certain requirement? Is there a certain amount of usage that a platform would need for us to promote it? Or do we say we want to promote as much as we can to Tier 1? This is a conversation we haven't really had to have yet. It's only now starting to creep in as Rust is adopted more widely and companies pour serious money and resources into it. That's exciting to see.
Jan David Nose: In this specific case, companies approach the Infra Team to figure out how we can add their platforms to CI as a first step towards Tier 1 support. But it's also a broader discussion we need to have with larger parts of the Rust Project. For Tier 1 promotions, for example, the Compiler Team needs to sign off, Infra needs to sign off. Many more people need to be involved in this discussion of how we can support the growing needs of the ecosystem at large.
Xander Cesari: I get the feeling that's going to be a theme throughout this interview.
Jan David Nose: 100%.
Xander Cesari: So one other tool that's part of this pipeline that I totally didn't know about for a long time, and I think a talk at a different conference clued me into it, is Crater. It's a tool that attempts to run all of the Rust code it can find on the internet. Can you talk about what that tool does and how it integrates into the release process?
Jan David Nose: Whenever someone creates a pull request on GitHub to add a new feature or bug fix to the Rust compiler, they can start what's called a Crater run, or an experiment. Crater is effectively a large fleet of machines that tries to pull in as many crates as it can. Ideally, we would love to test all crates, but for a variety of reasons that's not possible. Some crates simply don't build reliably, so we maintain lists to exclude those. Off the top of my head, I think we currently test against roughly 60% of crates.
Jan David Nose: The experiment takes the code from your pull request, builds the Rust compiler with it, and then uses that compiler to build all of these crates. It reports back whether there are any regressions related to the change you proposed. That is a very important tool for us to maintain backwards compatibility with new versions and new features in Rust. It lets us ask: does the ecosystem still compile if we add this feature to the compiler, and where do we run into issues? Then, and this is more on the Compiler Team side, there's a decision about how to proceed. Is the breakage acceptable? Do we need to adjust the feature? Having Crater is what makes that conversation possible because it gives us real data on the impact on the wider ecosystem.
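At its core, an experiment like the one described above builds every crate twice, once with the stable baseline compiler and once with the compiler built from the pull request, and then diffs the outcomes. A minimal sketch of that comparison logic, with illustrative type and function names rather than Crater's actual internals:

```rust
/// Outcome of building one crate with one toolchain.
#[derive(Clone, Copy, PartialEq, Debug)]
enum BuildResult {
    Ok,
    Error,
}

/// How a crate's build changed between the stable baseline and the
/// compiler built from the pull request under test.
#[derive(PartialEq, Debug)]
enum Verdict {
    Regressed, // built before, broken now: flags the change for review
    Fixed,     // broken before, builds now
    Unchanged,
}

fn classify(baseline: BuildResult, candidate: BuildResult) -> Verdict {
    match (baseline, candidate) {
        (BuildResult::Ok, BuildResult::Error) => Verdict::Regressed,
        (BuildResult::Error, BuildResult::Ok) => Verdict::Fixed,
        _ => Verdict::Unchanged,
    }
}

fn main() {
    // One (baseline, candidate) pair per crate tested; crate names
    // here are placeholders.
    let results = [
        ("some-crate", BuildResult::Ok, BuildResult::Ok),
        ("old-sys", BuildResult::Ok, BuildResult::Error),
    ];
    for (name, before, after) in results {
        println!("{name}: {:?}", classify(before, after));
    }
}
```

The interesting part for the release process is only the `Regressed` bucket: that is the list the Compiler Team looks at when deciding whether breakage is acceptable.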
Xander Cesari: I think that's so interesting because as more and more companies adopt Rust, they're asking whether the language is going to be stable and backward compatible. You hear about other programming languages that had a big version change that caused a lot of drama and code changes. The fact that if you have code on crates.io, the Compiler Team is probably already testing against it for backwards compatibility is pretty reassuring.
Jan David Nose: Yeah, the chances are high, I would say. Especially looking at the whole Python 2 to Python 3 migration, I think as an industry we've learned a lot from those big version jumps. I can't really speak for the Compiler Team because I'm not a member and I wasn't involved in the decision-making, but I feel this is one of the reasons why backwards compatibility is such a big deal in Rust's design. We want to make it as painless as possible to stay current, stay up to date, and make sure we don't accidentally break the language or create painful migration points where the entire ecosystem has to move at once.
Xander Cesari: Do you know if there are other organizations pulling in something like Crater and running it on their own internal crate repositories, maybe some of the big tech companies or other compiler developers or even other languages? Or is this really bespoke for the Rust compiler team?
Jan David Nose: I don't know of anyone who runs Crater itself as a tool. Crater is built on a sandboxing framework that we also use in other places. For example, docs.rs uses some of the same underlying infrastructure to build all of the documentation. We try to share as much as we can of the functionality that exists in Crater, but I'm not aware of anyone using Crater in the same way we do.
Xander Cesari: Gotcha. The other big part of your job is that the Infra Team works on supporting maintainers, but it also supports users and consumers of Rust who are pulling from crates.io. It sounds like crates.io is not directly within your team, but you support a lot of the backend there.
Jan David Nose: Yeah, exactly. crates.io has its own team, and that team maintains the web application and the APIs. The crates themselves, all the individual files that people download, are hosted within our infrastructure. The Infra Team maintains the content delivery network that sits in front of that. Every download of a crate goes through infrastructure that we maintain. We collaborate very closely with the crates.io team on this shared interface. They own the app and the API, and we make sure that the files get delivered to the end user.
Xander Cesari: So it sounds like there's a lot of verification of the files that get uploaded and checks every time someone pushes a new version to crates.io. That part all happens within crates.io as an application.
Jan David Nose: Cargo uses the crates.io API to upload the crate file. crates.io has a lot of internal logic to verify that it is valid and that everything looks correct. For us, as the Infra Team, we treat that as a black box. crates.io does its work, and if it is happy with the upload, it stores the file in S3. From that point onward, infrastructure makes sure that the file is accessible and can be downloaded so people can start using your crate.
Xander Cesari: In this theme of Rust being a bit of a victim of its own success, I assume all of the traffic graphs and download graphs are very much up and to the right.
Jan David Nose: On the Foundation side, one of our colleagues likes to check how long it takes for one billion downloads to happen on crates.io, and that number has been falling quickly. I don't remember what it was three years ago, but it has come down by orders of magnitude. In our download traffic we definitely see exponential growth. Our traffic tends to double year over year, and that trend has been pretty stable. It really seems like Rust is getting a lot of adoption in the ecosystem and people are using it for more and more things.
Xander Cesari: How has the Infra Team scaled with that? Are you staying ahead of it, or are there a lot of late nights?
Jan David Nose: There have definitely been late nights. In the three years I've been working in the Infra Team, every year has had a different theme that was essentially a fire to put out.
Jan David Nose: It changes because we fix one thing and then the next thing breaks. So far, luckily, those fires have been mostly sequential, not parallel. When I joined, bandwidth was the big topic. Over the last year, it has been more about CI. About three years ago, we hit this inflection point where traffic was doubling and the sponsorship capacity we had at the time was reaching its limits.
Jan David Nose: Two or three years ago, Fastly welcomed us into their Fast Forward program and has been sponsoring all of our bandwidth since then. That has mostly helped me sleep at night. It has been a very good relationship. They have been an amazing partner and have helped us at every step to remove the fear that we might hit limits. They are very active in the open source community at large; most famously they also sponsor PyPI and the Python ecosystem, compared to which we're a tiny fish in a very big pond. That gives us a lot of confidence that we can sustain this growth and keep providing crates and releases at the level of quality people expect.
Xander Cesari: In some ways, Rust did such a good job of making all of that infrastructure feel invisible. You just type Cargo commands into your terminal and it feels magical.
Jan David Nose: I'm really happy about that. It's an interesting aspect of running an infrastructure team in open source. If you look at the ten-year history since the first stable release, or even the fifteen years since Rust really started, infrastructure was volunteer-run for most of that time. I've been here for three years, and I was the first full-time infrastructure engineer. So for ten to twelve years, volunteers ran the infrastructure.
Jan David Nose: For them, it was crucial that things just worked, because you can't page volunteers in the middle of the night because a server caught fire or downloads stopped working. From the beginning, our infrastructure has been designed to be as simple and as reliable as possible. The same is true for our CDNs. I always feel a bit bad because Fastly is an amazing sponsor. Every time we meet them at conferences or they announce new features, they ask whether we want to use them or talk about how we use Fastly in production. And every time I have to say: we have the simplest configuration possible. We set some HTTP headers. That's pretty much it.
Jan David Nose: It's a very cool platform, but we use the smallest set of features because we need to maintain all of this with a very small team that is mostly volunteer-based. Our priority has always been to keep things simple and reliable and not chase every fancy new technology, so that the project stays sustainable.
Xander Cesari: Volunteer-based organizations seem to have to care about work-life balance, which is probably terrific, and there are lessons to be learned there.
Jan David Nose: Yeah, it's definitely a very interesting environment to work in. It has different rules than corporations or commercial teams. We have to think about how much work we can do in a given timeframe in a very different way, because it's unpredictable when volunteers have time, when they're around, and what is happening in their lives.
Jan David Nose: Over the last few years, we've tried to reduce the number of fires that can break out. And when they do happen, we try to shield volunteers from them and take that work on as full-time employees. That started with me three years ago. Last year Marco joined, which increased the capacity we have, because there is so much to do on the Infra side that even with me working full-time, we simply did not have enough people.
Xander Cesari: So you're two full-time and everything else is volunteer.
Jan David Nose: Exactly. The team is around eight people. Marco and I work full-time and are paid by the Rust Foundation to focus exclusively on infrastructure. Then we have a handful of volunteers who work on different things.
Jan David Nose: Because our field of responsibility is so wide, the Infra Team works more in silos than other teams might. We have people who care deeply about very specific parts of the infrastructure. Otherwise there is simply too much to know for any one person. It has been a really nice mix, and it's amazing to work with the people on the team.
Jan David Nose: As people who are privileged enough to work full-time on this and have the time and resources, we try to bear the bigger burden and create a space that is fun for volunteers to join. We want them to work on exciting things where there is less risk of something catching fire, where it's easier to come in, do a piece of work, and then step away. If your personal life takes over for two weeks, that's okay, because someone is there to make sure the servers and the lights stay on.
Jan David Nose: A lot of that work lives more on the maintainer side: the GitHub apps, the bots that help with triage. It's less risky if something goes wrong there. On the user side, if you push the wrong DNS setting, as someone might have done, you can end up in a situation where for 30 minutes no one can download crates. And in this case, "no one" literally means no user worldwide. That's not an experience I want volunteers to have. It's extremely stressful and was ultimately one of the reasons I joined in the first place: there was a real feeling of burnout from carrying that responsibility.
Jan David Nose: It's easier to carry that as a full-timer. We have more time and more ways to manage the stress. I'm honestly extremely amazed by what the Infra Team was able to do as volunteers. It's unbelievable what they built and how far they pushed Rust to get to where we are now.
Xander Cesari: I think anyone who's managing web traffic in 2025 is talking about traffic skyrocketing due to bots and scrapers for AI or other purposes. Has that hit the Rust network as well?
Jan David Nose: Yeah, we've definitely seen that. It's handled by a slightly different team, but on the docs.rs side in particular we've seen crawlers hit us hard from time to time, and that has caused noticeable service degradation. We're painfully aware of the increase in traffic that comes in short but very intense bursts when crawlers go wild.
Jan David Nose: That introduces a new challenge for our infrastructure. We need to figure out how to react to that traffic and protect our services from becoming unavailable to real users who want to use docs.rs to look up something for their work. On the CDN side, our providers can usually handle the traffic. It is more often the application side where things hurt.
Jan David Nose: On the CDN side we also see people crawling crates.io, presumably to vacuum up the entire crates ecosystem into an LLM. Fortunately, over the last two years we've done a lot of work to make sure crates.io as an application is less affected by these traffic spikes. Downloads now bypass crates.io entirely and go straight to the CDN, so the API is not hit by these bursts. In the past, this would have looked like a DDoS attack, with so many requests from so many sources that we couldn't handle it.
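The download bypass described above works because a crate file lives at a predictable static URL that the CDN can serve without touching the API. A small sketch of that URL construction; the `static.crates.io` host and path layout follow the publicly visible convention, but treat them as an illustration here rather than something the interview guarantees:

```rust
/// Build the CDN URL a `.crate` tarball is served from, so downloads
/// never hit the crates.io application. Host and path layout are
/// illustrative of the static-hosting convention, not a stable contract.
fn crate_download_url(name: &str, version: &str) -> String {
    format!("https://static.crates.io/crates/{name}/{name}-{version}.crate")
}

fn main() {
    println!("{}", crate_download_url("serde", "1.0.210"));
}
```

Because the URL is a pure function of the crate name and version, a burst of crawler downloads lands entirely on the CDN edge instead of looking like a DDoS against the API.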
Jan David Nose: We've done a lot of backend work to keep our stack reliable, but it's definitely something that has changed the game over the last year. We can clearly see that crawlers are much more active than before.
Xander Cesari: That makes sense. I'm sure Fastly is working on this as well. Their business has to adapt to be robust to this new internet.
Jan David Nose: Exactly. For example, one of the conversations we're having right now is about docs.rs. It's still hosted on AWS behind CloudFront, but we're talking about putting it behind Fastly because through Fastly we get features like bot protection that can help keep crawlers out.
Jan David Nose: This is a good example of how our conversations have changed in the last six months. At the start of the year I did not think this would be a topic we would be discussing. We were focused on other things. For docs.rs we have long-term plans to rebuild the infrastructure that powers it, and I expected us to spend our energy there. But with the changes in the industry and everyone trying to accumulate as much data as possible, our priorities have shifted. The problems we face and the order in which we tackle them have changed.
Xander Cesari: And I assume as one of the few paid members of a mostly volunteer team, you often end up working on the fires, not the interesting next feature that might be more fun.
Jan David Nose: That is true, although it sounds a bit negative to say I only get to work on fires. Sometimes it feels like that because, as with any technology stack, there is a lot of maintenance overhead. We definitely pay that price on the infrastructure side.
Jan David Nose: Marco, for example, spent time this year going through all the servers we run, cataloging them, and making sure they're patched and on the latest operating system version. We updated our Ubuntu machines to the latest LTS. It feels a bit like busy work: you just have to do it because it's important and necessary, but it's not the most exciting project.
Jan David Nose: On the other hand, when it comes to things like CDN configuration and figuring out how bot protection features work and whether they are relevant to us, that is also genuinely interesting work. It lets us play with new tools vendors provide, and we're working on challenges that the wider industry is facing. How do you deal with this new kind of traffic? What are the implications of banning bots? How high is the risk of blocking real users? Sometimes someone just misconfigures a curl script, and from the outside it looks like they're crawling our site.
Jan David Nose: So it's an interesting field to work in, figuring out how we can use new features and address new challenges. That keeps it exciting even for us full-timers who do more of the "boring" work. We get to adapt alongside how the world around us is changing. If there's one constant, it's change.
Xander Cesari: Another ripped-from-the-headlines change around this topic is software supply chain security, and specifically xz-utils and the conversation around open source security. How much has that changed the landscape you work in?
Jan David Nose: The xz-utils compromise was scary. I don't want to call it a wake-up call, because we've been aware that supply chain security is a big issue and this was not the first compromise. But the way it happened felt very unsettling. You saw an actor spend a year and a half building social trust in an open source project and then using that to introduce a backdoor.
Jan David Nose: Thinking about that in the context of Rust: every team in the project talks about how we need more maintainers, how there's too much workload on the people who are currently contributing, and how Rust's growth puts strain on the organization as a whole. We want to be an open and welcoming project, and right now we also need to bring new people in. If someone shows up and says, "I'm willing to help, please onboard me," and they stick around for a year and then do something malicious, we would be susceptible to that. I don't think this is unique to Rust. This is an inherent problem in open source.
Xander Cesari: Yeah, it's antithetical to the culture.
Jan David Nose: Exactly. So we're trying to think through how we, as a project and as an ecosystem, deal with persistent threat actors who have the time and resources to play a long game. Paying someone to work full-time on open source for a year is a very different threat model than what we used to worry about.
Jan David Nose: I used to joke that the biggest threat to crates.io was me accidentally pulling the plug on a CDN. I think that has changed. Today the bigger threat is someone managing to insert malicious code into our releases, our supply chain, or crates.io itself. They could find ways to interfere with our systems in ways we're simply not prepared for, where, as a largely volunteer organization, we might be too slow to react to a new kind of attack.
Jan David Nose: Looking back over the last three years, this shift became very noticeable, especially after the first year. Traffic was doubling, Rust usage was going up a lot, and there were news stories about Rust being used in the Windows kernel, in Android, and in parts of iOS. Suddenly Rust is everywhere. If you want to attack "everywhere," going after Rust becomes attractive. That definitely puts a target on our back and has changed the game.
Jan David Nose: I'm very glad the Rust Foundation has a dedicated security engineer who has done a lot of threat modeling and worked with us on infrastructure security. There's also a lot of work happening specifically around the crates ecosystem and preventing supply chain attacks through crates. Luckily, it's not something the Infra side has to solve alone. But it is getting a lot more attention, and I think it will be one of the big challenges for the future: how a mostly volunteer-run project keeps up with this looming threat.
Xander Cesari: And it is the industry at large. This is not a unique problem to the Rust package manager. All package registries, from Python to JavaScript to Nix, deal with this. Is there an industry-wide conversation about how to help each other out and share learnings?
Jan David Nose: Yeah, there's definitely a lot happening. I have to smile a bit because, with a lot of empathy but also a bit of relief, we sometimes share news when another package ecosystem gets compromised. It's a reminder that it's not just us; sometimes it's npm's turn.
Jan David Nose: We really try to stay aware of what's happening in the industry and in other ecosystems: what new threats or attack vectors are emerging, what others are struggling with. Sometimes that is security; sometimes it's usability. A year and a half ago, for example, npm had the "everything" package where someone declared every package on npm as a dependency, which blew up the index. We look at incidents like that and ask whether crates.io would struggle with something similar and whether we need to make changes.
Jan David Nose: On the security side we also follow closely what others are doing. In the packaging community, the different package managers are starting to come together more often to figure out which problems everyone shares. There is a bit of a joke that we're all just shipping files over the internet. Whether it's an npm package or a crate, ultimately it's a bunch of text files in a zip. So from an infrastructure perspective the problems are very similar.
Jan David Nose: These communities are now talking more about what problems PyPI has, what problems crates.io has, what is happening in the npm space. One thing every ecosystem has seen, even the very established ones, is a big increase in bandwidth needs, largely connected to the emergence of AI. PyPI, for example, publishes download charts, and it's striking. Python had steady growth, slightly exponential but manageable, for many years. Then a year or two ago you see a massive hockey stick. People discovered that PyPI was a great distribution system for their models. There were no file size limits at the time, so you could publish precompiled GPU models there.
Jan David Nose: That pattern shows up everywhere. It has kicked off a new era for packaging ecosystems to come together and ask: in a time where open source is underfunded and traffic needs keep growing, how can we act together to find solutions to these shared problems? crates.io is part of those conversations. It's interesting to see how we, as an industry, share very similar problems across ecosystems: Python, npm, Rust, and others.
Xander Cesari: With a smaller, more hobbyist-focused community, you can have relaxed rules about what goes into your package manager. Everyone knows the spirit of what you're trying to do and you can get away without a lot of hard rules and consequences. Is the Rust world going to have to think about much harder rules around package sizes, allowed files, and how you're allowed to distribute things?
Jan David Nose: Funnily enough, we're coming at this from the opposite direction. Compared to other ecosystems, we've always had fairly strict limits. A crate can be at most around ten megabytes in size. There are limits on what kinds of files you can put in there. Ironically, those limits have helped us keep traffic manageable in this period.
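A limit like the roughly ten-megabyte cap mentioned above amounts to a simple pre-publish check on the uploaded tarball. A hedged sketch of what such a check could look like; the constant and the `reject` helper are illustrative, not crates.io's actual validation code:

```rust
/// Maximum accepted `.crate` tarball size, matching the roughly
/// ten-megabyte limit mentioned in the interview. The exact number
/// and this whole check are illustrative, not crates.io's real code.
const MAX_CRATE_BYTES: u64 = 10 * 1024 * 1024;

/// The kind of validation a registry might run before accepting an
/// upload: returns a rejection reason, or `None` if the size is fine.
fn reject(upload_bytes: u64) -> Option<String> {
    if upload_bytes > MAX_CRATE_BYTES {
        Some(format!(
            "crate is {upload_bytes} bytes, limit is {MAX_CRATE_BYTES}"
        ))
    } else {
        None
    }
}

fn main() {
    assert!(reject(512 * 1024).is_none()); // a 512 KiB crate is accepted
    assert!(reject(64 * 1024 * 1024).is_some()); // a 64 MiB upload is not
    println!("size checks passed");
}
```

The same gate is what kept precompiled-model-style payloads out of the crates ecosystem, which is why traffic stayed manageable compared to registries that had no such limit.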
Jan David Nose: At the same time, there is a valid argument that these limits may not serve all Rust use cases. There are situations where you might want to include something precompiled in your crate because it is hard to compile locally, takes a very long time, or depends on obscure headers no one has. I don't think we've reached the final state of what the crates.io package format should look like.
Jan David Nose: That has interesting security implications. When we talk about precompiled binaries or payloads, we all have that little voice in our head every time we see a curl | sh command: can I trust this? The same is true if you download a crate that contains a precompiled blob you cannot easily inspect.
Jan David Nose: The Rust Foundation is doing a lot of work and research here. My colleague Adam, who works on the crates.io team, is working behind the scenes to answer some of these questions. For example: what kind of security testing can we do before we publish crates to make sure they are secure and don't contain malicious payloads? How do we surface this information? How do we tell a publisher that they included files that are not allowed? And from the user's perspective, when you visit crates.io, how can you judge how well maintained and how secure a crate is?
Jan David Nose: Those conversations are happening quite broadly in the ecosystem. On the Infra side we're far down the chain. Ultimately we integrate with whatever security scanning infrastructure crates.io builds. We don't have to do the security research ourselves, but we do have to support it.
Jan David Nose: There's still a lot that needs to happen. As awesome as Rust already is, and as much as I love using it, it's important to remember that we're still a very young ecosystem. Python is now very mature and stable, but it's more than 25 years old. Rust is about ten years old as a stable language. We still have a lot to learn and figure out.
Xander Cesari: Is the Rust ecosystem running into problems earlier than other languages because we're succeeding at being foundational software and Rust is used in places that are even more security-critical than other languages, so you have to hit these hard problems earlier than the Python world did?
Jan David Nose: I think that's true. Other ecosystems probably had more time to mature and answer these questions. We're operating on a more condensed timeline. There is also simply more happening now. Open source has been very successful; it's everywhere. That means there are more places where security is critical.
Jan David Nose: So this comes with the success of open source, with what is happening in the ecosystem at large, and with the industry we're in. It does mean we have less time to figure some things out. On the flip side, we also have less baggage. We have less technical debt and fifteen fewer years of accumulated history. That lets us be on the forefront in some areas, like how a package ecosystem can stay secure and what infrastructure a 21st century open source project needs.
Jan David Nose: Here I really want to call out the Rust Foundation. They actively support this work: hiring people like Marco and me to work full-time on infrastructure, having Walter and Adam focus heavily on security, and as an organization taking supply chain considerations very seriously. The Foundation also works with other ecosystems so we can learn and grow together and build a better industry.
Jan David Nose: Behind the scenes, colleagues constantly work to open doors for us as a relatively young language, so we can be part of those conversations and sit at the table with other ecosystems. That lets us learn from what others have already gone through and also help shape where things are going. Sustainability is a big part of that: how do we fund the project long term? How do we make sure we have the human resources and financial resources to run the infrastructure and support maintainers? I definitely underestimated how much of my job would be relationship management and budget planning, making sure credits last until new ones arrive.
Xander Cesari: Most open core business models give away the thing that doesn't cost much (the software) and charge for the thing that scales with use (the service). In Rust's case, it's all free, which is excellent for adoption, but it must require a very creative perspective on the business side.
Jan David Nose: Yeah, and that's where different forces pull in opposite directions. As an open source project, we want everyone to be able to use Rust for free. We want great user experience. When we talk about downloads, there are ways for us to make them much cheaper, but that might mean hosting everything in a single geographic location. Then everyone, including people in Australia, would have to download from, say, Europe, and their experience would get much worse.
Jan David Nose: Instead, we want to use services that are more expensive but provide a better experience for Rust users. There's a real tension there. On one side we want to do the best we can; on the other side we need to be realistic that this costs money.
Xander Cesari: I had been thinking of infrastructure as a binary: it either works or it doesn't. But you're right, it's a slider. You can pick how much money you want to spend and what quality of service you get. Are there new technologies coming, either for the Rust Infra Team or the packaging world in general, to help with these security problems? New sandboxing technologies or higher-level support?
Jan David Nose: A lot of people are working on this problem from different angles. Internally we've talked a lot about it, especially in the context of Crater. Crater pulls in all of those crates to build them and get feedback from the Rust compiler. That means if someone publishes malicious code, we will download it and build it.
Jan David Nose: In Rust this is a particular challenge because build scripts can essentially do anything on your machine. For us that means we need strong sandboxing. We've built our own sandboxing framework so every crate build runs in an isolated container, which prevents malicious code from escaping and messing with the host systems.
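Because a build script can do anything, the only safe way to build an untrusted crate is inside a locked-down container. A sketch of the kind of invocation such a sandbox might assemble; the image name and the specific flags are assumptions for illustration, not Crater's actual configuration:

```rust
use std::process::Command;

/// Assemble (without running) a `docker run` invocation that builds a
/// crate inside an isolated container: no network, read-only root
/// filesystem, capped memory. Image name and flags are illustrative
/// assumptions, not Crater's real sandbox setup.
fn sandboxed_build(workdir: &str) -> Command {
    let mut cmd = Command::new("docker");
    cmd.args([
        "run",
        "--rm",
        "--network=none", // build scripts get no network access
        "--read-only",    // the image itself cannot be tampered with
        "--memory=2g",    // bound resource usage per build
        "-v",
    ])
    .arg(format!("{workdir}:/build")) // only the crate source is writable
    .args(["rust-build-env", "cargo", "build", "--offline"]);
    cmd
}

fn main() {
    // Inspect the constructed command without executing it.
    println!("{:?}", sandboxed_build("/tmp/crate-src"));
}
```

Even if a malicious `build.rs` runs, it can only touch the mounted working directory, which is exactly the isolation property that keeps one bad crate from compromising the fleet.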
Jan David Nose: We feel that pain in Crater, but if we can solve it in a way that isn't exclusive to Crater-if it also protects user machines from the same vulnerabilities-that would be ideal. People like Walter on the Foundation side are actively working on that. I'm sure there are conversations in the Cargo and crates teams as well, because every team that deals with packages sees a different angle of the problem. We all have to come together to solve it, and there is a lot of interesting work happening in that area.
Xander Cesari: I hope help is coming.
Jan David Nose: I'm optimistic.
Xander Cesari: We have this exponential curve with traffic and everything else. It seems like at some point it has to taper off.
Jan David Nose: We'll see. Rust is a young language. I don't know when that growth will slow down. I think there's a good argument that it will continue for quite a while as adoption grows.
Jan David Nose: Being at a conference like RustConf, it's exciting to see how the mix of companies has changed over time. We had a talk from Rivian on how they use Rust in their cars. We've heard from other car manufacturers exploring it. Rust is getting into more and more applications that a few years ago would have been hard to imagine or where the language simply wasn't mature enough yet.
Jan David Nose: As that continues, I think we'll see new waves of growth that sustain the exponential curve we currently have, because we're moving into domains that are new for us. It's amazing to see who is talking about Rust and how they're using it, sometimes in areas like space that you wouldn't expect.
Jan David Nose: I'm very optimistic about Rust's future. With this increase in adoption, we'll see a lot of interesting lessons about how to use Rust and a lot of creative ideas from people building with it. With more corporate adoption, I also expect a new wave of investment into the ecosystem: companies paying people to work full-time on different parts of Rust, both in the ecosystem and in the core project. I'm very curious what the next ten years will look like, because I genuinely don't know.
Xander Cesari: The state of Rust right now does feel a bit like the dog that caught the car and now doesn't know what to do with it.
Jan David Nose: Yeah, I think that's a good analogy. Suddenly we're in a situation where we realize we haven't fully thought through every consequence of success. It's fascinating to see how the challenges change every year. We keep running into new growing pains where something that wasn't an issue a year ago suddenly becomes one because growth keeps going up.
Jan David Nose: We're constantly rebuilding parts of our infrastructure to keep up with that growth, and I don't see that stopping soon. As a user, that makes me very excited. With the language and the ecosystem growing at this pace, there are going to be very interesting things coming that I can't predict today.
Jan David Nose: For the project, it also means there are real challenges: financing the infrastructure we need, finding maintainers and contributors, and creating a healthy environment where people can work without burning out. There is a lot of work to be done, but it's an exciting place to be.
Xander Cesari: Well, thank you for all your work keeping those magic Cargo commands I can type into my terminal just working in the background. If there's any call to action from this interview, it's that if you're a company using Rust, maybe think about donating to keep the Infra Team working.
Jan David Nose: We always love new Rust Foundation members. Especially if you're a company, that's one of the best ways to support the work we do. Membership gives us a budget we can use either to fund people who work full-time on the project or to fill gaps in our infrastructure sponsorship where we don't get services for free and have to pay real money.
Jan David Nose: And if you're not a company, we're always looking for people to help out. The Infra Team has a lot of Rust-based bots and other areas where people can contribute relatively easily.
Xander Cesari: Small scoped bots that you can wrap your head around and help out with.
Jan David Nose: Exactly. It is a bit harder on the Infra side because we can't give people access to our cloud infrastructure. There are areas where it's simply not possible to contribute as a volunteer because you can't have access to the production systems. But there is still plenty of other work that can be done.
Jan David Nose: Like every other team in the project, we're a bit short-staffed. So when you're at conferences, come talk to me or Marco. We have work to do.
Xander Cesari: Well, thank you for doing the work that keeps Rust running.
Jan David Nose: I'm happy to.
Xander Cesari: Awesome. Thank you so much.
25 Nov 2025 12:00am GMT
24 Nov 2025
Planet Mozilla
Firefox Nightly: Getting Better Every Day – These Weeks in Firefox: Issue 192
Highlights
- Collapsed tab group hover preview is going live in Firefox 145!
- Nicolas Chevobbe added a feature that collapses unreferenced CSS variable declarations in the Rules view (#1719461)
- Alexandre Poirot [:ochameau] added a setting to enable automatic pretty printing in the Debugger (#1994128)
- Improved performance on pages making heavy usage of CSS variables
- Jared H added a "copy this profile" button to the app menu (bug 1992199)
Friends of the Firefox team
Resolved bugs (excluding employees)
Volunteers that fixed more than one bug
- Khalid AlHaddad
- Kyler Riggs [:kylr]
New contributors (🌟 = first patch)
- Alex Stout
- Khalid AlHaddad
- Jim Gong
- Mason Abbruzzese
- PhuongNam
- Thomas J Faughnan Jr
- Mingyuan Zhao [:MagentaManifold]
Project Updates
Add-ons / Web Extensions
WebExtensions Framework
- Fixed an issue that was preventing dynamic import from resolving moz-extensions ES modules when called from content scripts attached to sandboxed sub frames - Bug 1988419
- Thanks to Yoshi Cheng-Hao Huang from the Spidermonkey Team for looking into and fixing this issue hitting dynamic imports usage from content scripts
Addon Manager & about:addons
- As a follow-up to the work to improve the extensions button panel's empty states, starting from Nightly 146 Firefox Desktop will show a message bar notice in both the extensions button panel and about:addons to highlight to users when Firefox is running in Troubleshoot Mode (also known as Safe Mode) and all add-ons are expected to be disabled, along with a "Learn more" link pointing the user to the SUMO page describing Troubleshoot Mode in more detail - Bug 1992983 / Bug 1994074 / Bug 1727828
DevTools
- gopi made the Rule view format grid-template-areas even when the value is invalid (#1940198)
- Emilio Cobos Álvarez fixed an issue where editing constructed rules in the shadow DOM would make them disappear (#1986702)
- Nicolas Chevobbe fixed a bug that would render erroneous data in the var() tooltip for variables defined in a :host rule on a shared stylesheet (#1995943)
- Julian Descottes improved inspector reload time when a shadow DOM element was selected (#1986704)
- Hubert Boma Manilla fixed an issue where we could have duplicated inline previews when paused in the Debugger (#1994114)
- Nicolas Chevobbe [:nchevobbe] exposed devtools.inspector.showAllAnonymousContent in the settings panel (#1995333)
WebDriver
- Khalid added a dedicated switch_to_parent_frame method to the WebDriver Classic Python client, and renamed the existing switch_frame method to switch_to_frame for consistency with the WebDriver specification.
- Julian updated the network.getData command to return response bodies for requests using the data: scheme.
- Julian fixed a bug where different requests would reuse the same id, which could lead to unexpected behaviours when using commands targeting specific requests (e.g. network.provideResponse, network.getData etc…).
- Sasha updated the reset behaviour of the "emulation.setLocaleOverride" and "emulation.setTimezoneOverride" commands to align with the spec changes. With this update, when calling these commands to reset the override for e.g. a browsing context, only that override will be reset; if there is an override set for a user context related to this browsing context, that override will be applied instead.
Lint, Docs and Workflow
- ESLint
- We are working on rolling out automatically fixable JSDoc rules across the whole tree. The aim is to reduce the number of disabled rules in roll-outs, and make it simpler to enable JSDoc rules in new areas.
- jsdoc/no-bad-blocks has now been enabled.
- JSDoc comments are required to have two stars at the start; this rule raises an issue if a comment looks like it should be a JSDoc comment (e.g. it has an @ symbol) but has only one star.
- jsdoc/multiline-blocks has also been enabled.
- This is being used mainly for layout consistency of multi-line comments, so that the text of the comment does not start on the first line, nor ends on the last line. This also helps with automatically fixing other rules.
- StyleLint
- More rules have been enabled - background-color tokens, space tokens, text-color tokens, box-shadow tokens
- A new rule has been added to prevent using browser/ css files in toolkit/
Migration Improvements
- We've disabled the IE migrator by default now, since IE (the separate browser, not the compatibility mode) stopped being supported by Microsoft in 2022. We will let this ride to release, and then begin the work of removing support entirely.
- To help users migrate their data off of Windows 10, we've revived the Backup effort, and have landed a number of fixes:
- Restores now preserve the user's default profile if it was default pre‑backup, and the prior profile is renamed to old-[profile name] for clarity. This prevents unexpected startup profiles after a restore and makes rollback obvious in Profile Manager.
- The restore file picker (restore modal and about:welcome restore) now opens at the detected backup location, cutting navigation friction and errors.
- The primary CTA label in about:preferences updates correctly ("Manage backup" → "Turn off backup") immediately after enabling, aligning UI state with functionality.
- The "Backup now" button is hidden until backup is enabled, avoiding a dead‑end action and guiding users through the correct setup sequence.
- Enterprise policy prefs were added for fxbackup, enabling admins on Windows/macOS/Linux to enforce/lock backup availability and behavior for managed users.
- Error and warning banners in about:preferences were updated to match spec for clearer state and failure messaging.
- The backup HTML archive support link now points to the correct documentation.
- Copy updates clarify what cookie data is included in backups, improving user expectations and privacy transparency.
New Tab Page
- We successfully train-hopped New Tab version 145.1.20251009.134757 to 100% of the release channel on October 20th!
- New Tab defaults and freshness: DiscoveryStream cache now expires when browser.newtabpage.activity-stream.discoverystream.sections.enabled changes, so toggling layouts updates content immediately. First‑run shows far fewer placeholders, improving perceived load. Startup correctness improves by keying the about:home startup cache on the newtab add-on version.
- Accessibility, keyboard, and RTL: Fixed a broken focus order where Settings jumped ahead of Weather. For Windows High Contrast Mode, story cards no longer disappear on hover and get clearer visuals. RTL locales now get intuitive reversed arrow-key navigation across story cards.
- Weather opt-in and reach: Opt-in flow now surfaces "Enable current location" and adds a "Detect my location" context-menu action; availability expands to more regions, reducing setup friction and increasing coverage.
- Visual polish and correctness: Standardized opacity plus hover/blur effects make story cards feel more responsive; made sure the search bar stays vertically centered while scrolling. Medium refined cards now show longer publisher names without affecting small cards.
- Wallpaper and language fixes: Missing custom wallpaper thumbnails now load reliably, and a friendly error state appears if Remote Settings wallpapers fail. The language switcher no longer lists add-on locales, restoring expected language selection.
Performance Tools (aka Firefox Profiler)
- Marker tooltips now have a 'filter' button to quickly filter the marker chart to similar markers:

- Link to the profile in the screenshot: https://share.firefox.dev/42kDTuf (and after filtering: https://share.firefox.dev/4gQHPsx)
- This is a resource usage profile of an xpcshell test job. To see them, select a test job in treeherder and press 'g'.
Profile Management
- Profiles is rolling out to all non-win10 users in 144, looking healthy so far
- Niklas refactored the BackupService to support using it to copy profiles (bug 1992203)
- Jared H added per-profile desktop shortcuts on Windows (bug 1958955), available via a toggle on the about:editprofile page
- Dave fixed an intermittent test crash in debug builds (bug 1994849) caused by a race between deleting a directory and attempting to open a lock file. nsProfileLock::LockWithFcntl now returns a warning instead of an error in this case.
Search and Navigation
- New Features
- We are working on enabling better search suggestions in the address bar (link to blog post).
- Mandy has rolled out Perplexity as a new engine to all users
- Google Lens is being rolled out to users in 144 with additional in-product demoing.
- Address Bar
- Daisuke has implemented a prototype for flight status suggestions @ 1990951 + 1994317
- Dale has been working on enabling the unified trust panel @ 1992940 + 1979713
- Dale introduced Option + Up / Down as a keyboard shortcut to open the unified search panel @ 1962200
- Moritz removed the code for "Add a keyword for this search" as it was deprecated functionality @ 1995002
- Search
- Mandy and Drew have been working on releasing the visual search + messaging @ 1995645
Storybook/Reusable Components/Acorn Design System
- <moz-message-bar> now supports arbitrary content with slot="message" elements
- Ideally this is still something short, like a message, as opposed to inputs, etc.
- <moz-message-bar><span slot="message" data-l10n-id="my-message"><a data-l10n-name="link"></a></span></moz-message-bar>
- Note: if you're using Lit, @click listeners etc. set on Fluent elements (data-l10n-name) won't work; you'll need to attach them to the data-l10n-id element or another parent
24 Nov 2025 8:26pm GMT
21 Nov 2025
Planet Mozilla
Niko Matsakis: Move Expressions
This post explores another proposal in the space of ergonomic ref-counting that I am calling move expressions. To my mind, these are an alternative to explicit capture clauses, one that addresses many (but not all) of the goals from that design with improved ergonomics and readability.
TL;DR
The idea itself is simple: within a closure (or future), we add the option to write move($expr). This is a value expression ("rvalue") that desugars into a temporary value that is moved into the closure. So
|| something(&move($expr))
is roughly equivalent to something like:
{
let tmp = $expr;
|| something(&{tmp})
}
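The desugaring above can be written by hand in today's Rust. Here is a minimal runnable sketch; the Arc-wrapped data and the make_closure helper are illustrative, not part of the proposal:

```rust
use std::sync::Arc;

// Hand-written desugaring of the proposed `|| something(&move(data.clone()))`:
// evaluate the expression once, up front, then move only the temporary in.
fn make_closure(data: &Arc<Vec<i32>>) -> impl Fn() -> usize {
    let tmp = data.clone(); // evaluated now, at closure-creation time
    move || tmp.len()       // only `tmp` is moved into the closure
}

fn main() {
    let data = Arc::new(vec![1, 2, 3]);
    let closure = make_closure(&data);
    assert_eq!(closure(), 3);
    // `data` is still usable here: only the clone was moved.
    assert_eq!(data.len(), 3);
}
```

The key property is that the clone happens exactly once, before the closure exists, which is what the proposed move($expr) syntax would make implicit.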
How it would look in practice
Let's go back to one of our running examples, the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks. As a reminder, this is how the code looks today - note the let _some_value = ... lines for dealing with captures:
// task: listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
do_something_else_with(_some_a, _some_b, _some_c)
});
Under this proposal it would look something like this:
tokio::task::spawn(async {
do_something_else_with(
move(self.some_a.clone()),
move(self.some_b.clone()),
move(self.some_c.clone()),
)
});
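For comparison, here is a hand-rolled version of this pattern that compiles today, with std::thread standing in for tokio::task::spawn and hypothetical Arc-typed fields on a Listener struct:

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical struct mirroring the post's `self.some_a` / `self.some_b`.
struct Listener {
    some_a: Arc<String>,
    some_b: Arc<String>,
}

impl Listener {
    fn spawn(&self) -> thread::JoinHandle<usize> {
        // Today's equivalent of the proposed `move(self.some_a.clone())`:
        // hoist the clones so the `move` closure owns them.
        let some_a = self.some_a.clone();
        let some_b = self.some_b.clone();
        thread::spawn(move || some_a.len() + some_b.len())
    }
}

fn main() {
    let l = Listener {
        some_a: Arc::new("dns".to_string()),
        some_b: Arc::new("listener".to_string()),
    };
    assert_eq!(l.spawn().join().unwrap(), 11);
}
```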
There are times when you would want multiple clones. For example, if you want to move something into a FnMut closure that will then give away a copy on each call, it might look like
data_source_iter
.inspect(|item| {
inspect_item(item, move(tx.clone()).clone())
// ---------- -------
// | |
// move a clone |
// into the closure |
// |
// clone the clone
// on each iteration
})
.collect();
// some code that uses `tx` later...
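The clone-the-clone pattern can also be written by hand today. A sketch with an mpsc channel; the send_all helper and the item values are illustrative:

```rust
use std::sync::mpsc;

// Today's hand-written equivalent of `move(tx.clone()).clone()`:
// move one clone of the sender into the closure, then clone that
// clone again on each call.
fn send_all(items: &[i32], tx: &mpsc::Sender<i32>) {
    let tx2 = tx.clone(); // the clone moved into the closure
    items.iter().for_each(move |item| {
        // clone the clone on each iteration
        tx2.clone().send(*item).unwrap();
    });
}

fn main() {
    let (tx, rx) = mpsc::channel();
    send_all(&[1, 2, 3], &tx);
    // `tx` is still usable here; drop it so the receiver ends.
    drop(tx);
    let got: Vec<i32> = rx.iter().collect();
    assert_eq!(got, vec![1, 2, 3]);
}
```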
Credit for this idea
This idea is not mine. It's been floated a number of times. The first time I remember hearing it was at the RustConf Unconf, but I feel like it's come up before that. Most recently it was proposed by Zachary Harrold on Zulip, who has also created a prototype called soupa. Zachary's proposal, like earlier proposals I've heard, used the super keyword. Later on @simulacrum proposed using move, which to me is a major improvement, and that's the version I ran with here.
This proposal makes closures more "continuous"
The reason that I love the move variant of this proposal is that it makes closures more "continuous" and exposes their underlying model a bit more clearly. With this design, I would start by explaining closures with move expressions and just teach move closures at the end, as a convenient default:
A Rust closure captures the places you use in the "minimal way that it can" - so || vec.len() will capture a shared reference to the vec, || vec.push(22) will capture a mutable reference, and || drop(vec) will take ownership of the vector.
You can use move expressions to control exactly what is captured: so || move(vec).push(22) will move the vec into the closure. A common pattern when you want to be fully explicit is to list all captures at the top of the closure, like so:
|| {
    let vec = move(input.vec);       // take full ownership of vec
    let data = move(&cx.data);       // take a reference to data
    let output_tx = move(output_tx); // take ownership of the output channel
    process(&vec, &mut output_tx, data)
}
As a shorthand, you can write move || at the top of the closure, which will change the default so that the closure takes ownership of every captured variable. You can still mix-and-match with move expressions to get more control. So the previous closure might be written more concisely like so:
move || {
    process(&input.vec, &mut output_tx, move(&cx.data))
    // ---------  ---------  --------
    //     |          |          |
    //     |          |    closure still
    //     |          |    captures a ref
    //     |          |    `&cx.data`
    //     |          |
    // because of the `move` keyword on the closure,
    // these two are captured "by move"
}
This proposal makes move "fit in" for me
It's a bit ironic that I like this, because it's doubling down on part of Rust's design that I was recently complaining about. In my earlier post on Explicit Capture Clauses I wrote that:
To be honest, I don't like the choice of move because it's so operational. I think if I could go back, I would try to refashion our closures around two concepts:
- Attached closures (what we now call ||) would always be tied to the enclosing stack frame. They'd always have a lifetime even if they don't capture anything.
- Detached closures (what we now call move ||) would capture by-value, like move today.
I think this would help to build up the intuition of "use detach || if you are going to return the closure from the current stack frame and use || otherwise".
move expressions are, I think, moving in the opposite direction. Rather than talking about attached and detached, they bring us to a more unified notion of closures, one where you don't have "ref closures" and "move closures" - you just have closures that sometimes capture moves, and a "move" closure is just a shorthand for using move expressions everywhere. This is in fact how closures work in the compiler under the hood, and I think it's quite elegant.
Why not suffix?
One question is whether a move expression should be a prefix or a postfix operator. So e.g.
|| something(&$expr.move)
instead of &move($expr).
My feeling is that it's not a good fit for a postfix operator because it doesn't just take the final value of the expression and do something with it; it actually impacts when the entire expression is evaluated. Consider this example:
|| process(foo(bar()).move)
When does bar() get called? If you think about it, it has to be closure creation time, but it's not very "obvious".
We reached a similar conclusion when we were considering .unsafe operators. I think there is a rule of thumb that things which delineate a "scope" of code ought to be prefix - though I suspect unsafe(expr) might actually be nice, and not just unsafe { expr }.
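The creation-time evaluation that makes the postfix form surprising can be demonstrated with the hand-desugared form in today's Rust. In this sketch a logging closure stands in for the bar() from the example above:

```rust
fn main() {
    let mut log = Vec::new();
    let mut bar = || {
        log.push("bar called"); // record when evaluation happens
        42
    };

    // Hand-desugaring of `|| process(foo(bar()).move)`: the whole
    // `move`-marked expression runs at closure-creation time.
    let tmp = bar(); // `bar()` runs here, *now*
    let closure = move || tmp + 1;

    assert_eq!(closure(), 43);
    assert_eq!(log, vec!["bar called"]); // evaluated once, before any call
}
```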
Edit: I added this section after-the-fact in response to questions.
Conclusion
I'm going to wrap up this post here. To be honest, what this design really has going for it, above anything else, is its simplicity and the way it generalizes Rust's existing design. I love that. To me, it joins the set of "yep, we should clearly do that" pieces in this puzzle:
- Add a Share trait (I've gone back to preferring the name share 😁)
- Add move expressions
These both seem like solid steps forward. I am not yet persuaded that they get us all the way to the goal that I articulated in an earlier post:
"low-level enough for a Kernel, usable enough for a GUI"
but they are moving in the right direction.
21 Nov 2025 10:45am GMT
The Servo Blog: Servo Sponsorship Tiers
The Servo project is happy to announce the following new sponsorship tiers to encourage more donations to the project:
- Platinum: 10,000 USD/month
- Gold: 5,000 USD/month
- Silver: 1,000 USD/month
- Bronze: 100 USD/month
Organizations and individual sponsors donating in these tiers will be acknowledged on the servo.org homepage with their logo or name. Please note that such donations should come with no obligations to the project, i.e. they should be "no strings attached" donations. All the information about these new tiers is available at the Sponsorship page on this website.
Please contact us at join@servo.org if you are interested in sponsoring the project through one of these tiers.
Use of donations is decided transparently via the Technical Steering Committee's public funding request process, and active proposals are tracked in servo/project#187.
Last, but not least, we're excited to welcome our first bronze sponsor LambdaTest who has recently started donating to the Servo project. Thank you very much!
21 Nov 2025 12:00am GMT
20 Nov 2025
Planet Mozilla
Mozilla Localization (L10N): Localizer spotlight: Robb
About You
My profile in Pontoon is robbp, but I go by Robb. I'm based in Romania and have been contributing to Mozilla localization since 2018 - first between 2018 and 2020, and now again after a break. I work mainly on Firefox (desktop and mobile), Thunderbird, AMO, and SUMO. When I'm not volunteering for open-source projects, I work as a professional translator in Romanian, English, and Italian.
Getting Started
Q: How did you first get interested in localization? Do you remember how you got involved in Mozilla localization?
A: I've used Thunderbird for many years, and I never changed the welcome screen. I'd always see that invitation to contribute somehow.
Back in 2018, I was using freeware only - including Thunderbird - and I started feeling guilty that I wasn't giving back. I tried donating, but online payments seemed shady back then, and I thought a small, one-time donation wouldn't make a difference.
Around the same time, my mother kept asking questions like, "What is this trying to do on my phone? I think they're asking me something, but it's in English!" My generation learned English from TV, Cartoon Network, and software, but when the internet reached the older generation, I realized how big of a problem language barriers could be. I wasn't even aware that there was such a big wave of localizing everything seen on the internet. I was used to having it all in English (operating system, browser, e-mail client, etc.).
After translating for my mom for a year, I thought, why not volunteer to localize, too? Mozilla products were the first choice - Thunderbird was "in my face" all day, all night, telling me to go and localize. I literally just clicked the button on Thunderbird's welcome page - that's where it all started.
I had also tried contributing to other open-source projects, but Mozilla's Pontoon just felt more natural to me. The interface is very close to the CAT tools I am used to.
Your Localization Journey
Q: What do you do professionally? How does that experience influence your Mozilla work and motivate you to contribute to open-source localization?
A: I've been a professional translator since 2012. I work in English, Romanian, and Italian - so yes, I type all the time.
In Pontoon, I treat the work as any professional project. I check for quality, consistency, and tone - just like I would for a client.
I was never a writer. I love translating. That's why I became a translator (professionally). And here… I actually got more feedback here than in my professional translation projects. I think that's why I stayed for so long, that's why I came back.
It is a change of scenery when I don't localize professionally, a long way from the texts I usually deal with. This is where I unwind, where I translate for the joy of translation, where I find my translator freedom.
Q: At what moment did you realize that your work really mattered?
A: When my mom stopped asking me what buttons to click! Now she just uses her phone in Romanian. I can't help but smile when I see that. It makes me think I'm a tiny little part of that confidence she has now.
Community & Collaboration
Q: Since your return, Romanian coverage has risen from below 70% to above 90%. You translate, review suggestions, and comment on other contributors' work. What helps you stay consistent and motivated?
A: I set small goals - I like seeing the completion percentage climb. I celebrate every time I hit a milestone, even if it's just with a cup of coffee.
I didn't realize it was such a big deal until the localization team pointed it out. It's hard to see the bigger picture when you work in isolation. But it's the same motivation that got me started and brought me back - you just need to find what makes you hum.
Q: Do you conduct product testing after you localize the strings or do you test them by being an active user?
A: I'm an active user of both Firefox and Thunderbird - I use them daily and quite intensely. I also have Firefox Nightly installed in Romanian, and I like to explore it to see what's changed and where. But I'll admit, I'm not as thorough as I should be! Our locale manager gives me a heads-up about things to check which helps me stay on top of updates. I need to admit that the testing part is done by the team manager. He is actively monitoring everything that goes on in Pontoon and checks how strings in Pontoon land in the products and to the end users.
Q: How do you collaborate with other contributors and support new ones?
A: I'm more of an independent worker, but in Pontoon, I wanted to use the work that was already done by the "veterans" and see how I could fit in. We had email conversations over terms, their collaboration, their contributions, personal likes and dislikes etc. I think they actually did me a favor with the email conversations, given I am not active on any channels or social media and email was my only way of talking to them.
This year I started leaving comments in Pontoon - it's such an easy way to communicate directly on specific strings. Given I was limited to emails until now, I think comments will help me reach out to other members of the team and start collaborating with them, too.
I keep in touch with the Romanian managers by email or Telegram. One of them helps me with technical terms, he helped get the Firefox project to 100% before the deadline. He contacts me with information on how to use options (I didn't know about) in Pontoon and ideas on wording (after he tests and reviews strings). Collaboration doesn't always mean meetings; sometimes it's quiet cooperation over time.
Mentoring is a big word, but I'm willing for the willing. If someone reaches out, I'll always try to help.
Q: Have you noticed improvements in Pontoon since 2020? How does it compare to professional tools you use, and what features do you wish it had?
A: It's fast - and I love that.
There's no clutter - and that's a huge plus. Some of the "much-tooted" professional tools are overloaded with features and menus that slow you down instead of helping. Pontoon keeps things simple and focused.
I also appreciate being able to see translations in other languages. I often check the French and Italian versions, just to compare terms.
The comments section is another great feature - it makes collaboration quick and to the point, perfect for discussing terms or string-specific questions. Machine translation has also improved a lot across the board, and Pontoon is keeping pace.
As for things that could be better - I'd love to try the pre-translation feature, but I've noticed that some imported strings confirm the wrong suggestion out of several options. That's when a good translation-memory cleanup becomes necessary. It would be helpful if experienced contributors could trim the TM, removing obsolete or outdated terms so new contributors won't accidentally use them.
Pontoon sometimes lags when I move too quickly through strings - like when approving matches or applying term changes across projects. And, unlike professional CAT tools, it doesn't automatically detect repeated strings or propagate translations for identical text. That's a small but noticeable gap compared to professional tools.
Personal Reflections
Q: Professional translators often don't engage in open-source projects because their work is paid elsewhere. What could attract more translators - especially women - to contribute?
A: It's tricky. Translation is a profession, not a hobby, and people need to make a living.
But for me, working on open-source projects is something different - a way to learn new things, use different tools, and have a different mindset. Maybe if more translators saw it as a creative outlet instead of extra work, they'd give it a try.
Involvement in open source is a personal choice. First, one has to hear about it, understand it, and realize that the software they use for free is made by people - then decide they want to be part of that.
I don't think it's a women's thing. Many come and many go. Maybe it's just the thrill at the beginning. Some try, but maybe translation is not for them…
Q: What does contributing to Mozilla mean to you today?
A: It's my way of giving back - and of helping people like my mom, who just want to understand new technology without fear or confusion. That thought makes me smile every time I open Firefox or Thunderbird.
Q: Any final words…
A: I look forward to more blogs featuring fellow contributors and learning and being inspired from their personal stories.
20 Nov 2025 6:46pm GMT
The Mozilla Blog: Rewiring Mozilla: Doing for AI what we did for the web

AI isn't just another tech trend - it's at the heart of most apps, tools and technology we use today. It enables remarkable things: new ways to create and collaborate and communicate. But AI is also letting us down, filling the internet with slop, creating huge social and economic risks - and further concentrating power over how tech works in the hands of a few.
This leaves us with a choice: push the trajectory of AI in a direction that's good for humanity - or just let the slop pour out and the monopolies grow. For Mozilla, the choice is clear. We choose humanity.
Mozilla has always been focused on making the internet a better place. Which is why pushing AI in a different direction than it's currently headed is the core focus of our strategy right now. As AI becomes a fundamental component of everything digital - everything people build on the internet - it's imperative that we step in to shape where it goes.
This post is the first in a series that will lay out Mozilla's evolving strategy to do for AI what we did for the web.
What did we do for the web?
Twenty five years ago, Microsoft Internet Explorer had 95% browser market share - controlling how most people saw the internet, and who could build what and on what terms. Mozilla was born to change this. Firefox challenged Microsoft's monopoly control of the web, and dropped Internet Explorer's market share to 55% in just a few short years.
The result was a very different internet. For most people, the internet was different because Firefox made it faster and richer - and blocked the annoying pop up ads that were pervasive at the time. It did even more for developers: Firefox was a rocketship for the growth of open standards and open source, decentralizing who controlled the technology used to build things on the internet. This ushered in the web 2.0 era.
How did Mozilla do this? By building a non-profit tech company around the values in the Mozilla Manifesto - values like privacy, openness and trust. And by gathering a global community of tens of thousands - a rebel alliance of sorts - to build an alternative to the big tech behemoth of the time.
What does success look like?
This is what we intend to do again: grow an alliance of people, communities, companies who envision - and want to build - a different future for AI.
What does 'different' look like? There are millions of good answers to this question. If your native tongue isn't a major internet language like English or Chinese, it might be AI that has nuance in the language you speak. If you are a developer or a startup, it might be having open source AI building blocks that are affordable, flexible and let you truly own what you create. And if you are, well, anyone, it's probably apps and services that become more useful and delightful as they add AI - and that are genuinely trustworthy and respectful of who we are as humans. The common threads: agency, diversity, choice.
Our task is to create a future for AI that is built around these values. We've started to rewire Mozilla to take on this task - and developed a new strategy focused just as much on AI as it is on the web. At the heart of this strategy is a double bottom line framework - a way to measure our progress against both mission and money:
| Double bottom line | In the world | In Mozilla |
| --- | --- | --- |
| Mission | Empower people with tech that promotes agency and choice - make AI for and about people. | Build AI that puts humanity first. 100% of Mozilla orgs building AI that advances the Mozilla Manifesto. |
| Money | Decentralize the tech industry - and create a tech ecosystem where the 'people part' of AI can flourish. | Radically diversify our revenue. 20% yearly growth in non-search revenue. 3+ companies with $25M+ revenue. |
Mozilla has always had an implicit double bottom line. The strategy we developed this year makes this double bottom line explicit - and ties it back to making AI more open and trustworthy. Over the next three years, all of the organizations in Mozilla's portfolio will design their strategies - and measure their success - against this double bottom line.
What will we build?
As we've rewired Mozilla, we've not only laid out a new strategy - we have also brought in new leaders and expanded our portfolio of responsible tech companies. This puts us on a strong footing. The next step is the most important one: building new things - real technology and products and services that start to carve a different path for AI.
While it is still early days, all of the organizations across Mozilla are well underway with this piece of the puzzle. Each is focused on at least one of the three focus areas in our strategy:
| Open source AI - for developers | Public interest AI - by and for communities | Trusted AI experiences - for everyone |
| --- | --- | --- |
| Focus: grow a decentralized open source AI ecosystem that matches the capabilities of Big AI - and that enables people everywhere to build with AI on their own terms. | Focus: work with communities everywhere to build technology that reflects their vision of how AI and tech should work, especially where the market won't build it for them. | Focus: create trusted AI-driven products that give people new ways to interact with the web - with user choice and openness as guiding principles. |
| Early examples: Mozilla.ai's Choice First Stack, a unified open-source stack that simplifies building and testing modern AI agents. Also, llamafile for local AI. | Early examples: the Mozilla Data Collective, home to Common Voice, which makes it possible to train and tune AI models in 300+ languages, accents and dialects. | Early examples: recent Firefox AI experiments, which will evolve into AI Window in early 2026 - offering an opt-in way to choose models and add AI features in a browser you trust. |
The classic versions of Firefox and Thunderbird are still at the heart of what Mozilla does. These remain our biggest areas of investment - and neither of these products will force you to use AI. At the same time, you will see much more from Mozilla on the AI front in coming years. And, you will see us invest in other double bottom line companies trying to point AI in a better direction.
We need to do this - together
These are the stakes: if we can't push AI in a better direction, the internet - a place where 6 billion of us now spend much of our lives - will get much, much worse. If we want to shape the future of the web and the internet, we also need to shape the future of AI.
For Mozilla, whether or not to tackle this challenge isn't a question anymore. We need to do this. The question is: how? The high level strategy that I've laid out is our answer. It doesn't prescribe all the details - but it does give us a direction to point ourselves and our resources. Of course, we know there is still a HUGE amount to figure out as we build things - and we know that we can't do this alone.
Which means it's incredibly important to figure out: who can we walk beside? Who are our allies? There is a growing community of people who believe the internet is alive and well - and who are dedicating themselves to bending the future of AI to keep it that way. They may not all use the same words or be building exactly the same thing, but a rebel alliance of sorts is gathering. Mozilla sees itself as part of this alliance. Our plan is to work with as many of you as possible. And to help the alliance grow - and win - just as we did in the web era.
You can read the full strategy document here. Next up in this series: Building A LAMP Stack for AI. Followed by: A Double Bottom Line for Tech and The Mozilla Manifesto in the Era of AI.
The post Rewiring Mozilla: Doing for AI what we did for the web appeared first on The Mozilla Blog.
20 Nov 2025 3:00pm GMT
Mozilla Thunderbird: Thunderbird Pro November 2025 Update

Welcome back to the latest update on our progress with Thunderbird Pro, a set of additional subscription services designed to enhance the email client you know, while providing a powerful open-source alternative to many of the big tech offerings available today. These services include Appointment, an easy-to-use scheduling tool; Send, which offers end-to-end encrypted file sharing; and Thundermail, an email service from the Thunderbird team. If you'd like more information on the broader details of each service and the road to getting here, you can read our past series of updates here. Do you want to receive these and other updates and be the first to know when Thunderbird Pro is available? Be sure to sign up for the waitlist.
With that said, here's how progress has shaped up on Thunderbird Pro since the last update.
Current Progress
Thundermail
It took a lot of work to get here, but Thundermail accounts are now in production testing. Internal testing with our own team members has begun, ensuring everything is in place for support and onboarding of the Early Bird wave of users. On the visual side, we've implemented improved designs for the new Thundermail dashboard, where users can view and edit their settings, including adding custom domains and aliases.

The new Thunderbird Pro add-on now features support for Thundermail, which will allow future users who sign up through the add-on to automatically add their Thundermail account in Thunderbird. Work to boost infrastructure and security has also continued, and we've migrated our data hosting from the Americas to Germany and the EU where possible. We've also been improving our email delivery to reduce the chances of Thundermail messages landing in spam folders.

Appointment
The team has been busy with design work, getting Zoom and CalDAV better integrated, and addressing workflow, infrastructure, and bugs. Appointment received a major visual update in the past few months, which is being applied across all of Thunderbird Pro. While some of these updates have already been implemented, there's still lots of remodelling happening and under discussion - all in preparation for the Early Bird beta release.

Send
One of the main focuses for Send has been migrating it from its own add-on to the new Thunderbird Pro add-on, which will make using it in Thunderbird desktop much smoother. Progress continues on improving file safety through better reporting and prevention of illegal uploads. Our security review is now complete, with an external assessor validating all issues scheduled for fixing; once finalized, the report will be shared publicly with our community. Finally, we've refined the Send user experience by optimizing mobile performance, improving upload and download speeds, enhancing the first-time user flow, and much more.
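For readers curious what "end-to-end encrypted file sharing" means in practice: in this model (popularized by Firefox Send, the project Send builds on), the file is encrypted in the client before upload, so the server only ever stores ciphertext and the decryption key is shared out of band with the recipient. A minimal, illustrative sketch in Python using AES-GCM - not Send's actual code:

```python
# Illustrative sketch of client-side (end-to-end) encryption - NOT Send's implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_for_upload(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt a file locally; only nonce + ciphertext ever reach the server."""
    key = AESGCM.generate_key(bit_length=128)  # stays on the client
    nonce = os.urandom(12)                     # 96-bit nonce, standard for GCM
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ciphertext


def decrypt_after_download(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """The recipient decrypts locally with the key shared out of band."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)


key, nonce, blob = encrypt_for_upload(b"quarterly-report.pdf contents")
assert decrypt_after_download(key, nonce, blob) == b"quarterly-report.pdf contents"
```

The design point is that the server never holds the key, so it cannot read (or be compelled to reveal) the file's contents - it only relays opaque ciphertext.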

Bringing it all together
Our new Thunderbird Pro website is now live, marking a major milestone in bringing the project to life. The website offers more details about Thunderbird Pro and serves as the first step for users to sign up, sign in and access their accounts.
Our initial subscription tier, the Early Bird Plan, priced at $9 per month, will include all three services: Thundermail, Send, and Appointment. Email hosting, file storage, and the security behind all of this come at a cost, and Thunderbird Pro will never be funded by selling user data, showing ads, or compromising its independence. This introductory rate directly supports Thunderbird Pro's early development and growth, positioning it for long-term sustainability. We will also be actively listening to your feedback and reviewing the pricing and plans we offer. Once the rough edges are smoothed out and we're ready to open the doors to everyone, we plan to introduce additional tiers to better meet the needs of all our users.
What's next
Thunderbird Pro is now awaiting its initial closed test run, which will include a core group of community contributors. This group will help conduct a broader test and identify critical issues before we gradually open Early Bird access to our waitlist subscribers in waves. While these services will still be considered under active development, with your help this early release will continue to test and refine them for all future users.
Be sure you sign up for our Early Bird waitlist at tb.pro and help us shape the future of Thunderbird Pro. See you soon!
The post Thunderbird Pro November 2025 Update appeared first on The Thunderbird Blog.
20 Nov 2025 12:00pm GMT