13 Oct 2025

Planet Mozilla

Niko Matsakis: We need (at least) ergonomic, explicit handles

Continuing my discussion on Ergonomic RC, I want to focus on the core question: should users have to explicitly invoke handle/clone, or not? This whole "Ergonomic RC" work was originally proposed by Dioxus and their answer is simple: definitely not. For the kind of high-level GUI applications they are building, having to call cx.handle() to clone a ref-counted value is pure noise. For that matter, for a lot of Rust apps, even cloning a string or a vector is no big deal. On the other hand, for a lot of applications, the answer is definitely yes - knowing where handles are created can impact performance, memory usage, and even correctness (don't worry, I'll give examples later in the post). So how do we reconcile this?

This blog argues that we should make it ergonomic to be explicit. This wasn't always my position, but after an impactful conversation with Josh Triplett, I've come around. I think it aligns with what I once called the soul of Rust: we want to be ergonomic, yes, but we want to be ergonomic while giving control1.

I like Tyler Mandry's Clarity of purpose construction, "Great code brings only the important characteristics of your application to your attention". The key point is that there is great code in which cloning and handles are important characteristics, so we need to make that code possible to express nicely. This is particularly true since Rust is one of the very few languages that really targets that kind of low-level, foundational code.

This does not mean we cannot (later) support automatic clones and handles. It's inarguable that this would benefit clarity of purpose for a lot of Rust code. But I think we should focus first on the harder case, the case where explicitness is needed, and get that as nice as we can; then we can circle back and decide whether to also support something automatic. One of the questions for me, in fact, is whether we can get "fully explicit" to be nice enough that we don't really need the automatic version. There are benefits from having "one Rust", where all code follows roughly the same patterns, where those patterns are perfect some of the time, and don't suck too bad2 when they're overkill.

"Rust should not surprise you." (hat tip: Josh Triplett)

I mentioned this blog post resulted from a long conversation with Josh Triplett3. The key phrase that stuck with me from that conversation was: Rust should not surprise you. The way I think of it is like this. Every programmer knows what it's like to have a marathon debugging session - to sit and stare at code for days and think, but… how is this even POSSIBLE? Those kinds of bug hunts can end in a few different ways. Occasionally you uncover a deeply satisfying, subtle bug in your logic. More often, you find that you wrote if foo and not if !foo. And occasionally you find out that your language was doing something that you didn't expect. That some simple-looking code concealed a subtle, complex interaction. People often call this kind of thing a footgun.

Overall, Rust is remarkably good at avoiding footguns4. And part of how we've achieved that is by making sure that things you might need to know are visible - like, explicit in the source. Every time you see a Rust match, you don't have to ask yourself "what cases might be missing here" - the compiler guarantees you they are all there. And when you see a call to a Rust function, you don't have to ask yourself if it is fallible - you'll see a ? if it is.5

Creating a handle can definitely "surprise" you

So I guess the question is: would you ever have to know about a ref-count increment? The tricky part is that the answer here is application dependent. For some low-level applications, definitely yes: an atomic reference count is a measurable cost. To be honest, I would wager that the set of applications where this is true is vanishingly small. And even in those applications, Rust already improves on the state of the art by giving you the ability to choose between Rc and Arc and then proving that you don't mess it up.

But there are other reasons you might want to track reference counts, and those are less easy to dismiss. One of them is memory leaks. Rust, unlike GC'd languages, has deterministic destruction. This is cool, because it means that you can leverage destructors to manage all kinds of resources, as Yehuda wrote about long ago in his classic ode-to-RAII entitled "Rust means never having to close a socket". But although the points where handles are created and destroyed are deterministic, the nature of reference-counting can make it much harder to predict when the underlying resource will actually get freed. And if those increments are not visible in your code, it is that much harder to track them down.

Just recently, I was debugging Symposium, which is written in Swift. Somehow I had two IPCManager instances when I only expected one, and each of them was responding to every IPC message, wreaking havoc. Poking around I found stray references floating around in some surprising places, which was causing the problem. Would this bug have still occurred if I had to write .handle() explicitly to increment the ref count? Definitely, yes. Would it have been easier to find after the fact? Also yes.6

Josh gave me a similar example from the "bytes" crate. A Bytes type is a handle to a slice of some underlying memory buffer. When you clone that handle, it will keep the entire backing buffer around. Sometimes you might prefer to copy your slice out into a separate buffer so that the underlying buffer can be freed. It's not that hard for me to imagine trying to hunt down an errant handle that is keeping some large buffer alive and being very frustrated that I can't see explicitly in the code where those handles are created.
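
To make this concrete, here's a minimal sketch using the bytes crate's Bytes::slice and Bytes::copy_from_slice; the buffer size and variable names are made up for illustration. The cheap, shared handle keeps the whole allocation alive, while the copy lets it go:

```rust
use bytes::Bytes;

fn main() {
    // Imagine a large backing buffer, e.g. the result of a network read.
    let buffer = Bytes::from(vec![0u8; 1024 * 1024]);

    // A cheap handle that shares the allocation -- but it keeps the
    // entire 1 MB buffer alive for as long as it exists.
    let shared: Bytes = buffer.slice(0..16);

    // Copying the bytes out costs a memcpy, but the copy is independent,
    // so the big buffer can be freed once its handles are gone.
    let copied: Bytes = Bytes::copy_from_slice(&buffer[0..16]);

    drop(buffer);
    // The 1 MB allocation is still alive here, kept by `shared`...
    drop(shared);
    // ...and only now does it actually get freed.
    let _ = copied;
}
```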

A similar case occurs with APIs like Arc::get_mut7. get_mut takes an &mut Arc<T> and, if the ref-count is 1, returns an &mut T. This lets you take a shareable handle that you know is not actually being shared and recover uniqueness. This kind of API is not frequently used - but when you need it, it's so nice that it's there.
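
For reference, here's that pattern in a tiny example (this is just the standard library API, nothing specific to the proposal):

```rust
use std::sync::Arc;

fn main() {
    let mut data = Arc::new(vec![1, 2, 3]);

    // Only one handle exists, so we get unique access and can mutate in place.
    Arc::get_mut(&mut data).unwrap().push(4);

    // Creating a second handle means the value is (potentially) shared...
    let other = Arc::clone(&data);
    assert!(Arc::get_mut(&mut data).is_none());

    // ...but once that handle is gone, uniqueness is recoverable again.
    drop(other);
    assert!(Arc::get_mut(&mut data).is_some());
}
```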

"What I love about Rust is its versatility: low to high in one language" (hat tip: Alex Crichton)

Entering the conversation with Josh, I was leaning towards a design where you had some form of automated cloning of handles, plus an allow-by-default lint that crates which don't want that behavior could enable to turn it off. But Josh convinced me that there is a significant class of applications that want handle creation to be ergonomic AND visible (i.e., explicit in the source). Low-level network services and even things like Rust For Linux likely fit this description, but any Rust application that uses get_mut or make_mut might also.

And this reminded me of something Alex Crichton once said to me. Unlike the other quotes here, it wasn't in the context of ergonomic ref-counting, but rather when I was working on my first attempt at the "Rustacean Principles". Alex was saying that he loved how Rust was great for low-level code but also worked well for high-level stuff like CLI tools and simple scripts.

I feel like you can interpret Alex's quote in two ways, depending on what you choose to emphasize. You could hear it as, "It's important that Rust is good for high-level use cases". That is true, and it is what leads us to ask whether we should even make handles visible at all.

But you can also read Alex's quote as, "It's important that there's one language that works well enough for both" - and I think that's true too. The "true Rust gestalt" is when we manage to simultaneously give you the low-level control that grungy code needs but wrapped in a high-level package. This is the promise of zero-cost abstractions, of course, and Rust (in its best moments) delivers.

The "soul of Rust": low-level enough for a kernel, usable enough for a GUI

Let's be honest. High-level GUI programming is not Rust's bread-and-butter, and it never will be; users will never confuse Rust for TypeScript. But then, TypeScript will never be in the Linux kernel.

The goal of Rust is to be a single language that can, by and large, be "good enough" for both extremes. The goal is to make enough low-level details visible for kernel hackers but do so in a way that is usable enough for a GUI. It ain't easy, but it's the job.

This isn't the first time that Josh has pulled me back to this realization. The last time was in the context of async fn in dyn traits, and it led to a blog post talking about the "soul of Rust" and a followup going into greater detail. I think the catchphrase "low-level enough for a Kernel, usable enough for a GUI" kind of captures it.

Conclusion: Explicit handles should be the first step, but it doesn't have to be the final step

There is a slight caveat I want to add. I think another part of Rust's soul is preferring nuance to artificial simplicity ("as simple as possible, but no simpler", as they say). And I think the reality is that there's a huge set of applications that make new handles left-and-right (particularly but not exclusively in async land8) and where explicitly creating new handles is noise, not signal. This is why e.g. Swift9 makes ref-count increments invisible - and they get a big lift out of that!10 I'd wager most Swift users don't even realize that Swift is not garbage-collected11.

But the key thing here is that even if we do add some way to make handle creation automatic, we ALSO want a mode where it is explicit and visible. So we might as well do that one first.

OK, I think I've made this point 3 ways from Sunday now, so I'll stop. The next few blog posts in the series will dive into (at least) two options for how we might make handle creation and closures more ergonomic while retaining explicitness.


  1. I see a potential candidate for a design axiom… rubs hands with an evil-sounding cackle and a look of glee ↩︎

  2. It's an industry term. ↩︎

  3. Actually, by the standards of the conversations Josh and I often have, it wasn't really all that long - an hour at most. ↩︎

  4. Well, at least sync Rust is. I think async Rust has more than its share, particularly around cancellation, but that's a topic for another blog post. ↩︎

  5. Modulo panics, of course - and no surprise that accounting for panics is a major pain point for some Rust users. ↩︎

  6. In this particular case, it was fairly easy for me to find regardless, but this application is very simple. I can definitely imagine ripgrep'ing around a codebase to find all increments being useful, and that would be much harder to do without an explicit signal they are occurring. ↩︎

  7. Or Arc::make_mut, which is one of my favorite APIs. It takes an Arc<_> and gives you back mutable (i.e., unique) access to the internals, always! How is that possible, given that the ref count may not be 1? Answer: if the ref-count is not 1, then it clones it. This is perfect for copy-on-write-style code. So beautiful. 😍 ↩︎

  8. My experience is that, due to language limitations we really should fix, many async constructs force you into 'static bounds which in turn force you into Rc and Arc where you'd otherwise have been able to use &. ↩︎

  9. I've been writing more Swift and digging it. I have to say, I love how they are not afraid to "go big". I admire the ambition I see in designs like SwiftUI and their approach to async. I don't think they bat 100, but it's cool they're swinging for the stands. I want Rust to dare to ask for more! ↩︎

  10. Well, not only that. They also allow class fields to be assigned when aliased which, to avoid stale references and iterator invalidation, means you have to move everything into ref-counted boxes and adopt persistent collections, which in turn comes at a performance cost and makes Swift a harder sell for lower-level foundational systems (though by no means a non-starter, in my opinion). ↩︎

  11. Though I'd also wager that many eventually find themselves scratching their heads about a ref-count cycle. I've not dug into how Swift handles those, but I see references to "weak handles" flying around, so I assume they've not (yet?) adopted a cycle collector. To be clear, you can get a ref-count cycle in Rust too! It's harder to do since we discourage interior mutability, but not that hard. ↩︎

13 Oct 2025 11:39am GMT

10 Oct 2025

Planet Mozilla

Mozilla Thunderbird: State of the Thunder 13: How We Make Our Roadmap

Welcome back to our thirteenth episode of State of the Thunder! Nothing unlucky about this latest installment, as Managing Director Ryan Sipes walks us through how Thunderbird creates its roadmap. Unlike other companies where roadmaps are driven solely by business needs, Thunderbird is working with our community governance and feedback from the wider user community to keep us honest even as we move forward.

Want to find out how to join future State of the Thunders? Be sure to join our Thunderbird planning mailing list for all the details.

Open Source, Open Roadmaps

In other companies, product managers tend to draft roadmaps based on business needs. Publishing that roadmap might be an afterthought, or might not happen at all. Thunderbird, however, is open source, so that's not our process.

A quick history lesson provides some needed context. Eight years ago, Thunderbird was solely a community project driven by a community council. We didn't have a roadmap like we do today. With the earlier loss of funding and support, the project was in triage mode. Since then, thanks to a wonderful user community who has donated their skill, time, and money, we've changed our roadmap process.

The Supernova release (Thunderbird 115) was where we first really focused on making a roadmap with a coherent product vision: a modernized app in performance and appearance. We developed this roadmap with input from the community, even if there was pushback to a UI change.

The 2026 Roadmap Process

At this point, the project has bylaws for the roadmap process, which unites the Thunderbird Council, MZLA staff, and user feedback. Over the past year we've added two new roadmaps: one for the Android app and another for Thunderbird Pro. (Note, iOS doesn't have a roadmap yet. Our current goal is: let's be able to receive email!) But even with these changes and additions, the Mozilla Manifesto is still at the heart of everything we do. We firmly believe that making roadmaps with community governance and feedback from the larger community keeps us honest and helps us make products that genuinely improve people's lives.

Want to see how our 2025-2026 Roadmaps are taking shape? Check out the Desktop Roadmap, as well as the mobile roadmaps for Android and iOS.

Questions

Integrating Community Contributions

In the past, community contributors have picked up "nice to have" issues and developed them alongside us. Other times, people pursue the problems or challenges that affect them the most. Sometimes, either of these scenarios coincides with our roadmap, and we get features like the new drag-and-drop folders!

Needless to say, we love when the community helps us get the product where we hope it will go. Sometimes, we have to pause development because of shifted priorities, and we're trying to get better at updating contributors when these shifts happen on places like the tb-planning and mobile-planning mailing lists.

And these community contributions aren't just code! Testing is a crucial way to help make Thunderbird shine on desktop and mobile. Community suggestions on Mozilla Connect help us dream big, as we discussed in the last two episodes. Reporting bugs, either on Bugzilla for the desktop app or GitHub for the Android app, help us know when things aren't working. We encourage our community to learn more about the Council, and don't be afraid to get in touch with them at council@thunderbird.net.

Telemetry and the Roadmap

While we know there are passionate debates about telemetry in the open source community, we want to mention how respectful telemetry can make Thunderbird better. Our telemetry helps us see which features are important, and which ones just clutter up the UI. We don't collect Personally Identifying Information (PII), and our code is open so you can check us on this. Unlike Outlook, which shares its data with 801 partners, we don't share yours. You can read all about what we collect and how we use it here.

So if you have telemetry turned off, please, we ask you to turn it on, and if it's already on, to keep it on! Especially if you're a Linux user, enabling telemetry helps us have a better gauge of our Linux user base and how to best support you.

Roadmap Categories and Organizing

Should we try to 'bucket' similar items on our roadmap and spread development evenly between them, or should we concentrate on the bucket that needs it most? The answer to this question depends on who you ask! Sometimes we concentrate on a particular area, like the UI work in Supernova and the current UX work in Calendar. Sometimes we're working to pay down tech debt across our code. That effort in reducing tech debt can pave the way for future work, like the current efforts to modernize our database so we can have a true Conversation View and other features. Sometimes roadmaps reveal obstacles you have to overcome, and Ryan thinks we're getting faster at this.

Where to see the roadmaps

The current desktop roadmap is here, while the current Android roadmap is on our GitHub repo. In the future, we're hoping to update where these roadmaps live, how they look, and how you can interact with them. (Ryan is particularly partial to Obsidian's roadmap.) We ultimately want our roadmaps to be storytelling devices, and to keep them more updated to any recent changes.

Current Calls for Involvement

Join us for the last few days of testing EWS mail support! Also, we had a fantastic time with the Ask a Fox replython, and would love if you helped us answer support questions on SUMO.

Watch the Video (also on PeerTube)

Listen to the Podcast

The post State of the Thunder 13: How We Make Our Roadmap appeared first on The Thunderbird Blog.

10 Oct 2025 6:37pm GMT

09 Oct 2025

Planet Mozilla

The Mozilla Blog: Shake to Summarize recognized with special mention in TIME’s Best Inventions of 2025

Illustration featuring a TIME magazine cover titled "Best Inventions of 2025," showing a humanoid robot folding clothes, alongside a smartphone displaying the Firefox logo and a screen reading "Summarizing…" with a dessert recipe below it. (Cover credit: Photography by Spencer Lowell for TIME)

Shake to Summarize has been recognized with a Special Mention in TIME's Best Inventions of 2025.

Each year TIME spotlights a range of new industry-defining innovations across consumer electronics, health tech, apps and beyond. This year, Firefox's Shake to Summarize feature made the list for bringing a smart solution to a modern user problem: information overload.

With a single shake or tap, users on iOS devices can get to the heart of an article in seconds. The cool part? Summaries adapt to what you're reading: recipes pull out the steps for cooking, sports focus on game scores and stats, and news highlights the key takeaways from a story.

"We're thrilled to see Firefox earn a TIME Best Inventions 2025 Special Mention! Our work on Shake to Summarize reflects how Firefox is evolving," said Anthony Enzor-DeMeo, general manager of Firefox. "We're reimagining our browser to fit seamlessly into modern life, helping people browse with less clutter and more focus. The feature is also part of our efforts to give mobile users a cleaner UI and smarter tools that make browsing on the go fast, seamless, and even fun."

Launched in September 2025 and currently available to English-language users in the U.S., Shake to Summarize generates summaries using Apple Intelligence on iPhone 15 Pro or later running iOS 26 or above, and Mozilla-hosted AI for other devices running iOS 16 or above.

"This recognition is a testament to the incredible work of our UX, design, product, and engineering teams who brought this innovation to life, showcasing that Firefox continues to lead with purpose, creativity, and a deep commitment to user-centric design. Big thank you!" added Enzor-DeMeo.

The Firefox team is working on making the feature available to more users, including those on Android. In the meantime, iOS users can already make the most of Shake to Summarize, available in the App Store now.

Take control of your internet

Download Firefox

The post Shake to Summarize recognized with special mention in TIME's Best Inventions of 2025 appeared first on The Mozilla Blog.

09 Oct 2025 2:28pm GMT

08 Oct 2025

Planet Mozilla

Mozilla Thunderbird: State Of The Bird 2024/25

The past twelve months have been another remarkable chapter in Thunderbird's journey. Together, we started expanding Thunderbird beyond its strong desktop roots, introducing it to smartphones and web browsers to make it more accessible to more people. Thunderbird for Android arrived in the fall and has been steadily improving thanks to our growing mobile team, as well as feedback and contributions from our growing global family. A few months later, in December 2024, we celebrated an extraordinary milestone: 20 years of Thunderbird! We also looked toward a sustainable future with the announcement of Thunderbird Pro, with one of its first services, Appointment, already finding an audience in closed beta.

The past year also saw a shift in how Thunderbird evolves. Although we recently released our latest annual ESR update (codenamed Eclipse), the bigger news is that our team built the new Monthly Release channel, which is now the default for most of you. This change means you'll see more frequent updates that make Thunderbird feel fresher, more responsive, and more in tune with your personalized needs.

Before diving into all the details, I want to pause and express our deepest gratitude to the incredible global community that makes all of this possible. To the hundreds of thousands of people who donated financially, the volunteers who contributed their time and expertise, and the beta testers who carefully helped us polish each update: thank you! Thunderbird thrives because of you. Every milestone we celebrate is a shared achievement, and a shining example of the power of community-driven, open source software development.

Team and Product Updates

Desktop and release updates

In December 2024, we celebrated Thunderbird's 20th anniversary. Two decades of proving that email software can be both powerful and principled was not without its ups and downs, but that milestone reaffirmed something we hear so often from our community: Thunderbird continues to matter deeply to people all over the world.

One of the biggest changes this year was the introduction of a new monthly release channel, simply called "Thunderbird Release." Making this shift required an enormous amount of coordination and care across our desktop and release teams. Unlike the long-standing Extended Support Release (ESR), which provides a single major update every July, the new Thunderbird Release delivers monthly updates. This approach means we can bring you useful improvements and new features significantly faster, while keeping the stability and reliability you rely on.

Over the past year, our desktop team focused heavily on introducing changes that people have been asking for. Specifically, changes that make Thunderbird feel more efficient, intuitive, and modern. We improved visual consistency across system themes, gave you more ways to control the appearance of your message lists and how they're organized, modernized notifications with native OS integration and quick actions, and moved closer to full Microsoft Exchange support.

Many of you who switched from the ESR to the new Thunderbird Release channel started seeing these updates as early as April. For those who stuck with the ESR, the annual update, codenamed Eclipse, arrived in July. Thanks to the solid foundation established in those smaller monthly updates, Eclipse enjoyed the smoothest rollout of any annual release in Thunderbird's history.

In-depth details on Desktop development can be found in our monthly Developer Digest updates on our blog.

Thunderbird Mobile

Android

It took longer than we originally anticipated, but Thunderbird has finally arrived as a true smartphone app. The launch of Thunderbird for Android in October 2024 was one of our most exciting steps forward in years. Releasing it took more than two years of active development, beta testing, and invaluable community feedback.

This milestone was made possible by transforming the much-loved K-9 Mail app into something we could proudly call Thunderbird. That process included a full redesign of the interface, including bringing it up to modern design standards, and building an easy way for people to bring their existing Thunderbird desktop accounts directly into the Android app.

We've been encouraged by the enthusiastic response to Thunderbird on Android, but we're also listening closely to your feedback. Our team, together with community contributors, has one very focused goal: to make Thunderbird the best Android email app available.

iOS

We've also seen the overwhelming demand to build a version of Thunderbird for the iOS community. Unlike the Android app, the iOS app is being built from the ground up.

Fortunately, Thunderbird for iOS took some major steps forward this year. We published the initial repository (a central location for open-source project files and code) for the Thunderbird mobile team and contributors to work together, and we're laying the groundwork for public testing.

Our goal for the first public alpha will be to support manual account setup and basic inbox viewing to meet Apple's minimum review standards. These early pre-release versions will be distributed through TestFlight, allowing Thunderbird for iOS to benefit from your real-world feedback.

When we started building Thunderbird for iOS, a core decision was made to use a modern foundation (JMAP) designed for mobile devices. This will allow for, among other advantages, faster mail synchronization and more efficient resource usage. The first pieces of that foundation are already in place, with the basic ability to view folders and messages. We've also set up internal tools that will make regular updates, language translations, and community testing possible.

Thunderbird for iOS is still in the early stages of development, but momentum is strong, our team is growing, and we're confidently moving toward the first community-accessible release.

In depth details on mobile development can be found in our monthly Mobile Progress Report on our blog.

Thundermail and Thunderbird Pro services

It's no secret we've been building additional web services under the Thunderbird Pro name, and 2025 marked a pivotal moment in our vision for a complete, open-source Thunderbird ecosystem.

This year we announced Thundermail, a dedicated email service by Thunderbird. During the past decade, we've seen a large move away from dedicated email clients to products like Gmail, partially because of the robust ecosystem around them. The plan for Thundermail is to eventually offer an alternative webmail solution that protects your privacy, and doesn't use your messages to train AI or show you ads.

Here's what else we've been working on in addition to Thundermail:

During its current beta, Thunderbird Appointment saw great improvements in managing your schedule, with many of the changes focused on reliability and visual polish.

Thunderbird Send, an app for securely sharing encrypted files, also saw forward momentum. Together, these services are steadily moving toward a wider beta launch this fall, and we're excited to see how you'll use them to improve your personal and professional lives.

All of the work going into Thundermail and Thunderbird Pro services is guided by a clear goal: providing you with an ethical alternative to the closed-off "walled gardens" that dominate our digital communication. You shouldn't have to sacrifice your values and give up your personal data to enjoy convenience and powerful features.

In depth details on Thunderbird Pro development can be found in our Thunderbird Pro updates on our blog.

2024 Financial Picture

The generosity of our donors continues to power everything we do, and the importance of these financial contributions cannot be understated. In 2024, the Thunderbird project once again saw continued growth in donations which paved the way for Thundermail and the Thunderbird Pro services you just read about. It also gave us the opportunity to grow our mobile development team, improve our user support outreach, and expand our connections to the community.

Here's a detailed breakdown of our donation revenue in 2024, and why many of these statistics are so meaningful.

Contribution Revenue

In 2024, financial contributions to Thunderbird reached $10.3 million, representing a 19% increase over the previous year. This support came courtesy of more than 539,000 transactions from more than 335,000 individual donors. A healthy 25% of these contributions were given as recurring monthly support.

What makes this so meaningful to us isn't the total revenue, or the scale of the donations. It's how those donations break down. The average contribution was $18.88, with a median of $16.66. Among our recurring donors, the average monthly gift was only $6.25. In fact, 53% of all donations were $20 or less, and 94% were $35 or less. Only 17 contributions were $1,000 or more.

What does this represent when we go beyond the numbers? It means Thunderbird isn't sustained by a handful of wealthy benefactors or corporate sponsors. Rather, it is sustained by a global community of people who believe in what we've built and what we're still building, and they come together to keep it moving forward.

And that global reach continues to inspire us. We received contributions from more than 200 countries. The top ten contributing countries - Germany, the United States, France, the United Kingdom, Switzerland, the Netherlands, Japan, Italy, Austria, and Canada - accounted for 83% of our total revenue.

But products aren't just numbers and code. Products are the people that work on them. To support the ambitions of our expanding roadmap, our team grew significantly in 2024. We added 14 new team members throughout the year, closing out 2024 with 43 full-time staff members. Much of this growth strengthened our mobile development, web services, and desktop + release teams. 80% of our staff focuses on technical work - things like product development and infrastructure - but we also added more roles to actively support users, improve community outreach, and smooth out internal operations.

Expenses

When we talk about how we use financial contributions, we're really talking about investments in our shared values. The majority of our spending goes to personnel; the talented individuals who write code, design interfaces, test features, and support our users. Infrastructure is the next largest expense, followed by administrative costs to keep operations running smoothly.

Below is a breakdown of our 2024 expenses:

Community Snapshot

Contributor & Community Growth

For two decades, Thunderbird has survived and thrived because of its dedicated open-source community. In 2024, we continued using our Bitergia dashboard to give our community a clear view of the project's overall activity across the board. (You can read more about how we collaborated on and use this beneficial tool here.)

This dashboard helps us track participation, identify and celebrate successes, and find areas to improve, which is especially important as we expand the Thunderbird ecosystem with new products and services.

For this report, we've highlighted some of the most notable community metrics and growth milestones from 2024.

For reference, GitHub and Bugzilla measure developer contributions. TopicBox measures activity across our many mailing lists. Pontoon measures the activity from volunteers who help us translate and localize Thunderbird. SUMO (the Mozilla support website) measures the impact of Thunderbird's support volunteers who engage with our users and respond to their varied support questions.

We estimate that in 2024, the total number of people who contributed to Thunderbird - by writing code, answering support questions, providing translations, or other meaningful areas - is more than 20,000.

It's especially encouraging to see the number of translation locales increase from 58 to 70, as Thunderbird continues to find new users around the world.

But there are areas of opportunity, too. For example, making it less complicated for people who want to start contributing to Thunderbird. We've started addressing this by recording two Community Office Hours videos, talking about how to write Knowledge Base articles, and how to effectively answer questions on the Mozilla Support website.

Mozilla Connect is another portal that lets anyone interested in the betterment of Thunderbird suggest ideas, openly discuss them, and vote on them. In 2024, four desktop ideas as well as four of your ideas in our relatively new mobile space were implemented, and we saw more than 500 new thoughtful ideas suggested across mobile and desktop. Our staff and community are watching for your ideas, so keep them coming!

Thank you

As we close out this year's State of the Bird, we want to once again shine a light on the incredible global community of Thunderbird supporters. Whether you've contributed your valuable time, financial donations, or simply shared Thunderbird with colleagues, friends, and family, your support continues to brighten Thunderbird's future.

After all, products aren't just numbers on a chart. Products are the people who create them, support them, improve them, and believe in crucial concepts like privacy, digital wellbeing, and open standards.

We're so very grateful to you.

The post State Of The Bird 2024/25 appeared first on The Thunderbird Blog.

08 Oct 2025 10:02am GMT

Niko Matsakis: SymmACP: extending Zed's ACP to support Composable Agents

This post describes SymmACP - a proposed extension to Zed's Agent Client Protocol that lets you build AI tools like Unix pipes or browser extensions. Want a better TUI? Found some cool slash commands on GitHub? Prefer a different backend? With SymmACP, you can mix and match these pieces and have them all work together without knowing about each other.

This is pretty different from how AI tools work today, where everything is a monolith - if you want to change one piece, you're stuck rebuilding the whole thing from scratch. SymmACP allows you to build out new features and modes of interactions in a layered, interoperable way. This post explains how SymmACP would work by walking through a series of examples.

Right now, SymmACP is just a thought experiment. I've sketched these ideas to the Zed folks, and they seemed interested, but we haven't yet discussed the details described in this post. My plan is to start prototyping in Symposium - if you think the ideas I'm discussing here are exciting, please join the Symposium Zulip and let's talk!

"Composable agents" let you build features independently and then combine them

I'm going to explain the idea of "composable agents" by walking through a series of features. We'll start with a basic CLI agent1 tool - basically a chat loop with access to some MCP servers so that it can read/write files and execute bash commands. Then we'll show how you could add several features on top:

  1. Addressing time-blindness by helping the agent know what time it is.
  2. Injecting context and "personality" to the agent.
  3. Spawning long-running, asynchronous tasks.
  4. A copy of Q CLI's /tangent mode that lets you do a bit of "off the books" work that gets removed from your history later.
  5. Implementing Symposium's interactive walkthroughs, which give the agent a richer vocabulary for communicating with you than just text.
  6. Smarter tool delegation.

The magic trick is that each of these features will be developed as separate repositories. What's more, they could be applied to any base tool you want, so long as it speaks SymmACP. And you could also combine them with different front-ends, such as a TUI, a web front-end, builtin support from Zed or IntelliJ, etc. Pretty neat.

My hope is that if we can centralize on SymmACP, or something like it, then we could move from everybody developing their own bespoke tools to an interoperable ecosystem of ideas that can build off of one another.

let mut SymmACP = ACP

SymmACP begins with ACP, so let's explain what ACP is. ACP is a wonderfully simple protocol that lets you abstract over CLI agents. Imagine if you were using an agentic CLI tool except that, instead of communication over the terminal, the CLI tool communicates with a front-end over JSON-RPC messages, currently sent via stdin/stdout.

flowchart LR
    Editor <-.->|JSON-RPC via stdin/stdout| Agent[CLI Agent]
  

When you type something into the GUI, the editor sends a JSON-RPC message to the agent with what you typed. The agent responds with a stream of messages containing text and images. If the agent decides to invoke a tool, it can request permission by sending a JSON-RPC message back to the editor. And when the agent has completed, it responds to the editor with an "end turn" message that says "I'm ready for you to type something else now".

sequenceDiagram
    participant E as Editor
    participant A as Agent
    participant T as Tool (MCP)
    
    E->>A: prompt("Help me debug this code")
    A->>E: request_permission("Read file main.rs")
    E->>A: permission_granted
    A->>T: read_file("main.rs")
    T->>A: file_contents
    A->>E: text_chunk("I can see the issue...")
    A->>E: text_chunk("The problem is on line 42...")
    A->>E: end_turn
  

Telling the agent what time it is

OK, let's tackle our first feature. If you've used a CLI agent, you may have noticed that they don't know what time it is - or even what year it is. This may sound trivial, but it can lead to some real mistakes. For example, they may not realize that some information is outdated. Or when they do web searches for information, they can search for the wrong thing: I've seen CLI agents search the web for "API updates in 2024" for example, even though it is 2025.

To fix this, many CLI agents will inject some extra text along with your prompt, something like <current-date date="2025-10-08" time="HH:MM:SS"/>. This gives the LLM the context it needs.

So how could we use ACP to build that? The idea is to create a proxy. This proxy would wrap the original ACP server:

flowchart LR
    Editor[Editor/VSCode] <-->|ACP| Proxy[Datetime Proxy] <-->|ACP| Agent[CLI Agent]
  

This proxy will take every "prompt" message it receives and decorate it with the date and time:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("What day is it?")
    P->>A: prompt("<current-date .../> What day is it?")
    A->>P: text_chunk("It is 2025-10-08.")
    P->>E: text_chunk("It is 2025-10-08.")
    A->>P: end_turn
    P->>E: end_turn
  

Simple, right? And of course this can be used with any editor and any ACP-speaking tool.
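
To make the shape of such a proxy concrete, here's a minimal sketch in Rust. It assumes the wrapped agent is a child process speaking newline-delimited JSON-RPC over stdin/stdout, and that prompts arrive as a "session/prompt" method with a params.prompt array of content blocks - those names are illustrative assumptions rather than the exact ACP schema, and some-acp-agent is a hypothetical binary:

```rust
use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

use serde_json::{json, Value};

fn main() -> std::io::Result<()> {
    // Spawn the wrapped agent; we speak ACP to it over its stdin/stdout.
    let mut agent = Command::new("some-acp-agent") // hypothetical binary name
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    let mut agent_in = agent.stdin.take().unwrap();
    let agent_out = agent.stdout.take().unwrap();

    // Agent -> editor: forward everything untouched.
    std::thread::spawn(move || {
        for line in BufReader::new(agent_out).lines().flatten() {
            println!("{line}");
        }
    });

    // Editor -> agent: decorate prompt requests with the current date.
    for line in BufReader::new(std::io::stdin()).lines() {
        let line = line?;
        let mut msg: Value = match serde_json::from_str(&line) {
            Ok(v) => v,
            Err(_) => {
                writeln!(agent_in, "{line}")?;
                continue;
            }
        };
        if msg["method"] == "session/prompt" {
            // Assumed message shape: params.prompt is an array of content blocks.
            let stamp = json!({ "type": "text", "text": current_date_tag() });
            if let Some(blocks) = msg
                .get_mut("params")
                .and_then(|p| p.get_mut("prompt"))
                .and_then(|p| p.as_array_mut())
            {
                blocks.insert(0, stamp);
            }
        }
        writeln!(agent_in, "{msg}")?;
    }
    Ok(())
}

// Stubbed for the sketch; a real proxy would use e.g. the `time` or `chrono` crate.
fn current_date_tag() -> String {
    r#"<current-date date="2025-10-08"/>"#.to_string()
}
```

The same skeleton works for every proxy in this post: intercept the messages you care about, rewrite or act on them, and pass everything else through unchanged.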

Next feature: Injecting "personality" to the agent

Let's look at another feature that basically "falls out" from ACP: injecting personality. Most agents give you the ability to configure "context" in various ways - or what Claude Code calls memory. This is useful, but I and others have noticed that if what you want is to change how Claude "behaves" - i.e., to make it more collaborative - it's not really enough. You really need to kick off the conversation by reinforcing that pattern.

In Symposium, the "yiasou" prompt (also available as "hi", for those of you who don't speak Greek 😛) is meant to be run as the first thing in the conversation. But there's nothing an MCP server can do to ensure that the user kicks off the conversation with /symposium:hi or something similar. Of course, if Symposium were implemented as an ACP Server, we absolutely could do that:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("I'd like to work on my document")
    P->>A: prompt("/symposium:hi")
    A->>P: end_turn
    P->>A: prompt("I'd like to work on my document")
    A->>P: text_chunk("Sure! What document is that?") 
    P->>E: text_chunk("Sure! What document is that?") 
    A->>P: end_turn
    P->>E: end_turn
  

Proxies are a better version of hooks

Some of you may be saying, "hmm, isn't that what hooks are for?" And yes, you could do this with hooks, but there's two problems with that. First, hooks are non-standard, so you have to do it differently for every agent.

The second problem with hooks is that they're fundamentally limited to what the hook designer envisioned you might want. You only get hooks at the places in the workflow that the tool gives you, and you can only control what the tool lets you control. The next feature starts to show what I mean: as far as I know, it cannot readily be implemented with hooks the way I would want it to work.

Next feature: long-running, asynchronous tasks

Let's move on to our next feature, long-running asynchronous tasks. This feature is going to have to go beyond the current capabilities of ACP into the expanded "SymmACP" feature set.

Right now, when the server invokes an MCP tool, it executes in a blocking way. But sometimes the task it is performing might be long and complicated. What you would really like is a way to "start" the task and then go back to working. When the task is complete, you (and the agent) could be notified.

This comes up for me a lot with "deep research". A big part of my workflow is that, when I get stuck on something I don't understand, I deploy a research agent to scour the web for information. Usually what I will do is ask the agent I'm collaborating with to prepare a research prompt summarizing the things we tried, what obstacles we hit, and other details that seem relevant. Then I'll pop over to claude.ai or Gemini Deep Research and paste in the prompt. This will run for 5-10 minutes and generate a markdown report in response. I'll download that and give it to my agent. Very often this lets us solve the problem.2

This research flow works well but it is tedious and requires me to copy-and-paste. What I would ideally want is an MCP tool that does the search for me and, when the results are done, hands them off to the agent so it can start processing immediately. But in the meantime, I'd like to be able to continue working with the agent while we wait. Unfortunately, the protocol for tools provides no mechanism for asynchronous notifications like this, from what I can tell.

SymmACP += tool invocations + unprompted sends

So how would I do it with SymmACP? Well, I would want to extend the ACP protocol as it is today in two ways:

  1. I'd like the ACP proxy to be able to provide tools that the proxy will execute. Today, the agent is responsible for executing all tools; the ACP protocol only comes into play when requesting permission. But it'd be trivial to have MCP tools where, to execute the tool, the agent sends back a message over ACP instead.
  2. I'd like to have a way for the agent to initiate responses to the editor. Right now, the editor always initiates each communication session with a prompt; but, in this case, the agent might want to send messages back unprompted.

In that case, we could implement our Research Proxy like so:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("Why is Rust so great?")
    P->>A: prompt("Why is Rust so great?")
    A->>P: invoke tool("begin_research")
    activate P
    P->>A: ok
    A->>P: "I'm looking into it!"
    P->>E: "I'm looking into it!"
    A->>P: end_turn
    P->>E: end_turn

    Note over E,A: Time passes (5-10 minutes) and the user keeps working...
    Note over P: Research completes in background
    
    P->>A: <research-complete/>
    deactivate P
    A->>P: "Research says Rust is fast"
    P->>E: "Research says Rust is fast"
    A->>P: end_turn
    P->>E: end_turn
  

What's cool about this is that the proxy encapsulates the entire flow: it knows how to do the research, and it manages notifying the various participants when the research completes. (Also, this leans on one detail I left out, which is that )

Next feature: tangent mode

Let's explore our next feature, Q CLI's /tangent mode. This feature is interesting because it's a simple (but useful!) example of history editing. The way /tangent works is that, when you first type /tangent, Q CLI saves your current state. You can then continue as normal but when you next type /tangent, your state is restored to where you were. This, as the name suggests, lets you explore a side conversation without polluting your main context.

The basic idea for supporting tangent in SymmACP is that the proxy is going to (a) intercept the tangent prompt and remember where it began; (b) allow the conversation to continue as normal; and then (c) when it's time to end the tangent, create a new session and replay the history up until the point of the tangent3.
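
Here's a rough sketch of the bookkeeping such a proxy might do. The types and names are mine, purely to illustrate the shape; a real proxy would record full ACP content blocks rather than plain strings:

```rust
/// State kept by a hypothetical "tangent" proxy.
struct TangentProxy {
    /// Transcript of (prompt, response) pairs forwarded so far.
    history: Vec<(String, String)>,
    /// Index into `history` where the current tangent began, if any.
    tangent_start: Option<usize>,
}

enum Action {
    /// Pass the prompt through to the agent unchanged.
    Forward,
    /// Swallow the prompt and end the turn (we just entered a tangent).
    EndTurn,
    /// Start a fresh session and replay this history into it.
    ReplayInNewSession(Vec<(String, String)>),
}

impl TangentProxy {
    fn on_prompt(&mut self, prompt: &str) -> Action {
        if prompt.trim() == "/tangent" {
            match self.tangent_start.take() {
                // Entering a tangent: remember where the history stood.
                None => {
                    self.tangent_start = Some(self.history.len());
                    Action::EndTurn
                }
                // Leaving a tangent: drop everything said since then and
                // rebuild a session containing only the pre-tangent history.
                Some(start) => {
                    self.history.truncate(start);
                    Action::ReplayInNewSession(self.history.clone())
                }
            }
        } else {
            Action::Forward
        }
    }
}
```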

SymmACP += replay

You can almost implement "tangent" in ACP as it is, but not quite. In ACP, the agent always owns the session history. The editor can create a new session or load an older one; when loading an older one, the agent "replays" the events so that the editor can reconstruct the GUI. But there is no way for the editor to "replay" or construct a session to the agent. Instead, the editor can only send prompts, which will cause the agent to reply. In this case, what we want is to be able to say "create a new chat in which I said this and you responded that" so that we can set up the initial state. This way we could easily create a new session that contains the messages from the old one.

So here's how this would work:

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    E->>P: prompt("Hi there!")
    P->>A: prompt("Hi there!")

    Note over E,A: Conversation proceeds
    
    E->>P: prompt("/tangent")
    Note over P: Proxy notes conversation state
    P->>E: end_turn
    E->>P: prompt("btw, ...")
    P->>A: prompt("btw, ...")

    Note over E,A: Conversation proceeds
    
    E->>P: prompt("/tangent")
    
    P->>A: new_session
    P->>A: prompt("Hi there!")    
    Note over P,A: ...Proxy replays conversation...
  

Next feature: interactive walkthroughs

One of the nicer features of Symposium is the ability to do interactive walkthroughs. These consist of an HTML sidebar as well as inline comments in the code:

Walkthrough screenshot

Right now, this is implemented by a kind of hacky dance:

It works, but it's a giant Rube Goldberg machine.

SymmACP += Enriched conversation history

With SymmACP, we would structure the passthrough mechanism as a proxy. Just as today, it would provide an MCP tool to the agent to receive the walkthrough markdown. It would then convert that into the HTML to display on the side along with the various comments to embed in the code. But this is where things are different.

Instead of sending that content over IPC, what I would want to do is to make it possible for proxies to deliver extra information along with the chat. This is relatively easy to do in ACP as is, since it provides for various capabilities, but I think I'd want to go one step further.

I would have a proxy layer that manages walkthroughs. As we saw before, it would provide a tool. But there'd be one additional thing, which is that, beyond just a chat history, it would be able to convey additional state. I think the basic conversation structure is like:

  * a session (conversation), which contains…
  * a sequence of turns, each of which contains…
  * prompts from the user and responses from the agent (text chunks, tool uses, and so on);

but I think it'd be useful to (a) be able to attach metadata to any of those things, e.g., to add extra context about the conversation or about a specific turn (or even a specific prompt), but also additional kinds of events. For example, tool approvals are an event. And presenting a walkthrough and adding annotations are an event too.

The way I imagine it, one of the core things in SymmACP would be the ability to serialize your state to JSON. You'd be able to ask a SymmACP participant to summarize a session. They would in turn ask any delegates to summarize and then add their own metadata along the way. You could also send the request in the other direction - e.g., the agent might present its state to the editor and ask it to augment it.
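
As a sketch of what that serialized form might look like, here's one possible shape in Rust (all of these type and field names are assumptions on my part, not a finalized SymmACP schema):

```rust
use std::collections::BTreeMap;

use serde::{Deserialize, Serialize};
use serde_json::Value;

#[derive(Serialize, Deserialize)]
struct Session {
    turns: Vec<Turn>,
    /// Open-ended state attached by proxies; a walkthrough layer might
    /// store keys like "walkthrough.html" or "walkthrough.comments" here.
    #[serde(default)]
    metadata: BTreeMap<String, Value>,
}

#[derive(Serialize, Deserialize)]
struct Turn {
    items: Vec<Item>,
    /// Per-turn metadata, e.g. extra context injected by a proxy.
    #[serde(default)]
    metadata: BTreeMap<String, Value>,
}

#[derive(Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum Item {
    Prompt { text: String },
    Response { text: String },
    ToolCall { name: String, approved: bool },
    /// "Additional kinds of events", like presenting a walkthrough.
    Event { kind: String, payload: Value },
}
```

A proxy that doesn't understand some metadata key or event kind would simply preserve it and pass it along, which is what lets layers compose without knowing about each other.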

Enriched history would let walkthroughs be extra metadata

This would mean a walkthrough proxy could add extra metadata into the chat transcript like "the current walkthrough" and "the current comments that are in place". Then the editor would either know about that metadata or not. If it doesn't, you wouldn't see it in your chat. Oh well - or perhaps we do something HTML-like, where there's a way to "degrade gracefully" (e.g., the walkthrough could be presented as a regular "response" but with some metadata that, if you know to look, tells you to interpret it differently). But if the editor DOES know about the metadata, it interprets it specially, throwing the walkthrough up in a panel and adding the comments into the code.

With enriched histories, I think we can even say that in SymmACP, the ability to load, save, and persist sessions itself becomes an extension, something that can be implemented by a proxy; the base protocol only needs the ability to conduct and serialize a conversation.

Final feature: Smarter tool delegation.

Let me sketch out another feature that I've been noodling on that I think would be pretty cool. It's well known that there's a problem that LLMs get confused when there are too many MCP tools available. They get distracted. And that's sensible, so would I, if I were given a phonebook-size list of possible things I could do and asked to figure something out. I'd probably just ignore it.

But how do humans deal with this? Well, we don't take the whole phonebook - we get a shorter list of categories of options and then we drill down. So I go to the File Menu and then I get a list of options, not a flat list of commands.

I wanted to try building an MCP tool for IDE capabilities that was similar. There's a bajillion set of things that a modern IDE can "do". It can find references. It can find definitions. It can get type hints. It can do renames. It can extract methods. In fact, the list is even open-ended, since extensions can provide their own commands. I don't know what all those things are but I have a sense for the kinds of things an IDE can do - and I suspect models do too.

What if you gave them a single tool, "IDE operation", and they could use plain English to describe what they want? e.g., ide_operation("find definition for the ProxyHandler that refers to HTTP proxies"). Hmm, this is sounding a lot like a delegate, or a sub-agent. Because now you need to use a second LLM to interpret that request - you probably want to do something like, give it a list of suggested IDE capabilities and the ability to find out full details, and ask it to come up with a plan (or maybe directly execute the tools) to find the answer.

As it happens, MCP has a capability to enable tools to do this - it's called (somewhat oddly, in my opinion) "sampling". It allows for "callbacks" from the MCP tool to the LLM. But literally nobody implements it, from what I can tell.4 But sampling is kind of limited anyway. With SymmACP, I think you could do much more interesting things.

SymmACP.contains(simultaneous_sessions)

The key is that ACP already permits a single agent to "serve up" many simultaneous sessions. So that means that if I have a proxy, perhaps one supplying an MCP tool definition, I could use it to start fresh sessions - combine that with the "history replay" capability I mentioned above, and the tool can control exactly what context to bring over into that session to start from, as well, which is very cool (that's a challenge for MCP servers today, they don't get access to the conversation history).

sequenceDiagram
    participant E as Editor
    participant P as Proxy
    participant A as Agent
    
    A->>P: ide_operation("...")
    activate P
    P->>A: new_session
    activate P
    activate A
    P->>A: prompt("Using these primitive operations, suggest a way to do '...'")
    A->>P: ...
    A->>P: end_turn
    deactivate P
    deactivate A
    Note over P: performs the plan
    P->>A: result from tool
    deactivate P
  

Conclusion

Ok, this post sketched a variant on ACP that I call SymmACP. SymmACP extends ACP with:

  * tools that are provided by a proxy, which executes them itself;
  * unprompted sends, so the agent (or a proxy) can initiate messages to the editor;
  * the ability to replay or construct a session's history, not just load one;
  * enriched, serializable conversation histories that can carry extra metadata and events.

Most of these are modest extensions to ACP, in my opinion, and easily doable in a backwards-compatible fashion just by adding new capabilities. But together they unlock the ability for anyone to craft extensions to agents and deploy them in a composable way. I am super excited about this. This is exactly what I wanted Symposium to be all about.

It's worth noting the old adage: "with great power, comes great responsibility". These proxies and ACP layers I've been talking about are really like IDE extensions. They can effectively do anything you could do. There are obvious security concerns. Though I think that approaches like Microsoft's Wassette are key here - it'd be awesome to have a "capability-based" notion of what a "proxy layer" is, where everything compiles to WASM, and where users can tune what a given proxy can actually do.

I plan to start sketching a plan to drive this work in Symposium and elsewhere. My goal is to have a completely open and interoperable client, one that can be based on any agent (including local ones) and where you can pick and choose which parts you want to use. I expect to build out lots of custom functionality to support Rust development (e.g., explaining and diagnosing trait errors using the new trait solver is high on my list…and macro errors…) but also to have other features like walkthroughs, collaborative interaction style, etc that are all language independent - and I'd love to see language-focused features for other languages, especially Python and TypeScript (because "the new trifecta") and Swift and Kotlin (because mobile). If that vision excites you, come join the Symposium Zulip and let's chat!

Appendix: A guide to the agent protocols I'm aware of

One question I've gotten when discussing this is how it compares to the host of other protocols out there. Let me give a brief overview of the related work and how I understand its pros and cons:


  1. Everybody uses agents in various ways. I like Simon Willison's "agents are models using tools in a loop" definition; I feel that an "agentic CLI tool" fits that definition, it's just that part of the loop is reading input from the user. I think "fully autonomous" agents are a subset of all agents - many agent processes interact with the outside world via tools etc. From a certain POV, you can view the agent "ending the turn" as invoking a tool for "gimme the next prompt". ↩︎

  2. Research reports are a major part of how I avoid hallucination. You can see an example of one such report I commissioned on the details of the Language Server Protocol here; if we were about to embark on something that required detailed knowledge of LSP, I would ask the agent to read that report first. ↩︎

  3. Alternatively: clear the session history and rebuild it, but I kind of prefer the functional view of the world, where a given session never changes. ↩︎

  4. I started an implementation for Q CLI but got distracted - and, for reasons that should be obvious, I've started to lose interest. ↩︎

  5. Yes, you read that right. There is another ACP. Just a mite confusing when you google search. =) ↩︎

08 Oct 2025 8:54am GMT

This Week In Rust: This Week in Rust 620

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is tokio-netem, a toolbox of Tokio AsyncRead/AsyncWrite adapters to emulate latency, throttling, slicing, termination, forced shutdown, data injection and data corruption.

Thanks to Viacheslav Biriukov for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

398 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Largely a positive week. A big win came from avoiding unnecessary work for debug logging in #147293, and another for rustdoc from an optimized span representation for the highlighter in #147189. Lots of noisy results otherwise.

Triage done by @panstromek. Revision range: 8d72d3e1..1a3cdd34

Summary:

(instructions:u)              mean     range              count
Regressions ❌ (primary)       0.5%     [0.2%, 2.0%]       10
Regressions ❌ (secondary)     0.4%     [0.0%, 0.8%]       50
Improvements ✅ (primary)     -1.3%     [-5.3%, -0.2%]     147
Improvements ✅ (secondary)   -1.3%     [-12.7%, -0.1%]    111
All ❌✅ (primary)             -1.2%     [-5.3%, 2.0%]      157

6 Regressions, 3 Improvements, 6 Mixed; 8 of them in rollups. 40 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Cargo

No Items entered Final Comment Period this week for Rust RFCs, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-10-08 - 2025-11-05 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

For me personally, the best thing about becoming successful at anything is you gain the ability to lift others up.

- Nell Shamrell-Harrington at RustConf (youtube video link, the rest of the talk is great, too!)

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

08 Oct 2025 4:00am GMT

07 Oct 2025

feedPlanet Mozilla

The Mozilla Blog: Firefox profiles: Private, focused spaces for all the ways you browse

Every part of your life has its own rhythm: work, school, family, personal projects. Beginning Oct. 14, we're rolling out profile management in Firefox so you can keep them separate and create distinct spaces - each with its own bookmarks, logins, history, extensions and themes. It's an easy way to stay organized, focused and private.

Firefox Profiles feature shown with an illustration of three foxes and a setup screen for creating and customizing browser profiles.

Spaces that lighten your load

Profiles don't just keep you organized; they also reduce data mixing and ease cognitive load. By keeping your different roles online neatly separate, you spend less mental energy juggling contexts and avoid awkward surprises (like your weekend plans popping up in a work presentation). And, like everything in Firefox, profiles are built on our strong privacy foundation.

We also worked with disabled people to make profiles not only compliant, but genuinely delightful to use for everyone. That collaboration shaped everything from the visual design (avatars, colors, naming) to the way profiles keep sensitive data (like medical information) private. It's an example of how designing for accessibility benefits all of us.

What makes profiles in Firefox different

Other browsers offer profiles mainly for convenience. Firefox goes further by making them part of our mission to put you in control of your online life.

Firefox Profile Manager showing Work and Personal profiles, with an option to create a new one, on a desktop with a forest background.

Profiles in Firefox aren't just a way to clean up your tabs. They're a way to set boundaries, protect your information and make the internet a little calmer. Because when your browser respects your focus and your privacy, it frees you up to do what actually matters - work, connect, create, explore - on your own terms.

Take control of your internet

Download Firefox

The post Firefox profiles: Private, focused spaces for all the ways you browse appeared first on The Mozilla Blog.

07 Oct 2025 2:11pm GMT

Niko Matsakis: The Handle trait

There's been a lot of discussion lately around ergonomic ref-counting. We had a lang-team design meeting and then a quite impactful discussion at the RustConf Unconf. I've been working for weeks on a follow-up post but today I realized what should've been obvious from the start - that if I'm taking that long to write a post, it means the post is too damned long. So I'm going to work through a series of smaller posts focused on individual takeaways and thoughts. And for the first one, I want to (a) bring back some of the context and (b) talk about an interesting question, what should we call the trait. My proposal, as the title suggests, is Handle - but I get ahead of myself.

The story thus far

For those of you who haven't been following, there's been an ongoing discussion about how best to have ergonomic ref counting:

This blog post is about "the trait"

The focus of this blog post is on one particular question: what should we call "The Trait". In virtually every design, there has been some kind of trait that is meant to identify something. But it's been hard to get a handle1 on what precisely that something is. What is this trait for and what types should implement it? Some things are clear: whatever The Trait is, Rc<T> and Arc<T> should implement it, for example, but that's about it.

My original proposal was for a trait named Claim that was meant to convey a "lightweight clone" - but really the trait was meant to replace Copy as the definition of which clones ought to be explicit2. Jonathan Kelley had a similar proposal but called it Capture. In RFC #3680 the proposal was to call the trait Use.

The details and intent varied, but all of these attempts had one thing in common: they were very operational. That is, the trait was always being defined in terms of what it does (or doesn't do) but not why it does it. And that I think will always be a weak grounding for a trait like this, prone to confusion and different interpretations. For example, what is a "lightweight" clone? Is it O(1)? But what about things that are O(1) with very high probability? And of course, O(1) doesn't mean cheap - it might copy 22GB of data every call. That's O(1).

What you want is a trait where it's fairly clear when it should and should not be implemented, not based on taste or subjective criteria. And Claim and friends did not meet the bar: in the Unconf, several new Rust users spoke up and said they found it very hard, based on my explanations, to judge whether their types ought to implement The Trait (whatever we call it). That has also been a persistent theme from the RFC and elsewhere.

"Shouldn't we call it share?" (hat tip: Jack Huey)

But really there is a semantic underpinning here, and it was Jack Huey who first suggested it. Consider this question. What are the differences between cloning a Mutex<Vec<u32>> and an Arc<Mutex<Vec<u32>>>?

One difference, of course, is cost. Cloning the Mutex<Vec<u32>> will deep-clone the vector; cloning the Arc will just increment a reference count.

But the more important difference is what I call "entanglement". When you clone the Arc, you don't get a new value - you get back a second handle to the same value.3

Entanglement changes the meaning of the program

Knowing which values are "entangled" is key to understanding what your program does. A big part of how the borrow checker4 achieves reliability is by reducing "entanglement", since entangled state is a relative pain to work with in Rust.

Consider the following code. What will be the value of l_before and l_after?

let l_before = v1.len();
let v2 = v1.clone();
v2.push(new_value);
let l_after = v1.len();

The answer, of course, is "depends on the type of v1". If v1 is a Vec, then l_after == l_before. But if v1 is, say, a struct like this one:

use std::sync::{Arc, Mutex};

struct SharedVec<T> {
    data: Arc<Mutex<Vec<T>>>
}

// Cloning a `SharedVec` clones the `Arc`, not the vector: the clones stay entangled.
impl<T> Clone for SharedVec<T> {
    fn clone(&self) -> Self {
        SharedVec { data: Arc::clone(&self.data) }
    }
}

impl<T> SharedVec<T> {
    // `&self`, not `&mut self`: any handle can push, and every other handle sees it.
    pub fn push(&self, value: T) {
        self.data.lock().unwrap().push(value);
    }

    pub fn len(&self) -> usize {
        self.data.lock().unwrap().len()
    }
}

then l_after == l_before + 1.

There are many types that act like a SharedVec: it's true for Rc and Arc, of course, but also for things like Bytes and channel endpoints like Sender. All of these are examples of "handles" to underlying values and, when you clone them, you get back a second handle that is indistinguishable from the first one.
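Here is a minimal, self-contained illustration of that entanglement - just a sketch using Rc<RefCell<…>> from std to keep it single-threaded, not part of any proposal:

use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // `v1` is a handle to a shared, interiorly-mutable vector.
    let v1: Rc<RefCell<Vec<u32>>> = Rc::new(RefCell::new(vec![1, 2, 3]));
    let l_before = v1.borrow().len();

    // Cloning the Rc does not copy the vector; it creates a second
    // handle to the same underlying value.
    let v2 = Rc::clone(&v1);
    v2.borrow_mut().push(4);

    // The push through `v2` is visible through `v1`: the two handles
    // are entangled.
    let l_after = v1.borrow().len();
    assert_eq!(l_after, l_before + 1);
}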

We have a name for this concept already: handles

Jack's insight was that we should focus on the semantic concept (sharing) and not on the operational details (how it's implemented). This makes it clear when the trait ought to be implemented. I liked this idea a lot, although I eventually decided I didn't like the name Share. The word isn't specific enough, I felt, and users might not realize it referred to a specific concept: "shareable types" doesn't really sound right. But in fact there is a name already in common use for this concept: handles (see e.g. tokio::runtime::Handle).

This is how I arrived at my proposed name and definition for The Trait, which is Handle:5

/// Indicates that this type is a *handle* to some
/// underlying resource. The `handle` method is
/// used to get a fresh handle.
trait Handle: Clone {
    final fn handle(&self) -> Self {
        Clone::clone(self)
    }
}

We would lint and advise people to call handle

The Handle trait includes a method handle which is always equivalent to clone. The purpose of this method is to signal to the reader that the result is a second handle to the same underlying value.

Once the Handle trait exists, we should lint on calls to clone when the receiver is known to implement Handle and encourage folks to call handle instead:

impl DataStore {
    fn store_map(&mut self, map: &Arc<HashMap<...>>) {
        self.stored_map = map.clone();
        //                    -----
        //
        // Lint: convert `clone` to `handle` for
        // greater clarity.
    }
}

Compare the above to the version that the lint suggests, using handle, and I think you will get an idea for how handle increases clarity of what is happening:

impl DataStore {
    fn store_map(&mut self, map: &Arc<HashMap<...>>) {
        self.stored_map = map.handle();
    }
}

What it means to be a handle

The defining characteristic of a handle is that, when cloned, it results in a second value that accesses the same underlying value. This means that the two handles are "entangled", with interior mutation that affects one handle showing up in the other. Reflecting this, most handles have APIs that consist exclusively or almost exclusively of &self methods, since having unique access to the handle does not necessarily give you unique access to the value.

Handles are generally only significant, semantically, when interior mutability is involved. There's nothing wrong with having two handles to an immutable value, but it's not generally distinguishable from two copies of the same value. This makes persistent collections an interesting grey area: I would probably implement Handle for something like im::Vec<T>, particularly since something like an im::Vec<Cell<u32>> would make entanglement visible, but I think there's an argument against it.
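As a rough sketch of what opting in might look like for a user-defined type - written in today's Rust, so without the proposed final modifier, and purely illustrative - the SharedVec from earlier is a natural candidate:

use std::sync::{Arc, Mutex};

// Approximation of the proposed trait in today's Rust; the actual
// proposal would mark `handle` as `final` so impls cannot override it.
trait Handle: Clone {
    fn handle(&self) -> Self {
        Clone::clone(self)
    }
}

struct SharedVec<T> {
    data: Arc<Mutex<Vec<T>>>,
}

impl<T> Clone for SharedVec<T> {
    fn clone(&self) -> Self {
        SharedVec { data: Arc::clone(&self.data) }
    }
}

// Cloning a SharedVec yields a second handle to the same vector,
// so it opts in to `Handle`.
impl<T> Handle for SharedVec<T> {}

fn main() {
    let v1 = SharedVec { data: Arc::new(Mutex::new(vec![1, 2, 3])) };
    let v2 = v1.handle(); // a second handle, entangled with `v1`
    v2.data.lock().unwrap().push(4);
    assert_eq!(v1.data.lock().unwrap().len(), 4);
}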

Handles in the stdlib

In the stdlib, Handle would be implemented for exactly one Copy type (the others are values):

// Shared references, when cloned (or copied),
// create a second reference:
impl<T: ?Sized> Handle for &T {}

It would be implemented for ref-counted pointers (but not Box):

// Ref-counted pointers, when cloned,
// create a second reference:
impl<T: ?Sized> Handle for Rc<T> {}
impl<T: ?Sized> Handle for Arc<T> {}

And it would be implemented for types like channel endpoints, that are implemented with a ref-counted value under the hood:

// mpsc "senders", when cloned, create a
// second sender to the same underlying channel:
impl<T> Handle for mpsc::Sender<T> {}

Conclusion: a design axiom emerges

OK, I'm going to stop there with this "byte-sized" blog post. More to come! But before I go, let me lay out what I believe to be a useful "design axiom" that we should adopt for this design:

Expose entanglement. Understanding the difference between a handle to an underlying value and the value itself is necessary to understand how Rust works.

The phrasing feels a bit awkward, but I think it is the key bit anyway.


  1. That, my friends, is foreshadowing. Damn I'm good. ↩︎

  2. I described Claim as a kind of "lightweight clone" but in the Unconf someone pointed out that "heavyweight copy" was probably a better description of what I was going for. ↩︎

  3. And, not coincidentally, the types where cloning leads to entanglement tend to also be the types where cloning is cheap. ↩︎

  4. and functional programming… ↩︎

  5. The "final" keyword was proposed by Josh Triplett in RFC 3678. It means that impls cannot change the definition of Handle::handle. There's been some back-and-forth on whether it ought to be renamed or made more general or what have you; all I know is, I find it an incredibly useful concept for cases like this, where you want users to be able to opt-in to a method being available but not be able to change what it does. You can do this in other ways, they're just weirder. ↩︎

07 Oct 2025 2:04pm GMT

06 Oct 2025

feedPlanet Mozilla

Firefox Nightly: Smarter Search, Smoother Tools – These Weeks in Firefox: Issue 190

Highlights

Context menu entry: Search Image with Google Lens

DevTools is displaying an editor widget

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons

DevTools

WebDriver BiDi

Lint, Docs and Workflow

Information Management/Sidebar

Profile Management

Search and Navigation

Storybook/Reusable Components/Acorn Design System

Split button component

06 Oct 2025 8:11pm GMT

Mozilla Thunderbird: VIDEO: Conversation View

Welcome back to another edition of the Community Office Hours! This month, we're showing you our first steps towards a long-awaited feature: a genuine Conversation View! Our guests are Alessandro Castellani, Director of Desktop and Mobile Apps, and Geoff Lankow, Sr. Staff Software Engineer on the Desktop team. They recently attended a work week in Vancouver that brought together developers and designers to create our initial vision and plan to bring Conversation View from dream to reality. Before Geoff flew home, he joined Alessandro and us to discuss his backend database work that will make Conversation View possible. We also had a peek at the workweek itself, other features possible with our new database, and our tentative delivery timeline.

We'll be back next month with an Office Hours all about Exchange Support for email, which is landing soon in our monthly Release channel.

September Office Hours: Conversation View

Some of you might be asking, "what IS Conversation View?" Basically, it's a Gmail-like visualization of a message thread when reading emails. So, in contrast to the current threaded view, you have all the messages in a thread. This includes both your replies and any other messages that may have been moved to a different folder.

So, why hasn't Thunderbird been able to do this already? The short answer is that our code is old. Netscape Navigator old. Our current 'database,' Mork, makes a mail folder summary (an .msf file) per folder. These files are text-based unicode and are NOT human readable. In Thunderbird 3, we introduced Gloda, our Global Search and Indexer, to try and work around Mork's limitations. It indexes what's in the .msf file and stores the data in a SQLite file. But as you might already know, Gloda itself is clunky and slow.

Modern Solutions for Modern Problems

If we want Conversation View (and other features users now expect), we need to bring Thunderbird further into the 21st century. Hence, our work on a new database, which we're calling Panorama. It's a single SQLite database with all your messages. Panorama indexes emails as soon as they're received, and since it's SQLite, it's not only fast, but it can be read by so many tools.

Since all of your messages will be in a single SQLite database, we can do more than enable a true Conversation view. Panorama will improve global search, enable improved filters, and more. Needless to say, we're excited about all the possibilities!
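To give a rough idea of why a single indexed store makes this kind of feature straightforward, here is a minimal, hypothetical sketch in Rust using the rusqlite crate. The table and column names are invented for illustration and are not Thunderbird's actual Panorama schema; the point is simply that once every message lives in one table, a conversation is just a query over a thread id, regardless of which folder each message was filed into:

use rusqlite::{params, Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;

    // Hypothetical schema for illustration only: one table for *all*
    // messages, whatever folder they live in.
    conn.execute_batch(
        "CREATE TABLE messages (
             id        INTEGER PRIMARY KEY,
             thread_id INTEGER NOT NULL,
             folder    TEXT    NOT NULL,
             subject   TEXT    NOT NULL,
             date      INTEGER NOT NULL
         );
         CREATE INDEX messages_by_thread ON messages (thread_id, date);",
    )?;

    // A thread whose messages are scattered across folders.
    let rows = [
        (1, "INBOX", "Conversation View plans", 1),
        (1, "Sent", "Re: Conversation View plans", 2),
        (1, "Archive", "Re: Re: Conversation View plans", 3),
    ];
    for (thread_id, folder, subject, date) in rows {
        conn.execute(
            "INSERT INTO messages (thread_id, folder, subject, date)
             VALUES (?1, ?2, ?3, ?4)",
            params![thread_id, folder, subject, date],
        )?;
    }

    // A conversation view is just: every message in the thread,
    // in order, no matter which folder it lives in.
    let mut stmt = conn.prepare(
        "SELECT folder, subject FROM messages WHERE thread_id = ?1 ORDER BY date",
    )?;
    let conversation = stmt.query_map(params![1], |row| {
        Ok((row.get::<_, String>(0)?, row.get::<_, String>(1)?))
    })?;
    for msg in conversation {
        let (folder, subject) = msg?;
        println!("[{folder}] {subject}");
    }
    Ok(())
}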

Conversation View Workweek

To get these possibilities started, we decided to bring developers and designers together for a Conversation View Workweek in Vancouver in early September. This brought people out of Zoom calls, emails, and Matrix messages, and across the Pacific Ocean in Geoff's case, into one place to discuss technical and design challenges.

We've spoken previously about our design system and how we've collaborated between design and development on features like Account Hub. In-person collaboration, especially for something as complicated as a new database and message view, was invaluable. By the end of the week, developers and designers alike had plenty to show for their efforts.

Next Steps

Before you get too excited, the new database and Conversation view won't land until after next year's ESR release. There's a lot of work to do, including testing Panorama in a standalone space until we're ready to run Mork and Panorama alongside each other, along with the old and new code referencing each database. We need the migration to be seamless and easily reversible, and so we want to take the time to get this absolutely right.

Want to stay up to date on our progress? We recommend subscribing to our Planning and UX mailing lists, State of the Thunder videos and blog posts, and the meta bug on Bugzilla.

VIDEO (Also on Peertube):

Slides:

Resources:

The post VIDEO: Conversation View appeared first on The Thunderbird Blog.

06 Oct 2025 6:08pm GMT

The Mozilla Blog: Building a fairer future for digital advertising: Mozilla partners with Index Exchange

Black background featuring two white logos: ‘Mozilla Ads’ on the left and ‘Index Exchange’ on the right, separated by a thin vertical line.

Advertising can and should work better - for people, for publishers, and for brands. That belief is what drives Mozilla's growing investment in rebuilding digital advertising around trust, transparency and fairness.

For too long, the web's primary funding model has relied on hidden data collection and opaque ad systems that work around users instead of with them. Mozilla's approach is different: We're building an alternative that aligns commercial success with user respect, giving advertisers new ways to show up responsibly in environments people actually trust.

"Advertising funds the open internet, but it needs a new foundation," said Suba Vasudevan, COO of Mozilla.org and SVP at Mozilla Corp. "Advertisers have always cared about brand safety. The missing piece has been trust in the platforms where ads run. That's the gap Mozilla is closing; making the advertising environment itself something that both brands and users can trust. And we do this all while protecting the privacy of our users' data."

This week at Advertising Week New York 2025, Mozilla announced a key step in that journey - a partnership with Index Exchange, one of the world's largest independent ad exchanges. Together, we're proving that trusted environments can also deliver trusted performance.

"Our partnership with Mozilla demonstrates how programmatic can evolve to create stronger outcomes for brands and better experiences for consumers," said Lori Goode, CMO of Index Exchange. "By uniting Mozilla's trusted environment with Index's infrastructure, we're building a model of programmatic rooted in quality, accountability, and long-term value."

Creating a new model for responsible advertising

The collaboration between Mozilla and Index Exchange is part of a larger effort to evolve how advertising supports the open web. It's about expanding options for marketers who want to reach audiences in ways that are both effective and ethical - replacing tracking-heavy systems with transparent, trust-centered design.

Scale where it matters. For marketers committed to building on trusted platforms, curated PMP deals with Mozilla and Index Exchange offer a way to connect with engaged audiences in respectful, brand-safe environments - aligning performance goals with user trust in the fastest-growing programmatic channel (~88% of global spend).

No personal identifiers. Mozilla and Index Exchange ensure that no personal identifiers or cross-site tracking are ever used - reflecting our shared commitment to respect users and create ad experiences people can trust.

Future-ready monetization. Firefox research shows that even privacy-minded users welcome thoughtful personalization when it improves their experience - but only when delivered responsibly and with clear user control.

A unique audience opportunity. Firefox reaches hundreds of millions of people worldwide, offering marketers the chance to build connections in a trusted, brand-safe environment with engaged audiences often underrepresented on other platforms.

On stage at Advertising Week New York

Mozilla and Index Exchange will debut the partnership during a keynote conversation, "Adding Trust to Your Ad Buy: The Smartest Spend in Marketing Today," on the Advertising Week Innovation Stage, Monday, Oct. 6. The session will explore how advertisers can drive performance by investing in trust - not just in creative or campaigns, but in the platforms that power them.

At Mozilla, we've always believed that advertising, done responsibly, can help sustain the open web. This partnership is proof of that belief - a tangible example of how innovation and trust can go hand in hand, delivering value for advertisers and for the internet itself. For more about Mozilla Ads, visit: https://www.mozilla.org/en-US/advertising/.

The post Building a fairer future for digital advertising: Mozilla partners with Index Exchange appeared first on The Mozilla Blog.

06 Oct 2025 4:53pm GMT

02 Oct 2025

feedPlanet Mozilla

The Mozilla Blog: Anonym and Snap partner to unlock increased performance for advertisers

The Anonym wordmark and the Snap, Inc. logo are shown side by side.

An ads milestone in marketing reach without data risk.

The ad industry is shifting, and with it comes a clear need for advertisers to use data responsibly while still proving impact. Advertisers face a false choice between protecting privacy and proving performance. Anonym exists to prove they can have both - and this week marks a major milestone in that mission.

Today we announced a new partnership with Snap Inc., giving advertisers a way to use more of their first-party data safely and effectively. This collaboration shows what's possible when privacy and performance go hand in hand: Marketers can unlock real insights into how campaigns drive results, without giving up data control.

Unleashing first-party data that's often untapped

Unlocking value while maintaining privacy of advertisers' sensitive first-party (1P) data has long been a challenge for advertisers concerned with exposure or technical friction. We set out to change this equation, enabling brands to safely activate data sets to measure conversion lift and attribution.

With Snapchat campaigns, advertisers can now bring first-party data that's typically been inaccessible into play and understand how ads on the platform drive real-world actions - from product discovery to purchase. Instead of relying only on proxy signals or limited datasets, brands can generate more complete, incrementality-based insights on their Snapchat performance, gaining a clearer picture of the channel's true contribution to business outcomes.

"Marketers possess deep reserves of first-party data that too often sits idle because it's seen as difficult or risky to use," said Graham Mudd, Senior Vice President, Product, Mozilla and Anonym co-founder. "Our partnership with Snap gives advertisers the power to prove outcomes with confidence, and do it in a way that is both tightly controlled and insight-rich."

Snapchat audience scale: Reach meets relevance

With a reach of over 930 million monthly active users (MAUs) globally, including 469 million daily active users, Snap's rapidly growing audience makes it a uniquely powerful marketing channel. This breadth of reach is especially appealing to advertisers who previously avoided activating sensitive data - knowing they can now connect securely with high-value Snapchatters at scale.

Our solution is designed for ease of use, requiring minimal technical resources and enabling advertisers to go from kickoff to measurement reporting within weeks. Our collaboration with Snap furthers the mission of lowering barriers to entry in advertising, and enables brands of all sizes to confidently activate their competitive insights on Snapchat.

"Snapchat is where people make real choices, and advertisers need simple, clear insights into how their campaigns perform," said Elena Bond, Head of Marketing Science, Snap Inc. "By working with Anonym, we're making advanced measurement accessible to more brands - helping them broaden their reach, uncover deeper insights, and prove results, all while maintaining strict control of their data."

How Anonym works: Simple, secure, scalable

Using end-to-end encryption, trusted execution environments (TEE), and differential privacy to guarantee protection and streamline compliance, Anonym helps advertisers connect with new, high-value customers and analyze campaign effectiveness without giving up data control. Strategic reach and actionable measurement are achieved with:

Anonym and Snap's collaboration coincides with Advertising Week New York 2025, where measurement and data innovation will be in sharp focus.

A teal lock icon next to the bold text "Anonym" on a black background.

Performance, powered by privacy

Learn more about Anonym

The post Anonym and Snap partner to unlock increased performance for advertisers appeared first on The Mozilla Blog.

02 Oct 2025 11:12pm GMT

Support.Mozilla.Org: Ask a Fox: A full week celebration of community power

From September 22-28, the Mozilla Support team ran our first-ever Mozilla - Ask a Fox virtual hackathon. In collaboration with the Thunderbird team, we invited contributors, community members, and staff to jump into the Mozilla Community Forums, lend a hand to Firefox and Thunderbird users, and experience the power of Mozillians coming together.

Rallying the Community

The idea was simple: we wanted to bring not only our long-time community members, but also newcomers and Mozilla staff, together for one week of focused engagement. The result was extraordinary.

Together, we showed just how responsive and effective our community can be when we rally around a common goal.

More Than Answering Forum Questions

Ask a Fox wasn't only about answering questions - it was about connection. Throughout the week, we hosted special AMAs with the WebCompat, Web Performance, and Thunderbird teams, giving contributors the chance to engage directly with product experts. We also ran two Community Get Together calls to gather, share stories, and celebrate the spirit of collaboration.

For some added fun, we also launched an emoji hunt (including ⚡) across our Knowledge Base articles.

Recognizing contributors

We're grateful for the incredible participation during the event and want to recognize the contributors who went above and beyond. Those who participated in our challenges should have received exclusive SUMO badges on their profiles by now. And the following top five contributors for each product will soon receive a $25 swag voucher from us to shop our limited-edition Ask a Fox swag collection, available in the NA/EU swag store.

Firefox desktop (including Enterprise)

Congrats to Paul, Denyshon, Jonz4SUSE, @next, and jscher2000.

Firefox for Android

Congrats to Paul, TyDraniu, GerardoPcp04, Mad_Maks, and sjohnn.

Firefox for iOS

Congratulations to Paul, Simon.c.lord, TyDraniu, Mad_Maks, and Mozilla-assistent.

Thunderbird (including Thunderbird for Android)

Congratulations to Davidsk, Sfhowes, Mozilla98, MattAuSupport, and Christ1.

We also want to extend a warm welcome to newcomers who made an impressive impact during the event: mozilla98, starretreat, sjohnn, Vexi, Mark, Mapenzi, cartdaniel437, hariiee1277, and thisisharsh7.

And finally, congratulations to Vincent, winner of the staff award for the highest number of replies during the week.


Ask a Fox was more than a campaign - it was a celebration of what makes Mozilla unique: a global community of people who care deeply about helping others and shaping a better web. Whether you answered one question or one hundred, your contribution mattered.

This event reminded us that when Mozillians come together, we can amplify our impact in powerful ways. And this is just the beginning - we're excited to carry this momentum forward, continue improving the Community Forums, and build an even stronger, more responsive Mozilla community for everyone.

02 Oct 2025 9:57am GMT

01 Oct 2025

feedPlanet Mozilla

The Mozilla Blog: Celebrate the power of browser choice with Firefox. Join us live.

Firefox is celebrating 21 years of Firefox by hosting four global events celebrating the power of browser choice this fall.

We are inviting people to join us in Berlin, Chicago, Los Angeles and Munich as part of Open What You Want, Firefox's campaign to celebrate choice and the freedom to show up exactly as you are - whether that's in your coffee order, the music you dance to, or the browser you use. These events are an opportunity to highlight why browser choice matters and why Firefox stands apart as the last major independent option.

Firefox is built differently with a history of defiance. It is built in a way to best push back against the defaults of Big Tech. Firefox is the only major browser not backed by a billionaire or built on Chromium's browser engine. Instead, Firefox is backed by a non-profit, and maintains and runs on Gecko, a flexible, independent, open-source browser engine.

So, it makes sense that we are celebrating differently too. We are inviting people to join us at four community-driven "House Blend" coffee rave events. What is a coffee rave? A caffeine-fueled day rave celebrating choice, freedom, and doing things your own way - online and off. These events are open to everyone and in partnership with local coffee shops.

Each event will have free coffee, exclusive merch, sets by two great, local DJs, a lot of dancing, and an emphasis on how individuals should get to shape their online experience and feel control online - and you can't feel in control without choice.

We are kicking off the celebrations this Saturday, Oct. 4 in both Chicago and Berlin, will move to Munich the following Saturday, Oct. 11 and will end in Los Angeles Saturday, Nov. 8, for Firefox's actual birthday weekend.

Berlin (RSVP here)
When: Saturday, Oct. 4, 2025 | 13:00 - 16:00 CEST
Where: Café Bravo, Auguststraße 69, 10117 Berlin-Mitte

Chicago (RSVP here)
When: Saturday, Oct. 4, 2025 | 10:00AM - 2:00PM CT
Where: Drip Collective, 172 N Racine Ave, Chicago Illinois

Munich (RSVP here)
When: Saturday, Oct. 11, 2025 | 13:00 - 16:00 CEST
Where: ORNO Café, Fraunhoferstraße 11, 80469 München

Los Angeles
When: Saturday, Nov. 8, 2025
More information to come

We hope you will join our celebration this year, in person at a coffee rave, or at one of our digital-first activations celebrating internet independence. As Firefox reflects on another year, it's a good reminder that the most important choice you can make online is your browser. And browser choice is something that we should all celebrate and not take for granted.

The post Celebrate the power of browser choice with Firefox. Join us live. appeared first on The Mozilla Blog.

01 Oct 2025 5:02pm GMT

The Mozilla Blog: Blast off! Firefox turns data power plays into a game

We're celebrating Firefox's 21st anniversary this November, marking more than two decades of building a web that reflects creativity, independence and trust. While other major browsers are backed by billionaires, Firefox exists to ensure that the internet works for you - not for those cashing in on your data.

That's the idea behind Billionaire Blast Off (BBO), an interactive experience where you design a fictional, over-the-top billionaire and launch them on a one-way trip to space. It's a playful way to flip Big Tech's power dynamics and remind people that choice belongs in our hands.

BBO lives online at billionaireblastoff.firefox.com, where you can build avatars, share memes and join in the joke. Offline, we're bringing the fun to TwitchCon, with life-size games and our card game Data War, where data is currency and space is the prize.

Cartoon man riding rocket through space holding Earth with colorful galaxy background.

Create your own billionaire avatar

Play Billionaire Blast Off

The billionaire playbook for your data, served with satire

The goal of Billionaire Blast Off isn't finger-wagging - it's satire you can play. It makes the hidden business of your data tangible, and instead of just reading about the problem, you get to laugh at it, remix it and send it into space.

The game is a safe, silly and shareable way to talk about something serious: who really holds the power over your data.

Two ways to join the fun online:

Customize your billionaire avatar at billionaireblastoff.firefox.com.

Next stop: TwitchCon

At TwitchCon, you'll find us sending billionaires into space (for real), playing Data War and putting the spotlight on the power of choice.

Visit the Firefox booth #2805 (near Exhibit Hall F) to play Data War, a fast-paced card game where players compete to send egomaniacal, tantrum-prone little billionaires on a one-way ticket to space.

Step into an AR holobox to channel your billionaire villain era, create a life-size avatar and make it perform for your amusement in 3D.

Try out your billionaire in our AR holobox at TwitchCon booth #2805.

On Saturday, Oct. 18, swing by the Firefox Lounge at the block party to snag some swag. Then stick around at 8:30 p.m. PT to cheer as we send billionaire avatars into space on a rocket built by Sent Into Space.

Online, the fun continues anytime at billionaireblastoff.firefox.com. Because when the billionaires leave, the web opens up for you.

The post Blast off! Firefox turns data power plays into a game appeared first on The Mozilla Blog.

01 Oct 2025 3:40pm GMT

This Week In Rust: This Week in Rust 619

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is blogr, a fast, lightweight static site generator.

Thanks to Gokul for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Rust

No calls for testing were issued this week by Rust language RFCs, Cargo or Rustup.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

473 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

A relatively quiet week. Most of the improvements are to doc builds, driven by continued packing of the search index in rustdoc-search: stringdex update with more packing #147002 and simplifications to doc(cfg) in Implement RFC 3631: add rustdoc doc_cfg features #138907.

Triage done by @simulacrum. Revision range: ce4beebe..8d72d3e1

1 Regression, 6 Improvements, 4 Mixed; 2 of them in rollups. 29 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-10-01 - 2025-10-29 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I must personally extend my condolences to those who forgot they chose in the past to annoy their future self.

- @workingjubilee on github

Thanks to Riking for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

01 Oct 2025 4:00am GMT