12 Mar 2026

Planet Mozilla

The Mozilla Blog: Under the hood: The AI powering Firefox’s Shake to Summarize


We recently released a feature in the Firefox iOS mobile app called "Shake to Summarize". The reception was remarkably positive, and the feature earned an honorable mention in Time Magazine's Best Inventions of 2025.

For anyone unfamiliar with Shake to Summarize, it's just what the name implies: when you're browsing a webpage, you can shake your phone to generate a short summary of the page's content.

The gesture is fun, the feature is useful, and the whole thing feels simple and natural.

From a technical standpoint, the application works just how you'd imagine: when a shake (or lightning bolt-icon press) is detected, we grab the web page content, pass it to an LLM for summarization, and then return the result to the user.
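A minimal sketch of that flow, with entirely illustrative function names (not the app's actual code, which is more involved), might look like this:

```python
import re

# Illustrative sketch of the Shake to Summarize flow; all names here are
# hypothetical, not the app's actual implementation.

def extract_readable_text(html: str) -> str:
    """Very naive placeholder: strip tags, keep the readable text."""
    return re.sub(r"<[^>]+>", " ", html)

def summarize_page(html: str, llm_call) -> str:
    """On shake (or lightning-bolt press): extract text, ask the LLM."""
    text = extract_readable_text(html)
    prompt = f"Summarize the following webpage in a few sentences:\n\n{text}"
    return llm_call(prompt)

# Usage with a stub in place of a real LLM endpoint:
print(summarize_page("<p>Hello <b>world</b></p>", lambda p: "A short summary."))
```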

But with the LLM landscape being as vast as it is, there is a lot to consider when bringing even a relatively straightforward application to the market. In this post, we'll discuss the ins and outs of our approach to model selection. We will leave prompt development and quality testing for a future article.

Which model?

These days, there are many LLMs available, with a steady stream of new releases arriving almost every week. Each release is paired with a slate of benchmark scores showing the new model's superiority along one dimension or another. The pace of development has been fast and furious, and billions of dollars have been spent inching the numbers higher and higher.

But what do these metrics mean in practice? At the end of the day, we are building a product for users. The most important metric for us is "how useful are the summaries the model produces?", something that isn't neatly captured by benchmark scores. To select the best model for our application, we need to run our own tests.

For us, the best model had to excel along several dimensions: quality, speed, cost, and openness.

Keeping the above in mind, we selected the following models for our initial evaluation: Mistral Nemo, Mistral Small, Jamba 1.5 Mini, Gemini 2.0 Flash, and Llama 4 Maverick, all of which were hosted on Vertex AI. (Note: this project began in early 2025.)

Quality

Standard summary evaluation metrics such as BLEU and ROUGE rely on token overlap and do not correlate well with human judgement. Thus, we decided to use an LLM judge (GPT-4o) to evaluate our model candidates. We had each model generate summaries of the same set of webpages, and then asked the LLM judge to evaluate each summary on the following metrics:

Coherence: Does the summary read logically and clearly as a standalone text?

Consistency: Is the information in the summary accurate and faithful to the source? Are there any hallucinations?

Relevance: Does the summary focus on the most important content from the document?

Fluency: Is the summary grammatically correct, fluent, and well-written?

To get a single, comparable metric, we then averaged these scores.
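A minimal sketch of this judging loop might look as follows; the prompt wording and the 1-5 scale are illustrative, not our exact evaluation prompts:

```python
# Sketch of LLM-as-judge scoring; the judge prompt and scale are illustrative.
JUDGE_METRICS = ["coherence", "consistency", "relevance", "fluency"]

def judge_summary(document: str, summary: str, judge_call) -> float:
    """Ask a judge LLM to rate the summary 1-5 on each metric, then average."""
    scores = []
    for metric in JUDGE_METRICS:
        prompt = (
            f"Rate the following summary's {metric} on a 1-5 scale. "
            f"Reply with a single number.\n\n"
            f"Document:\n{document}\n\nSummary:\n{summary}"
        )
        scores.append(float(judge_call(prompt)))
    return sum(scores) / len(scores)

# With a stub judge that always answers 4:
print(judge_summary("doc text", "summary text", lambda p: "4"))  # 4.0
```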

From this analysis, we see that Google's Gemini 2.0 Flash, Meta's Llama 4 Maverick, and Mistral Small are the top performers, with Gemini consistently leading the pack. The top three models are roughly equivalent on short passages up to around 2000 tokens (roughly the length of the average webpage), but performance separates more as passages get longer, particularly those containing over 5000 tokens*.

*We note that, due in part to this performance degradation, we summarize only pages that are shorter than this 5000 token threshold.
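This threshold is straightforward to enforce with a gate before the summarization call. The sketch below uses a rough whitespace-based token estimate (real counts depend on the model's tokenizer), and all names are illustrative:

```python
# Sketch of the length gate: only summarize pages under a token budget.
# Real token counts depend on the model's tokenizer; a whitespace split is
# only a rough back-of-envelope approximation.
TOKEN_LIMIT = 5000

def approx_token_count(text: str) -> int:
    # Very rough: ~1 token per word.
    return len(text.split())

def should_summarize(page_text: str, limit: int = TOKEN_LIMIT) -> bool:
    return approx_token_count(page_text) < limit

print(should_summarize("a short page"))   # True
print(should_summarize("word " * 6000))   # False
```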

Speed

For speed, the two metrics we looked at were time to first token (i.e. how long do you have to wait before the model starts generating its response) and tokens-per-second (total tokens generated / total generation time, including encoding time).

In both of these tests, Mistral-Small and Gemini-2.0-flash are the clear winners. Both models are faster to begin generating output and produce tokens at a much faster clip than the other models we tested.
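As a concrete illustration, both metrics can be measured over a streaming response. The snippet below simulates the stream; a real client would iterate over the API's streamed chunks instead:

```python
import time

# Measure time-to-first-token and tokens-per-second over a token iterator.
# The stream here is simulated; a real client would consume streamed chunks.

def measure_speed(token_stream):
    """Return (time_to_first_token, tokens_per_second)."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_stream:
        if ttft is None:
            ttft = time.perf_counter() - start
        count += 1
    total = time.perf_counter() - start
    return ttft, count / total if total > 0 else float("inf")

def fake_stream(n_tokens=50, delay=0.001):
    for _ in range(n_tokens):
        time.sleep(delay)  # simulate generation latency
        yield "tok"

ttft, tps = measure_speed(fake_stream())
print(f"TTFT: {ttft:.4f}s, throughput: {tps:.0f} tokens/s")
```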

Cost

On Vertex AI serverless instances, as of November 2025, the cost for input tokens for our top 3 models are as follows. (See all Vertex AI pricing here):

Model                                      | Price / M input tokens | Price / M output tokens
Gemini 2.5 Flash (2.0 no longer available) | $0.30                  | $2.50
Llama 4 Maverick                           | $0.35                  | $1.15
Mistral Small                              | $0.10                  | $0.30

It's clear that Mistral Small wins on quality and performance per dollar, costing one-third of the price or less per input token (where the bulk of our token usage is) compared to the other two models.
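To make the difference concrete, here is a back-of-envelope calculation using the table's prices. The 2000-input / 150-output token mix per summary is our own assumption for illustration:

```python
# Back-of-envelope cost per 1M summaries, from the per-million-token prices
# in the table above. The 2000-input / 150-output token mix is an assumed
# figure for illustration, not a measured one.
PRICES = {  # model: (input $/M tokens, output $/M tokens)
    "Gemini 2.5 Flash": (0.30, 2.50),
    "Llama 4 Maverick": (0.35, 1.15),
    "Mistral Small": (0.10, 0.30),
}

def cost_per_million_summaries(model, in_tokens=2000, out_tokens=150):
    in_price, out_price = PRICES[model]
    # 1M summaries at in_tokens each = in_tokens million input tokens, etc.
    return in_tokens * in_price + out_tokens * out_price

for model in PRICES:
    print(f"{model}: ${cost_per_million_summaries(model):,.1f} per 1M summaries")
```

Under these assumptions, Mistral Small comes out at roughly a quarter of Gemini's cost.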

Open source

Our top priority is building a great user experience. We also believe that open source software is an integral part of building a healthy internet. When we can support open source while still delivering the highest quality experience, we will.

In this category, Llama 4 Maverick and Mistral Small come out ahead. While neither is fully open source (no training code or data has been released), both models have open weights paired with liberal usage policies. Gemini 2.5 Flash, on the other hand, is a proprietary model.

Model selection

When we considered all of the above, we decided to go with Mistral-Small to power our feature: it's fast, it's inexpensive, it has open weights, and it produces high quality summaries. What's not to like?

Release and future directions

After selecting the model, we iterated on the prompt to ensure that we were delivering the best experience (see the upcoming blog post: Shake to Summarize: Prompt Engineering), and we released the solution in September of 2025.

This project was an early foray into building LLM-powered features into the browser. As such, the model selection process we developed here helped us chart the course for model selection in our later AI integrations. Notably, the soon-to-be-released Smart Window required choosing not just one but multiple models to power the application, giving users increased control over their experience.

Throughout this process, we learned that the "best" model isn't the one with the highest benchmark scores. It's the one that fits the context in which it's used, aligning with the task, the budget, and Mozilla's commitment to open source.

The post Under the hood: The AI powering Firefox's Shake to Summarize appeared first on The Mozilla Blog.

12 Mar 2026 5:57pm GMT

Firefox Tooling Announcements: Happy BMO Push Day! (20260312.1)

Github Link

The following changes have been pushed to bugzilla.mozilla.org:

Discuss these changes in the BMO Matrix Room


12 Mar 2026 5:43pm GMT

The Mozilla Blog: The web should remain anonymous by default


The unique architecture of the web enables a much higher degree of user privacy than exists on other platforms. Many factors contribute to this, but an essential one is that you don't need to log in to start browsing. Sharing details about yourself with a website is an optional step you can take when you have reason to do so, rather than the price of admission.

These norms mirror those of a free society. You can walk down the street without wearing a name tag or proving who you are to passersby. You can enter a store without introducing yourself, and only open your wallet if you decide to buy something. You aren't hiding anything, but society shows restraint in what it asks and observes, which allows you to be casually anonymous. When this is the default, everyone can freely enjoy the benefits of privacy without having to go to great lengths to hide their identity - something that isn't practical for most people.

It's easy to take casual anonymity for granted, but it depends on a fragile equilibrium that is under constant threat.

One way to erode casual anonymity is with covert surveillance, like a snoop following you around town or listening to your phone calls. For more than a decade, Mozilla has worked hard to close technical loopholes - like third-party cookies and unencrypted protocols - used by third parties to learn much more about you than you intended to share with them. The work is far from done, but we're immensely proud of how much less effective this kind of surveillance has become.

But there's also a different kind of threat, which is that sites begin to explicitly reject the norm of casual anonymity and move to a model of "papers, please". This isn't a new phenomenon: walled gardens like Facebook and Netflix have long operated this way. However, several recent pressures threaten to tip the balance towards this model becoming much more pervasive.

First, the increasing volume and sophistication of bot traffic - often powering and powered by AI - is overwhelming sites. Classic approaches to abuse protection are becoming less effective, leading sites to look for alternatives like invasive fingerprinting or requiring all visitors to log in.

Second, jurisdictions around the world are beginning to mandate age restrictions for certain categories of content, with many implementations requiring users to present detailed identity information in order to access often-sensitive websites.

Third, new standardized mechanisms for digital government identity make it much more practical for sites to demand hard identification and thus use it for all sorts of new purposes, which may be expedient for them but not necessarily in the interest of everyone's privacy.

All of these pressures stem from real problems that people are trying to solve, and ignoring them will not make them go away. Left unchecked, the natural trajectory here would be the end of casual anonymity. However, Mozilla exists to steer emerging technology and technical policy towards better outcomes. In that vein, we've identified promising technical approaches to address each of these three pressures while maintaining or even strengthening the privacy we enjoy online today.

A common theme across these approaches is the use of cryptography: some new, some old. For example, most people have at least one online relationship with an entity who knows them well (think banks, major platforms, etc). Zero-knowledge proof protocols can let other sites use that knowledge to identify visitors as real humans, not bots. Careful design of the protocols maintains privacy by preventing sites from learning any additional information beyond personhood.

We'll be sharing more about these approaches over the coming months. Some details are still evolving in collaboration with our partners in the ecosystem, but we are confident it is possible to address abuse, age assurance, and civic authentication without requiring the web to abandon casual anonymity.

The web is special and irreplaceable - let's work together to preserve what makes it great.

The post The web should remain anonymous by default appeared first on The Mozilla Blog.

12 Mar 2026 12:00pm GMT

The Rust Programming Language Blog: Announcing rustup 1.29.0

The rustup team is happy to announce the release of rustup version 1.29.0.

Rustup is the recommended tool to install Rust, a programming language that empowers everyone to build reliable and efficient software.

What's new in rustup 1.29.0

Following in the footsteps of many package managers in the pursuit of better toolchain installation performance, the headline of this release is that rustup can now download components concurrently and unpack them while downloads are in progress (in operations such as rustup update or rustup toolchain), and can check for updates concurrently in rustup check, thanks to a GSoC 2025 project. This is by no means a trivial change, so a long tail of issues might occur; please report any you find!
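The scheduling idea - download components in parallel and unpack each one as soon as its download completes - can be sketched in a few lines. This is a toy Python model of the approach, not rustup's actual (Rust) implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

# Toy model of concurrent component downloads with unpack-on-arrival.
# All names and timings are illustrative.

def download(component):
    time.sleep(0.01)  # simulate network latency
    return component, f"{component}.tar.xz"

def unpack(archive):
    # Stand-in for extraction: turn the archive name into an install path.
    return archive.replace(".tar.xz", "/")

def install(components):
    installed = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(download, c) for c in components]
        for fut in as_completed(futures):  # unpack as each download finishes
            component, archive = fut.result()
            installed.append((component, unpack(archive)))
    return installed

print(install(["rustc", "cargo", "rust-std", "clippy"]))
```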

Furthermore, rustup now officially supports the following host platforms:

Also, rustup will start automatically inserting the right $PATH entries during rustup-init for the following shells, in addition to those already supported:

This release also comes with other quality-of-life improvements, to name a few:

Furthermore, @FranciscoTGouveia has joined the team. He has shown his talent, enthusiasm and commitment to the project since his first interactions with rustup and has played a significant role in bringing more concurrency to it, so we are thrilled to have him on board and are actively looking forward to what we can achieve together.

Further details are available in the changelog!

How to update

If you have a previous version of rustup installed, getting the new one is as easy as stopping any programs which may be using rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

Rustup's documentation is also available in the rustup book.

Caveats

New rustup releases can come with problems that are not caused by rustup itself, but simply by the fact that a new release exists.

In particular, anti-malware scanners might block rustup or stop it from creating or copying files, especially when installing rust-docs which contains many small files.

Issues like this should be automatically resolved in a few weeks when the anti-malware scanners are updated to be aware of the new rustup release.

Thanks

Thanks again to all the contributors who made this rustup release possible!

12 Mar 2026 12:00am GMT

11 Mar 2026


This Week In Rust: This Week in Rust 642

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is sentencex, a fast sentence segmentation library.

Thanks to Santhosh Thottingal for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

483 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Almost no regressions this week, while there were a handful of performance improvements caused by the ongoing refactoring of the compiler query system. The largest one was from #153521.

Triage done by @kobzol. Revision range: ddd36bd5..3945997a

Summary:

(instructions:u)            | mean  | range          | count
Regressions ❌ (primary)    | 0.4%  | [0.4%, 0.5%]   | 3
Regressions ❌ (secondary)  | 0.6%  | [0.1%, 1.2%]   | 8
Improvements ✅ (primary)   | -0.9% | [-2.5%, -0.1%] | 110
Improvements ✅ (secondary) | -0.8% | [-2.7%, -0.1%] | 77
All ❌✅ (primary)          | -0.9% | [-2.5%, 0.5%]  | 113

0 Regressions, 6 Improvements, 3 Mixed; 5 of them in rollups. 31 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Compiler Team (MCPs only), Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-03-11 - 2026-04-08 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Happy "Clippy, you are very helpful" day for those who celebrates!

- Manpacket on functional.cafe

Despite a lamentable lack of suggestions, llogiq is exceedingly pleased with his choice.

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

11 Mar 2026 4:00am GMT

10 Mar 2026


Firefox Nightly: AI Controls – These Weeks in Firefox: Issue 196

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs

Smart Window

DevTools

WebDriver

Lint, Docs and Workflow

New Tab Page

Picture-in-Picture

Screenshots

Search and Navigation

Storybook/Reusable Components/Acorn Design System

Settings Redesign

10 Mar 2026 12:59am GMT

07 Mar 2026


Frederik Braun: Composing Sanitizer configurations

The HTML Sanitizer API allows multiple ways to customize the default allow list and this blog post aims to describe a few variations and tricks we came up with while writing the specification.

Safe and unsafe Configurations

Examples in this post will use configuration dictionaries. These dictionaries might be used …

07 Mar 2026 11:00pm GMT

06 Mar 2026


Frederik Braun: Perfect types with `setHTML()`

TLDR: Use require-trusted-types-for 'script'; trusted-types 'none'; in your CSP and nothing besides setHTML() works, essentially removing all DOM-XSS risks.

Background: Sanitizer API

I was guest at the ShopTalkShow Podcast to talk about setHTML() and the HTML Sanitizer API. Feel free to listen to the whole episode, if you want to …

06 Mar 2026 11:00pm GMT

The Mozilla Blog: Hardening Firefox with Anthropic’s Red Team


For more than two decades, Firefox has been one of the most scrutinized and security-hardened codebases on the web. Open source means our code is visible, reviewable, and continuously stress-tested by a global community.

A few weeks ago, Anthropic's Frontier Red Team approached us with results from a new AI-assisted vulnerability-detection method that surfaced more than a dozen verifiable security bugs, with reproducible tests. Our engineers validated the findings and landed fixes ahead of the recently shipped Firefox 148.

For users, that means better security and stability in Firefox. Adding new techniques to our security toolkit helps us identify and fix vulnerabilities before they can be exploited in the wild.

An emerging technique, pressure-tested by Firefox engineers

AI-assisted bug reports have a mixed track record, and skepticism is earned. Too many submissions have meant false positives and an extra burden for open source projects. What we received from the Frontier Red Team at Anthropic was different.

Anthropic's team got in touch with Firefox engineers after using Claude to identify security bugs in our JavaScript engine. Critically, their bug reports included minimal test cases that allowed our security team to quickly verify and reproduce each issue.

Within hours, our platform engineers began landing fixes, and we kicked off a tight collaboration with Anthropic to apply the same technique across the rest of the browser codebase. In total, we discovered 14 high-severity bugs and issued 22 CVEs as a result of this work. All of these bugs are now fixed in the latest version of the browser.

In addition to the 22 security-sensitive bugs, Anthropic discovered 90 other bugs, most of which are now fixed. A number of the lower-severity findings were assertion failures, which overlapped with issues traditionally found through fuzzing, an automated testing technique that feeds software huge numbers of unexpected inputs to trigger crashes and bugs. However, the model also identified distinct classes of logic errors that fuzzers had not previously uncovered.

Anthropic has also published a technical write-up of their research process and findings, which we invite you to read here.

The scale of findings reflects the power of combining rigorous engineering with new analysis tools for continuous improvement. We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition in security engineers' toolbox. Firefox has undergone some of the most extensive fuzzing, static analysis, and regular security review over decades. Despite this, the model was able to reveal many previously unknown bugs. This is analogous to the early days of fuzzing; there is likely a substantial backlog of now-discoverable bugs across widely deployed software.

Firefox was not selected at random. It was chosen because it is a widely deployed and deeply scrutinized open source project - an ideal proving ground for a new class of defensive tools. Mozilla has historically led in deploying advanced security techniques to protect Firefox users. In that same spirit, our team has already started integrating AI-assisted analysis into our internal security workflows to find and fix vulnerabilities before attackers do.

Building in the open for users

Firefox has always championed building publicly and working with our community to build a browser that puts users first. This work reflects Mozilla's long-standing commitment to applying emerging technologies thoughtfully and in service of user security.

The Frontier Red Team at Anthropic showed what collaboration in this space looks like in practice: responsibly disclosing bugs to maintainers, and working together to make them as actionable as possible. As AI accelerates both attacks and defenses, Mozilla will continue investing in the tools, processes, and collaborations that ensure Firefox keeps getting stronger and that users stay protected.

The post Hardening Firefox with Anthropic's Red Team appeared first on The Mozilla Blog.

06 Mar 2026 10:30am GMT

Jonathan Almeida: My Firefox for Android local build environment

The Firefox for Android app has always had a complicated build process - we're cramming a complex cross-platform browser engine and all the related components that make it work on Android into one package. In its current form, it lives in the Firefox mono-repo at mozilla-central (now mozilla-firefox using the git repository).

I wanted to document my "artifact-mode" environment here since it's worked quite successfully for me for many years with minor changes.

NOTE: After a fresh clone of the mono-repo, don't forget to first run ./mach bootstrap and follow its prompts.

mozconfig

My mozconfig below is enabled for artifact mode, but occasionally I switch between various configurations. You can see those commented out, with these few extra notes:

# Build GeckoView/Firefox for Android:
ac_add_options --enable-application=mobile/android

# Targeting the following architecture.
# For regular phones, no --target is needed.
# For x86 emulators (and x86 devices, which are uncommon):
# ac_add_options --target=i686
# For newer phones or Apple silicon
ac_add_options --target=aarch64
# For x86_64 emulators (and x86_64 devices, which are even less common):
# ac_add_options --target=x86_64

# sccache will significantly speed up your builds by caching
# compilation results. The Firefox build system will download
# sccache automatically.
# This only works for non-artifact builds.
#ac_add_options --with-ccache=sccache

# Enable artifact builds; artifact-mode.
ac_add_options --enable-artifact-builds

# Write build artifacts to..

## Full build dir
#mk_add_options MOZ_OBJDIR=./objdir-droid
#mk_add_options MOZ_OBJDIR=./objdir-desktop

## Artifact builds
mk_add_options MOZ_OBJDIR=./objdir-frontend

# Automatic clobbering; don't ask me.
mk_add_options AUTOCLOBBER=1

JAVA_HOME

Sometimes you might find yourself needing to run a (non-mach) command in the terminal. Those typically will need to invoke some parts of gradle for an Android build, so it's best to make sure those are using the same JDK as the bootstrapped one in the mono-repo. This avoids weird build errors where something that compiles in one place isn't working in another (like Android Studio).

The JDKs typically live in ~/.mozbuild/jdk/, and if you've been around for ~6 months, you end up with multiple versions after every JDK bump:

$ ls -l ~/.mozbuild/jdk/
drwxr-xr-x@ - jalmeida 15 Apr  2025 jdk-17.0.15+6
drwxr-xr-x@ - jalmeida 15 Jul  2025 jdk-17.0.16+8
drwxr-xr-x@ - jalmeida 21 Oct  2025 jdk-17.0.17+10
drwxr-xr-x@ - jalmeida 20 Jan 09:00 jdk-17.0.18+8
drwxr-xr-x@ - jalmeida 26 Feb 15:04 mozboot

You could find some way to point your latest JDK at one fixed location, or you can be lazy like me and pick the latest version as your JAVA_HOME by adding this to your shell's RC file:

export JAVA_HOME="$(ls -1dr -- $HOME/.mozbuild/jdk/jdk-* | head -n 1)/Contents/Home"

Android Studio

Similarly for Android Studio, let's do the same so that environment is identical. Head to Settings | Build, Execution, Deployment | Build Tools | Gradle, and ensure that the "Gradle JDK" path is set to JAVA_HOME.

Lately, the default seems to be for it to follow GRADLE_LOCAL_JAVA_HOME which is a property we can't easily override, so we have to manually set this ourselves.

Debugging

This section is for miscellaneous build errors that come up. Assuming mach build works and there are no known Android build changes, my solution has typically always been the same.

For example, the other day I fetched another engineer's patch to test out locally1 as part of reviewing it, and faced the error message below:

Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
> A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
   > Internal compiler error. See log for more details

* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to generate a Build Scan (powered by Develocity).
> Get more help at https://help.gradle.org.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
   at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:135)
   at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:288)
   at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:133)
   at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:121)
   at org.gradle.api.internal.tasks.execution.ProblemsTaskPathTrackingTaskExecuter.execute(ProblemsTaskPathTrackingTaskExecuter.java:41)
   at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
   at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
   at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
   at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
   at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
   at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
   at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
   at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
   at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
   at org.gradle.execution.plan.DefaultNodeExecutor.executeLocalTaskNode(DefaultNodeExecutor.java:55)
   at org.gradle.execution.plan.DefaultNodeExecutor.execute(DefaultNodeExecutor.java:34)
   at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
   at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
   at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:339)
   at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:84)
   at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:339)
   at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:328)
   at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
   at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
   at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
   at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
Caused by: org.gradle.workers.internal.DefaultWorkerExecutor$WorkExecutionException: A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
   at org.gradle.workers.internal.DefaultWorkerExecutor$WorkItemExecution.waitForCompletion(DefaultWorkerExecutor.java:289)
   at org.gradle.internal.work.DefaultAsyncWorkTracker.lambda$waitForItemsAndGatherFailures$2(DefaultAsyncWorkTracker.java:130)
   at org.gradle.internal.Factories$1.create(Factories.java:33)
   at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withoutLocks$2(DefaultWorkerLeaseService.java:344)
   at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
   at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:342)
   at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:326)
   at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLock(DefaultWorkerLeaseService.java:331)
   at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:126)
   at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:92)
   at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForAll(DefaultAsyncWorkTracker.java:78)
   at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForCompletion(DefaultAsyncWorkTracker.java:66)
   at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:260)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
   at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:237)
   at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:220)
   at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:203)
   at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:170)
   at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
   at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
   at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
   at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
   at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
   at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
   at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:42)
   at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:75)
   at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
   at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:50)
   at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:28)
   at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68)
   at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38)
   at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:61)
   at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:26)
   at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:69)
   at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:46)
   at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:39)
   at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:28)
   at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:189)
   at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75)
   at org.gradle.internal.Either$Right.fold(Either.java:176)
   at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:62)
   at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73)
   at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48)
   at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:46)
   at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:35)
   at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:75)
   at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:53)
   at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:53)
   at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:35)
   at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
   at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
   at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:49)
   at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:27)
   at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:71)
   at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:39)
   at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:64)
   at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:35)
   at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:62)
   at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:40)
   at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:76)
   at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:45)
   at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.executeWithNonEmptySources(AbstractSkipEmptyWorkStep.java:136)
   at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:66)
   at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:38)
   at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
   at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
   at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
   at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:75)
   at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:41)
   at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.lambda$execute$0(AssignMutableWorkspaceStep.java:35)
   at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:297)
   at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:31)
   at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:22)
   at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:40)
   at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:23)
   at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:67)
   at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:67)
   at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:39)
   at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:46)
   at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:34)
   at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44)
   at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:31)
   at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:64)
   at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:132)
   ... 30 more
Caused by: org.jetbrains.kotlin.gradle.tasks.FailedCompilationException: Internal compiler error. See log for more details
   at org.jetbrains.kotlin.gradle.tasks.TasksUtilsKt.throwExceptionIfCompilationFailed(tasksUtils.kt:22)
   at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.run(GradleKotlinCompilerWork.kt:112)
   at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction.execute(GradleCompilerRunnerWithWorkers.kt:75)
   at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:68)
   at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:64)
   at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:61)
   at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100)
   at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:61)
   at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44)
   at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
   at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
   at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
   at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41)
   at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:58)
   at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:176)
   at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:194)
   at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:127)
   at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:169)
   at org.gradle.internal.Factories$1.create(Factories.java:33)
   at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withLocksAcquired$0(DefaultWorkerLeaseService.java:269)
   at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
   at org.gradle.internal.work.DefaultWorkerLeaseService.withLocksAcquired(DefaultWorkerLeaseService.java:267)
   at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:259)
   at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:127)
   at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:132)
   at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:164)
   at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:133)
   ... 2 more

The full trace was long and didn't seem related to a code failure in the module itself. So I employed the solution, which is always the same:

  1. ./mach build
  2. In Android Studio, File > Sync Project with Gradle Files.

Yup, that's all. Very simple and boring.


1. With Jujutsu, this is the moz-phab command I use, which has made it easier to manage review patches: moz-phab patch <patch-id> --no-branch --apply-to main@origin


06 Mar 2026 12:32am GMT

05 Mar 2026


Firefox Tooling Announcements: MozPhab 2.9.0 Released

Issues resolved in Moz-Phab 2.9.0:

Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.


05 Mar 2026 9:47pm GMT

The Mozilla Blog: Ajit Varma on Firefox’s new AI controls: ‘We believe in user choice’

This is an edited transcript of an episode of Outside the Fox, Firefox's flagship podcast, where we explore what's happening online and why it matters. Stay up to date by subscribing on YouTube, Apple Podcasts, Spotify, or your favorite podcast app.

On Outside the Fox, my co-host Kim Horcher and I spend a lot of time talking about big shifts on the web, but also the quieter product decisions that shape everyday internet life. In this episode, we sat down with Ajit Varma, Head of Firefox, to talk about AI controls, our philosophy behind product decisions, and building a browser around user choice.


Steve Flavin:
Ajit, welcome to the podcast. We've had you on before - but for those who might be new to the show, could you give us a little intro?

Ajit Varma:
Yeah, thanks for having me. It's always great to talk to you. As Steve mentioned, I'm the head of Firefox. I've been at Firefox for about a year now. And it's been a great year as we've really launched new features, and I feel like we've really gotten back to basics of creating the best browser.

Kim Horcher:
So let's get to it. Can you give us a quick overview of the new AI Controls in Firefox 148?

Ajit:
AI Controls is a simple one-stop destination where users who have preferences about how they use AI can easily make their choices. That could mean completely turning off AI, turning off all notifications about current and future AI features, or exercising more fine-grained control over the specific AI features you want. It's really about making it easy for users to create the browser experience they prefer.

Kim:
Awesome, thank you for breaking that down. What I want to know is: why is Firefox launching this now? What problem is Firefox trying to solve?

Ajit:
Over the last year, we've started to introduce a few AI features into Firefox. And when we build features, we really listen to our community on how we can build a better product. As we launched a few of these features, it was clear that some users did not want to use AI now or in the future. The reasons vary a lot between people: some have societal concerns, some just don't feel like the feature is right for them. As we heard this feedback, it became clear that we could do a better job of helping these users turn things off if that's what they wanted. That was the reason behind us prioritizing this work.

Steve:
We love that. I know that we've been working on these features for some time. And I've got to say, the implementation is super simple and straightforward. I also really appreciate how customizable it is. Can you elaborate a little bit on the specific features you can control in this new hub, and how you developed the UI?

Ajit:
This was a question that we spent a lot of time on, because there isn't one definition for AI. It means different things to different people. There are features that have existed in Firefox for a long time that people didn't really consider AI; the world evolves. And so we spent time talking to our users. We spent time looking at the technology behind the features that we launched, and we came up with a set of features that we think best aligned with people's concerns around AI.

So these are features like translations, which allow you to go to a website and translate the content into a native language of your choosing. We had a feature in our PDF editor that allowed creation of alternative text to help people with accessibility needs understand what the image was about. When you hovered over a link, we would provide summarizations of the content on that page as an optional feature.

And then there are features that we launched in the last year like tab groups that we wanted to make more intelligent by helping users automatically organize tab groups with fewer clicks and suggesting titles by looking at the content of tabs.

With all these features, there is now the ability to turn them all off, and that also applies to future features. So in November, we talked about Smart Window, which is a new mode that allows even more AI innovation. But if you decide to use this AI Control feature, then future features we build will also be turned off by this toggle, and we wouldn't notify a user about any of those upcoming or existing features either.

This was very informed by feedback that we heard. We encourage feedback from our user base and anyone who uses Firefox. If you feel like there are other features that you expected or other ways we can make this easier, please send us that feedback and we'll continue to improve the feature based on it.

Kim:
Out of curiosity, what happens if a user chooses to block all features inside AI Controls? And is it reversible if I happen to change my mind?

Ajit:
Yeah. So, if you go to the page, there's a toggle at the top that you can flip on or off. If you choose to flip it off, then all the features that would be in this AI bucket would then be toggled off. So if you tried something in the past and you decide you don't want it, this is a single spot that would remove all those features.

But if you decide to use this feature and there's one feature that you want to turn back on - say translations because you're traveling - you can come back and just turn on a specific feature. Or if you decide that AI is something that you want, you can flip everything back on and you'll get notifications of upcoming features.

We've tried to make it a choose-your-own-adventure. It's not paywalling things or making users jump through hoops. It's very user-friendly and gives people the ability to choose and control how they use AI, if at all.

Steve:
Something I'd love to touch on is this moment in the browser space.

People are talking about browsers again. That's cool. They've always been relevant, but in recent years they've come to be viewed almost as a kind of utility. Whereas now, there's a lot of experimentation happening, particularly with AI browsers and AI functionality.

Across the industry, it's generating a lot of conversations, some might even say a lot of noise. What would you say is the key differentiator of Firefox's approach to AI?

Ajit:
First off, I'm so happy about the competition and the conversation. That goes back to Firefox's roots. We were one of the first browsers that provided competition to an entrenched competitor, and this is what ultimately moved the internet forward.

But it's important for us to talk about what makes Firefox differentiated. For us, that is about choice, control, and privacy. These values go to the core of the mission for Firefox, which is to help create a healthy and open internet.

When you look at other browsers, whether newly emerging ones or existing browsers changing into AI-first browsers, it's becoming apparent that some are not being created because there's a desire to create the best browser. Many companies are looking at how to take users' data, how to get more adoption for AI, how to create more entry points into that company's AI.

At Firefox, we have a singular mission: to create the best browser. We don't have billions of dollars spent on building an LLM that we need to force upon users. We believe in user choice. That can mean using on-device models. It can mean choosing the AI you want and not just the AI of the company who built the browser.

Steve:
That's exciting to hear. As all of us navigate this new landscape together, with AI reshaping the web as we know it, how do you view Firefox's role? How can Firefox lead by example?

Ajit:
With any new technology, there are going to be pros and cons. Some we can anticipate, and some we'll adjust to as we see what users want.

We think there are AI features that can improve the browsing experience. Translations can create better connection and empathy. Accessibility features can make the internet more accessible to more people. We're looking very thoughtfully at how we launch AI features to make sure they create better experiences.

But we're also focused on features outside of AI. Over the last year, we've launched tab groups, vertical tabs, sidebar. We have customizable hotkeys and split view coming. And over the next few months, we have many privacy features launching.

I'd say this is probably the most exciting roadmap I've seen in years as we get back to the basics of creating the best browser. We're excited to hear from users, and build something that serves everyone's needs.


You can watch or listen to Outside the Fox on YouTube, Apple Podcasts, Spotify and other major podcast platforms.

The post Ajit Varma on Firefox's new AI controls: 'We believe in user choice' appeared first on The Mozilla Blog.

05 Mar 2026 5:06pm GMT

Tom Ritter: telemetry helps. you still get to turn it off

Phew, it's been a minute since I last wrote anything, hasn't it. And this blog design is pretty dated...

Let me start with this: it is your right to disable telemetry. I fully support that right, and in many cases I disable telemetry myself. If your threat model says "nope", or you simply don't like it, flip the switch. Your relationship with the software and the author of it is a great guide for whether you want to enable telemetry.

What I don't buy is the claim I keep seeing that telemetry is useless and doesn't actually help. I can only speak to Firefox telemetry, but I presume the lesson generalizes. Telemetry has paid for itself many times over on the technical side - stability, security, performance, and rollout safety. If you trust the publisher and want to help them improve the thing you use every day, turning on telemetry is the lowest-effort way to do it. If you don't trust them, or just don't want to... cool.

But be forewarned - if you're one of a very few people doing a very weird thing, we won't even know we need to support that thing. (More on that later.)

What I mean (and don't mean) by "telemetry"

Telemetry is a catch-all for measurements and signals a program sends home. In browsers that includes "technical and interaction data" (performance, feature usage, hardware basics), plus things like crash reports that are often controlled by a separate checkbox. To me, telemetry is data you send to the publisher without directly receiving anything in return.

In contrast, there are lots of other phone-home things I wouldn't call telemetry. Software update pings, for example. The publisher can derive data about this - in fact it's one of the only things Tor Browser 'collects' - but the purpose isn't to tell the publisher something, it's to get you the latest version and that's a direct benefit you gain. Firefox obviously has update pings, but it also has something called Remote Settings which is a tool to sync data to your browser for lots of other useful things. You phone home to get this data. Here's the list of collections, and here's a random one (it's overrides for the password autofill to fix certain websites). Overall it's stuff like graphics driver blocklists, addon blocklists, certificate blocklists, data for CRLite, exemptions to tracking protection to unbreak sites, and so on.

And then finally there are things that seem like gratuitous phoning home that I also don't consider telemetry. I don't know the status of all these features and if they still exist, or under what circumstances they happen, but these are things like pinging a known-good website to determine if you're under a captive portal, or roughtime to figure out if all your cert validation is going to break.

Now even for telemetry - I'm not going to talk about product decisions like "is anyone clicking this button?" Those exist, sure, but they're not my world most days. I don't have any personal success stories from that world - I deal with technical telemetry - the kind that finds crashes and hangs, proves that risky security changes won't brick Nightly, and helps us pick the fastest safe implementation.

And I'm also not going to argue that you should trust Firefox's telemetry. I think you should make an informed decision - but if you're informed about what we collect (and all the mish-mash of data review approvals); how we collect it including 'regular telemetry' (discards your IP immediately), OHTTP (we never see your IP), Prio (privacy preserving calculations); and how we store it (automatic deletion of old data, segmented and unlinked datasets, etc) - and you still think we aren't doing enough to preserve your privacy... Well I can't argue with that. We aren't the absolute best in the world; we're far from the worst. And if we don't meet your threshold, turn it off.

But my point is: it's not pointless. It's not useless. It helps. It's shipped features you rely on.

As a super simple example you can easily poke at yourself - Mozilla's Background Hang Reporter (BHR) exists specifically to collect stacks during hangs on pre-release channels so engineers can find and fix the slow paths. That's telemetry.
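As a rough illustration of the pattern (not BHR's actual implementation), a hang reporter can be sketched as a watchdog thread that samples another thread's stack whenever its heartbeat stalls past a threshold. The class name and thresholds below are invented for the example:

```python
import sys
import threading
import time
import traceback

class HangWatcher:
    """Toy hang reporter: a watchdog samples a watched thread's stack
    when that thread's heartbeat stalls past a threshold."""

    def __init__(self, watched_thread, threshold_s=0.2):
        self.watched = watched_thread
        self.threshold = threshold_s
        self.last_beat = time.monotonic()
        self.reports = []  # collected stack samples - the "telemetry"
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def heartbeat(self):
        # The watched thread calls this at the top of its event loop.
        self.last_beat = time.monotonic()

    def _run(self):
        # Poll at half the hang threshold; sample when the beat is stale.
        while not self._stop.wait(self.threshold / 2):
            if time.monotonic() - self.last_beat > self.threshold:
                frame = sys._current_frames().get(self.watched.ident)
                if frame is not None:
                    self.reports.append("".join(traceback.format_stack(frame)))

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

In the real thing the sampled stacks get symbolicated and aggregated server-side so the hottest hang signatures bubble up; the sketch just keeps them in a list.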

Concrete wins from Firefox Telemetry (just from me)

This is a tiny slice from one developer. There are hundreds more across the project.

Killing eval in the parent process (1473549)

Eval is bad, right? It can lead to XSS attacks, and when your browser process is (partially) written with JavaScript - that can be a sandbox escape. We tried to eliminate eval in the parent (UI) process, shipped it to Nightly, and immediately broke Nightly. The entire test suite was green and Mozillians had dogfooded the feature for weeks... and it still blew up on real users with real customizations. We had to revert fast and spin a new build. It was a pretty big incident, and not a good day. So we re-did our entire approach here and put in several rounds of extensive telemetry.

That told us where eval was still happening in the wild, including Mozilla code paths we didn't have tests for and, crucially, a thriving community of Firefox tinkerers using userChromeJS and friends. Because telemetry surfaced those scripts, I could go talk to that community, explain the upcoming change, and work around the breakages. See the public thread on the firefox-scripts repo for a flavor of that conversation. There's no way we could have safely shipped this without telemetry, and certainly no way we could have preserved your ability to hack Firefox to do what you want.
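The instrumentation pattern is roughly this (a hypothetical Python sketch, not the actual Firefox patch): wrap the legacy API so every remaining call records its call site, ship the wrapper, then read the counters back as telemetry before removing the API for good:

```python
import functools
import traceback
from collections import Counter

# Where is the legacy API still being called from? (file:line -> count)
call_sites = Counter()

def instrumented(fn):
    """Wrap a soon-to-be-removed API so each call records its caller."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # [-1] is this wrapper's frame, so [-2] is the actual caller.
        caller = traceback.extract_stack()[-2]
        call_sites[f"{caller.filename}:{caller.lineno}"] += 1
        return fn(*args, **kwargs)
    return wrapper

@instrumented
def legacy_eval(expr):
    # The risky API we want to retire once the counters hit zero.
    return eval(expr)
```

Once the counters flatline in the wild (and the stragglers like userChromeJS scripts are migrated), the removal can ship with some confidence.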

Background Hang Reporter saved me from myself (1721840)

BHR data showed specific interactions where my code hung - no apparent reason, never would have guessed. I refactored, and the hang graphs dropped. That feedback loop doesn't exist without telemetry being on in pre-release.

Fission (site isolation) and data minimization (1708798)

Chrome has focused a lot on removing cross origin data from content processes, as well as the IPC security boundary for cross origin data retrieval. Coming from Tor Browser (where I am also a developer, although not too active) - I was also pretty concerned with personal user data unrelated to origin data. Stuff like your printer or device name. As part of Fission, I worked to eliminate both cross-origin data and personally identifiable things from the content process so a web process running a Spectre attack couldn't get those details. Telemetry helped us confirm we weren't breaking user workflows as we pulled those identifiers out.

Ending internet-facing jar: usage

Years ago Firefox allowed jar: URIs from web content, and the security model was... not great. Telemetry let us show that real-web usage was basically nonexistent, which made closing that attack surface from the web a no-brainer.

Same story brewing for XSLT

Chrome has been pushing to deprecate/remove XSLT in the browser due to security/maintenance risk and very low usage; I'm supportive. Usage telemetry is the only way we're able to justify removing a feature from the web.

Picking the fastest safe canvas noise (1972586)

For anti-fingerprinting canvas noise generation, I used telemetry to measure which implementation was actually fastest across CPUs: it's SHA-256 if you have SHA extensions; SipHash if you don't - or if the input is under ~2.5KB. That choice matters when you multiply it by billions of calls.
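The dispatch rule described above can be sketched like so; the ~2.5KB cutoff and the CPU-capability flag come from the paragraph, while the function name and exact constant are illustrative:

```python
SIPHASH_CUTOFF = 2500  # bytes; the approximate "~2.5KB" threshold

def pick_noise_hash(has_sha_extensions: bool, input_len: int) -> str:
    """Choose the hash used to generate canvas noise for one input.

    SHA-256 wins only when hardware SHA extensions are present AND the
    input is large enough to amortize the setup cost; otherwise SipHash.
    """
    if not has_sha_extensions or input_len < SIPHASH_CUTOFF:
        return "siphash"
    return "sha256"
```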

Font allowlist for anti-fingerprinting (Lists, 1795460)

Fonts are a huge fingerprinting vector. We built a font allowlist and font-visibility controls; by design, Firefox's fingerprinting protection avoids using your locally installed one-off fonts on the web. This dramatically shrinks the entropy of "which fonts do you have?" without breaking normal sites. While many browsers do this now, telemetry has helped us keep improving these defenses, and I'm pretty sure we're still the only browser with a font allowlist on Android.

Reality check on Resist Fingerprinting users

Folks who manually enable our "Resist Fingerprinting" preference (which we don't officially support, and I don't generally recommend - but hey, you do you) are very loud on Bugzilla. VERY loud. To the point where a lot of managers and executives have come to me saying "Everyone is complaining about this breaking stuff, we really need to disable this so people can't accidentally turn it on." Telemetry let me show that, despite being SO LOUD, they're still a minute portion of the population. Management's "Should we block it?" became "No." You're welcome.

That's just my lane. People I work closely with have used telemetry for plenty of other wins - I could give more examples, but I think you get the idea.

"I use Foo browser because it disables telemetry."

Every major browser either implements telemetry or outsources the job to the upstream engine and benefits from their having it. Period. Even Brave does telemetry, and they're quite public about their design (P3A): answers are collected into buckets/histograms, with privacy techniques like shuffling and thresholding applied. That's a perfectly respectable approach.
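Bucketing in this style can be sketched in a few lines - a toy illustration, not Brave's actual P3A code, with made-up bucket boundaries. The idea is that a value is coarsened into a small set of ranges before it ever leaves the client, so the report only reveals which bucket you fell into.

```rust
// Toy illustration of answer bucketing: report a coarse bucket index
// instead of the raw value. Boundaries here are invented for the example.
fn bucket(value: u32, boundaries: &[u32]) -> usize {
    // Index of the first boundary the value falls below,
    // or the last bucket if it exceeds them all.
    boundaries
        .iter()
        .position(|&b| value < b)
        .unwrap_or(boundaries.len())
}

fn main() {
    let bounds = [1, 5, 20, 100]; // e.g. "times a feature was used this week"
    assert_eq!(bucket(0, &bounds), 0);   // never used
    assert_eq!(bucket(3, &bounds), 1);   // used a few times
    assert_eq!(bucket(150, &bounds), 4); // heavy user
    println!("ok");
}
```

Real designs layer shuffling and minimum-population thresholds on top, so that even a bucket index can't be tied back to an individual.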

We can debate the efficacy or privacy properties of different telemetry designs. We can both stand aghast at overcollection of things that shouldn't be collected. We can debate whether it should be opt-out or opt-in. But only if we both start from the position that telemetry isn't philosophically bad - it can just be implemented badly.

Every Foo browser that brags about disabling telemetry is relying on its upstream source - whether that's Firefox or Chrome - to improve Foo using someone else's telemetry, all while claiming the moral high ground.

If you want to use Foo because it adds features you like, or you trust its publisher to choose defaults more than upstream - those are completely valid reasons to use it. But if the reason is "Telemetry is just a way for Firefox to spy on me", hopefully I've dented that perception.

05 Mar 2026 3:12pm GMT

The Rust Programming Language Blog: Announcing Rust 1.94.0

The Rust team is happy to announce a new version of Rust, 1.94.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.94.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.94.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.94.0 stable

Array windows

Rust 1.94 adds array_windows, an iterator method for slices. It works just like windows but with a compile-time constant length, so the iterator items are &[T; N] rather than dynamically-sized &[T]. In many cases, the window length can even be inferred from how the iterator is used!

For example, part of one 2016 Advent of Code puzzle is looking for ABBA patterns: "two different characters followed by the reverse of that pair, such as xyyx or abba." If we assume only ASCII characters, that could be written by sweeping windows of the byte slice like this:

fn has_abba(s: &str) -> bool {
    s.as_bytes()
        .array_windows()
        .any(|[a1, b1, b2, a2]| (a1 != b1) && (a1 == a2) && (b1 == b2))
}

The destructuring argument pattern in that closure lets the compiler infer that we want windows of 4 here. If we had used the older .windows(4) iterator, then that argument would be a slice which we would have to index manually, hoping that runtime bounds-checking will be optimized away.
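For comparison, here is the same check written against the long-stable windows(4) API. It compiles on any recent Rust, at the cost of the runtime indexing described above:

```rust
// The same ABBA check with `windows(4)`: the closure receives a
// dynamically-sized `&[u8]`, so elements are accessed by runtime-checked
// indexing rather than a fixed-size pattern.
fn has_abba(s: &str) -> bool {
    s.as_bytes()
        .windows(4)
        .any(|w| w[0] != w[1] && w[0] == w[3] && w[1] == w[2])
}

fn main() {
    assert!(has_abba("xyyx"));
    assert!(has_abba("ioxxoj")); // "oxxo" in the middle
    assert!(!has_abba("aaaa")); // the pair must be two different characters
    println!("ok");
}
```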

Cargo config inclusion

Cargo now supports the include key in configuration files (.cargo/config.toml), enabling better organization, sharing, and management of Cargo configurations across projects and environments. These include paths may also be marked optional if they might not be present in some circumstances, e.g. depending on local developer choices.

# array of paths
include = [
    "frodo.toml",
    "samwise.toml",
]

# inline tables for more control
include = [
    { path = "required.toml" },
    { path = "optional.toml", optional = true },
]

See the full include documentation for more details.

TOML 1.1 support in Cargo

Cargo now parses TOML v1.1 for manifests and configuration files. See the TOML release notes for the detailed changes.

For example, a dependency like this:

serde = { version = "1.0", features = ["derive"] }

... can now be written like this:

serde = {
    version = "1.0",
    features = ["derive"],
}

Note that using these features in Cargo.toml will raise your development MSRV (minimum supported Rust version) to require this new Cargo parser, and third-party tools that read the manifest may also need to update their parsers. However, Cargo automatically rewrites manifests on publish to remain compatible with older parsers, so it is still possible to support an earlier MSRV for your crate's users.

Stabilized APIs

A number of previously stable APIs are now stable in const contexts; see the detailed release notes for the full list.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.94.0

Many people came together to create Rust 1.94.0. We couldn't have done it without all of you. Thanks!

05 Mar 2026 12:00am GMT

04 Mar 2026

feedPlanet Mozilla

Jonathan Almeida: Update jj bookmarks to the latest revision

Update: As of v0.39.0, tug is now built into jj as bookmark advance! 🎉

Got this one from another colleague as well. It seems most folks use some version of this daily, so it might be good to have it built in.

Before I can jj git push my current bookmark to my remote, I need to update where my (tracked) bookmark is, to the latest change:

@  ptuqwsty git@jonalmeida.com 2026-01-05 16:00:22 451384bf <-- move 'main' here.
TIL: Update remote bookmark to the latest revision
◆  xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main git_head() 9ad7ce11
TIL: Preserve image scale with ImageMagick
~

A quick one-liner jj tug does that for me:

@  ptuqwsty git@jonalmeida.com 2026-01-05 16:03:54 main* 6e7173b4
TIL: Update remote bookmark to the latest revision
◆  xoqwkuvu git@jonalmeida.com 2025-12-30 19:50:51 main@origin git_head() 9ad7ce11
TIL: Preserve image scale with ImageMagick
~

The alias is quite straightforward:

[aliases]
# Update your bookmarks to your latest rev.
# heads(::@ & bookmarks()) selects the closest ancestor(s) of @ that carry a bookmark.
tug = ["bookmark", "move", "--from", "heads(::@ & bookmarks())", "--to", "@"]

04 Mar 2026 11:52pm GMT

This Week In Rust: This Week in Rust 641

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is office2pdf, a standalone library or binary to generate PDF from OOXML (docx, xlsx, etc.) files.

Thanks to One for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

414 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

A positive week with a few nice improvements coming from query system cleanups.

Triage done by @panstromek. Revision range: eeb94be7..ddd36bd5

Summary:

(instructions:u)            mean    range             count
Regressions ❌ (primary)     0.3%   [0.3%, 0.3%]      1
Regressions ❌ (secondary)   0.2%   [0.0%, 0.3%]      3
Improvements ✅ (primary)   -0.8%   [-2.1%, -0.1%]    141
Improvements ✅ (secondary) -1.1%   [-6.6%, -0.1%]    90
All ❌✅ (primary)           -0.8%   [-2.1%, 0.3%]     142

2 Regressions, 5 Improvements, 5 Mixed; 4 of them in rollups. 30 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Compiler Team (MCPs only)

Language Reference

No items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-03-04 - 2026-04-01 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

After all, Rust only became as good as it is by going through a rather drastic transformation. At one point it had a GC and Green Threads, famously. There's no substitute for making it exist and seeing how it does on a real problem.

- scottmcm on rust-users

Thanks to Jonas Fassbender for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

04 Mar 2026 5:00am GMT