The Firefox for Android app has always had a complicated build process - we're cramming a complex cross-platform browser engine and all the related components that make it work on Android into one package. In its current form, it lives in the Firefox mono-repo at mozilla-central (now mozilla-firefox in the git repository).
I wanted to document my "artifact-mode" environment here since it's worked quite successfully for me for many years with minor changes.
NOTE: After a fresh clone of the mono-repo, don't forget to run ./mach bootstrap first and follow its prompts.
mozconfig
My mozconfig below is enabled for artifact mode, but occasionally I switch between various configurations. You can see those commented out, with these few extra notes:
I like to separate my objdirs to avoid cache pollution between the different build types. You can probably get away without specifying this, in which case an objdir for your build type and arch will be generated automatically.
sccache speeds up the native portion of full builds after the first slow one, but it's hit or miss if you fetch from the remote repository but don't rebuild very often.
I don't care to run the clobber step manually, and I don't really understand why it isn't always done automatically.
# Build GeckoView/Firefox for Android:
ac_add_options --enable-application=mobile/android
# Targeting the following architecture.
# For regular phones, no --target is needed.
# For x86 emulators (and x86 devices, which are uncommon):
# ac_add_options --target=i686
# For newer phones or Apple silicon:
ac_add_options --target=aarch64
# For x86_64 emulators (and x86_64 devices, which are even less common):
# ac_add_options --target=x86_64
# sccache will significantly speed up your builds by caching
# compilation results. The Firefox build system will download
# sccache automatically.
# This only works for non-artifact builds.
#ac_add_options --with-ccache=sccache
# Enable artifact builds; manager-mode.
ac_add_options --enable-artifact-builds
# Write build artifacts to..
## Full build dir
#mk_add_options MOZ_OBJDIR=./objdir-droid
#mk_add_options MOZ_OBJDIR=./objdir-desktop
## Artifact builds
mk_add_options MOZ_OBJDIR=./objdir-frontend
# Automatic clobbering; don't ask me.
mk_add_options AUTOCLOBBER=1
JAVA_HOME
Sometimes you might find yourself needing to run a (non-mach) command in the terminal. Those typically invoke some part of Gradle for an Android build, so it's best to make sure they use the same JDK as the one bootstrapped in the mono-repo. This avoids weird build errors where something that compiles in one place doesn't in another (like Android Studio).
The JDKs typically live in ~/.mozbuild/jdk/, and if you've been around for ~6 months you end up with multiple versions after every JDK bump:
$ ls -l ~/.mozbuild/jdk/
drwxr-xr-x@ - jalmeida 15 Apr 2025 jdk-17.0.15+6
drwxr-xr-x@ - jalmeida 15 Jul 2025 jdk-17.0.16+8
drwxr-xr-x@ - jalmeida 21 Oct 2025 jdk-17.0.17+10
drwxr-xr-x@ - jalmeida 20 Jan 09:00 jdk-17.0.18+8
drwxr-xr-x@ - jalmeida 26 Feb 15:04 mozboot
You can find some way to point your latest JDK to one fixed location, or you can be lazy like me and pick the latest version as your JAVA_HOME by adding this to your shell's RC file:
export JAVA_HOME="$(ls -1dr -- $HOME/.mozbuild/jdk/jdk-* | head -n 1)/Contents/Home"
Android Studio
Similarly for Android Studio, let's do the same so that environment is identical. Head to Settings | Build, Execution, Deployment | Build Tools | Gradle, and ensure that the "Gradle JDK" path is set to JAVA_HOME.
Lately, the default seems to be to follow GRADLE_LOCAL_JAVA_HOME, which is a property we can't easily override, so we have to set this manually ourselves.
Using the same Android SDK also helps speed things up and avoids source confusion. You can typically find it in ~/.mozbuild/android-sdk-macosx and update it at Settings | Languages & Frameworks | Android SDK.
Debugging
This section is for the miscellaneous build errors that come up, but assuming mach build works and there are no known Android build changes, my solution has almost always been the same.
For example, the other day I fetched another engineer's patch to test out locally1 as part of reviewing it, and hit the error message below:
Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
> A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
> Internal compiler error. See log for more details
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to generate a Build Scan (powered by Develocity).
> Get more help at https://help.gradle.org.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':components:feature-pwa:compileDebugKotlin'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:135)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:288)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:133)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:121)
at org.gradle.api.internal.tasks.execution.ProblemsTaskPathTrackingTaskExecuter.execute(ProblemsTaskPathTrackingTaskExecuter.java:41)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.DefaultNodeExecutor.executeLocalTaskNode(DefaultNodeExecutor.java:55)
at org.gradle.execution.plan.DefaultNodeExecutor.execute(DefaultNodeExecutor.java:34)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:339)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:84)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:339)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:328)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:459)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:376)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
Caused by: org.gradle.workers.internal.DefaultWorkerExecutor$WorkExecutionException: A failure occurred while executing org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction
at org.gradle.workers.internal.DefaultWorkerExecutor$WorkItemExecution.waitForCompletion(DefaultWorkerExecutor.java:289)
at org.gradle.internal.work.DefaultAsyncWorkTracker.lambda$waitForItemsAndGatherFailures$2(DefaultAsyncWorkTracker.java:130)
at org.gradle.internal.Factories$1.create(Factories.java:33)
at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withoutLocks$2(DefaultWorkerLeaseService.java:344)
at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:342)
at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLocks(DefaultWorkerLeaseService.java:326)
at org.gradle.internal.work.DefaultWorkerLeaseService.withoutLock(DefaultWorkerLeaseService.java:331)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:126)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForItemsAndGatherFailures(DefaultAsyncWorkTracker.java:92)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForAll(DefaultAsyncWorkTracker.java:78)
at org.gradle.internal.work.DefaultAsyncWorkTracker.waitForCompletion(DefaultAsyncWorkTracker.java:66)
at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:260)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:237)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:220)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:203)
at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:170)
at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:42)
at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:75)
at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:50)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:28)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:68)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:38)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:61)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:26)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:69)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:46)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:39)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:28)
at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:189)
at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75)
at org.gradle.internal.Either$Right.fold(Either.java:176)
at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:62)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:46)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:35)
at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:75)
at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:53)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:35)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:49)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:27)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:71)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:39)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:64)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:35)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:62)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:40)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:76)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:45)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.executeWithNonEmptySources(AbstractSkipEmptyWorkStep.java:136)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:66)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:38)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:75)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:41)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.lambda$execute$0(AssignMutableWorkspaceStep.java:35)
at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:297)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:31)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:22)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:40)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:23)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:39)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:46)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:34)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:44)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:31)
at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:64)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:132)
... 30 more
Caused by: org.jetbrains.kotlin.gradle.tasks.FailedCompilationException: Internal compiler error. See log for more details
at org.jetbrains.kotlin.gradle.tasks.TasksUtilsKt.throwExceptionIfCompilationFailed(tasksUtils.kt:22)
at org.jetbrains.kotlin.compilerRunner.GradleKotlinCompilerWork.run(GradleKotlinCompilerWork.kt:112)
at org.jetbrains.kotlin.compilerRunner.GradleCompilerRunnerWithWorkers$GradleKotlinCompilerWorkAction.execute(GradleCompilerRunnerWithWorkers.kt:75)
at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:68)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:64)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:61)
at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:61)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44)
at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41)
at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:58)
at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:176)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:194)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:127)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:169)
at org.gradle.internal.Factories$1.create(Factories.java:33)
at org.gradle.internal.work.DefaultWorkerLeaseService.lambda$withLocksAcquired$0(DefaultWorkerLeaseService.java:269)
at org.gradle.internal.work.ResourceLockStatistics$1.measure(ResourceLockStatistics.java:42)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocksAcquired(DefaultWorkerLeaseService.java:267)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:259)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:127)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:132)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:164)
at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:133)
... 2 more
The full trace was long and didn't seem related to a code failure in the module itself. So I employed the solution, which is always the same:
./mach build
In Android Studio, File > Sync Project with Gradle Files.
Yup, that's all. Very simple and boring.
1
With Jujutsu, this is the moz-phab command I use which has made it easier to manage review patches: moz-phab patch <patch-id> --no-branch --apply-to main@origin
Hey everyone, we've been working on some exciting changes, and want to share them with you.
But first, let me introduce myself. I am Christos, the new Sr. Developer Relations engineer in Add-ons, and I'm excited to write my first post on the Add-ons engineering blog.
Deprecations and changes
To start, I'm looking at a few changes: avoiding content script execution in extension contexts, decoupling file access from host permissions, and improving the display of pageAction SVG icons.
executeScript / registerContentScript in moz-extension documents
Deprecated: Firefox 149. Removed: Firefox 152.
Starting in Firefox Nightly 149 and scheduled for release in Firefox 152, the scripting and tabs injection APIs no longer inject into moz-extension:// documents. This change brings the APIs in line with broader efforts to discourage string-based code execution in extension contexts, alongside the default CSP that restricts script-src to extension URLs and the removal of remote source allowlisting in MV3 (bug 1581608).
Firefox emits a warning when this restriction is hit, so you are aware of and can address any such use in your extensions. This is an example of the warning message:
Content Script execution in moz-extension document has been deprecated and it has been blocked
To work around this change, you can:
Import scripts directly in the extension page's HTML.
Use module imports or standard <script> tags in extension documents.
Restructure code to avoid dynamic code execution patterns. An extension can run code in its documents dynamically by registering a runtime.onMessage listener in the document's script, then sending a message to trigger execution of the required code.
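The last restructuring option above can be sketched roughly like this; the message type and handler name are made up for illustration, and the browser.* calls are shown as comments since they only exist inside an extension:

```javascript
// The work that used to be injected dynamically now lives in the
// extension page's own script (loaded via a normal <script> tag).
function handleMessage(message) {
  if (message && message.type === "run-page-task") {
    // ...perform the task that previously required executeScript()...
    return { ok: true };
  }
  return undefined; // not our message; let other listeners handle it
}

// Inside the extension page's script:
//   browser.runtime.onMessage.addListener(handleMessage);
// From the background script, trigger it with a message instead of
// calling scripting.executeScript() on the moz-extension:// document:
//   browser.runtime.sendMessage({ type: "run-page-task" });
```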
File access becomes opt-in
Target: Firefox 152
Extensions requesting file://*/ or <all_urls> currently trigger the "Access your data for all websites" permission message and, when granted, can run content scripts on file: URLs. From Firefox 152, file access in extensions requires an opt-in for all extensions, including those already installed (bug 2034168).
pageAction SVG icon CSS filter (automatic color scheme)
Removed: Firefox 152
Firefox has been automatically applying a greyscale and brightness CSS filter to pageAction (address bar button) SVG icons when a dark theme is active. This was intended to improve contrast, but it actually reduced contrast for multi-color icons and caused poor visibility for some extensions, such as Firefox Multi-Account Containers.
For icons that adapt to light and dark color schemes, you can now use @media (prefers-color-scheme: dark) in the SVG icon, or the MV3 action manifest key, and specify theme_icons.
Here is an example of how to use a `prefers-color-scheme` media query in a pageAction SVG icon to control how the icon adapts to dark mode:
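A minimal sketch of such an icon (the glyph and colors below are illustrative, not from the original post):

```xml
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">
  <style>
    path { fill: #15141a; }              /* light scheme: dark glyph */
    @media (prefers-color-scheme: dark) {
      path { fill: #fbfbfe; }            /* dark scheme: light glyph */
    }
  </style>
  <path d="M2 2h12v12H2z"/>
</svg>
```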
Use of prefers-color-scheme media queries is also allowed in MV2 browserAction and MV3 action SVG icons as an alternative to the theme_icons manifest properties.
Now to the new stuff. Here, you get the ability to use popups without user activation, initial support for the new tab split view feature, and WebAuthn RP ID assertion.
openPopup without user activation (Firefox Desktop)
Available: Firefox 149 Desktop
action.openPopup() and browserAction.openPopup() no longer require a user gesture on Firefox Desktop. You can open your extension's popup programmatically, e.g., in response to a native-messaging event, an alarm, or a background-script condition.
This change is part of the ongoing cross-browser alignment work in the WebExtensions Community Group to harmonize popup behavior across engines.
Example
Before (Firefox < 149): must hang off a user gesture, e.g., a context menu click:
browser.menus.create({
id: "nudge",
title: "Open popup",
contexts: ["all"],
});
browser.menus.onClicked.addListener((info) => {
if (info.menuItemId === "nudge") {
browser.action.openPopup(); // user clicked the menu → allowed
}
});
After (Firefox ≥ 149) - same intent, no user gesture needed, fires from a timer:
browser.alarms.create("nudge", { delayInMinutes: 1 });
browser.alarms.onAlarm.addListener((alarm) => {
if (alarm.name === "nudge") {
browser.action.openPopup(); // works without a click
}
});
It's the same call with the same result, but only the trigger changes from a user-action handler to any background event.
splitViewId in the tabs API
Available: Firefox 149
Firefox 149 introduces a new read-only splitViewId property on the tabs.Tab object to expose Firefox's new split view feature (where two tabs are displayed side by side in one window). Split views are treated as one unit, and WebExtensions treat them the same way.
In Firefox 150, extensions can swap tabs within a split view. This update also resolves a confusing issue where reversing the tab order through the user interface fired the tabs.onMoved event with inaccurate values. Additionally, Firefox introduces unsplitting behavior for extensions: when tabs.move() is called with split-view tabs positioned separately (non-adjacently) in the array, Firefox now removes the split view after the call rather than keeping the tabs locked together.
Here is an example of using the new splitViewId property.
// Log whenever a tab joins or leaves a split view.
browser.tabs.onUpdated.addListener((tabId, changeInfo) => {
if (!("splitViewId" in changeInfo)) return;
if (changeInfo.splitViewId === browser.tabs.SPLIT_VIEW_ID_NONE) {
console.log(`Tab ${tabId} left its split view`);
} else {
console.log(`Tab ${tabId} joined split view ${changeInfo.splitViewId}`);
}
});
// Firefox desktop also supports a filter to limit onUpdated events:
// }, { properties: ["splitViewId"] });
Firefox 151 enables extensions to move split views in tab groups. More improvements are coming, such as the ability to create split views from extensions (bug 2016928).
WebAuthn RP ID assertion
Available: Firefox 150
Previously, web extensions couldn't use WebAuthn credentials registered on their company's website or mobile apps. When extensions tried to set a custom Relying Party ID (RP ID) in navigator.credentials.create() or navigator.credentials.get(), Firefox rejected it with "SecurityError: The operation is insecure."
With Firefox 150, extensions can now assert a WebAuthn RP ID for any domain they have host permissions for when calling navigator.credentials.create() or navigator.credentials.get(). This applies to both the publicKey.rp.id field during credential creation and the publicKey.rpId field during authentication.
A critical detail for server-side validation: When relying party servers validate credentials created by extensions, they must account for different origin formats across browsers. In Chrome, the origin follows the pattern chrome-extension://extensionid, which matches the extension's location.origin. Firefox 150 introduces a new stable origin format: moz-extension://hash, where the hash is a 64-character SHA-256 representation of the extension ID (using characters a-p to represent hex values). Importantly, this hash-based origin is the same for all users, unlike Firefox's existing UUID-based moz-extension:// URLs used for extension documents.
To extract the origin from a credential for validation:
let clientData = JSON.parse(new TextDecoder().decode(
publicKeyCredential.response.clientDataJSON
));
console.log(clientData.origin);
It's been a very busy couple of months as we've reworked processes & priorities and established a roadmap for both iOS and Android. We are determining how best we can coordinate with the community, and think that our roadmap for the year has a good balance of fixes and features. Today, I want to talk about our contributors and pull requests, Notifications in the Android app, progress in the iOS app, and an overview of our roadmap for both apps this year.
Contributors & Pull Requests
We are so grateful for the support and code contributions of many members, whether building items on our roadmap, improving the user experience, or, of course, translating. As we work on our roadmap priorities, we will make time to review PRs and will discuss them weekly, and prioritize those that help solve issues and bugs or align with our roadmap items. Please be patient with our Pull Request pipeline. Typically, in working with the community, we try to react very quickly.
Roadmap
For Android, we've chosen the items on our roadmap because we think these will be the highest-impact features and bring the most value to everyone. Our focus this year is to simplify and modernize the Android codebase. This means reworking some of the architecture. This will be super helpful for us to move more quickly and will reduce complex bugs. The app has an older codebase, and like many older ones, it has its challenges. We have three full-time Android engineers and several community contributors, and we hope to better position ourselves to move quickly. At a high level, Android is focusing on the rearchitecture, a better Message List experience, and Message Reader screens. We are also simplifying how users can connect to Thunder Mail as we open it up.
Notifications
One thing that is at the top of my mind right now, too, is Push Notifications, specifically changes that Google has made to background processes, which affect our Notifications. We are looking into what we can do to solve this, so know that it has become a top priority for us. I've been asked, "Why is it so hard for Thunderbird to get Push Notifications right?" and I wanted to speak to some of the challenges we have. Most apps' Notifications are triggered by their own web services, which then send Notifications through Apple or Google, who pass them to users. But email is different. In an email client, we typically don't own our own backend services; other companies do (Microsoft, Google, Hotmail, Yahoo, Proton, etc.). And they can have their own flavors of IMAP and SMTP - how we fetch and send emails - and no specific Push Notification implementation.
So we have a workaround: polling those providers every X minutes to ask for new emails and triggering local notifications - but we can't hook into a native Push Notification process like your banking app can, for example. This applies to our IMAP implementation. The JMAP implementation (think modern email protocols) has something in place we can more readily consume. Another challenge is how the battery is affected by how often we poll the providers, and we need specific permissions from Google to run this process in the background. Those permissions changed recently, which is why Notifications are having issues.
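The polling workaround can be sketched generically (the function names and the dedupe approach here are illustrative, not Thunderbird's actual implementation):

```python
# One poll cycle: ask the provider for message IDs, fire a local notification
# for anything unseen, and remember what we've notified about. A scheduler
# would call this every X minutes, subject to background-execution permissions.
def poll_once(fetch_message_ids, notify, seen):
    new_ids = [m for m in fetch_message_ids() if m not in seen]
    for m in new_ids:
        seen.add(m)
        notify(m)
    return new_ids
```

The interval in the scheduler is the tradeoff knob: shorter intervals mean faster notifications but more battery and network use.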
I've simplified some pieces here, but hopefully that gives you an idea of some of the complexity and tradeoffs that we are working with. With all of that said, this is very important to us, and it is our users' biggest pain point, making it our most pressing need for a fix. I'll give an update on where that sits within the roadmap in the next progress report, once we have explored what solutions we can provide.
iOS Progress
For the iOS roadmap, everything is moving along well. We have been wrapping up most of our IMAP & SMTP tickets, and we are moving into the Account Data pieces to manage accounts and authorizations. We will also be having a new member join us in the next couple of weeks. This will add some speed, but we've made good progress in getting the inner pieces together - what I consider the most complex parts. As we move to more standard mobile backend pieces and more standard UI, we leave the world of unknown unknowns, and will be picking up steam.
At a high level, our iOS roadmap is to build out these screens:
Account Setup and Drawer
Messages: List, Reader, Compose, Search
And have these pieces in place:
IMAP
SMTP
MIME
OAuth
Encryption
Email Composition
And our target is still end of the year for the iOS release.
Thank You!
Again we are so grateful to you, our community, for your support, and we are excited for this next quarter as we start to see the fruits of our labors.
The Sync Storage team has landed official PostgreSQL support for Firefox Sync.
Historically, Sync has only officially supported Google Spanner as a storage backend, with MySQL working unofficially. That has been a pretty high barrier to entry for people self-hosting their own services.
With PostgreSQL support, we hope to make self-hosting more approachable and continue supporting people who want the agency of hosting their data on infrastructure they control.
There is updated documentation for running it with Docker, including a one-shot docker compose setup:
If you've been interested in self-hosting Sync but were put off by the storage requirements, take another look. If you run into bugs or have feedback, please file issues here:
I want Phabricator emails to have a Gmail label so I can know which patches had me as a reviewer that then had follow-up comments from other folks.
This is useful when I've reviewed a patch and need to respond in a timely manner to follow-up discussion in comment threads that I've created.
Unlike with Bugzilla emails, this is difficult to do with plain Gmail filters today because Phabricator emails expose fewer identifiers that Gmail's relatively simple filter parameters can match on.
Today I learnt that there is an X-Phabricator-Stamps header in those Phabricator emails that lets you identify whether you are a reviewer on a patch. Using that information, I wrote the Google Apps Script below, which runs every minute and avoids re-processing the same email twice.
A couple of variables are defined at the top, and some console.logs are sprinkled around for my own debugging.
Code
var REVIEWER = "jonalmeida";
var LABEL_NAME = "Phabricator/Comments";
var BODY_MATCH = "commented on this revision.";
var SENDER = "phabricator@mozilla.com";

/**
 * Run once manually to install the per-minute trigger.
 */
function install() {
  uninstall();
  ScriptApp.newTrigger('processInbox')
      .timeBased()
      .everyMinutes(1)
      .create();
}

/**
 * Run once manually to remove the trigger.
 */
function uninstall() {
  ScriptApp.getProjectTriggers().forEach(function(t) {
    ScriptApp.deleteTrigger(t);
  });
  PropertiesService.getScriptProperties().deleteProperty('lastRun');
}

/**
 * Every run, we try to avoid processing the same email twice because
 * there is no API trigger to run a script on every new email received.
 */
function processInbox() {
  var props = PropertiesService.getScriptProperties();
  var lastRun = parseInt(props.getProperty('lastRun') || '0', 10);
  var now = Math.floor(Date.now() / 1000);

  // On first run, look back 2 minutes.
  if (lastRun === 0) {
    lastRun = now - 120;
  }

  var label = GmailApp.getUserLabelByName(LABEL_NAME);
  if (!label) {
    label = GmailApp.createLabel(LABEL_NAME);
  }

  console.log("last run: " + lastRun);
  var threads = GmailApp.search("from:" + SENDER + " after:" + lastRun);
  console.log("threads to process: " + threads.length);

  for (var i = 0; i < threads.length; i++) {
    var thread = threads[i];
    var messages = thread.getMessages();
    console.log("messages to process: " + messages.length);
    for (var j = 0; j < messages.length; j++) {
      if (hasReviewerStamp(messages[j])) {
        thread.addLabel(label);
        console.log(thread.getFirstMessageSubject());
        break;
      }
    }
  }

  props.setProperty('lastRun', String(now));
}

function hasReviewerStamp(message) {
  var raw = message.getRawContent();
  var match = raw.match(/^X-Phabricator-Stamps:\s*(.+)$/m);
  if (!match) {
    return false;
  }
  var stamps = match[1].trim().split(/\s+/);
  return (stamps.indexOf("reviewer(@" + REVIEWER + ")") > -1) && raw.indexOf(BODY_MATCH) > -1;
}

/**
 * For debugging - see the list of labels you can search, which
 * differs from what is used in the Gmail UI filter.
 */
function listAllLabels() {
  console.log("All labels");
  var labels = GmailApp.getUserLabels();
  for (var i = 0; i < labels.length; i++) {
    console.log(labels[i].getName());
  }
}
Dear reader. I am sure you have read a lot of blog posts about AI in the past weeks or months. And now I too am writing. Mostly to help me cope with what my kind of hacker people would call out as hypocrisy or cognitive dissonance.
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
This week was a bit all over the place, but the largest regressions were either already fixed or they are being investigated. There were also a couple of nice perf. wins.
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
We recently released the telemetry alerting beta, and announced it in the blog post here! This blog post will dive into the details of how it works across Treeherder and MozDetect. At a high level, MozDetect handles the change point detection for telemetry probes, and Treeherder handles storing the detections and producing the emails/bugs for them.
MozDetect
All of the existing, and any future, change point detection techniques used for telemetry alerting are built in MozDetect. Having these live outside of Treeherder gives a low barrier to entry for adding new features and testing existing ones without having to set up everything needed for alerting in Treeherder. It's built as a Python module that is run through uv. This makes it very easy for anyone to run the code because of uv's excellent Python version and dependency management. How to work with the code in this repository is outlined here, along with how to add your own techniques to it (note that access to mozdata through gcloud is required for this).
Detectors are split into two parts: (i) a detector that performs a comparison between two groups, and (ii) a detector that performs detection on a time series (using the detector from (i)). Our default detection technique, called cdf_squared, lives here. The timeseries_detector_name is the name that will be used to access the detector from the telemetry probe side through the change_detection_technique field. The only method that absolutely needs to be implemented is detect_changes, and it must return a list of Detection objects. These detection objects contain all the necessary information for producing an alert. There is also an optional_detection_info field that can contain additional things like attachments that would be added to Bugzilla bugs, and additional_data that can hold JSON data for storage in the DB. The cumulative distribution function (CDF) squared technique uses these to store the CDF before and after the detection, along with a graph of them as an attachment for the Bugzilla bug.
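To make the shape of that interface concrete, here is a toy detector (the detect_changes and Detection names come from the description above; the fields and the naive threshold logic are illustrative assumptions, not MozDetect's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    # Index in the time series where the change was detected, plus a
    # direction flag; real Detection objects carry more alert metadata.
    location: int
    direction: int
    optional_detection_info: dict = field(default_factory=dict)

class ThresholdDetector:
    """Flags points where consecutive values jump by more than a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold

    def detect_changes(self, series):
        detections = []
        for i in range(1, len(series)):
            delta = series[i] - series[i - 1]
            if abs(delta) > self.threshold:
                detections.append(
                    Detection(location=i, direction=1 if delta > 0 else -1))
        return detections
```

The key point is the contract: whatever the internal technique, detect_changes returns a list of Detection objects that the alerting side can consume uniformly.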
Example of a CDF graph that is provided in bugs.
CDF Squared Detection Technique
The CDF squared technique detects changes in time-series histogram data by comparing CDFs between consecutive windows. It takes two CDFs, each representing the distribution of measurements over a time window, and computes the sum of squared differences between the two CDFs at each bin. The sign of the summed linear difference is then used to assign a direction to the squared difference score so that the output encodes whether the distribution moved to higher values (right shift) or lower values (left shift).
For time-series detection, this base comparison is applied in a rolling fashion across the full history of data. Each day's 7-day smoothed CDF is compared against the next one, producing a continuous signal of squared CDF differences over time. A Butterworth low-pass filter is then applied to that signal to remove high-frequency noise while preserving genuine trend changes. Finally, scipy's find_peaks function is used to locate statistically significant peaks and valleys in the filtered signal using a dynamic alert threshold based on the historical data. Information is extracted from those areas and then used to build the detection information needed for the alert generation process.
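The base comparison can be sketched in a few lines of plain Python. This is a toy version: the score's magnitude is the sum of squared per-bin CDF differences, and its sign comes from the summed linear difference, as described above; the exact sign convention used by MozDetect is not reproduced here.

```python
def cdf_squared_score(cdf_before, cdf_after):
    # Per-bin differences between the two CDFs.
    diffs = [a - b for a, b in zip(cdf_after, cdf_before)]
    # Magnitude: sum of squared differences at each bin.
    squared = sum(d * d for d in diffs)
    # Direction: sign of the summed linear difference.
    linear = sum(diffs)
    if linear == 0:
        return 0.0
    return squared if linear > 0 else -squared
```

In the full pipeline this score is computed in a rolling fashion over smoothed CDFs, then filtered and peak-detected as described above.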
Alerting
Our alerting tooling lives in the Treeherder codebase. It's run through our PerfSheriff Bot (called Sherlock) and runs once per day. When a detection is produced from MozDetect, a telemetry alert is added to the database and then the TelemetryAlertManager is called to handle it. The manager's tasks are split into 6 ordered phases:
Update alerts with changes from Bugzilla. This step ensures that any changes that happen in the bugs filed are mirrored into our database. Currently, we only track resolution changes here.
Comment on existing bugs. This step is for updating existing bugs with information from new alerts. This step is not currently being used. In the future, this could be used to inform probe owners that a probe which doesn't produce bugs has produced an alert in the same time range.
File new bugs for alerts. This step handles filing bugs for any new alerts on probes set up for producing bugs.
Modify existing bugs with new alerts. This step handles any modifications needed to existing bugs based on the new bugs that were created. Currently, the "See Also" field is modified for existing bugs to include the new bugs.
Produce emails for new alerts. This step handles producing emails for any alerts set up to produce emails.
Housekeeping. This step handles redoing any failures from the steps above, in either the current run or past runs. Currently, it's used to retry bug modifications and email sending when we encounter a failure there. This excludes retrying bug filing, since we delete the alert in that case and retry it the next time the alert is generated.
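One way to picture the ordering and the retry behavior is with a toy runner (the real TelemetryAlertManager lives in Treeherder; this sketch only mirrors the described flow, retries past failures at the start of the next run rather than in a trailing phase, and uses illustrative phase names):

```python
def run_manager(phases, pending_retries):
    """Run previously failed steps, then the ordered phases for the day.

    phases: list of (name, fn) in a fixed order.
    pending_retries: (name, fn) pairs that failed on a previous run.
    Returns the failures to retry on the next daily run; bug filing is not
    retried because the alert itself is regenerated later.
    """
    failures = []
    for name, fn in list(pending_retries) + list(phases):
        try:
            fn()
        except Exception:
            if name != "file_new_bugs":
                failures.append((name, fn))
    return failures
```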
After the housekeeping step, the manager is done for the day and runs again the next day to handle any updates and new alerts. In contrast to how alerting works for performance tests in CI, this process is fully automated and requires no human input at any point.
Setting up telemetry probes for alerting happens on the mozilla-central side in their probe schema using the new monitor field in the metadata section (example for email alerts, example for bug alerts). The telemetry alerting documentation has information about how to do this. We then use an index.json file from the telemetry dictionary to gather all the probes that should be alerting. The information there is supplemented by more granular information later in the pipeline to gather things like the time unit used for the probe to be able to better format the Bugzilla bug table.
Once a telemetry probe is set up for alerting and is found by our system, the owners (those listed in the email notification fields) will begin either receiving emails or having bugs filed for them. These can also be viewed by everyone on this dashboard.
Getting the project to this point involved work from people across multiple teams here at Mozilla. Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder-related changes.
If you hit any issues with the telemetry alerting system, or have any suggestions, feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or #perftest on Matrix.
We're happy to announce that the Telemetry Alerting beta is now open to everyone!
Monitoring for changes in telemetry probes that you own can be difficult to do on a regular and continuous basis. With telemetry alerting, that changes today! You can now quickly set up your timing distribution probes for automated monitoring on Windows with notifications through email or a Bugzilla bug.
To get started, if you only need email alerts, simply add monitor: True to the metadata section of your probe (example).
Example of an email alert.
If you would prefer to receive Bugzilla bugs when a change is detected, set the monitor field accordingly (see the linked example).
More information about telemetry alerting and how to set up a probe can be found here in the documentation. There's also a dashboard that can show you all of the existing telemetry alerts along with some detection information. For now, we only support change detection on Windows for `timing_distribution` probes (see here for other desktop platforms and Android).
Please note that this is an open beta and we are actively looking for feedback on this system. If you hit any issues or have any suggestions, feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or #perftest on Matrix.
Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder changes.
For a more detailed look at how this works, see this blog post.
Mobile browsing hasn't kept up with how people actually use their phones.
Right now, even basic tasks can feel harder than they should. Finding what you need can mean scrolling through ads and filler content, keeping track of too many tabs, or thinking twice about how private your connection is.
A mobile browser should do more - and we're raising the bar. Firefox is rolling out a set of updates that build on our most popular desktop features and adapt them for how you browse on-the-go. Here's what's out now, and what's coming next.
When you're following a recipe, reading a product review, or deciding whether a long article is worth your time, getting to the useful part can take longer than it should.
With Shake to Summarize, you can shake or tap your phone to generate a quick summary of the page. Currently available for iOS users in English, we're expanding availability to all iOS users in German, French, Spanish, Portuguese, Italian and Japanese starting with Firefox 150 on April 21. We'll also soon be making Shake to Summarize available to Android users in English, so they too can get to the key points of any article in seconds.
AI features are becoming a more common part of browsers - but not everyone wants the same experience. Firefox gives you a say in how they're used. With AI Controls, you can turn AI features off entirely, enable only the ones you want, or adjust things over time. Rolling out on Android and iOS beginning May 21.
Firefox's free built-in VPN covers up to 50 gigabytes of your browsing in Firefox each month, across desktop and mobile devices. It adds a layer of protection to your browsing activity by masking your IP address - especially useful when you're on public Wi-Fi. Unlike many "free VPNs" that rely on ads or selling user data to generate revenue, Firefox is built with a different model: no selling your browsing data, no injecting ads into your traffic. Instead, we offer a limited amount of browser-level protection for free, alongside Mozilla VPN, our paid, unlimited, full-device VPN service. Rolling out on Android soon.
Tab Groups have been among the most-requested mobile features from our Mozilla community, and they're coming on mobile soon. You'll be able to group related tabs to stay organized, whether you're comparing restaurants, planning a trip or saving articles to read later.
We're also building toward smart groupings, where Firefox can automatically suggest tab groups for you. Rolling out on Android soon.
More updates, built around how you browse on mobile
Your phone comes with a browser. That doesn't mean it has to stay your default
"Firefox exists to give people a better way to experience the web, and that has to be just as true on mobile as it is on desktop," said Ajit Varma, head of Firefox. "For many people, their phone is their primary way of getting online, and they deserve a browser that's fast, intuitive and built around their needs. That's why we're investing in mobile more than ever before. We're building for the millions of people who choose Firefox every day, and giving even more people a reason to do the same."
Firefox is building a mobile experience designed around how people browse - with tools that help you move faster, stay organized and stay in control.
These updates begin rolling out in April with more on the way.
Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.
As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week's release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.
As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up.
Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn't finished, but we've turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.
Until now, the industry has largely fought security to a draw. Vendors of critical internet-exposed software like Firefox take security extremely seriously and have teams of people who get out of bed every morning thinking about how to keep users safe. Nevertheless, we've all long quietly acknowledged that bringing exploits to zero was an unrealistic goal. Instead, we aimed to make them so expensive that only actors with functionally unlimited budgets can afford them, and that the cost of burning such an expensive asset disincentivizes those actors against casual use.
This is because security to date has been offensively-dominant: the attack surface isn't infinite, but it's large enough to be difficult to defend comprehensively with the tools we've had available. This gives attackers an asymmetric advantage, since they only need to find one chink in the armor.
We use defense-in-depth to apply multiple layers of overlapping defenses, but no layer is bulletproof. Firefox runs each website in a separate process sandbox, but attackers try to combine bugs in the rendering code with bugs in the sandbox to escape to a more privileged context. We've led the industry in building and adopting Rust, but we still can't afford to stop everything to rewrite decades of C++ code, especially since Rust only mitigates certain (very common) classes of vulnerabilities.
We pair defense-in-depth engineering with an internal red team tasked with staying on the leading edge of automated analysis techniques. Until recently, these have largely been dynamic analysis techniques like fuzzing. Fuzzing is quite fruitful in practice, but some parts of the code are harder to fuzz than others, leading to uneven coverage.
Elite security researchers find bugs that fuzzers can't largely by reasoning through the source code. This is effective, but time-consuming and bottlenecked on scarce human expertise. Computers were completely incapable of doing this a few months ago, and now they excel at it. We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't.
This can feel terrifying in the immediate term, but it's ultimately great news for defenders. A gap between machine-discoverable and human-discoverable bugs favors the attacker, who can concentrate many months of costly human effort to find a single bug. Closing this gap erodes the attacker's long-term advantage by making all discoveries cheap.
Encouragingly, we also haven't seen any bugs that couldn't have been found by an elite human researcher. Some commentators predict that future AI models will unearth entirely new forms of vulnerabilities that defy our current comprehension, but we don't think so. Software like Firefox is designed in a modular way for humans to be able to reason about its correctness. It is complex, but not arbitrarily complex1.
The defects are finite, and we are entering a world where we can finally find them all.
1 There's a risk that codebases begin to surpass human comprehension as a result of more AI in the development process, scaling bug complexity along with (or perhaps faster than) discovery capability. Human-comprehensibility is an essential property to maintain, especially in critical software like browsers and operating systems.
I'm very excited to announce the first release of the Symposium project as well as its inclusion in the Rust Foundation's Innovation Lab. Symposium's goal is to let everyone in the Rust community participate in making agentic development better. The core idea is that crate authors should be able to vend skills, MCP servers, and other extensions, in addition to code. The Symposium tool then installs those extensions automatically based on your dependencies. After all, who knows how to use a crate better than the people who maintain it?
If you want to read more details about how Symposium works, I refer you to the announcement post from Jack Huey on the main Symposium blog. This post is my companion post, and it is focused on something more personal - the reasons that I am working on Symposium.
I believe in extensibility everywhere
The short version is that I believe in extensibility everywhere. Right now, the Rust language does a decent job of being extensible: you can write Rust crates that offer new capabilities that feel built-in, thanks to proc-macros, traits, and ownership. But we're just getting started at offering extensibility in other tools, and I want us to hurry up!
I want crate authors to be able to supply custom diagnostics. I want them to be able to supply custom lints. I want them to be able to supply custom optimizations. I want them to be able to supply custom IDE refactorings. And, as soon as I started messing around with agentic development, I wanted extensibility there too.
Symposium puts crate authors in charge
The goal of Symposium is to give crate authors, and the broader Rust community, the ability to directly influence the experience of people writing Rust code with agents. Rust is a really popular target language for agents because the type system provides strong guardrails and it generates efficient code - and I predict it's only going to become more popular.
Despite Rust's popularity as an agentic coding target, the Rust community right now are basically bystanders when it comes to the experience of people writing Rust with agents; I want us to have a means of influencing it directly.
Enter Symposium. With Symposium, crate authors can package up skills and other extensions, and Symposium will automatically make them available to your agent. Symposium also takes care of bridging the small-but-very-real gaps between agents (e.g., each has their own hook format, and some of them use .agents/skills while others use .claude/skills, etc.).
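As a toy illustration of that bridging, here is a function mapping one skill to each agent's expected directory (the directory names are from the post; the mapping function itself is hypothetical, not Symposium's actual code):

```python
from pathlib import Path

# Skill directory conventions mentioned in the post; the mapping is
# illustrative, not an exhaustive list of agents.
AGENT_SKILL_DIRS = {
    "claude": ".claude/skills",
    "generic": ".agents/skills",
}

def skill_path(agent: str, skill_name: str) -> Path:
    # Fall back to the generic location for agents we don't know about.
    base = AGENT_SKILL_DIRS.get(agent, AGENT_SKILL_DIRS["generic"])
    return Path(base) / skill_name
```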
Example: the assert-struct crate
Let me give you an example. Consider the assert-struct crate, recently created by Carl Lerche. assert-struct lets you write convenient assertions that test the values of specific struct fields.
This crate is neat, but of course, no models are going to know how to use it - it's not part of their training set. They can figure it out by reading the docs, but that's going to burn more tokens (expensive, slow, consumes carbon), so that's not a great idea.
…but wouldn't it be better if the crate could teach the agent itself?
With Symposium, teaching your agent how to use your dependencies should not be necessary. Instead, your crates can publish their own skills or other extensions.
The way this works is that the assert-struct crate defines the skill once, centrally, in its own repository1. Then there is a separate file in Symposium's central recommendations repository with a pointer to the assert-struct repository. Any time the assert-struct repository updates that skill, the updates are automatically synchronized for you. Neat! (You can also embed skills directly in the recommendations repository, but then updating them requires a PR to that repo.)
Currently we allow skill content to be defined in a decentralized fashion, but we require that a plugin be added to our central recommendations repository. This is a temporary limitation. We eventually expect to allow crate authors to add skills and plugins in a fully decentralized fashion.
We chose to limit ourselves to a centralized repository early on for three reasons:
Even when decentralized support exists, a centralized repository will be useful, since there will always be crates that choose not to provide that support.
Having a central list of plugins will make it easy to update people as we evolve Symposium.
Having a centralized repository will help protect against malicious skills while we look for other mechanisms, since we can vet the crates that are added and easily scan their content.
What if I want to add skills for crates private to my company? I don't want to put those in the central repository!
No problem, you can add a custom plugin source.
Are you aware of the negative externalities of LLMs?
Extensibility: because everybody has something to offer
Fundamentally, the reason I am working on Symposium is that I believe everybody has something unique to offer. I see the appeal of strongly opinionated systems that reflect the brilliant vision of a particular person. But to me, the most beautiful systems are the ones that everybody gets to build together4. This is why I love open source. This is why I love emacs5. It's why I love VSCode's extension system, which has so many great gems6.
To me, Symposium is a double win in terms of empowerment. First, it makes agents extensible, which is going to give crate authors more power to support their crates. But it also helps make agentic programming better, which I believe will ultimately open up programming to a lot more people. And that is what it's all about.
Actually as of this posting, the assert-struct skill is embedded directly in the recommendations repo. But I opened a PR to put it on assert-struct and I'll port it over once it lands. ↩︎
Within Amazon, it's been amazing to watch how many people who never thought of themselves as software developers are starting to build software. Considering the challenges the software industry has with representation, I find this very encouraging. Diverse teams are stronger, better teams!↩︎
None of this is to say I don't believe in good defaults; there's a reason I use Zed and VSCode these days, and not emacs, much as I love it in concept. ↩︎
OMG. One of my college friends wrote this amazing essay some time back on emacs. Next time you're doomscrolling on the toilet or whatever, pop over to this essay instead. Fair warning: it's long, so it'll take you a while to read, but I think it nails what people love about emacs. ↩︎
These days I'm really enjoying Zed, but I have to say, I really miss kahole/edamagit! Which of course is inspired by the magit emacs package. ↩︎
WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we've done as part of the Firefox 150 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues and bugs, and submitted patches.
In Firefox 150, Khalid AlHaddad contributed several improvements:
Added the emulation.setNetworkConditions command, which currently supports only type: offline. With it, you can emulate offline mode on specific browsing contexts, on user contexts (a.k.a. containers), or globally.
The Rust team is happy to announce a new version of Rust, 1.95.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.95.0 with:
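```console
$ rustup update stable
```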
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
What's in 1.95.0 stable
cfg_select!
Rust 1.95 introduces a cfg_select! macro that acts roughly like a compile-time match on cfg predicates. This fulfills the same purpose as the popular cfg-if crate, although with a different syntax. cfg_select! expands to the right-hand side of the first arm whose configuration predicate evaluates to true. Some examples:
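A sketch of what this can look like, following the arm syntax shown in the standard library documentation for cfg_select! (the specific cfg predicates here are just illustrative, and this requires a toolchain where cfg_select! is available):

```rust
// Select one implementation of `foo` at compile time, in item position:
cfg_select! {
    unix => {
        fn foo() { /* unix-specific implementation */ }
    }
    target_pointer_width = "32" => {
        fn foo() { /* non-unix, 32-bit implementation */ }
    }
    _ => {
        fn foo() { /* fallback implementation */ }
    }
}

// It also works in expression position:
fn os_name() -> &'static str {
    cfg_select! {
        windows => { "windows" }
        unix => { "unix" }
        _ => { "other" }
    }
}
```

Unlike a chain of #[cfg] attributes, exactly one arm is selected: the first whose predicate is true, falling back to the `_` arm.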
Rust 1.88 stabilized let chains. Rust 1.95 brings that capability into match expressions, allowing for conditionals based on pattern matching.
```rust
match value {
    Some(x) if let Ok(y) = compute(x) => {
        // Both `x` and `y` are available here
        println!("{}, {}", x, y);
    }
    _ => {}
}
```
Note that the compiler will not currently consider the patterns matched in if let guards as part of the exhaustiveness evaluation of the overall match, just as it does not for plain if guards.
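The same limitation can be seen today with plain if guards, which the exhaustiveness check likewise ignores (a small illustrative sketch; the function and its arms are made up):

```rust
fn classify(n: Option<i32>) -> &'static str {
    match n {
        Some(x) if x > 0 => "positive",
        Some(x) if x <= 0 => "non-positive",
        // The two guards above cover every `Some`, but the compiler does not
        // reason about guard conditions, so these arms are still required:
        Some(_) => "unreachable in practice",
        None => "none",
    }
}

fn main() {
    println!("{}", classify(Some(5))); // positive
    println!("{}", classify(None));    // none
}
```

Removing the `Some(_)` arm makes the match non-exhaustive in the compiler's eyes, even though the guards cover every case.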
Rust 1.95 removes support on stable for passing a custom target specification to rustc. This should not affect any Rust users using a fully stable toolchain, as building the standard library (including just core) already required using nightly-only features.
We're also gathering use cases for custom targets on the tracking issue as we consider whether some form of this feature should eventually be stabilized.
Other changes
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.95.0
Many people came together to create Rust 1.95.0. We couldn't have done it without all of you. Thanks!