10 Apr 2026

Planet Mozilla

Andreas Farre: How to make Firefox builds 17% faster [1][2]

In the previous post, I mentioned that buildcache has some unique properties compared to ccache and sccache. One of them is its Lua plugin system, which lets you write custom wrappers for programs that aren't compilers in the traditional sense. With Bug 2027655 now merged, we can use this to cache Firefox's WebIDL binding code generation.

What's the WebIDL step?

When you build Firefox, one of the earlier steps runs python3 -m mozbuild.action.webidl to generate C++ binding code from hundreds of .webidl files. It produces thousands of output files: headers, cpp files, forward declarations, event implementations, and so on. The step isn't terribly slow on its own, but it runs on every clobber build, and the output is entirely deterministic given the same inputs. That makes it a perfect candidate for caching.
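The caching premise can be sketched in a few lines of Python. This is illustrative only, not how buildcache actually implements it: because the step is deterministic, a key derived from the command line and the contents of the input files uniquely identifies the outputs.

```python
import hashlib
from pathlib import Path

def cache_key(input_paths, command):
    """Hash the command line plus the content of every input file.

    Identical inputs always produce the identical key, which is
    exactly why a deterministic codegen step is cacheable: the key
    can be used to look up and replay previously stored outputs.
    """
    h = hashlib.sha256()
    h.update(" ".join(command).encode())
    for path in sorted(input_paths):
        h.update(Path(path).read_bytes())
    return h.hexdigest()
```

Change one byte of one .webidl file and the key changes; leave everything alone and the cached outputs can be replayed without running the codegen at all.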

The problem was that the compiler cache was never passed to this step. Buildcache was only wrapping actual compiler invocations, not the Python codegen.

The change

The fix in Bug 2027655 is small. In dom/bindings/Makefile.in, we now conditionally pass $(CCACHE) as a command wrapper to the py_action call:

WEBIDL_CCACHE=
ifdef MOZ_USING_BUILDCACHE
WEBIDL_CCACHE=$(CCACHE)
endif

webidl.stub: $(codegen_dependencies)
        $(call py_action,webidl $(relativesrcdir),$(srcdir),,$(WEBIDL_CCACHE))
        @$(TOUCH) $@

The py_action macro in config/makefiles/functions.mk is what runs Python build actions. The ability to pass a command wrapper as a fourth argument was also introduced in this bug. When buildcache is configured as the compiler cache, this means the webidl action is invoked as buildcache python3 -m mozbuild.action.webidl ... instead of just python3 -m mozbuild.action.webidl .... That's all buildcache needs to intercept it.

Note the ifdef MOZ_USING_BUILDCACHE guard. This is specific to buildcache because ccache and sccache don't have a mechanism for caching arbitrary commands. Buildcache does, through its Lua wrappers.

The Lua wrapper

Buildcache's Lua plugin system lets you write a script that tells it how to handle a program it doesn't natively understand. The wrapper for WebIDL codegen, webidl.lua, needs to answer a few questions for buildcache: whether it can handle a given command line, which input files feed into the result, and which output files the command produces.

With that information, buildcache can hash the inputs, check the cache, and either replay the cached outputs or run the real command and store the results.

The wrapper uses buildcache's direct_mode capability, meaning it hashes input files directly rather than relying on preprocessed output. This is the right approach here since we're not dealing with a C preprocessor but with a Python script that reads .webidl files.
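The wrapper itself isn't reproduced in this post. As a rough skeleton, the shape is something like the following. The function names follow buildcache's documented Lua wrapper interface, but the signatures and details here are assumptions; the real webidl.lua lives in the buildcache-wrappers repo.

```lua
-- Hypothetical skeleton, not the real webidl.lua.

-- Claim only invocations of the webidl build action.
function can_handle_command(args)
  for _, arg in ipairs(args) do
    if arg == "mozbuild.action.webidl" then
      return true
    end
  end
  return false
end

-- Hash input files directly instead of running a preprocessor.
function get_capabilities()
  return { "direct_mode" }
end

-- The .webidl files and codegen configuration the action reads,
-- derived from the command line (elided here).
function get_input_files()
  -- ...
end

-- The headers, cpp files, etc. the action writes, so buildcache
-- knows what to store in and replay from the cache.
function get_build_files()
  -- ...
end
```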

Numbers

Here are build times for ./mach build on Linux, comparing compiler caches. Each row shows a clobber build with an empty cache (cold), followed by a clobber build with a filled cache (warm):

tool         cold     warm     with plugin
none         5m35s    n/a      n/a
ccache       5m42s    3m21s    n/a
sccache      9m38s    2m49s    n/a
buildcache   5m43s    1m27s    1m12s

The "with plugin" column is buildcache with the webidl.lua wrapper active. It shaves another 15 seconds [1], bringing the total down to 1m12s [2]. Not a revolutionary improvement on its own, but it demonstrates the mechanism. The WebIDL step is just the first Python action to get this treatment; there are other codegen steps in the build that could benefit from the same approach.

More broadly, these numbers show buildcache pulling well ahead on warm builds. Going from a 5m35s clean build to a 1m12s cached rebuild is a nice improvement to the edit-compile-test cycle.
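The headline number checks out against the table. A quick check of the arithmetic:

```python
# Timings from the table, in seconds.
warm_no_plugin = 87    # 1m27s, buildcache warm
warm_plugin = 72       # 1m12s, buildcache warm + webidl.lua
clean = 335            # 5m35s, no cache

plugin_speedup = (warm_no_plugin - warm_plugin) / warm_no_plugin
print(f"{plugin_speedup:.0%}")   # the 17% from the title

overall = clean / warm_plugin
print(f"{overall:.1f}x")         # clean build vs warm cached build
```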

These are single runs on one machine, not rigorous benchmarks, but the direction is clear enough.

Setting it up

If you're already using buildcache with mach, the Makefile change is available when updating to today's central. To enable the Lua wrapper, clone the buildcache-wrappers repo and point buildcache at it via lua_paths in ~/.buildcache/config.json:

{
  "lua_paths": ["/path/to/buildcache-wrappers/mozilla"],
  "max_cache_size": 10737418240,
  "max_local_entry_size": 2684354560
}

Alternatively, you can set the BUILDCACHE_LUA_PATH environment variable. A convenient place to do that is in your mozconfig:

mk_add_options "export BUILDCACHE_LUA_PATH=/path/to/buildcache-wrappers/mozilla/"

The large max_local_entry_size (2.5 GB) is needed because some Rust crates produce very large cache entries.
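Those byte counts in the config are round binary sizes; a quick sanity check:

```python
GiB = 2 ** 30  # 1 GiB in bytes

# The values in config.json are plain byte counts:
assert 10737418240 == 10 * GiB    # max_cache_size       -> 10 GiB
assert 2684354560 == 2.5 * GiB    # max_local_entry_size -> 2.5 GiB
```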

What's next

The Lua plugin system is the interesting part here. The WebIDL wrapper is a proof of concept, but the same technique applies to any deterministic build step that takes known inputs and produces known outputs. There are other codegen actions in the Firefox build that could get the same treatment, and I plan to explore those next.

Notes
  1. For a clobber build with a warm cache

  2. On my machine

10 Apr 2026 12:00am GMT

09 Apr 2026


The Mozilla Blog: Old habits die hard: Microsoft tries to limit our options, this time with AI


Microsoft recently announced it's pulling back Copilot from several of its core Windows apps - Photos, Notepad, the Snipping Tool, and Widgets. Rolling back these forced AI integrations is the right move, but this is just the most recent example of Microsoft going too far without user consent.

Copilot was pushed onto users

Over the past year, Copilot wasn't offered to Windows users - it was installed on them. The M365 Copilot app began auto-installing on any Windows device running Microsoft 365 desktop apps, with no prompt and no consent. A new physical keyboard key was added to laptops that launched Copilot by default, with no simple way to remap it. By default, Copilot was pinned to the taskbar starting with Windows 11 PCs. And, going a step further, Microsoft planned to embed it into three of the most fundamental surfaces for the operating system: the Windows notification center, the Settings app, and File Explorer.

Then came the user backlash.

When Microsoft says it now wants to be "intentional" about Copilot, it's really admitting that it made repeated choices to serve its business over its customers.

This isn't the first time - Microsoft has a pattern of deceptive design

The pattern of behavior here isn't new. Independent research commissioned by Mozilla has documented how Microsoft uses design and distribution tactics to override user choice - from deliberately complicated processes for changing your default browser, to UI that routes users back to Microsoft's Edge browser even after they've explicitly chosen something else.

Since Mozilla published that research, Microsoft has continued to escalate its use of dark patterns to force behaviors that help the bottom line, not people's lives. The rollout of Windows 11 has continued to strip users of their choice in the same way.

The Copilot rollout followed the same playbook we've come to expect from Microsoft: use automatic installs, physical hardware, and default settings to force behaviors. In the most recent instance, they allowed their AI to learn and gather data as quickly as possible before people had a choice.

What 'genuinely useful' AI integration actually looks like

We, like Microsoft and basically every tech company, have been asking ourselves the same question: What does it mean for AI to be genuinely useful? For us, the answer is simple. AI should work on your terms, not ours. Firefox's goal is to create AI enhancements that are made for people, not just because they can increase profit.

We've rolled out AI-enhanced features that make browsing smarter, faster, and more personalized: translations that run locally on your device so you can browse the web in your preferred language, alt text that adds accessibility descriptions to images in PDFs, and tab grouping that suggests related tabs and group names.

But we also know users deserve a choice. We built our answer into Firefox 148, introducing a centralized AI Controls panel in your browser settings, including a single "Block AI Enhancements" switch that turns off every AI feature at once. Each option is also individually controllable.

The premise is simple: You should decide whether AI is part of your browsing experience at all. Not Big Tech. Not Mozilla. You.

And critically, your preferences also persist across browser updates, which means AI tools won't silently re-enable themselves after a major upgrade. No reinstalling. No opting out again after the fact. It's designed for people who care about what's happening on their computer but shouldn't have to become a systems administrator to stay in control of it.

The stakes are bigger than one rollback

When a company with Microsoft's reach continues to control users - and only walks it back when the noise gets loud enough - it shapes what people expect from technology. It tells people that their only real move is to complain until, hopefully, the company relents. It also makes it harder for alternatives to compete when a company uses its reach and control to steer people back into its own products.

We don't think that's the internet we have to accept. People have been clear about what they want when it comes to this era of the internet. They want to feel like they're in control of their own devices and their own data. That's the internet we're trying to build.

The post Old habits die hard: Microsoft tries to limit our options, this time with AI appeared first on The Mozilla Blog.

09 Apr 2026 5:03pm GMT

The Mozilla Blog: 0DIN is open-sourcing AI security and the hard-earned knowledge behind it

Image generated by Nano Banana 2 in response to a request for a "Retro-futuristic collage of a scientist using an open-source AI scanner to analyze floating vintage tech and digital data streams."

We're launching across the developer and security community this week on Product Hunt and Hacker News. If you've been following AI security, we'd love your support and your feedback.

At Mozilla, open source has never been just a licensing choice. It's a conviction: the internet gets healthier when tools and knowledge circulate freely, when anyone can audit what's running, extend what exists, and build on what came before. That's why we built Firefox in the open. It's why we've kept building that way ever since.

0DIN, Mozilla's AI security team, is working from the same premise. This week we're releasing the 0DIN AI Security Scanner as open source software under the Apache 2.0 license, along with 179 community probes covering 35 vulnerability families, plus six specialty probes drawn exclusively from our bug bounty library.

The scanner, and the intelligence behind it

The 0DIN Scanner isn't another benchmark suite built from textbook examples. We're seeding it with probes drawn directly from our bug bounty program, where security researchers compete to find novel techniques to manipulate, extract data from, and subvert AI systems. As new vulnerabilities are discovered and disclosed through that program, we'll continue adding probes to the open-source library over time.

That loop, from researcher discovery to packaged reusable test, is what separates 0DIN Scanner from generic tooling. It's high-impact intelligence on jailbreaks, updated frequently as our researchers find new techniques.

Built on NVIDIA's GARAK open-source framework, the 0DIN Scanner adds a graphical interface, automated scan scheduling, cross-model comparative analysis, and enterprise-grade reporting. It runs against frontier models, open source LLMs, chatbots, and anything with a prompt interface. Security teams can see attack success rates, a vulnerability breakdown, and a comparison against the frontier models that attackers are also probing every day.

Six of those bug bounty probes are named here for the first time: Placeholder Injection, Incremental Table Completion, Technical Field Guide, Chemical Compiler Debug, Correction, and Hex Recipe Book. Each represents a real technique that worked against production AI systems before we closed the loop.

These probes are scored using JEF (Jailbreak Evaluation Framework), our open-source library for measuring prohibited content output, which is also seeing major updates this week.

The code is at github.com/0din-ai/ai-scanner. Fork it, extend it, build on it.

Knowing your risk before attackers do

Not every organization has a red team or the bandwidth to run adversarial testing. Many companies are deploying AI in production right now without a clear picture of where they're exposed. To help close that gap, we're offering free security assessments for enterprise AI deployments.

The assessment delivers an attack success rate against your systems, a breakdown across prompt injection, jailbreaks, and data extraction categories, and a benchmark comparison against major frontier models. The process takes a few minutes to set up, with scan duration varying based on the number of probes chosen. If you're actively deploying AI and haven't tested it under adversarial conditions, this is a good place to start.

For teams that don't want to manage the open source scanner on their own, we also offer a managed Enterprise edition with access to nearly 500 pre-disclosure probes from the bug bounty program, giving organizations advance notice of emerging techniques before they're publicly known.

Why open source, and why now

AI is moving fast enough that no single team will solve this alone. There are too many threats, too many models, too much attack surface. Keeping our tools locked away would make 0DIN marginally stronger while leaving the broader internet weaker.

The researchers who submitted findings through our bug bounty program earned bounties for their work. We're releasing a meaningful portion of that intelligence as open source and we'll keep doing so as new vulnerabilities are discovered and disclosed. That's the deal Mozilla has always offered: we build in the open, the community helps make it better, and the web gets a little healthier for it.

Get involved

The post 0DIN is open-sourcing AI security and the hard-earned knowledge behind it appeared first on The Mozilla Blog.

09 Apr 2026 4:35pm GMT