09 Apr 2026

Planet Mozilla

The Mozilla Blog: Old habits die hard: Microsoft tries to limit our options, this time with AI

Black-and-white close-up of a hand using a device beside oversized cursor icons.

Microsoft recently announced it's pulling back Copilot from several of its core Windows apps - Photos, Notepad, the Snipping Tool, and Widgets. Rolling back these forced AI integrations is the right move, but this is just the most recent example of Microsoft going too far without user consent.

Copilot was pushed onto users

Over the past year, Copilot wasn't offered to Windows users - it was installed on them. The M365 Copilot app began auto-installing on any Windows device running Microsoft 365 desktop apps, with no prompt and no consent. A new physical keyboard key was added to laptops that launched Copilot by default, with no simple way to remap it. Copilot was pinned to the taskbar by default on Windows 11 PCs. And, going a step further, Microsoft planned to embed it into three of the most fundamental surfaces of the operating system: the Windows notification center, the Settings app, and File Explorer.

Then came the user backlash.

When Microsoft says it now wants to be "intentional" about Copilot, it's really admitting that it made repeated choices to serve its business over its customers.

This isn't the first time - Microsoft has a pattern of deceptive design

The pattern of behavior here isn't new. Independent research commissioned by Mozilla has documented how Microsoft uses design and distribution tactics to override user choice - from deliberately complicated processes for changing your default browser, to UI that routes users back to Microsoft's Edge browser even after they've explicitly chosen something else.

Since Mozilla published that research, Microsoft has continued to escalate its use of dark patterns to force behaviors that help the bottom line, not people's lives. The rollout of Windows 11 alone offers several examples of tactics that continue to strip users of their choice.

The Copilot rollout followed the same playbook we've come to expect from Microsoft: use automatic installs, physical hardware, and default settings to force behaviors. In this most recent instance, Microsoft let its AI learn and gather data as quickly as possible before people had a choice.

What 'genuinely useful' AI integration actually looks like

We, like Microsoft and basically every tech company, have been asking ourselves the same question: What does it mean for AI to be genuinely useful? For us, the answer is simple. AI should work on your terms, not ours. Firefox's goal is to create AI enhancements that are made for people, not ones built simply to increase profit.

We've rolled out AI-enhanced features that make browsing smarter, faster, and more personalized: translations that run locally on your device so you can browse the web in your preferred language, alt text that adds accessibility descriptions to images in PDFs, and tab grouping that suggests related tabs and group names.

But we also know users deserve a choice. We built our answer into Firefox 148, introducing a centralized AI Controls panel in your browser settings, including a single "Block AI Enhancements" switch that turns off every AI feature at once. Each option is also individually controllable.

The premise is simple: You should decide whether AI is part of your browsing experience at all. Not Big Tech. Not Mozilla. You.

And critically, your preferences also persist across browser updates, which means AI tools won't silently re-enable themselves after a major upgrade. No reinstalling. No opting out again after the fact. It's designed for people who care about what's happening on their computer but shouldn't have to become a systems administrator to stay in control of it.

The stakes are bigger than one rollback

When a company with Microsoft's reach continues to override user choice - and only walks it back when the noise gets loud enough - it shapes what people expect from technology. It tells people that their only real move is to complain until, hopefully, the company relents. It also makes it harder for alternatives to compete when a company uses its reach and control to steer people back into its own products.

We don't think that's the internet we have to accept. People have been clear about what they want when it comes to this era of the internet. They want to feel like they're in control of their own devices and their own data. That's the internet we're trying to build.

The post Old habits die hard: Microsoft tries to limit our options, this time with AI appeared first on The Mozilla Blog.

09 Apr 2026 5:03pm GMT

The Mozilla Blog: 0DIN is open-sourcing AI security and the hard-earned knowledge behind it

Retro-futuristic scientist using an open-source AI scanner to analyze floating vintage technology and digital data streams in space.
Image generated by Nano Banana 2 in response to a request for a "Retro-futuristic collage of a scientist using an open-source AI scanner to analyze floating vintage tech and digital data streams."

We're launching across the developer and security community this week on Product Hunt and Hacker News. If you've been following AI security, we'd love your support and your feedback.

At Mozilla, open source has never been just a licensing choice. It's a conviction: the internet gets healthier when tools and knowledge circulate freely, when anyone can audit what's running, extend what exists, and build on what came before. That's why we built Firefox in the open. It's why we've kept building that way ever since.

0DIN, Mozilla's AI security team, is working from the same premise. This week we're releasing the 0DIN AI Security Scanner as open source software under the Apache 2.0 license, along with 179 community probes covering 35 vulnerability families, plus six specialty probes drawn exclusively from our bug bounty library.

The scanner, and the intelligence behind it

The 0DIN Scanner isn't another benchmark suite built from textbook examples. We're seeding it with probes drawn directly from our bug bounty program, where security researchers compete to find novel techniques to manipulate, extract data from, and subvert AI systems. As new vulnerabilities are discovered and disclosed through that program, we'll continue adding probes to the open-source library over time.

That loop, from researcher discovery to packaged reusable test, is what separates the 0DIN Scanner from generic tooling. It's high-impact intelligence on jailbreaks, updated frequently as our researchers find new techniques.

Built on NVIDIA's GARAK open-source framework, the 0DIN Scanner adds a graphical interface, automated scan scheduling, cross-model comparative analysis, and enterprise-grade reporting. It runs against frontier models, open-source LLMs, chatbots, and anything with a prompt interface. Security teams can see attack success rates, a vulnerability breakdown, and a comparison against the frontier models that attackers are also probing every day.
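Because the scanner builds on GARAK, teams comfortable on the command line can also exercise the underlying probe engine directly. A minimal sketch, assuming garak is installed via pip; the model and probe names here are illustrative choices, not 0DIN defaults, and the 0DIN Scanner's own interface is separate from this:

```shell
# Run garak's probe engine directly against a local Hugging Face model.
# Model name (gpt2) and probe family (promptinject) are example choices.
python -m garak --model_type huggingface --model_name gpt2 \
    --probes promptinject --report_prefix demo_scan
```

The resulting report is the raw material for the kind of cross-model comparison the scanner's interface builds on top.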

Six of those bug bounty probes are named here for the first time: Placeholder Injection, Incremental Table Completion, Technical Field Guide, Chemical Compiler Debug, Correction, and Hex Recipe Book. Each represents a real technique that worked against production AI systems before we closed the loop.

These probes are scored using JEF (Jailbreak Evaluation Framework), our open-source library for measuring prohibited content output, which is also seeing major updates this week.

The code is at github.com/0din-ai/ai-scanner. Fork it, extend it, build on it.

Knowing your risk before attackers do

Not every organization has a red team or the bandwidth to run adversarial testing. Many companies are deploying AI in production right now without a clear picture of where they're exposed. To help close that gap, we're offering free security assessments for enterprise AI deployments.

The assessment delivers an attack success rate against your systems, a breakdown across prompt injection, jailbreak, and data extraction categories, and a benchmark comparison against major frontier models. The process takes a few minutes to set up, with scan duration varying based on the number of probes chosen. If you're actively deploying AI and haven't tested it under adversarial conditions, this is a good place to start.

For teams that don't want to manage the open source scanner on their own, we also offer a managed Enterprise edition with access to nearly 500 pre-disclosure probes from the bug bounty program, giving organizations advance notice of emerging techniques before they're publicly known.

Why open source, and why now

AI is moving fast enough that no single team will solve this alone. There are too many threats, too many models, too much attack surface. Keeping our tools locked away would make 0DIN marginally stronger while leaving the broader internet weaker.

The researchers who submitted findings through our bug bounty program earned bounties for their work. We're releasing a meaningful portion of that intelligence as open source and we'll keep doing so as new vulnerabilities are discovered and disclosed. That's the deal Mozilla has always offered: we build in the open, the community helps make it better, and the web gets a little healthier for it.

Get involved

The post 0DIN is open-sourcing AI security and the hard-earned knowledge behind it appeared first on The Mozilla Blog.

09 Apr 2026 4:35pm GMT

Andreas Farre: BuildCache now works with mach

I'm happy to announce that buildcache is now a first-class compiler cache in mach. This has been a long time coming, and I'm excited to finally see it land.

For those unfamiliar, buildcache is a compiler cache that can drastically cut down your rebuild times by caching compilation results. It's similar to ccache, and even more so to sccache, in that it supports C/C++ out of the box as well as Rust. It has some nice unique properties of its own, though, which we'll look at more closely in upcoming posts.

Getting started

Setting it up is straightforward. Just add the following to your mozconfig:

ac_add_options --with-ccache=buildcache

Then build as usual:

./mach build

That's it.
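Once a couple of rebuilds have gone through, you can check whether the cache is actually being hit using buildcache's own command-line interface. A small sketch; the environment variables are optional tuning knobs, and the values shown are only illustrative:

```shell
# Print the cache hit/miss statistics accumulated so far
buildcache -s

# Optional tuning (illustrative values): cache location and size limit
export BUILDCACHE_DIR="$HOME/.buildcache"
export BUILDCACHE_MAX_CACHE_SIZE=10737418240   # 10 GiB
```

A rising hit rate across incremental rebuilds is the quickest sign that the cache is doing its job.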

Give it a try

If you run into any issues, please file a bug and tag me. I'd love to hear how it works out for people, and any rough edges you might hit.

09 Apr 2026 12:00am GMT