08 Jan 2026
Planet Mozilla
Matthew Gaudet: Non-Traditional Profiling
Also known as "you can just put whatever you want in a jitdump you know?"
When you profile JIT code, you have to tell a profiler what on earth is going on in those JIT bytes you wrote out. Otherwise the profiler will shrug and just give you some addresses.
There's a decent and fairly common format called jitdump, which originates in perf but has come to be used in more places. The basic thrust of the parts we care about is: you have names associated with ranges.
Of course, the basic range you'd expect to name is "function foo() was compiled to bytes 0x1000-0x1400".
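To make "names associated with ranges" concrete, here is a simplified sketch of the one record type that matters for this post, JIT_CODE_LOAD, based on my reading of perf's jitdump specification. This is not SpiderMonkey's actual writer: the file header, timestamps, and error handling are omitted, and names here are illustrative.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// A jitdump JIT_CODE_LOAD record names a range of JIT-generated bytes.
// Field layout per the perf jitdump specification (simplified sketch).
struct JitCodeLoadPrefix {
  uint32_t id;          // record type: 0 == JIT_CODE_LOAD
  uint32_t total_size;  // size of the whole record, name and code included
  uint64_t timestamp;   // left zero in this sketch
  uint32_t pid;
  uint32_t tid;
  uint64_t vma;         // virtual address the code executes at
  uint64_t code_addr;   // where the code bytes live
  uint64_t code_size;
  uint64_t code_index;  // unique id for this compilation
};

// Serialize one "name -> [addr, addr + size)" mapping: the core of what a
// profiler needs to label JIT frames instead of shrugging at raw addresses.
std::vector<uint8_t> EncodeCodeLoad(const std::string& name, uint64_t addr,
                                    const std::vector<uint8_t>& code) {
  JitCodeLoadPrefix p{};
  p.id = 0;  // JIT_CODE_LOAD
  p.total_size = uint32_t(sizeof(p) + name.size() + 1 /* NUL */ + code.size());
  p.vma = p.code_addr = addr;
  p.code_size = code.size();
  std::vector<uint8_t> out(p.total_size);
  std::memcpy(out.data(), &p, sizeof(p));
  std::memcpy(out.data() + sizeof(p), name.c_str(), name.size() + 1);
  std::memcpy(out.data() + sizeof(p) + name.size() + 1, code.data(),
              code.size());
  return out;
}
```

The key observation for the rest of the post: the name field is an arbitrary NUL-terminated string, so nothing stops you from putting something other than a function name in it.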
Suppose you get that working. You might get a profile that looks like this one.

This profile is pretty useful: you can see from the flame chart which execution tier created the code being executed, and you can see code from inline caches, etc.
Before I left for Christmas break though, I had a thought: to a first approximation, both optimized and baseline code generation are fairly 'template' style. That is to say, we emit (relatively) stable chunks of code for each of our bytecodes, in the case of our baseline compiler, or for each of our intermediate-representation nodes, in the case of Ion, our top-tier compiler.
What if we looked more closely at that?
Some of our code is already tagged with AutoCreatedBy, an RAII class which pushes a creator string on construction and pops it off on destruction. I went through and added AutoCreatedBy to each of the LIR ops' codegen methods (e.g. CodeGenerator::visit*). Then I rigged up our jitdump support so that instead of dumping functions, we dump the function name plus the whole chain of AutoCreatedBy as the 'function name' for each sequence of instructions generated while the AutoCreatedBy was live.
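The push-on-construction, pop-on-destruction pattern is easy to picture. Here's a hypothetical sketch (class and method names invented; SpiderMonkey's real AutoCreatedBy differs) of an RAII creator annotation whose live chain can be joined into the 'function name' written to the jitdump:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for the assembler's stack of "who is emitting
// code right now" labels.
class CreatorStack {
 public:
  void push(const char* who) { stack_.push_back(who); }
  void pop() { stack_.pop_back(); }
  // Join the live chain into one label, e.g. "visitCallGeneric/emitGuard",
  // suitable for use as a jitdump range name.
  std::string chain() const {
    std::string out;
    for (const auto& s : stack_) {
      if (!out.empty()) out += "/";
      out += s;
    }
    return out;
  }

 private:
  std::vector<std::string> stack_;
};

// RAII annotation: the label is live exactly as long as this object is.
class AutoCreatedBy {
 public:
  AutoCreatedBy(CreatorStack& stack, const char* who) : stack_(stack) {
    stack_.push(who);
  }
  ~AutoCreatedBy() { stack_.pop(); }

 private:
  CreatorStack& stack_;
};
```

A codegen method would then open with something like AutoCreatedBy acb(stack, "visitHasShape"), and every instruction emitted in that scope gets attributed to the full chain.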
That gets us this profile:

While it doesn't look that different, the key is in how the frames are named. Of course, the vast majority of frames are just the name of the call instruction... that only makes sense. However, you can see some interesting things if you invert the call tree.

For example, for a single self-hosted function we spend 1.9% of the profiled time in visitHasShape, which is basically:
masm.loadObjShapeUnsafe(obj, output);
masm.cmpPtrSet(Assembler::Equal, output, ImmGCPtr(ins->mir()->shape()),
               output);
Which is not particularly complicated.
Ok, so that proves out the value. What if we just say... hmmm. I actually want to aggregate across all compilations; ignore the function name, just tell me the compilation path here.
Woah. Ok, now we've got something quite different, if really hard to interpret.

Even more interesting (easier to interpret) is the inverted call tree:

So across the whole program, we're spending basically 5% of the time doing guardShape. I think that's a super interesting slicing of the data.
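That "ignore the function name, aggregate by compilation path" slicing can be illustrated with a toy example (labels and counts are made up; the real work happens inside the profiler's inverted call tree, not in code like this): strip the function name off each label and sum sample counts by the innermost creator.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy aggregation: each sample is labeled "funcName | chain/of/creators"
// (an assumed format for illustration). Drop the function name and total
// the counts by the innermost creator - roughly what an inverted call
// tree surfaces.
std::map<std::string, int> AggregateByInnermostCreator(
    const std::vector<std::pair<std::string, int>>& samples) {
  std::map<std::string, int> totals;
  for (const auto& [label, count] : samples) {
    auto bar = label.find(" | ");
    std::string chain =
        bar == std::string::npos ? label : label.substr(bar + 3);
    auto slash = chain.rfind('/');
    std::string innermost =
        slash == std::string::npos ? chain : chain.substr(slash + 1);
    totals[innermost] += count;
  }
  return totals;
}
```

With this kind of grouping, a creator like guardShape that appears under many different functions and many different chains collapses into a single program-wide total.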
Is it actionable? I don't know yet. I haven't really opened any bugs on this yet; a lot of the highlighted code is stuff where it's not clear there is a faster way to do what's being done, short of engine architectural innovation.
The reason to write this blog post is basically to share that... man, we can slice and dice our programs in so many interesting ways. I'm sure there's more to think of. For example, not shown here was an experiment where I added AutoCreatedBy inside a single set of macro-assembler methods (around barriers) to see if I could actually measure GC barrier cost (it's low on the benchmarks I checked).
So yeah. You can just... put stuff in your JIT dump file.
Edited to Add: I should mention this code is nowhere. Given I don't entirely know how actionable this ends up being, and the code quality is subpar, I haven't even pushed this code. Think of this as an inspiration, not a feature announcement.
08 Jan 2026 9:46pm GMT
The Mozilla Blog: Owners, not renters: Mozilla’s open source AI strategy

The future of intelligence is being set right now, and the path we're on leads somewhere I don't want to go. We're drifting toward a world where intelligence is something you rent - where your ability to reason, create, and decide flows through systems you don't control, can't inspect, and didn't shape. In that world, the landlord can change the terms anytime, and you have no recourse but to accept what you're given.
I think we can do better. Making that happen is now central to what Mozilla is doing.
What we did for the web
Twenty-five years ago, Microsoft Internet Explorer controlled 95% of the browser market, which meant Microsoft controlled how most people experienced the internet and who could build what on what terms. Mozilla was born to change this, and Firefox succeeded beyond what most people thought possible - dropping Internet Explorer's market share to 55% in just a few years and ushering in the Web 2.0 era. The result was a fundamentally different internet. It was faster and richer for everyday users, and for developers it was a launchpad for open standards and open source that decentralized control over the core technologies of the web.
There's a reason the browser is called a "user agent." It was designed to be on your side - blocking ads, protecting your privacy, giving you choices that the sites you visited never would have offered on their own. That was the first fight, and we held the line for the open web even as social networks and mobile platforms became walled gardens.
Now AI is becoming the new intermediary. It's what I've started calling "Layer 8" - the agentic layer that mediates between you and everything else on the internet. These systems will negotiate on our behalf, filter our information, shape our recommendations, and increasingly determine how we interact with the entire digital world.
The question we have to ask is straightforward: Whose side will your new user agent be on?
Why closed systems are winning (for now)
We need to be honest about the current state of play: Closed AI systems are winning today because they are genuinely easier to use. If you're a developer with an idea you want to test, you can have a working prototype in minutes using a single API call to one of the major providers. GPUs, models, hosting, guardrails, monitoring, billing - it all comes bundled together in a package that just works. I understand the appeal firsthand, because I've made the same choice myself on late-night side projects when I just wanted the fastest path from an idea in my head to something I could actually play with.
The open-source AI ecosystem is a different story. It's powerful and advancing rapidly, but it's also deeply fragmented - models live in one repository, tooling in another, and the pieces you need for evaluation, orchestration, guardrails, memory, and data pipelines are scattered across dozens of independent projects with different assumptions and interfaces. Each component is improving at remarkable speed, but they rarely integrate smoothly out of the box, and assembling a production-ready stack requires expertise and time that most teams simply don't have to spare. This is the core challenge we face, and it's important to name it clearly: What we're dealing with isn't a values problem where developers are choosing convenience over principle. It's a developer experience problem. And developer experience problems can be solved.
The ground is already shifting
We've watched this dynamic play out before and the history is instructive. In the early days of the personal computer, open systems were rough, inconsistent, and difficult to use, while closed platforms offered polish and simplicity that made them look inevitable. Openness won anyway - not because users cared about principles, but because open systems unlocked experimentation and scale that closed alternatives couldn't match. The same pattern repeated on the web, where closed portals like AOL and CompuServe dominated the early landscape before open standards outpaced them through sheer flexibility and the compounding benefits of broad participation.
AI has the potential to follow the same path - but only if someone builds it. And several shifts are already reshaping the landscape:
- Small models have gotten remarkably good. At 1 to 8 billion parameters, tuned for specific tasks, they run on hardware that organizations already own;
- The economics are changing too. As enterprises feel the constraints of closed dependencies, self-hosting is starting to look like sound business rather than ideological commitment (companies like Pinterest have attributed millions of dollars in savings to migrating to open-source AI infrastructure);
- Governments want control over their supply chain. They are becoming increasingly unwilling to depend on foreign platforms for capabilities they consider strategically important, driving demand for sovereign systems; and,
- Consumer expectations keep rising. People want AI that responds instantly, understands their context, and works across their tools without locking them into a single platform.
The capability gap that once justified the dominance of closed systems is closing fast. What remains is a gap in usability and integration. The lesson I take from history is that openness doesn't win by being more principled than the alternatives. Openness wins when it becomes the better deal - cheaper, more capable, and just as easy to use.
Where the cracks are forming
If openness is going to win, it won't happen everywhere at once. It will happen at specific tipping points - places where the defaults haven't yet hardened, where a well-timed push can change what becomes normal. We see four.

The first is developer experience. Developers are the ones who actually build the future - every default they set, every stack they choose, every dependency they adopt shapes what becomes normal for everyone else. Right now, the fastest path runs through closed APIs, and that's where most of the building is happening. But developers don't want to be locked in any more than users do. Give them open tools that work as well as the closed ones, and they'll build the open ecosystem themselves.
The second is data. For a decade, the assumption has been that data is free to scrape - that the web is a commons to be harvested without asking. That norm is breaking, and not a moment too soon. The people and communities who create valuable data deserve a say in how it's used and a share in the value it creates. We're moving toward a world of licensed, provenance-based, permissioned data. The infrastructure for that transition is still being built, which means there's still a chance to build it right.
The third is models. The dominant architecture today favors only the biggest labs, because only they can afford to train massive dense transformers. But the edges are accelerating: small models, mixtures of experts, domain-specific models, multilingual models. As these approaches mature, the ability to create and customize intelligence spreads to communities, companies, and countries that were previously locked out.
The fourth is compute. This remains the choke point. Access to specialized hardware still determines who can train and deploy at scale. More doors need to open - through distributed compute, federated approaches, sovereign clouds, idle GPUs finding productive use.
What an open stack could look like
Today's dominant AI platforms are building vertically integrated stacks: closed applications on top of closed models trained on closed data, running on closed compute. Each layer reinforces the next - data improves models, models improve applications, applications generate more data that only the platform can use. It's a powerful flywheel. If it continues unchallenged, we arrive at an AI era equivalent to AOL, except far more centralized. You don't build on the platform; you build inside it.
There's another path. The LAMP stack - Linux, Apache, MySQL, and PHP - won because that combination became easier to use than the proprietary alternatives, and because it let developers build things that no commercial platform would have prioritized. The web we have today exists because that stack existed.
We think AI can follow the same pattern. Not one stack controlled by any single party, but many stacks shaped by the communities, countries, and companies that use them:
- Open developer interfaces at the top. SDKs, guardrails, workflows, and orchestration that don't lock you into a single vendor;
- Open data standards underneath. Provenance, consent, and portability built in by default, so you know where your training data came from and who has rights to it;
- An open model ecosystem below that. Smaller, specialized, interchangeable models that you can inspect, tune to your values, and run where you need them; and
- Open compute infrastructure at the foundation. Distributed and federated hardware across cloud and edge, not routed through a handful of hyperscalers.
Pieces of this stack already exist - good ones, built by talented people. The task now is to fill in the gaps, connect what's there, and make the whole thing as easy to use as the closed alternatives. That's the work.
Why open source matters here
If you've followed Mozilla, you know the Manifesto. For almost 20 years, it's guided what we build and how - not as an abstract ideal, but as a tool for making principled decisions every single day. Three of its principles are especially urgent in the age of AI:
- Human agency. In a world of AI agents, it's more important than ever that technology lets people shape their own experiences - and protects privacy where it matters most;
- Decentralization and open source. An open, accessible internet depends on innovation and broad participation in how technology gets created and used. The success of open-source AI, built around transparent community practices, is critical to making this possible; and
- Balancing commercial and public benefit. The direction of AI is being set by commercial players. We need strong public-benefit players to create balance in the overall ecosystem.
Open-source AI is how these principles become real. It's what makes plurality possible - many intelligences shaped by many communities, not one model to rule them all. It's what makes sovereignty possible - owning your infrastructure rather than renting it. And it's what keeps the door open for public-benefit alternatives to exist alongside commercial ones.
What we'll do in 2026
The window to shape these defaults is still open, but it won't stay open forever. Here's where we're putting our effort - not because we have all the answers, but because we think these are the places where openness can still reset the defaults before they harden.
Make open AI easier than closed. Mozilla.ai is building any-suite, a modular framework that integrates the scattered components of the open AI stack - model routing, evaluation, guardrails, memory, orchestration - into something coherent that developers can actually adopt without becoming infrastructure specialists. The goal is concrete: Getting started with open AI should feel as simple as making a single API call.
Shift the economics of data. The Mozilla Data Collective is building a marketplace for data that is properly licensed, clearly sourced, and aligned with the values of the communities it comes from. It gives developers access to high-quality training data while ensuring that the people and institutions who contribute that data have real agency and share in the economic value it creates.
Learn from real deployments. Strategy that isn't grounded in practical experience is just speculation, so we're deepening our engagement with governments and enterprises adopting sovereign, auditable AI systems. These engagements are the feedback loops that tell us where the stack breaks and where openness needs reinforcement.
Invest in the ecosystem. We're not just building; we're backing others who are building too. Mozilla Ventures is investing in open-source AI companies that align with these principles. Mozilla Foundation is funding researchers and projects through targeted grants. We can't do everything ourselves, and we shouldn't try. The goal is to put resources behind the people and teams already doing the work.
Show up for the community. The open-source AI ecosystem is vast, and it's hard to know what's working, what's hype, and where the real momentum is building. We want to be useful here. We're launching a newsletter to track what's actually happening in open AI. We're running meetups and hackathons to bring builders together. We're fielding developer surveys to understand what people actually need. And at MozFest this year, we're adding a dedicated developer track focused on open-source AI. If you're doing important work in this space, we want to help it find the people who need to see it.
Are you in?
Mozilla is one piece of a much larger movement, and we have no interest in trying to own or control it - we just want to help it succeed. There's a growing community of people who believe the open internet is still worth defending and who are working to ensure that AI develops along a different path than the one the largest platforms have laid out. Not everyone in that community uses the same language or builds exactly the same things, but something like a shared purpose is emerging. Mozilla sees itself as part of that effort.
We kept the web open not by asking anyone's permission, but by building something that worked better than the alternatives. We're ready to do that again.
So: Are you in?
If you're a developer building toward an open source AI future, we want to work with you. If you're a researcher, investor, policymaker, or founder aligned with these goals, let's talk. If you're at a company that wants to build with us rather than against us, the door is open. Open alternatives have to exist - that keeps everyone honest.
The future of intelligence is being set now. The question is whether you'll own it, or rent it.
We're launching a newsletter to track what's happening in open-source AI - what's working, what's hype, and where the real momentum is building. Sign up here to follow along as we build.
Read more here about our emerging strategy, and how we're rewiring Mozilla for the era of AI.
The post Owners, not renters: Mozilla's open source AI strategy appeared first on The Mozilla Blog.
08 Jan 2026 7:05pm GMT
Firefox Add-on Reviews: 2025 Staff Pick Add-ons
While nearly half of all Firefox users have installed an add-on, it's safe to say nearly all Firefox staffers use add-ons. I polled a few of my peers and here are some of our staff favorite add-ons of 2025…
Falling Snow Animated Theme
Enjoy the soothing mood of Falling Snow Animated Theme. This motion-animated dark theme turns Firefox into a calm wintry night as snowflakes cascade around the corners of your browser.
Privacy Badger
The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is built to look for a certain set of actions that indicate a web page is trying to secretly track you.
Zero set up required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage "supercookies," canvas fingerprinting, and other sneaky tracking methods.
Adaptive Tab Bar Color
Turn Firefox into an internet chameleon. Adaptive Tab Bar Color changes the colors of Firefox to match whatever website you're visiting.
It's beautifully simple and sublime. No setup required, but you're free to make subtle adjustments to color contrast patterns and assign specific colors for websites.
Rainy Spring Sakura by MaDonna
Created by one of the most prolific theme designers in the Firefox community, MaDonna, we love Rainy Spring Sakura's bucolic mix of calming colors.
It's like instant Zen mode for Firefox.
Return YouTube Dislike
Do you like the Dislike? YouTube removed the thumbs-down display, but fortunately Return YouTube Dislike came along to restore our view into the sometimes brutal truth of audience sentiment.
Other Firefox users seem to agree…
"Does exactly what the name suggests. Can't see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool."
Firefox user OFG
"i have never smashed 5 stars faster."
Firefox user 12918016
Return YouTube Dislike re-enables a beloved feature.
LeechBlock NG
Block time-wasting websites with LeechBlock NG - easily one of our staff-favorite productivity tools.
Lots of customization features help you stay focused and free from websites that have a way of dragging you down. Key features:
- Block entire websites or just portions (e.g. allow YouTube video pages but block the homepage)
- Block websites based on time of day, day of the week, or both
- Time limit customization (e.g. only 1 hour of Reddit per day)
DarkSpaceBlue
Drift through serene outer space as you browse the web. DarkSpaceBlue celebrates the infinite wonder of life among the stars.
LanguageTool - Grammar and Spell Checker
Improve your prose anywhere you write on the web. LanguageTool - Grammar and Spell Checker will make you a better writer in 25+ languages.
Much more than a basic spell checker, this privacy-centric writing aid is packed with great features:
- Offers alternate phrasing for brevity and clarity
- Recognizes common misuses of similar sounding words (e.g. there/their, your/you're)
- Works with all web-based email and social media
- Provides synonyms for overused words
LanguageTool can help with subtle syntax improvements.
Sink It for Reddit!
Imagine a more focused, free-feeling Reddit - that's Sink It for Reddit!
Some of our staff-favorite features include:
- Custom content muting (e.g. ad blocking, remove app install and login prompts)
- Color-coded comments
- Streamlined navigation
- Adaptive dark mode
Sushi Nori
Turns out we have quite a few sushi fans at Firefox. We celebrate our love of sushi with the savory theme Sushi Nori.
08 Jan 2026 2:59pm GMT