16 Jan 2026

Planet Mozilla

Mozilla GFX: Experimental High Dynamic Range video playback on Windows in Firefox Nightly 148

Modern computer displays have become much more capable in recent years, with High Dynamic Range (HDR) as a headline feature. These displays can show vibrant shades of red, purple, and green that were outside the range of older displays, as well as higher peak brightness in parts of the displayed video.

We are happy to announce that Firefox is gaining support for HDR video on Windows, now enabled in Firefox Nightly 148. This is experimental for the time being, as we want to gather feedback on what works and what does not across varied hardware in the wild before we enable it broadly for all Firefox users. HDR video has already been live on macOS for some time, and support for Wayland on Linux is in progress.

To get the full experience, you will need an HDR display, and the HDR feature needs to be turned on in Windows (Settings -> Display Settings) for that display. This release also changes how HDR video looks on non-HDR displays in some cases: it used to look very washed out, but it should be improved now. Feedback on whether this is a genuine improvement is also welcome. Popular streaming websites may check for this HDR capability, so they may now offer you HDR video content, but only if HDR is enabled on the display.

We are actively working on HDR support for other web functionality such as WebGL, WebGPU, Canvas2D and static images, but have no current estimates on when those features will be ready: this is a lot of work, and relevant web standards are still in flux.

Note for site authors: Websites can use the CSS video-dynamic-range media feature to make separate HDR and SDR videos available for the same video element. This feature detects whether the user has the display set to HDR, not necessarily whether the display is capable of HDR mode. Displaying an HDR video on an SDR display is expected to work reasonably well but needs more testing - we invite feedback on that.

Notes and limitations:

16 Jan 2026 2:40am GMT

14 Jan 2026

Planet Mozilla

The Mozilla Blog: How founders are meeting the moment: Lessons from Mozilla Ventures’ 2025 portfolio convening

Mozilla Ventures Convening 2025 Report book cover

At Mozilla, we've long believed that technology can be built differently - not only more openly, but more responsibly, more inclusively, and more in service of the people who rely on it. As AI reshapes nearly every layer of the internet, those values are being tested in real time.

Our 2025 Mozilla Ventures Portfolio Convening Report captures how a new generation of founders is meeting that moment.

At the Mozilla Festival 2025 in Barcelona, from Nov. 7-9, we brought together 50 founders from 30 companies across our portfolio to grapple with some of the most pressing questions in technology today: How do we build AI that is trustworthy and governable? How do we protect privacy at scale? What does "better social" look like after the age of the global feed? And how do we ensure that the future of technology is shaped by people and communities far beyond today's centers of power?

Over three days of panels, talks, and hands-on sessions, founders shared not just what they're building, but what they're learning as they push into new terrain. What emerged is a vivid snapshot of where the industry is heading - and the hard choices required to get there.

Open source as strategy, not slogan

A major theme emerging across conversations with our founders was that open source is no longer a "nice to have." It's the backbone of trust, adoption, and long‑term resilience in AI, and a critical pillar for the startup ecosystem. But these founders aren't naïve about the challenges. Training frontier‑scale models costs staggering sums, and the gravitational pull of a few dominant labs is real. Yet companies like Union.ai, Jozu, and Oumi show that openness can still be a moat - if it's treated as a design choice, not a marketing flourish.

Their message is clear: open‑washing won't cut it. True openness means clarity about what's shared - weights, data, governance, standards - and why. It means building communities that outlast any single company. And it means choosing investors who understand that open‑source flywheels take time to spin up.

Community as the real competitive edge

Across November's sessions, founders returned to a simple truth: community is the moat. Flyte's growth into a Linux Foundation project, Jozu's push for open packaging standards, and Lelapa's community‑governed language datasets all demonstrate that the most durable advantage isn't proprietary code - it's shared infrastructure that people trust.

Communities harden technology, surface edge cases, and create the kind of inertia that keeps systems in place long after competitors appear. But they also require care: documentation, governance, contributor experience, and transparency. As one founder put it, "You can't build community overnight. It's years of nurturing."

Ethics as infrastructure

One of the most powerful threads came from Lelapa AI, which reframes data not as raw material to be mined but as cultural property. Their licensing model, inspired by Māori data sovereignty, ensures that African languages - and the communities behind them - benefit from the value they create. This is openness with accountability, a model that challenges extractive norms and points toward a more equitable AI ecosystem.

It's a reminder that ethical design isn't a layer on top of technology - it's part of the architecture.

The real competitor: fear

Founders spoke candidly about the biggest barrier to adoption: fear. Enterprises default to hyperscalers because no one gets fired for choosing the biggest vendor. Overcoming that inertia requires more than values. It requires reliability, security features, SSO, RBAC, audit logs - the "boring" but essential capabilities that make open systems viable in real organizations.

In other words, trust is built not only through ideals but through operational excellence.

A blueprint for builders

Across all 16 essays, a blueprint started to emerge for founders and startups committed to building responsible technology and open source AI:

Taken together, the 16 essays in this report point to something larger than any single technology or trend. They show founders wrestling with how AI is governed, how trust is earned, how social systems can be rebuilt at human scale, and how innovation looks different when it starts from Lagos or Johannesburg instead of Silicon Valley.

The future of AI doesn't have to be centralized, extractive or opaque. The founders in this portfolio are proving that openness, trustworthiness, diversity, and public benefit can reinforce one another - and that competitive companies can be built on all four.

We hope you'll dig into the report, explore the ideas these founders are surfacing, and join us in backing the people building what comes next.

The post How founders are meeting the moment: Lessons from Mozilla Ventures' 2025 portfolio convening appeared first on The Mozilla Blog.

14 Jan 2026 5:00pm GMT

This Week In Rust: This Week in Rust 634

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

[ES] Command Pattern in Rust: When intent doesn't need to be an object

Miscellaneous

Crate of the Week

This week's crate is diesel-guard, a linter against dangerous Postgres migrations.

Thanks to Alex Yarotsky for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

539 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Fairly quiet week, most changes due to new features which naturally carry some overhead for existing programs. Overall though a small improvement.

Triage done by @simulacrum. Revision range: 7c04f5d2..840245e9

3 Regressions, 1 Improvement, 4 Mixed; 2 of them in rollups. 31 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Compiler Team (MCPs only)

Rust

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Leadership Council, Language Team, Language Reference or Unsafe Code Guidelines. Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-01-14 - 2026-02-11 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I have written in dozens of computer languages, including specialized ones that were internal to Pixar (including one I designed). I spent decades writing C and C++. I wrote bit-slice microcode, coded for SIMD before many folks outside of Pixar had it.

I wrote the first malloc debugger that would stop your debugger at the source code line that was the problem. Unix workstation manufacturers had to do an unexpected release when this revealed all of the problems in their C libraries.

I am a better programmer in Rust for anything low-level or high-performance. It just keeps me from making an entire class of mistakes that were too easy to make in any language without garbage-collection.

Over the long term, anything that improves quality is going to win. There is a lot of belly-aching by folks who are too in love with what they've been using for decades, but it is mostly substance-free. Like people realizing that code marked "unsafe" is, surprise, unsafe. And that unsafe can be abused.

- Bruce Perens on LinkedIn

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

14 Jan 2026 5:00am GMT

The Rust Programming Language Blog: What does it take to ship Rust in safety-critical?

This is another post in our series covering what we learned through the Vision Doc process. In our first post, we described the overall approach and what we learned about doing user research. In our second post, we explored what people love about Rust. This post goes deep on one domain: safety-critical software.

When we set out on the Vision Doc work, one area we wanted to explore in depth was safety-critical systems: software where malfunction can result in injury, loss of life, or environmental harm. Think vehicles, airplanes, medical devices, industrial automation. We spoke with engineers at OEMs, integrators, and suppliers across automotive (mostly), industrial, aerospace, and medical contexts.

What we found surprised us a bit. The conversations kept circling back to a single tension: Rust's compiler-enforced guarantees support much of what Functional Safety Engineers and Software Engineers in these spaces spend their time preventing, but once you move beyond prototyping into the higher-criticality parts of a system, the ecosystem support thins out fast. There is no MATLAB/Simulink Rust code generation. There is no OSEK or AUTOSAR Classic-compatible RTOS written in Rust or with first-class Rust support. The tooling for qualification and certification is still maturing.

Quick context: what makes software "safety-critical"

If you've never worked in these spaces, here's the short version. Each safety-critical domain has standards that define a ladder of integrity levels: ISO 26262 in automotive, IEC 61508 in industrial, IEC 62304 in medical devices, DO-178C in aerospace. The details differ, but the shape is similar: as you climb the ladder toward higher criticality, the demands on your development process, verification, and evidence all increase, and so do the costs.[1]

This creates a strong incentive for decomposition: isolate the highest-criticality logic into the smallest surface area you can, and keep everything else at lower levels where costs are more manageable and you can move faster.

We'll use automotive terminology in this post (QM through ASIL D) since that's where most of our interviews came from, but the patterns generalize. These terms represent increasing levels of safety-criticality, with QM being the lowest and ASIL D being the highest. The story at low criticality looks very different from the story at high criticality, regardless of domain.

Rust is already in production for safety-critical systems

Before diving into the challenges, it is worth noting that Rust is not just being evaluated in these domains. It is deployed and running in production.

We spoke with a principal firmware engineer working on mobile robotics systems certified to IEC 61508 SIL 2:

"We had a new project coming up that involved a safety system. And in the past, we'd always done these projects in C using third party stack analysis and unit testing tools that were just generally never very good, but you had to do them as part of the safety rating standards. Rust presented an opportunity where 90% of what the stack analysis stuff had to check for is just done by the compiler. That combined with the fact that now we had a safety qualified compiler to point to was kind of a breakthrough." -- Principal Firmware Engineer (mobile robotics)

We also spoke with an engineer at a medical device company deploying IEC 62304 Class B software to intensive care units:

"All of the product code that we deploy to end users and customers is currently in Rust. We do EEG analysis with our software and that's being deployed to ICUs, intensive care units, and patient monitors." -- Rust developer at a medical device company

"We changed from this Python component to a Rust component and I think that gave us a 100-fold speed increase." -- Rust developer at a medical device company

These are not proofs of concept. They are shipping systems in regulated environments, going through audits and certification processes. The path is there. The question is how to make it easier for the next teams coming through.

Rust adoption is easiest at QM, and the constraints sharpen fast

At low criticality, teams described a pragmatic approach: use Rust and the crates ecosystem to move quickly, then harden what you ship. One architect at an automotive OEM told us:

"We can use any crate [from crates.io] [..] we have to take care to prepare the software components for production usage." -- Architect at Automotive OEM

But at higher levels, third-party dependencies become difficult to justify. Teams either rewrite, internalize, or strictly constrain what they use. An embedded systems engineer put it bluntly:

"We tend not to use 3rd party dependencies or nursery crates [..] solutions become kludgier as you get lower in the stack." -- Firmware Engineer

Some teams described building escape hatches, abstraction layers designed for future replacement:

"We create an interface that we'd eventually like to have to simplify replacement later on [..] sometimes rewrite, but even if re-using an existing crate we often change APIs, write more tests." -- Team Lead at Automotive Supplier (ASIL D target)

Even teams that do use crates from crates.io described treating that as a temporary accelerator, something to track carefully and remove from critical paths before shipping:

"We use crates mainly for things in the beginning where we need to set up things fast, proof of concept, but we try to track those dependencies very explicitly and for the critical parts of the software try to get rid of them in the long run." -- Team lead at an automotive software company developing middleware in Rust

In aerospace, the "control the whole stack" instinct is even stronger:

"In aerospace there's a notion of we must own all the code ourselves. We must have control of every single line of code." -- Engineering lead in aerospace

This is the first big takeaway: a lot of "Rust in safety-critical" is not just about whether Rust compiles for a target. It is about whether teams can assemble an evidence-friendly software stack and keep it stable over long product lifetimes.

The compiler is doing work teams used to do elsewhere

Many interviewees framed Rust's value in terms of work shifted earlier and made more repeatable by the compiler. This is not just "nice," it changes how much manual review you can realistically afford. Much of what was historically process-based enforcement through coding standards like MISRA C and CERT C becomes a language-level concern in Rust, checked by the compiler rather than external static analysis or manual review.

"Roughly 90% of what we used to check with external tools is built into Rust's compiler." -- Principal Firmware Engineer (mobile robotics)

We heard variations of this from teams dealing with large codebases and varied skill levels:

"We cannot control the skill of developers from end to end. We have to check the code quality. Rust by checking at compile time, or Clippy tools, is very useful for our domain." -- Engineer at a major automaker

Even on smaller teams, the review load matters:

"I usually tend to work on teams between five and eight. Even so, it's too much code. I feel confident moving faster, a certain class of flaws that you aren't worrying about." -- Embedded systems engineer (mobile robotics)

Closely related: people repeatedly highlighted Rust's consistency around error handling:

"Having a single accepted way of handling errors used throughout the ecosystem is something that Rust did completely right." -- Automotive Technical Lead

For teams building products with 15-to-20-year lifetimes and "teams of teams," compiler-enforced invariants scale better than "we will just review harder."

Teams want newer compilers, but also stability they can explain

A common pattern in safety-critical environments is conservative toolchain selection. But engineers pointed out a tension: older toolchains carry their own defect history.

"[..] traditional wisdom is that after something's been around and gone through motions / testing then considered more stable and safer [..] older compilers used tend to have more bugs [and they become] hard to justify" -- Software Engineer at an Automotive supplier

Rust's edition system was described as a real advantage here, especially for incremental migration strategies that are common in automotive programs:

"[The edition system is] golden for automotive, where incremental migration is essential." -- Software Engineer at major Automaker

In practice, "stability" is also about managing the mismatch between what the platform supports and what the ecosystem expects. Teams described pinning Rust versions, then fighting dependency drift:

"We can pin the Rust toolchain, but because almost all crates are implemented for the latest versions, we have to downgrade. It's very time-consuming." -- Engineer at a major automaker

For safety-critical adoption, "stability" is operational. Teams need to answer questions like: What does a Rust upgrade change, and what does it not change? What are the bounds on migration work? How do we demonstrate we have managed upgrade risk?

Target support matters in practical ways

Safety-critical software often runs on long-lived platforms and RTOSs. Even when "support exists," there can be caveats. Teams described friction around targets like QNX, where upstream Rust support exists but with limitations (for example, QNX 8.0 support is currently no_std only).[2]

This connects to Rust's target tier policy: the policy itself is clear, but regulated teams still need to map "tier" to "what can I responsibly bet on for this platform and this product lifetime."

"I had experiences where all of a sudden I was upgrading the compiler and my toolchain and dependencies didn't work anymore for the Tier 3 target we're using. That's simply not acceptable. If you want to invest in some technology, you want to have a certain reliability." -- Senior software engineer at a major automaker

core is the spine, and it sets expectations

In no_std environments, core becomes the spine of Rust. Teams described it as both rich enough to build real products and small enough to audit.

A lot of Rust's safety leverage lives there: Option and Result, slices, iterators, Cell and RefCell, atomics, MaybeUninit, Pin. But we also heard a consistent shape of gaps: many embedded and safety-critical projects want no_std-friendly building blocks (fixed-size collections, queues) and predictable math primitives, but do not want to rely on "just any" third-party crate at higher integrity levels.

"Most of the math library stuff is not in core, it's in std. Sin, cosine... the workaround for now has been the libm crate. It'd be nice if it was in core." -- Principal Firmware Engineer (mobile robotics)

Async is appealing, but the long-run story is not settled

Some safety-critical-adjacent systems are already heavily asynchronous: daemons, middleware frameworks, event-driven architectures. That makes Rust's async story interesting.

But people also expressed uncertainty about ecosystem lock-in and what it would take to use async in higher-criticality components. One team lead developing middleware told us:

"We're not sure how async will work out in the long-run [in Rust for safety-critical]. [..] A lot of our software is highly asynchronous and a lot of our daemons in the AUTOSAR Adaptive Platform world are basically following a reactor pattern. [..] [C++14] doesn't really support these concepts, so some of this is lack of familiarity." -- Team lead at an automotive software company developing middleware in Rust

And when teams look at async through an ISO 26262 lens, the runtime question shows up immediately:

"If we want to make use of async Rust, of course you need some runtime which is providing this with all the quality artifacts and process artifacts for ISO 26262." -- Team lead at an automotive software company developing middleware in Rust

Async is not "just a language feature" in safety-critical contexts. It pulls in runtime choices, scheduling assumptions, and, at higher integrity levels, the question of what it would mean to certify or qualify the relevant parts of the stack.

Recommendations

Find ways to help the safety-critical community support their own needs. Open source helps those who help themselves. The Ferrocene Language Specification (FLS) shows this working well: it started as an industry effort to create a specification suitable for safety-qualification of the Rust compiler, companies invested in the work, and it now has a sustainable home under the Rust Project with a team actively maintaining it.[3]

Contrast this with MC/DC coverage support in rustc. Earlier efforts stalled due to lack of sustained engagement from safety-critical companies.[4] The technical work was there, but without industry involvement to help define requirements, validate the implementation, and commit to maintaining it, the effort lost momentum. A major concern was that the MC/DC code added maintenance burden to the rest of the coverage infrastructure without a clear owner. Now in 2026, there is renewed interest in doing this the right way: companies are working through the Safety-Critical Rust Consortium to create a Rust Project Goal in 2026 to collaborate with the Rust Project on MC/DC support. The model is shared ownership of requirements, with primary implementation and maintenance done by companies with a vested interest in safety-critical, done in a way that does not impede maintenance of the rest of the coverage code.

The remaining recommendations follow this pattern: the Safety-Critical Rust Consortium can help the community organize requirements and drive work, with the Rust Project providing the deep technical knowledge of Rust Project artifacts needed for successful collaboration. The path works when both sides show up.

Establish ecosystem-wide MSRV conventions. The dependency drift problem is real: teams pin their Rust toolchain for stability, but crates targeting the latest compiler make this difficult to sustain. An LTS release scheme, combined with encouraging libraries to maintain MSRV compatibility with LTS releases, could reduce this friction. This would require coordination between the Rust Project (potentially the release team) and the broader ecosystem, with the Safety-Critical Rust Consortium helping to articulate requirements and adoption patterns.

Turn "target tier policy" into a safety-critical onramp. The friction we heard is not about the policy being unclear, it is about translating "tier" into practical decisions. A short, target-focused readiness checklist would help: Which targets exist? Which ones are no_std only? What is the last known tested OS version? What are the top blockers? The raw ingredients exist in rustc docs, release notes, and issue trackers, but pulling them together in one place would lower the barrier. Clearer, consolidated information also makes it easier for teams who depend on specific targets to contribute to maintaining them. The Safety-Critical Rust Consortium could lead this effort, working with compiler team members and platform maintainers to keep the information accurate.

Document "dependency lifecycle" patterns teams are already using. The QM story is often: use crates early, track carefully, shrink dependencies for higher-criticality parts. The ASIL B+ story is often: avoid third-party crates entirely, or use abstraction layers and plan to replace later. Turning those patterns into a reusable playbook would help new teams make the same moves with less trial and error. This seems like a natural fit for the Safety-Critical Rust Consortium's liaison work.

Define requirements for a safety-case friendly async runtime. Teams adopting async in safety-critical contexts need runtimes with appropriate quality and process artifacts for standards like ISO 26262. Work is already happening in this space.[5] The Safety-Critical Rust Consortium could lead the effort to define what "safety-case friendly" means in concrete terms, working with the async working group and libs team on technical feasibility and design.

Treat interop as part of the safety story. Many teams are not going to rewrite their world in Rust. They are going to integrate Rust into existing C and C++ systems and carry that boundary for years. Guidance and tooling to keep interfaces correct, auditable, and in sync would help. The compiler team and lang team could consider how FFI boundaries are surfaced and checked, informed by requirements gathered through the Safety-Critical Rust Consortium.

"We rely very heavily on FFI compatibility between C, C++, and Rust. In a safety-critical space, that's where the difficulty ends up being, generating bindings, finding out what the problem was." -- Embedded systems engineer (mobile robotics)

Conclusion

To sum up the main points in this post:

We make six recommendations: find ways to help the safety-critical community support their own needs, establish ecosystem-wide MSRV conventions, create target-focused readiness checklists, document dependency lifecycle patterns, define requirements for safety-case friendly async runtimes, and treat C/C++ interop as part of the safety story.

Get involved

If you're working in safety-critical Rust, or you want to help make it easier, check out the Rust Foundation's Safety-Critical Rust Consortium and the in-progress Safety-Critical Rust coding guidelines.

Hearing concrete constraints, examples of assessor feedback, and what "evidence" actually looks like in practice is incredibly helpful. The goal is to make Rust's strengths more accessible in environments where correctness and safety are not optional.

  1. If you're curious about how rigor scales with cost in ISO 26262, this Feabhas guide gives a good high-level overview.

  2. See the QNX target documentation for current status.

  3. The FLS team was created under the Rust Project in 2025. The team is now actively maintaining the specification, reviewing changes and keeping the FLS in sync with language evolution.

  4. See the MC/DC tracking issue for context. The initial implementation was removed due to maintenance concerns.

  5. Eclipse SDV's Eclipse S-CORE project includes an Orchestrator written in Rust for their async runtime, aimed at safety-critical automotive software.

14 Jan 2026 12:00am GMT

Tarek Ziadé: The Economics of AI Coding: A Real-World Analysis

My whole stream in the past months has been about AI coding. From skeptical engineers who say it creates unmaintainable code, to enthusiastic (or scared) engineers who say it will replace us all, the discourse is polarized. But I've been more interested in a different question: what does AI coding actually cost, and what does it actually save?

I recently had Claude help me with a substantial refactoring task: splitting a monolithic Rust project into multiple workspace repositories with proper dependency management. The kind of task that's tedious, error-prone, and requires sustained attention to detail across hundreds of files. When it was done, I asked Claude to analyze the session: how much it cost, how long it took, and how long a human developer would have taken.

The answer surprised me. Not because AI was faster or cheaper (that's expected), but because of how much faster and cheaper.

The Task: Repository Split and Workspace Setup

The work involved:

This is real work. Not a toy problem, not a contrived benchmark. The kind of multi-day slog that every engineer has faced: important but tedious, requiring precision but not creativity.

The Numbers

AI Execution Time

Total: approximately 3.5 hours across two sessions

AI Cost

Total tokens: 72,146 tokens

Estimated marginal cost: approximately $4.95

This is the marginal execution cost for this specific task. It doesn't include my Claude subscription, the time I spent iterating on prompts and reviewing output, or the risk of having to revise or fix AI-generated changes. For a complete accounting, you'd also need to consider those factors, though for this task they were minimal.

Human Developer Time Estimate

Conservative estimate: 2-3 days (16-24 hours)

This is my best guess based on experience with similar tasks, but it comes with uncertainty. A senior engineer deeply familiar with this specific codebase might work faster. Someone encountering similar patterns for the first time might work slower. Some tasks could be partially templated or parallelized across a team.

Breaking down the work:

  1. Planning and research (2-4 hours): Understanding codebase structure, planning dependency strategy, reading PyO3/Maturin documentation
  2. Code migration (4-6 hours): Copying files, updating all import statements, fixing compilation errors, resolving workspace conflicts
  3. Build system setup (2-3 hours): Writing Makefile, configuring Cargo.toml, setting up pyproject.toml, testing builds
  4. CI/CD configuration (2-4 hours): Writing GitHub Actions workflows, testing syntax, debugging failures, setting up matrix builds
  5. Documentation updates (2-3 hours): Updating multiple documentation files, ensuring consistency, writing migration guides
  6. Testing and debugging (3-5 hours): Running test suites, fixing unexpected failures, verifying tests pass, testing on different platforms
  7. Git operations and cleanup (1-2 hours): Creating branches, writing commit messages, final verification

Even if we're generous and assume a very experienced developer could complete this in 8 hours of focused work, the time and cost advantages remain substantial. The economics don't depend on the precise estimate.

The Bottom Line

These numbers compare execution time and per-task marginal costs. They don't capture everything (platform costs, review time, long-term maintenance implications), but they illustrate the scale of the difference for this type of systematic refactoring work.

Why AI Was Faster

The efficiency gains weren't magic. They came from specific characteristics of how AI approaches systematic work:

No context switching fatigue. Claude maintained focus across three repositories simultaneously without the cognitive load that would exhaust a human developer. No mental overhead from jumping between files, no "where was I?" moments after a break.

Instant file operations. Reading and writing files happens without the delays of IDE loading, navigation, or search. What takes a human seconds per file took Claude milliseconds.

Pattern matching without mistakes. Updating thousands of import statements consistently, without typos, without missing edge cases. No ctrl-H mistakes, no regex errors that you catch three files later.

Parallel mental processing. Tracking multiple files at once without the working memory constraints that force humans to focus narrowly.

Documentation without overhead. Generating comprehensive, well-structured documentation in one pass. No switching to a different mindset, no "I'll document this later" debt.

Error recovery. When workspace conflicts or dependency issues appeared, Claude fixed them immediately without the frustration spiral that can derail a human's momentum.

Commit message quality. Detailed, well-structured commit messages generated instantly. No wrestling with how to summarize six hours of work into three bullet points.

What Took Longer

AI wasn't universally faster. Two areas stood out:

Initial codebase exploration. Claude spent time systematically understanding the structure before implementing. A human developer might have jumped in faster with assumptions (though possibly paying for it later with rework).

User preference clarification. Some back-and-forth on git dependencies versus crates.io, version numbering conventions. A human working alone would just make these decisions implicitly based on their experience.

These delays were minimal compared to the overall time savings, but they're worth noting. AI coding isn't instantaneous magic. It's a different kind of work with different bottlenecks.

The Economics of Coding

Let me restate those numbers because they still feel surreal:

For this type of task, these are order-of-magnitude improvements over solo human execution. And they weren't achieved through cutting corners or sacrificing immediate quality. The tests passed, the documentation was comprehensive, the commits were well-structured, the code compiled cleanly.

That said, tests passing and documentation existing are necessary but not sufficient signals of quality. Long-term maintainability, latent bugs that only surface later, or future refactoring friction are harder to measure immediately. The code is working, but it's too soon to know if there are subtle issues that will emerge over time.

This creates strange economics for a specific class of work: systematic, pattern-based refactoring with clear success criteria. For these tasks, the time and cost reductions change how we value engineering effort and prioritize maintenance work.

I used to avoid certain refactorings because the payoff didn't justify the time investment. Clean up import statements across 50 files? Update documentation after a restructure? Write comprehensive commit messages? These felt like luxuries when there was always more pressing work.

But at $5 marginal cost and 3.5 hours for this type of systematic task, suddenly they're not trade-offs anymore. They're obvious wins. The economics shift from "is this worth doing?" to "why haven't we done this yet?"

What This Doesn't Mean

Before the "AI will replace developers" crowd gets too excited, let me be clear about what this data doesn't show:

This was a perfect task for AI. Systematic, pattern-based, well-scoped, with clear success criteria. The kind of work where following existing patterns and executing consistently matters more than creative problem-solving or domain expertise.

AI did not:

The task was pure execution. Important execution, skilled execution, but execution nonetheless. A human developer would have brought the same capabilities to the table, just slower and at higher cost.

Where This Goes

I keep thinking about that 85-90% time reduction for this specific type of task. Not simple one-liners where AI already shines, but systematic maintenance work with high regularity, strong compiler or test feedback, and clear end states.

Tasks with similar characteristics might include:

Many maintenance tasks are messier: ambiguous semantics, partial test coverage, undocumented invariants, organizational constraints. The economics I observed here don't generalize to all refactoring work. But for the subset that is systematic and well-scoped, the shift is significant.

All the work that we know we should do but often defer because it doesn't feel like progress. What if the economics shifted enough for these specific tasks that deferring became the irrational choice?

I'm not suggesting AI replaces human judgment. Someone still needs to decide what "good" looks like, validate the results, understand the business context. But if the execution of systematic work becomes 10x cheaper and faster, maybe we stop treating certain categories of technical debt like unavoidable burdens and start treating them like things we can actually manage.

The Real Cost

There's one cost the analysis didn't capture: my time. I wasn't passive during those 3.5 hours. I was reading Claude's updates, reviewing file changes, answering questions, validating decisions, checking test results.

I don't know exactly how much time I spent, but it was less than the 3.5 hours Claude was working. Maybe 2 hours of active engagement? The rest was Claude working autonomously while I did other things.

So the real comparison isn't 3.5 AI hours versus 16-24 human hours. It's 2 hours of human guidance plus 3.5 hours of AI execution versus 16-24 hours of human solo work. Still a massive win, but different from pure automation.

This feels like the right model: AI as an extremely capable assistant that amplifies human direction rather than replacing human judgment. The economics work because you're multiplying effectiveness, not substituting one for the other.

Final Thoughts

Five dollars marginal cost. Three and a half hours. For systematic refactoring work that would have taken me days and cost hundreds or thousands of dollars in my time.

These numbers make me think differently about certain kinds of work. About how we prioritize technical debt in the systematic, pattern-based category. About what "too expensive to fix" really means for these specific tasks. About whether we're approaching some software maintenance decisions with outdated economic assumptions.

I'm still suspicious of broad claims that AI fundamentally changes how we work. But I'm less suspicious than I was. When the economics shift this dramatically for a meaningful class of tasks, some things that felt like pragmatic trade-offs start to look different.

The tests pass. The documentation is up to date. And I paid less than the cost of a fancy coffee drink.

Maybe the skeptics and the enthusiasts are both right. Maybe AI doesn't replace developers and maybe it does change some things meaningfully. Maybe it just makes certain kinds of systematic work cheap enough that we can finally afford to do them right.

What About Model and Pricing Changes?

One caveat worth noting: these economics depend on Claude Sonnet 4.5 at January 2026 pricing. Model pricing can change, model performance can regress or improve with updates, tool availability can shift, and organizational data governance constraints might limit what models you can use or what tasks you can delegate to them.

For individuals and small teams, this might not matter much in the short term. For larger organizations making long-term planning decisions, these factors matter. The specific numbers here are a snapshot, not a guarantee.

References

14 Jan 2026 12:00am GMT

13 Jan 2026

Planet Mozilla

Firefox Nightly: Phasing Out the Older Version of Firefox Sidebar in 2026

Over a year ago, we introduced an updated version of the sidebar that offers easy access to multiple tools - bookmarks, history, tabs from other devices, and a selection of chatbots - all in one place. As the new version has gained popularity and we plan our future work, we have made a decision to retire the older version in 2026.

Old sidebar version

Updated sidebar version

We know that changes like this can be disruptive - especially when they affect established workflows you rely on every day. While use of the older version has been declining, it remains a familiar and convenient tool for many - especially long-time Firefox users who have built workflows around it.

Unfortunately, supporting two versions means dividing the time and attention of a very small team. By focusing on a single updated version, we can fix issues more quickly, incorporate feedback more efficiently, and deliver new features more consistently for everyone. For these reasons, in 2026, we will focus on improving the updated sidebar to provide many of the conveniences of the older version, then transition everyone to the updated version.

Here's what to expect:

Our goal is to make our transition plans transparent and implement suggested improvements that are feasible within the new interaction model, while preserving the speed and flexibility that long-time sidebar users value. Several implemented and planned improvements to the updated sidebar were informed by your feedback, and we expect that to continue throughout the transition:

If you'd like to share what functionality you've been missing in the new sidebar and what challenges you've experienced when you tried to adopt it, please share your thoughts in this Mozilla Connect thread or file a bug in Bugzilla's Sidebar component, so your feedback can continue shaping Firefox.

13 Jan 2026 10:57pm GMT

Firefox Developer Experience: Firefox WebDriver Newsletter 147

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we've done as part of the Firefox 147 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues and bugs, and submitted patches.

In Firefox 147, two WebDriver bugs were fixed by contributors:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

WebDriver BiDi

Marionette

13 Jan 2026 5:18pm GMT

Firefox Tooling Announcements: Engineering Effectiveness Newsletter (Q4 2025 Edition)

Highlights

Contributors

Detailed Project Updates

AI for Development

Bugzilla

Build System and Mach Environment

Firefox-CI, Taskcluster and Treeherder

Lint, Static Analysis and Code Coverage

PDF.js

Firefox Translations

Phabricator, moz-phab, and Lando

Version Control

Thanks for reading and see you next month!


13 Jan 2026 1:51pm GMT

Advancing WebRTC: Firefox WebRTC 2025

In an increasingly siloed internet landscape, WebRTC directly connects human voices and faces. The technology powers Audio/Video calling, conferencing, live streaming, telehealth, and more. We strive to make Firefox the client that best serves humans during those experiences.

Expanding Simulcast Support

Simulcast allows a single WebRTC video to be simultaneously transmitted at differing qualities. Some codecs can efficiently encode the streams simultaneously. Each viewer can receive the video stream that gives them the best experience for their viewing situation, whether that be using a phone with a small screen and shaky cellular link, or a desktop with a large screen and wired broadband connection. While Firefox has supported a more limited set of simulcast scenarios for some time, this year we put a lot of effort into making sure that even more of our users using even more services can get those great experiences.

We have added simulcast capabilities for H.264 and AV1. This, along with adding support for the dependency descriptor header (including for H.264), increases the number of services that can take advantage of simulcast while using Firefox.

Codec Support

Dovetailing with the simulcast support, we now support more codecs doing more things on more platforms! This includes turning on AV1 support by default and adding temporal layer support for H.264. Additionally, a number of behind-the-scenes changes were made. For our users, this means a more uniform experience across devices.

Media Capture

We have improved camera resolution and frame-rate adaptation on all platforms, as well as improved, OS-integrated screen capture on macOS. Users will have a better experience when joining calls, with streams that are better suited to their devices: smoother video and a consistent aspect ratio.

DataChannel

Improving the reliability, performance, and compatibility of our DataChannel implementation has been a focus this year. DataChannels can now run in workers, keeping data processing off of the main thread. This was enabled by a major refactoring effort that migrated our implementation to dcsctp.

Web Compatibility

We targeted a number of areas where we could improve compatibility with the broad web of services that our users rely on.

Bug 1329847 Implement RTCDegradationPreference related functions

Bug 1894137 Implement RTCRtpEncodingParameters.codec

Bug 1371391 Implement remaining mandatory fields in RTCIceCandidatePairStats

Bug 1525241 Implement RTCCertificate.getFingerprints method

Bug 1835077 Support RTCEncodedAudioFrameMetadata.contributingSources

Bug 1972657 SendKeyFrameRequest Should Not Reject Based on Transceiver State

Summary

2025 has been an exciting and busy year for WebRTC in Firefox. We have broadly improved web compatibility throughout the WebRTC technology stack, and we are looking forward to another impactful year in 2026.

The post Firefox WebRTC 2025 appeared first on Advancing WebRTC.

13 Jan 2026 3:03am GMT

12 Jan 2026

Planet Mozilla

The Mozilla Blog: Mozilla welcomes Amy Keating as Chief Business Officer


Mozilla is pleased to announce that Amy Keating has joined Mozilla as Chief Business Officer (CBO).

In this role, Amy will work across the Mozilla family of organizations - spanning products, companies, investments, grants, and new ventures - to help ensure we are not only advancing our mission but also financially sustainable and operationally rigorous. The core of this job: making investments that push the internet in a better direction.

Keating takes on this role at a pivotal moment for Mozilla and for the responsible technology ecosystem. As Mozilla pursues a new portfolio strategy centered on building an open, trustworthy alternative to today's closed and concentrated AI ecosystem, the organization has embraced a double bottom line economic model: one that measures success through mission impact and commercial performance. Delivering on that model requires disciplined business leadership at the highest level.

"Mozilla's mission has never been more urgent - but mission alone isn't enough to bring about the change we want to see in the world," said Mark Surman, President of the Mozilla Foundation. "To build real alternatives in AI and the web, we need to be commercially successful, sustainable, and able to invest at scale. Our double bottom line depends on it. Amy is a proven, visionary business leader who understands how to align values with viable, ambitious business strategy. She will help ensure Mozilla can grow, thrive, and influence the entire marketplace."

This role is a return to Mozilla for Keating, who previously was Mozilla Corporation's Chief Legal Officer. Keating has also served on the Boards of Mozilla Ventures and the Mozilla Foundation. Most recently, Keating held senior leadership roles at Glean and Planet Labs, and previously spent nearly a decade across Google and Twitter. She returns to Mozilla with 20 years of professional experience advising and operating in technology organizations. In these roles - and throughout her career - she has focused on building durable businesses grounded in openness, community, and long-term impact.

"Mozilla has always been creative, ambitious, and deeply rooted in community," said Amy Keating. "I'm excited to return at a moment when the organization is bringing its mission and its assets together in new ways - and to help build the operational and business foundation that allows our teams and portfolio organizations to thrive."

As Chief Business Officer, Amy brings an investment and growth lens to Mozilla, supporting Mozilla's portfolio of mission-driven companies and nonprofits, identifying investments in new entities aligned with the organization's strategy, and helping to strengthen Mozilla's leadership in creating an economic counterbalance to the players now dominating a closed AI ecosystem.

This work is critical not only to Mozilla's own sustainability, but to its ability to influence markets and shape the future of AI and the web in the public interest.

"I'm here to move with speed and clarity," said Keating, "and to think and act at the scale of our potential across the Mozilla Project."


Read more here about Mozilla's next era. Read here about Mozilla's new CTO, Raffi Krikorian.

The post Mozilla welcomes Amy Keating as Chief Business Officer appeared first on The Mozilla Blog.

12 Jan 2026 7:31pm GMT

Eitan Isaacson: MacOS Accessibility with pyax

'pyax inspect' in action on the Firefox new tab page

In our work on Firefox MacOS accessibility we routinely run into highly nuanced bugs in our accessibility platform API. The tree structure, an object attribute, the sequence of events, or the event payloads are just off enough that we see a pronounced difference in how an AT like VoiceOver behaves. When we compare our API against other browsers like Safari or Chrome, we notice small differences that have out-sized user impacts.

In cases like that, we need to dive deep. Xcode's Accessibility Inspector shows a limited subset of the API, but web engines implement a much larger set of attributes that are not shown in the inspector. This includes an advanced, undocumented text API. We also need a way to view and inspect events and their payloads so we can compare the sequence to other implementations.

Since we started getting serious about MacOS accessibility in Firefox in 2019, we have cobbled together an ad hoc set of Swift and Python scripts to examine our work. It slowly started to coalesce and formalize into a Python client library for MacOS accessibility called pyax.

Recently, I put some time into making pyax not just a Python library, but a nifty command line tool for quick and deep diagnostics. There are several subcommands I'll introduce here. And I'll leave the coolest for last, so hang on.

pyax tree

This very simply dumps the accessibility tree of the given application. But hold on, there are some useful flags you can use to drill down to the issue you are looking for:

--web

Only output the web view's subtree. This is useful if you are troubleshooting a simple web page and don't want to be troubled with the entire application.

--dom-id

Dump the subtree of the given DOM ID. This obviously is only relevant for web apps. It allows you to cut the noise and only look at the part of the page/app you care about.

--attribute

By default the tree dumper only shows you a handful of core attributes. Just enough to tell you a bit about the tree. You can include more obscure attributes by using this argument.

--all-attributes

Print all known attributes of each node.

--list-attributes

List all available attributes on each node in the tree. Sometimes you don't even know what you are looking for and this could help.

Implementation note: An app can provide an attribute without advertising its availability, so don't rely on this alone.

--list-actions

List supported actions on each node.

--json

Output the tree in a JSON format. This is useful with --all-attributes to capture and store a comprehensive state of the tree for comparison with other implementations or other deep dives.

pyax observe

This is a simple event logger that allows you to output events and their payloads. It takes most of the arguments above, like --attribute and --list-actions.

In addition:

--event

Observe specific events. You can provide this argument multiple times for more than one event.

--print-info

Print the bundled event info.

pyax inspect

For visually inclined users, this command allows them to hover over the object of interest, click, and get a full report of its attributes, subtree, or any other useful information. It takes the same arguments as above, and more! Check out --help.

Getting pyax

Do pip install pyax[highlight] and it's all yours. Please contribute with code, documentation, or good vibes (keep your vibes separate from the code).

12 Jan 2026 12:00am GMT

08 Jan 2026

Planet Mozilla

Matthew Gaudet: Non-Traditional Profiling

Also known as "you can just put whatever you want in a jitdump you know?"

When you profile JIT code, you have to tell a profiler what on earth is going on in those JIT bytes you wrote out. Otherwise the profiler will shrug and just give you some addresses.

There's a decent and fairly common format called jitdump, which originates in perf but has come to be used in more places. The basic thrust of the parts we care about is: you have names associated with ranges.

Of course, the basic range you'd expect to name is "function foo() was compiled to bytes 0x1000-0x1400"
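
Stripped of the actual on-disk record layout (this is only an illustrative sketch, not the real jitdump format, and the real code here is C++), that association boils down to something like:

// Each code-load record associates a name with an address range.
struct JitCodeRange {
    start: u64,   // e.g. 0x1000
    size: u64,    // e.g. 0x400
    name: String, // e.g. "function foo()"
}

// Resolve a sampled address back to the name that covers it.
fn symbolize(ranges: &[JitCodeRange], addr: u64) -> Option<&str> {
    ranges
        .iter()
        .find(|r| addr >= r.start && addr < r.start + r.size)
        .map(|r| r.name.as_str())
}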

Suppose you get that working. You might get a profile that looks like this one.

This profile is pretty useful: you can see from the flame chart what execution tier created the code being executed, and you can see code from inline caches, etc.

Before I left for Christmas break though, I had a thought: to a first approximation, both optimized and baseline code generation are fairly 'template' style. That is to say, we emit (relatively) stable chunks of code for each of our bytecodes, in the case of our baseline compiler, or for each of our intermediate-representation nodes, in the case of Ion, our top-tier compiler.

What if we looked more closely at that?

Some of our code is already tagged with AutoCreatedBy, an RAII class which pushes a creator string on and pops it off when it goes out of scope. I went through and added AutoCreatedBy to each LIR op's codegen method (e.g. CodeGenerator::visit*). Then I rigged up our JITDump support so that instead of dumping functions, we dump the function name plus the whole chain of AutoCreatedBy as the 'function name' for the sequence of instructions generated while the AutoCreatedBy was live.
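Conceptually, the naming scheme looks something like the following toy sketch (Python standing in for the C++; none of this is SpiderMonkey code): an RAII-ish "created by" scope pushes a label, and whatever code is emitted while the scope is live gets a name built from the whole chain of active labels.

# Toy illustration (not SpiderMonkey code) of naming emitted code ranges with
# the chain of active AutoCreatedBy-style labels. A real implementation would
# attribute each byte only to the innermost chain active when it was emitted.
from contextlib import contextmanager

creator_stack = []   # chain of active "created by" labels
named_ranges = []    # (start, end, name) entries destined for the jitdump

@contextmanager
def auto_created_by(label, masm):
    creator_stack.append(label)
    start = masm["offset"]
    try:
        yield
    finally:
        named_ranges.append((start, masm["offset"], " / ".join(creator_stack)))
        creator_stack.pop()

masm = {"offset": 0}

def emit(masm, nbytes):
    masm["offset"] += nbytes   # pretend we emitted nbytes of machine code

with auto_created_by("someFunction", masm):
    with auto_created_by("visitHasShape", masm):
        emit(masm, 16)

print(named_ranges)
# [(0, 16, 'someFunction / visitHasShape'), (0, 16, 'someFunction')]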

In the real code, that gets us this profile.

While it doesn't look that different, the key is in how the frames are named. Of course, the vast majority of frames are just the name of the call instruction... that only makes sense. However, you can see some interesting things if you invert the call tree.

For example, for a single self-hosted function we spend 1.9% of the profiled time in visitHasShape, which is basically:

// Load the object's shape and compare it against the expected shape,
// setting the output register based on the result.
masm.loadObjShapeUnsafe(obj, output);
masm.cmpPtrSet(Assembler::Equal, output, ImmGCPtr(ins->mir()->shape()),
               output);

Which is not particularly complicated.

Ok so that proves out the value. What if we just say... hmmm. I actually want to aggregate across all compilations; ignore the function name, just tell me the compilation path here.

Woah. Ok, now we've got something quite different, if really hard to interpret.

Even more interesting (easier to interpret) is the inverted call tree:

So across the whole program, we're spending basically 5% of the time doing guardShape. I think that's a super interesting slicing of the data.
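The aggregation itself is trivial once the frame labels carry both pieces of information. Here is a sketch, assuming a made-up "function | creation path" label format and invented numbers, just to show the slicing:

# Illustration only: aggregate sampled time by "compilation path", ignoring
# which function the code was generated for. The label format and the
# numbers are invented for the example.
from collections import Counter

samples = [
    ("foo | visitCallGeneric", 120),
    ("bar | visitGuardShape", 40),
    ("baz | visitGuardShape", 35),
    ("foo | visitHasShape", 28),
]

by_path = Counter()
total = 0
for label, ms in samples:
    _function, path = label.split(" | ", 1)   # drop the function name
    by_path[path] += ms
    total += ms

for path, ms in by_path.most_common():
    print("{}: {:.1%}".format(path, ms / total))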

Is it actionable? I don't know yet. I haven't opened any bugs really on this yet; a lot of the highlighted code is stuff where it's not clear that there is a faster way to do what's being done, outside of engine architectural innovation.

The reason to write this blog post is basically to share that... man we can slice-and-dice our programs in so many interesting ways. I'm sure there's more to think of. For example, not shown here was an experiment: I added AutoCreatedBy inside a single macro-assembler method set (around barriers) to try and see if I could actually see GC barrier cost (it's low on the benchmarks I checked yo).

So yeah. You can just... put stuff in your JIT dump file.

Edited to Add: I should mention this code is nowhere. Given I don't entirely know how actionable this ends up being, and the code quality is subpar, I haven't even pushed this code. Think of this as an inspiration, not a feature announcement.

08 Jan 2026 9:46pm GMT

The Mozilla Blog: Owners, not renters: Mozilla’s open source AI strategy

Abstract black halftone cloud illustration on a pink background, representing cloud computing or digital infrastructure.

The future of intelligence is being set right now, and the path we're on leads somewhere I don't want to go. We're drifting toward a world where intelligence is something you rent - where your ability to reason, create, and decide flows through systems you don't control, can't inspect, and didn't shape. In that world, the landlord can change the terms anytime, and you have no recourse but to accept what you're given.

I think we can do better. Making that happen is now central to what Mozilla is doing.

What we did for the web

Twenty-five years ago, Microsoft Internet Explorer controlled 95% of the browser market, which meant Microsoft controlled how most people experienced the internet and who could build what on what terms. Mozilla was born to change this, and Firefox succeeded beyond what most people thought possible - dropping Internet Explorer's market share to 55% in just a few years and ushering in the Web 2.0 era. The result was a fundamentally different internet. It was faster and richer for everyday users, and for developers it was a launchpad for open standards and open source that decentralized control over the core technologies of the web.

There's a reason the browser is called a "user agent." It was designed to be on your side - blocking ads, protecting your privacy, giving you choices that the sites you visited never would have offered on their own. That was the first fight, and we held the line for the open web even as social networks and mobile platforms became walled gardens.

Now AI is becoming the new intermediary. It's what I've started calling "Layer 8" - the agentic layer that mediates between you and everything else on the internet. These systems will negotiate on our behalf, filter our information, shape our recommendations, and increasingly determine how we interact with the entire digital world.

The question we have to ask is straightforward: Whose side will your new user agent be on?

Why closed systems are winning (for now)

We need to be honest about the current state of play: Closed AI systems are winning today because they are genuinely easier to use. If you're a developer with an idea you want to test, you can have a working prototype in minutes using a single API call to one of the major providers. GPUs, models, hosting, guardrails, monitoring, billing - it all comes bundled together in a package that just works. I understand the appeal firsthand, because I've made the same choice myself on late-night side projects when I just wanted the fastest path from an idea in my head to something I could actually play with.

The open-source AI ecosystem is a different story. It's powerful and advancing rapidly, but it's also deeply fragmented - models live in one repository, tooling in another, and the pieces you need for evaluation, orchestration, guardrails, memory, and data pipelines are scattered across dozens of independent projects with different assumptions and interfaces. Each component is improving at remarkable speed, but they rarely integrate smoothly out of the box, and assembling a production-ready stack requires expertise and time that most teams simply don't have to spare. This is the core challenge we face, and it's important to name it clearly: What we're dealing with isn't a values problem where developers are choosing convenience over principle. It's a developer experience problem. And developer experience problems can be solved.

The ground is already shifting

We've watched this dynamic play out before and the history is instructive. In the early days of the personal computer, open systems were rough, inconsistent, and difficult to use, while closed platforms offered polish and simplicity that made them look inevitable. Openness won anyway - not because users cared about principles, but because open systems unlocked experimentation and scale that closed alternatives couldn't match. The same pattern repeated on the web, where closed portals like AOL and CompuServe dominated the early landscape before open standards outpaced them through sheer flexibility and the compounding benefits of broad participation.

AI has the potential to follow the same path - but only if someone builds it. And several shifts are already reshaping the landscape:

The capability gap that once justified the dominance of closed systems is closing fast. What remains is a gap in usability and integration. The lesson I take from history is that openness doesn't win by being more principled than the alternatives. Openness wins when it becomes the better deal - cheaper, more capable, and just as easy to use.

Where the cracks are forming

If openness is going to win, it won't happen everywhere at once. It will happen at specific tipping points - places where the defaults haven't yet hardened, where a well-timed push can change what becomes normal. We see four.

The first is developer experience. Developers are the ones who actually build the future - every default they set, every stack they choose, every dependency they adopt shapes what becomes normal for everyone else. Right now, the fastest path runs through closed APIs, and that's where most of the building is happening. But developers don't want to be locked in any more than users do. Give them open tools that work as well as the closed ones, and they'll build the open ecosystem themselves.

The second is data. For a decade, the assumption has been that data is free to scrape - that the web is a commons to be harvested without asking. That norm is breaking, and not a moment too soon. The people and communities who create valuable data deserve a say in how it's used and a share in the value it creates. We're moving toward a world of licensed, provenance-based, permissioned data. The infrastructure for that transition is still being built, which means there's still a chance to build it right.

The third is models. The dominant architecture today favors only the biggest labs, because only they can afford to train massive dense transformers. But the edges are accelerating: small models, mixtures of experts, domain-specific models, multilingual models. As these approaches mature, the ability to create and customize intelligence spreads to communities, companies, and countries that were previously locked out.

The fourth is compute. This remains the choke point. Access to specialized hardware still determines who can train and deploy at scale. More doors need to open - through distributed compute, federated approaches, sovereign clouds, idle GPUs finding productive use.

What an open stack could look like

Today's dominant AI platforms are building vertically integrated stacks: closed applications on top of closed models trained on closed data, running on closed compute. Each layer reinforces the next - data improves models, models improve applications, applications generate more data that only the platform can use. It's a powerful flywheel. If it continues unchallenged, we arrive at an AI era equivalent to AOL, except far more centralized. You don't build on the platform; you build inside it.

There's another path. The sum of Linux, Apache, MySQL, and PHP won because that combination became easier to use than the proprietary alternatives, and because they let developers build things that no commercial platform would have prioritized. The web we have today exists because that stack existed.

We think AI can follow the same pattern. Not one stack controlled by any single party, but many stacks shaped by the communities, countries, and companies that use them:

Pieces of this stack already exist - good ones, built by talented people. The task now is to fill in the gaps, connect what's there, and make the whole thing as easy to use as the closed alternatives. That's the work.

Why open source matters here

If you've followed Mozilla, you know the Manifesto. For almost 20 years, it's guided what we build and how - not as an abstract ideal, but as a tool for making principled decisions every single day. Three of its principles are especially urgent in the age of AI:

Open-source AI is how these principles become real. It's what makes plurality possible - many intelligences shaped by many communities, not one model to rule them all. It's what makes sovereignty possible - owning your infrastructure rather than renting it. And it's what keeps the door open for public-benefit alternatives to exist alongside commercial ones.

What we'll do in 2026

The window to shape these defaults is still open, but it won't stay open forever. Here's where we're putting our effort - not because we have all the answers, but because we think these are the places where openness can still reset the defaults before they harden.

Make open AI easier than closed. Mozilla.ai is building any-suite, a modular framework that integrates the scattered components of the open AI stack - model routing, evaluation, guardrails, memory, orchestration - into something coherent that developers can actually adopt without becoming infrastructure specialists. The goal is concrete: Getting started with open AI should feel as simple as making a single API call.

Shift the economics of data. The Mozilla Data Collective is building a marketplace for data that is properly licensed, clearly sourced, and aligned with the values of the communities it comes from. It gives developers access to high-quality training data while ensuring that the people and institutions who contribute that data have real agency and share in the economic value it creates.

Learn from real deployments. Strategy that isn't grounded in practical experience is just speculation, so we're deepening our engagement with governments and enterprises adopting sovereign, auditable AI systems. These engagements are the feedback loops that tell us where the stack breaks and where openness needs reinforcement.

Invest in the ecosystem. We're not just building; we're backing others who are building too. Mozilla Ventures is investing in open-source AI companies that align with these principles. Mozilla Foundation is funding researchers and projects through targeted grants. We can't do everything ourselves, and we shouldn't try. The goal is to put resources behind the people and teams already doing the work.

Show up for the community. The open-source AI ecosystem is vast, and it's hard to know what's working, what's hype, and where the real momentum is building. We want to be useful here. We're launching a newsletter to track what's actually happening in open AI. We're running meetups and hackathons to bring builders together. We're fielding developer surveys to understand what people actually need. And at MozFest this year, we're adding a dedicated developer track focused on open-source AI. If you're doing important work in this space, we want to help it find the people who need to see it.

Are you in?

Mozilla is one piece of a much larger movement, and we have no interest in trying to own or control it - we just want to help it succeed. There's a growing community of people who believe the open internet is still worth defending and who are working to ensure that AI develops along a different path than the one the largest platforms have laid out. Not everyone in that community uses the same language or builds exactly the same things, but something like a shared purpose is emerging. Mozilla sees itself as part of that effort.

We kept the web open not by asking anyone's permission, but by building something that worked better than the alternatives. We're ready to do that again.

So: Are you in?

If you're a developer building toward an open source AI future, we want to work with you. If you're a researcher, investor, policymaker, or founder aligned with these goals, let's talk. If you're at a company that wants to build with us rather than against us, the door is open. Open alternatives have to exist - that keeps everyone honest.

The future of intelligence is being set now. The question is whether you'll own it, or rent it.

We're launching a newsletter to track what's happening in open-source AI - what's working, what's hype, and where the real momentum is building. Sign up here to follow along as we build.

Read more here about our emerging strategy, and how we're rewiring Mozilla for the era of AI.

The post Owners, not renters: Mozilla's open source AI strategy appeared first on The Mozilla Blog.

08 Jan 2026 7:05pm GMT

Firefox Add-on Reviews: 2025 Staff Pick Add-ons

While nearly half of all Firefox users have installed an add-on, it's safe to say nearly all Firefox staffers use add-ons. I polled a few of my peers and here are some of our staff favorite add-ons of 2025…

Falling Snow Animated Theme

Enjoy the soothing mood of Falling Snow Animated Theme. This motion-animated dark theme turns Firefox into a calm wintry night as snowflakes cascade around the corners of your browser.

Privacy Badger

The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is built to look for a certain set of actions that indicate a web page is trying to secretly track you.

Zero set up required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage "supercookies," canvas fingerprinting, and other sneaky tracking methods.

Adaptive Tab Bar Color

Turn Firefox into an internet chameleon. Adaptive Tab Bar Color changes the colors of Firefox to match whatever website you're visiting.

It's beautifully simple and sublime. No setup required, but you're free to make subtle adjustments to color contrast patterns and assign specific colors for websites.

Rainy Spring Sakura by MaDonna

Created by one of the most prolific theme designers in the Firefox community, MaDonna, we love Rainy Spring Sakura's bucolic mix of calming colors.

It's like instant Zen mode for Firefox.

Return YouTube Dislike

Do you like the Dislike? YouTube removed the thumbs-down display, but fortunately Return YouTube Dislike came along to restore our view into the sometimes brutal truth of audience sentiment.

Other Firefox users seem to agree…

"Does exactly what the name suggests. Can't see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool."

Firefox user OFG

"i have never smashed 5 stars faster."

Firefox user 12918016

Return YouTube Dislike re-enables a beloved feature.

LeechBlock NG

Block time-wasting websites with LeechBlock NG - easily one of our staff-favorite productivity tools.

Lots of customization features help you stay focused and free from websites that have a way of dragging you down. Key features:

DarkSpaceBlue

Drift through serene outer space as you browse the web. DarkSpaceBlue celebrates the infinite wonder of life among the stars.

LanguageTool - Grammar and Spell Checker

Improve your prose anywhere you write on the web. LanguageTool - Grammar and Spell Checker will make you a better writer in 25+ languages.

Much more than a basic spell checker, this privacy-centric writing aid is packed with great features:

LanguageTool can help with subtle syntax improvements.

Sink It for Reddit!

Imagine a more focused and free-feeling Reddit - that's Sink It for Reddit!

Some of our staff-favorite features include:

Sushi Nori

Turns out we have quite a few sushi fans at Firefox. We celebrate our love of sushi with the savory theme Sushi Nori.

08 Jan 2026 2:59pm GMT

07 Jan 2026

feedPlanet Mozilla

Mozilla Localization (L10N): Mozilla Localization in 2025

A Year in Data

As is tradition, we're wrapping up 2025 for Mozilla's localization efforts and offering a sneak peek at what's in store for 2026 (you can find last year's blog post here).

Pontoon's metrics in 2025 show a stable picture for both new sign-ups and monthly active users. While we always hope to see signs of strong growth, this flat trend is a positive achievement when viewed against the challenges surrounding community involvement in Open Source, even beyond Mozilla. Thank you to everyone actively participating on Pontoon, Matrix, and elsewhere for making Mozilla localization such an open and welcoming community.

The number of strings added has decreased significantly overall, but not for Firefox, where the number of new strings was 60% higher than in 2024 (check out the increase in Fluent strings alone). That is not surprising, given the number of new features (selectable profiles, unified trust panel, backup) and the upcoming settings redesign.

As in 2024, the relentless growth in the number of locales is driven by Common Voice, which now has 422 locales enabled in Pontoon (+33%).

Before we move forward, thank you to all the volunteers who contributed their time, passion, and expertise to Mozilla's localization over the last 12 months - or plan to do so in 2026. There is always space for new contributors!

Pontoon Development

A significant part of the work on Pontoon in 2025 isn't immediately visible to users, but it lays the groundwork for improvements that will start showing up in 2026.

One of the biggest efforts was switching to a new data model to represent all strings across all supported formats. Pontoon currently needs to handle around ten different formats, as transparently as possible for localizers, and this change is a step to reduce complexity and technical debt. As a concrete outcome, we can now support proper pluralization in Android projects, and we landed the first string using this model in Firefox 146. This removes long-standing UX limitations (no more "Bookmarks saved: %1$s" instead of "%1$s bookmarks saved") and allows languages to provide more natural-sounding translations.

In parallel, we continued investing in a unified localization library, moz-l10n, with the goal of having a centralized, well-maintained place to handle parsing and serialization across formats in both JavaScript and Python. This work is essential to keep Pontoon maintainable as we add support for new technologies and workflows.

Pontoon as a project remains very active. In 2025 alone, Pontoon saw more than 200 commits from over 20 contributors, not including work happening in external libraries such as moz-l10n.

Finally, we've been improving API support, another area that is largely invisible to end users. We moved away from GraphQL and migrated to Django REST, and we're actively working toward feature parity with Transvision to better support automation and integrations.

Community

Our main achievement in 2025 was organizing a pilot in-person event in Berlin, reconnecting localizers from around Europe after a long hiatus. Fourteen volunteers from 11 locales spent a weekend together at the Mozilla Berlin office, sharing ideas, discussing challenges, and deepening relationships that had previously existed only online. For many attendees, this was the first time they met fellow contributors they had collaborated with for years, and the energy and motivation that came out of those days clearly showed the value of human connection in sustaining our global community.

Group dinner for the localization event in Berlin

This doesn't mean we stopped exploring other ways to connect. For example, throughout the year we continued publishing Contributor Spotlights, showcasing the amazing work of individual volunteers from different parts of the world. These stories highlight not just what our contributors do, but who they are and why they make Mozilla's localization work possible.

Internally, these spotlights have played an important role in advocating on behalf of the community. By bringing real voices and contributions to the forefront, we've helped reinforce the message that investing in people - not just tools - is essential to the long-term health of Mozilla's localization ecosystem.

What's coming in 2026

As we move into the new year, our focus will shift to exploring alternative deployment solutions. Our goal is to make Pontoon faster, more reliable, and better equipped to meet the needs of our users.

This excerpt comes from last year's blog post, and while it took longer than expected, the good news is that we're finally there. On January 6, we moved Pontoon to a new hosting platform. We expect this change to bring better reliability and performance, especially in response to peaks in bot traffic that have previously made Pontoon slow or unresponsive.

In parallel, we "silently" launched the Mozilla Language Portal, a unified hub that reflects Mozilla's unique approach to localization while serving as a central resource for the global translator community. While we still plan to expand its content, the main infrastructure is now in place and publicly available, bringing together searchable translation memories, documentation, blog posts, and other resources to support knowledge-sharing and collaboration.

On the technology side, we plan to extend plural support to iOS projects and continue improving Pontoon's translation memory support. These improvements aim to make it easier to reuse translations across projects and formats, for example by matching strings independently of placeholder syntax differences, and to translate Fluent strings with multiple values.
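As a rough illustration of the placeholder idea (this is not Pontoon code), a translation memory can normalize the various placeholder syntaxes to a common token before comparing strings, so that a match stored for one format can be offered for another:

# Rough illustration (not Pontoon code): normalize a few placeholder syntaxes
# to a common token so translation-memory lookups can match across formats.
# The patterns below cover only a couple of styles for the example.
import re

PLACEHOLDER = re.compile(
    r"%\d+\$[sd]"          # printf-style positional: %1$s
    r"|%[sd]"              # printf-style: %s, %d
    r"|\{\s*\$?\w+\s*\}"   # Fluent-style: { $count }
)

def normalize(s):
    return PLACEHOLDER.sub("{ph}", s)

print(normalize("%1$s bookmarks saved"))        # {ph} bookmarks saved
print(normalize("{ $count } bookmarks saved"))  # {ph} bookmarks saved
# Both normalize to the same string, so a TM entry stored for one can be
# suggested as a match for the other.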

We also aim to explore improvements in our machine translation options, evaluating how large language models could help with quality assessment or serve as alternative providers for MT suggestions.

Last but not least, we plan to keep investing in our community. While we don't know yet what that will look like in practice, keep an eye on this blog for updates.

If you have any thoughts or ideas about this plan, let us know on Mastodon or Matrix!

Thank you!

As we look toward 2026, we're grateful for the people who make Mozilla's localization possible. Through shared effort and collaboration, we'll continue breaking down barriers and building a web that works for everyone. Thank you for being part of this journey.

07 Jan 2026 1:51pm GMT

Ludovic Hirlimann: Are Mozilla's forks any good?

To answer that question, we first need to understand how complex writing or maintaining a web browser is.

A "modern" web browser is :

Of course, all the points above interact with one another in different ways. In order for "the web" to work, standards are developed and then implemented in the different browsers' rendering engines.

In order to "make" the browser, you need engineers to write and maintain the code, which is probably around 30 Million lines of code[5] for Firefox. Once the code is written, it needs to be compiled [6] and tested [6]. This requires machines that run the operating system the browser ships to (As of this day, mozilla officially ships on Linux, Microslop Windows and MacOS X - community builds for *BSD do exists and are maintained). You need engineers to maintain the compile (build) infrastructure.

Once the engineers responsible for releases [7] have decided which code and features are mature enough, they start assembling the bits of code and, like the other engineers, build, test and ship the result to the people using said web browser.

When I was employed at Mozilla (the company that makes Firefox), around 900 engineers were tasked with the above, and a few more were working on research and development. These engineers work 5 days a week, 8 hours per day - that's 1,872,000 hours of engineering brain power spent every year on making Firefox versions (it's actually less, because I have not taken vacations into account). On top of that, you need to add the cost of building and running the tests before a new version reaches the end user.

The current browsing landscape looks dark: there are currently three choices of rendering engine - WebKit-based browsers, Blink-based ones and Gecko-based ones. 90+% of the market is dominated by WebKit/Blink-based browsers, and Blink is itself a fork of WebKit, which descends from KHTML. This leads to less standards work: if the dominant engine implements a feature, the others need to play catch-up to stay relevant. This happened in the 2000s when IE dominated the browser landscape [8], making it difficult to use Mac OS 9 or X (I'm not even mentioning Linux here :)). It also leads to most web developers using Chrome and only once in a while testing with Firefox or even Safari. And if there's a little glitch, they can still ship because of market share.

The Mozilla codebase behind Firefox was started back in 1998, when embedding software was not really a thing across all the platforms that had to be supported. Firefox is very hard to embed (e.g. use as a software library and add stuff on top). I know that for a fact because both Camino and Thunderbird embed Gecko.

In the last few years, Mozilla has been irritating the people I connect with, who are very privacy-focused and do not look kindly on what Mozilla does with Firefox. I believe that Mozilla does this in order to stay relevant to normal users. It needs to stay relevant for at least two reasons:

  1. To keep the web standards open, so anyone can implement a web browser / web services.
  2. To have enough traffic to be able to pay all the engineers working on Gecko.

Now that I've explained a few important things, let's answer the question "Are Mozilla's forks any good?"

I am biased, as I've worked for the company before. But how can a few people, even if they are good and have plenty of free time, cope with what maintaining a fork requires:

If you are comfortable with that, then using a fork because Mozilla is pushing stuff you don't want is probably doable. If not, you can always kill those features you don't like using some `about:config` magic.

Now, I've set a tone above that foresees a dark future for open web technologies. What can you do to keep the web open and somewhat privacy-focused?

  1. Keep using Firefox Nightly
  2. Give Servo a try

[1] HTML is interpreted code, which is why it needs to be parsed and then rendered.

[2] In order to draw an image or a photo on a screen, you need to be able to encode or decode it. Many file formats are available.

[3] JavaScript is a programming language that turns HTML into something the person using the web browser can interact with. See https://developer.mozilla.org/en-US/docs/Glossary/JavaScript

[4] Operating systems need, at the very least, to know which program to open files with. The OS landscape has changed a lot over the last 25 years. These days you need to support 3 major OSes, while in the 2000s you had more systems, IRIX for example. You still have some portions of the Mozilla code base that support these long-dead systems.

[5] https://math.answers.com/math-and-arithmetic/How_many_lines_of_code_in_mozillafirefox

[6] Testing implies testing the code and also having engineers or users use the unfinished product to see that it doesn't regress. Testing at Mozilla is explained at https://ehsanakhgari.org/wp-content/uploads/talks/test-mozilla/

[7] Read: a release equals a version. Version 1.5 is a release, as is version 3.0.1.

[8] https://en.wikipedia.org/wiki/Browser_wars

07 Jan 2026 1:26pm GMT