22 Jan 2026


Jonathan Almeida: Test sites for browser developers

Working on the mobile Firefox team gives you the opportunity to touch on many different parts of the browser space. You often need to test the interaction between web content and the application's integration with another component, say, for example, a site registering for a WebPush subscription and Firefox using Firebase Cloud Messaging to deliver the encrypted message to the end-user. Hunting around for an example to validate that everything is fine and dandy takes time.

Sometimes a simple test site for your use case is helpful for initial validation or comparison against other browsers.

Below is a list of tests that I've used in the past with example behaviours (in no particular order):

Make your own

If you need to make your own, try to write out the code yourself so you can understand the reduced test case. If it's not straightforward, try using the Testcase Reducer by Thomas Wisniewski.

Comments

With an account on the Fediverse or Mastodon, you can respond to this post. Since Mastodon is decentralized, you can use your existing account hosted by another Mastodon server or compatible platform if you don't have an account on this one. Known non-private replies are displayed below.

Learn how this was implemented from the original source here.


22 Jan 2026 9:28pm GMT

Mozilla Localization (L10N): L10n Report: January Edition 2026

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Happy New Year!

What's new or coming up in Firefox desktop

Preferences updates for 148

A new set of strings intended for inclusion in the preferences page of 148 landed in Pontoon on January 16. These strings, focused on controls for AI features, landed ahead of the UX and functionality implementation, so they are not currently testable. They should be testable within the coming week in Nightly and Beta.

Split view coming in 149

A new feature, called "split view", is coming to Firefox 149. This feature and its related strings have already started landing at the end of 2025. You can test the feature now in Nightly: just right-click a tab and select "Add Split View". (If the option isn't showing in your Nightly, open about:config and ensure "browser.tabs.splitView.enabled" is set to true.)

Split view menu
Example of split view with Transvision and Pontoon

What's new or coming up in mobile

Android onboarding testing updates

It is now possible to test the onboarding experience in Firefox for Android without using a simulator or wiping your existing data. We are currently waiting for engineers to update the default configuration to align with the onboarding experience in Firefox 148 and newer. We hope this update will land in time for the release of 148, and we will communicate the change via Pontoon as soon as that's available.

In the meantime, please review the updated testing documentation to see how to trigger the onboarding flow. Note that some UI elements will display string identifiers instead of translations until the configuration is updated.

Firefox for iOS localization screenshots

We heard your feedback about the screenshot process for Firefox for iOS. Thanks to everyone who answered the survey at the end of last year.

Screenshots are now available as a gallery for each locale. There is no longer a need to download and decompress a local zip file. You can browse the current screenshots for your locale, and use the links at the top to review the full history or compare changes between runs (generated roughly every two weeks).

Links in Firefox for iOS screenshots gallery

A reminder that links to testing environments and instructions are always available from the project header in Pontoon.

What's new or coming up in web projects

Firefox.com

We're planning some changes to how content is managed on firefox.com, and these updates will have an impact on our existing localization workflows. Once the details are finalized, we'll share more information and notify you directly in Pontoon.

What's new or coming up in Pontoon

Pontoon infrastructure update

Behind the scenes, Pontoon has recently completed a major migration from Heroku to Google Cloud Platform. While this change should be largely invisible to localizers in day-to-day use, it brings noticeable improvements in performance, reliability, and scalability, helping ensure a smoother experience as contributor activity continues to grow. Huge thanks go to our Cloud Engineering partners for supporting this effort over the past months and helping make this important milestone possible.

Friends of the Lion

Image by Elio Qoshi

Since relaunching the contributor spotlight blog series, we've published two more stories highlighting the people behind our localization work.

We featured Robb, a professional translator from Romania, whose love for words and desire to help her mom keep up with modern technology have grown into a day-to-day commitment to making products and technology accessible in language that everyday people can understand.

We also spotlighted Andika from Indonesia, a long-time open source contributor who joined the localization community to ensure Firefox and other products feel natural and accessible for Indonesian-speaking users. His steady, long-term commitment to quality speaks volumes about the impact of thoughtful localization.

We'll be continuing this series and are always looking for contributors to feature. You can help us find the next localizer to spotlight by nominating one of your fellow community members. We'd love to hear from you!

Know someone in your l10n community who's been doing a great job and should appear here? Contact us and we'll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any questions about l10n, reach out to:

22 Jan 2026 6:48pm GMT

The Mozilla Blog: How Mozilla builds now

Headshot of Peter Rojas, Senior Vice President of New Products at Mozilla, wearing a gray sweater and smiling against a white background.

Mozilla has always believed that technology should empower people.

That belief shaped the early web, when browsers were still new and the idea of an open internet felt fragile. Today, the technology is more powerful, more complex, and more opaque, but the responsibility is the same. The question isn't whether technology can do more. It's whether it helps people feel capable, informed, and in control.

As we build new products at Mozilla today, that question is where we start.

I joined Mozilla to lead New Products almost one year ago this week because this is one of the few places still willing to take that responsibility seriously. Not just in what we ship, but in how we decide what's worth building in the first place - especially at a moment when AI, platforms, and business models are all shifting at once.

Our mission - and mine - is to find the next set of opportunities for Mozilla and help shape the internet that all of us want to see.

Writing up to users

One of Mozilla's longest-held principles is respect for the people who use our products. We assume users are thoughtful. We accept skepticism as a given (it forces product development rigor - more on that later). And we design accordingly.

That respect shows up not just in how we communicate, but in the kinds of systems we choose to build and the role we expect people to play in shaping them.

You can see this in the way we're approaching New Products work across Mozilla today: Our current portfolio includes tools like Solo, which makes it easy for anyone to own their presence on the web; Tabstack, which helps developers enable agentic experiences; 0DIN, which pools the collective expertise of over 1400 researchers from around the globe to help identify and surface AI vulnerabilities; and an enterprise version of Firefox that treats the browser as critical infrastructure for modern work, not a data collection surface.

None of this is about making technology simpler than it is. It's about making it legible. When people understand the systems they're using, they can decide whether those systems are actually serving them.

Experimentation that respects people's time

Mozilla experiments. A lot. But we try to do it without treating talent and attention as an unlimited resource. Building products that users love isn't easy and requires us to embrace the uncertainty and ambiguity that comes with zero-to-one exploration.

Every experiment should answer a real question. It should be bounded. And it should be clear to the people interacting with it what's being tested and why. That discipline matters, especially now. When everything can be prototyped quickly, restraint becomes part of the craft.

Fewer bets, made deliberately. A willingness to stop when something isn't working. And an understanding that experimentation doesn't have to feel chaotic to be effective.

Creating space for more kinds of builders

Mozilla has always believed that who builds is just as important as what gets built. But let's be honest: The current tech landscape often excludes a lot of brilliant people, simply because the system is focused on only rewarding certain kinds of outcomes.

We want to unlock those meaningful ideas by making experimentation more practical for people with real-world perspectives. We're focused on lowering the barriers to building - because we believe that making tech more inclusive isn't just a nice-to-have, it's how you build better products.

A practical expression of this approach

One expression of this philosophy is a new initiative we'll be sharing more about soon: Mozilla Pioneers.

Pioneers isn't an accelerator, and it isn't a traditional residency. It's a structured, time-limited way for experienced builders to work with Mozilla on early ideas without requiring them to put the rest of their lives on hold.

The structure is intentional. Pioneers is paid. It's flexible. It's hands-on. And it's bounded. Participants work closely with Mozilla engineers, designers, and product leaders to explore ideas that could become real Mozilla products - or could simply clarify what shouldn't be built.

Some of that work will move forward. Some won't. Both outcomes are valuable. Pioneers exists because we believe that good ideas don't only come from founders or full-time employees, and that meaningful contribution deserves real support.

Applications open Jan. 26. For anyone interested (and I hope that's a lot of you) please follow us, share and apply. In the meantime, know that what's ahead is just one more example of how we're trying to build with intention.

Looking ahead

Mozilla doesn't pretend to have all the answers. But we're clear about our commitments.

As we build new products, programs, and systems, we're choosing clarity over speed, boundaries over ambiguity, and trust that compounds over time instead of short-term gains.

The future of the internet won't be shaped only by what technology can do - but by what its builders choose to prioritize. Mozilla intends to keep choosing people.

The post How Mozilla builds now appeared first on The Mozilla Blog.

22 Jan 2026 2:00pm GMT

The Rust Programming Language Blog: Announcing Rust 1.93.0

The Rust team is happy to announce a new version of Rust, 1.93.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.93.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.93.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.93.0 stable

Update bundled musl to 1.2.5

The various *-linux-musl targets now all ship with musl 1.2.5. This primarily affects static musl builds for x86_64, aarch64, and powerpc64le, which previously bundled musl 1.2.3. This update comes with several fixes and improvements, and a breaking change that affects the Rust ecosystem.

For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable Linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.

However, 1.2.4 also comes with a breaking change: the removal of several legacy compatibility symbols that the Rust libc crate was using. A fix for this was shipped in libc 0.2.146 in June 2023 (2.5 years ago), and we believe it has propagated widely enough that we're ready to make the change in Rust targets.

See our previous announcement for more details.

Allow the global allocator to use thread-local storage

Rust 1.93 adjusts the internals of the standard library to permit global allocators written in Rust to use std's thread_local! and std::thread::current without re-entrancy concerns; the standard library now uses the system allocator for those internals instead.

See docs for details.
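As a rough illustration of what this enables, here is a minimal sketch, not taken from the release notes, of a global allocator that counts per-thread allocations through thread_local!. On Rust 1.93 and later this pattern no longer risks re-entrant allocation while the thread-local is being initialized, because the standard library's own bookkeeping falls back to the system allocator.

use std::alloc::{GlobalAlloc, Layout, System};
use std::cell::Cell;

thread_local! {
    // Per-thread allocation counter. Touching a lazily initialized
    // thread-local from inside the allocator used to risk re-entrant
    // allocation; Rust 1.93 makes this safe.
    static ALLOCATIONS: Cell<usize> = Cell::new(0);
}

// A counting allocator that delegates to the system allocator while
// recording how many allocations each thread performs.
struct CountingAlloc;

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCATIONS.with(|count| count.set(count.get() + 1));
        unsafe { System.alloc(layout) }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let data = vec![1u32, 2, 3];
    let count = ALLOCATIONS.with(|count| count.get());
    println!("allocations on this thread so far: {count}");
    drop(data);
}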

cfg attributes on asm! lines

Previously, if individual parts of a section of inline assembly needed to be cfg'd, the full asm! block would need to be repeated with and without that section. In 1.93, cfg can now be applied to individual statements within the asm! block.

asm!( // or global_asm! or naked_asm!
    "nop",
    #[cfg(target_feature = "sse2")]
    "nop",
    // ...
    #[cfg(target_feature = "sse2")]
    a = const 123, // only used on sse2
);

Stabilized APIs

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.93.0

Many people came together to create Rust 1.93.0. We couldn't have done it without all of you. Thanks!

22 Jan 2026 12:00am GMT

21 Jan 2026


This Week In Rust: This Week in Rust 635

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is throttled-tracing, a crate of periodic and throttled logging macros.

Thanks to Paperinik for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Cargo

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

464 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Various changes in both directions, but not much has changed overall.

Triage done by @panstromek. Revision range: 840245e9..3d087e60

Summary:

(instructions:u)              mean    range              count
Regressions ❌ (primary)       0.6%    [0.1%, 1.6%]       21
Regressions ❌ (secondary)     0.6%    [0.0%, 2.6%]       113
Improvements ✅ (primary)     -0.3%    [-2.1%, -0.2%]     37
Improvements ✅ (secondary)   -1.2%    [-29.6%, -0.1%]    37
All ❌✅ (primary)             0.0%    [-2.1%, 1.6%]      58

3 Regressions, 4 Improvements, 7 Mixed; 6 of them in rollups. 40 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Cargo

Leadership Council

No Items entered Final Comment Period this week for Compiler Team (MCPs only), Rust RFCs, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-01-21 - 2026-02-18 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I might suspect that if you are lumping all statically-typed languages into a single bucket without making particular distinction among them, then you might not have fully internalized the implications of union (aka Rust enum aka sum) typed data structures combined with exhaustive pattern matching.

I like to call it getting "union-pilled" and it's really hard to accept otherwise statically-typed languages once you become familiar.

- arwhatever on hacker news
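For readers unfamiliar with the jargon, here is a minimal, made-up sketch of what the quote describes: a Rust enum (sum type) whose variants the compiler forces you to handle exhaustively.

// A sum type: a value is exactly one of these variants, never several.
enum PaymentState {
    Pending,
    Settled { amount_cents: u64 },
    Failed(String),
}

fn describe(state: &PaymentState) -> String {
    // The compiler rejects this match if any variant is left unhandled,
    // so adding a new variant forces every call site to be revisited.
    match state {
        PaymentState::Pending => "still pending".to_string(),
        PaymentState::Settled { amount_cents } => format!("settled: {amount_cents} cents"),
        PaymentState::Failed(reason) => format!("failed: {reason}"),
    }
}

fn main() {
    println!("{}", describe(&PaymentState::Settled { amount_cents: 1299 }));
}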

Thanks to Colin Bennett for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

21 Jan 2026 5:00am GMT

The Rust Programming Language Blog: crates.io: development update

Time flies! Six months have passed since our last crates.io development update, so it's time for another one. Here's a summary of the most notable changes and improvements made to crates.io over the past six months.

Security Tab

Crate pages now have a new "Security" tab that displays security advisories from the RustSec database. This allows you to quickly see if a crate has known vulnerabilities before adding it as a dependency.

Security Tab Screenshot

The tab shows known vulnerabilities for the crate along with the affected version ranges.

This feature is still a work in progress, and we plan to add more functionality in the future. We would like to thank the OpenSSF (Open Source Security Foundation) for funding this work and Dirkjan Ochtman for implementing it.

Trusted Publishing Enhancements

In our July 2025 update, we announced Trusted Publishing support for GitHub Actions. Since then, we have made several enhancements to this feature.

GitLab CI/CD Support

Trusted Publishing now supports GitLab CI/CD in addition to GitHub Actions. This allows GitLab users to publish crates without managing API tokens, using the same OIDC-based authentication flow.

Note that this currently only works with GitLab.com. Self-hosted GitLab instances are not supported yet. The crates.io implementation has been refactored to support multiple CI providers, so adding support for other platforms like Codeberg/Forgejo in the future should be straightforward. Contributions are welcome!

Trusted Publishing Only Mode

Crate owners can now enforce Trusted Publishing for their crates. When enabled in the crate settings, traditional API token-based publishing is disabled, and only Trusted Publishing can be used to publish new versions. This reduces the risk of unauthorized publishes from leaked API tokens.

Blocked Triggers

The pull_request_target and workflow_run GitHub Actions triggers are now blocked from Trusted Publishing. These triggers have been responsible for multiple security incidents in the GitHub Actions ecosystem and are not worth the risk.

Source Lines of Code

Crate pages now display source lines of code (SLOC) metrics, giving you insight into the size of a crate before adding it as a dependency. This metric is calculated in a background job after publishing using the tokei crate. It is also shown on OpenGraph images:

OpenGraph image showing SLOC metric

Thanks to XAMPPRocky for maintaining the tokei crate!

Publication Time in Index

A new pubtime field has been added to crate index entries, recording when each version was published. This enables several use cases:

Thanks to Rene Leonhardt for the suggestion and Ed Page for driving this forward on the Cargo side.

Svelte Frontend Migration

At the end of 2025, the crates.io team evaluated several options for modernizing our frontend and decided to experiment with porting the website to Svelte. The goal is to create a one-to-one port of the existing functionality before adding new features.

This migration is still considered experimental and is a work in progress. Using a more mainstream framework should make it easier for new contributors to work on the frontend. The new Svelte frontend uses TypeScript and generates type-safe API client code from our OpenAPI description, so types flow from the Rust backend to the TypeScript frontend automatically.

Thanks to eth3lbert for the helpful reviews and guidance on Svelte best practices. We'll share more details in a future update.

Miscellaneous

These were some of the more visible changes to crates.io over the past six months, but a lot has happened "under the hood" as well.

Feedback

We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!

21 Jan 2026 12:00am GMT

20 Jan 2026


Data@Mozilla: This Week in Data: There’s No Such Thing as a Normal Month

("This Week in Data" is a series of blog posts that the Data Team at Mozilla is using to communicate about our work. Posts in this series could be release notes, documentation, hopes, dreams, or whatever: so long as it's about data.)

At the risk of reminding you of a Nickelback song, look at this graph:

An orange sparkline plot with many valleys, peaks, and plateaus (described in more detail in the text)

I've erased the y-axis because the absolute values don't actually matter for this discussion, but this is basically a sparkline plot of active users of Firefox Desktop for 2025. The line starts and ends basically at the same height but wow does it have a lot of ups and downs between.

I went looking at this shape recently while trying to estimate the costs of continuing to collect Legacy Telemetry in Firefox Desktop. We're at the point in our migration to Glean where you really ought to start removing your Legacy Telemetry probes unless you have some ongoing analyses that depend on them. I was working out a way to get a back-of-the-envelope dollar figure to scare teams into prioritizing such removals to be conducted sooner rather than later.

Our ingestion metadata (how many bytes were processed by which pieces of the pipeline) only goes back sixty days, and I was worried that basing my cost estimate on numbers from December 2025 would make them unusually low compared to "a normal month".

But what's "normal"? Which of these months could be considered "normal" by any measure? I mean:

October and maybe May are perhaps the closest things we have to "normal" months, and by being the only "normal"-ish months that makes them rather abnormal, don't you think?

Now, I've been lying to you with data visualization here. If you're exceedingly clever you'll notice that, in the sparkline plot above, not only did I take the y-axis labels off, I didn't start the y-axis at 0 (we had far more than zero active users of Firefox Desktop at the end of August, after all). I chose this to be illustrative of the differences from month to month, exaggerating them for effect. But if you look at, say, the Monthly Active Users (now combined Mobile + Desktop) on data.firefox.com it paints a rather more sedate picture, doesn't it:

An area plot that is mostly flat showing data from 2021 to 2026 of around 200M clients.

This isn't a 100% fair comparison as data.firefox.com goes back years, and I stretched 2025 to be the same width, above… but you see what data visualization choices can do to help or hinder the story you're hoping to tell.

At any rate, I hope you found it as interesting as I did to learn that December's abnormality makes it just as "normal" as the rest of the months for my cost estimation purposes.

:chutten

(this is a syndicated copy of the original blog post.)

20 Jan 2026 3:32pm GMT


19 Jan 2026


Firefox Nightly: Introducing Mozilla’s Firefox Nightly .rpm package for RPM-based linux distributions!

After introducing Debian packages for Firefox Nightly, we're now excited to extend that to RPM-based distributions.

Just like with the Debian packages, switching to Mozilla's RPM repository allows Firefox to be installed and updated like any other application, using your favorite package manager. It also provides a number of improvements:


To install Firefox Nightly, follow these steps:

If you are on Fedora (41+) or any other distribution using dnf5 as the package manager:

sudo dnf config-manager addrepo --id=mozilla --set=baseurl=https://packages.mozilla.org/rpm/firefox --set=gpgcheck=0 --set=repo_gpgcheck=0
sudo dnf makecache --refresh
sudo dnf install firefox-nightly

If you are on openSUSE or any other distribution using zypper as the package manager:

sudo zypper ar -G https://packages.mozilla.org/rpm/firefox mozilla
sudo zypper refresh
sudo zypper install firefox-nightly

For other RPM-based distributions (RHEL, CentOS, Rocky Linux, older Fedora versions):

sudo tee /etc/yum.repos.d/mozilla.repo > /dev/null << EOF
[mozilla]
name=Mozilla Packages
baseurl=https://packages.mozilla.org/rpm/firefox
enabled=1
repo_gpgcheck=0
gpgcheck=0
EOF

# For dnf users
sudo dnf makecache --refresh
sudo dnf install firefox-nightly

# For zypper users
sudo zypper refresh
sudo zypper install firefox-nightly

Note: gpgcheck is currently disabled until Bug 2009927 is addressed.

It is worth noting that the firefox-nightly package will not conflict with your distribution's Firefox package if you have it installed; you can have both at the same time!

Adding language packs

If your distribution language is set to a supported language, language packs for it should automatically be installed. You can also install them manually with the following command (replace fr with the language code of your choice):

sudo dnf install firefox-nightly-l10n-fr

You can list the available languages with the following command:

dnf search firefox-nightly-l10n


Don't hesitate to report any problem you encounter to help us make your experience better.

19 Jan 2026 4:02pm GMT

16 Jan 2026


Mozilla GFX: Experimental High Dynamic Range video playback on Windows in Firefox Nightly 148

Modern computer displays have gained more colorful capabilities in recent years with High Dynamic Range (HDR) being a headline feature. These displays can show vibrant shades of red, purple and green that were outside the capability of past displays, as well as higher brightness for portions of the displayed videos.

We are happy to announce that Firefox is gaining support for HDR video on Windows, now enabled in Firefox Nightly 148. This is experimental for the time being, as we want to gather feedback on what works and what does not across varied hardware in the wild before we deploy it for all Firefox users broadly. HDR video has already been live on macOS for some time now, and is being worked on for Wayland on Linux.

To get the full experience, you will need an HDR display, and the HDR feature needs to be turned on in Windows (Settings -> Display Settings) for that display. This release also changes how HDR video looks on non-HDR displays in some cases: this used to look very washed out, but it should be improved now. Feedback on whether this is a genuine improvement is also welcome. Popular streaming websites may be checking for this HDR capability, so they may now offer HDR video content to you, but only if HDR is enabled on the display.

We are actively working on HDR support for other web functionality such as WebGL, WebGPU, Canvas2D and static images, but have no current estimates on when those features will be ready: this is a lot of work, and relevant web standards are still in flux.

Note for site authors: Websites can use the CSS video-dynamic-range functionality to make separate HDR and SDR videos available for the same video element. This functionality detects if the user has the display set to HDR, not necessarily whether the display is capable of HDR mode. Displaying an HDR video on an SDR display is expected to work reasonably but requires more testing - we invite feedback on that.

Notes and limitations:

16 Jan 2026 2:40am GMT

15 Jan 2026


Spidermonkey Development Blog: Flipping Responsibility for Jobs in SpiderMonkey

This blog post is written both as a heads-up to embedders of SpiderMonkey and as an explanation of why the changes are coming.

As an embedder of SpiderMonkey one of the decisions you have to make is whether or not to provide your own implementation of the job queue.

The responsibility of the job queue is to hold pending jobs for Promises, which in the HTML spec are called 'microtasks'. For embedders, the status quo of 2025 was two options:

  1. Call JS::UseInternalJobQueues, and then at the appropriate point for your embedding, call JS::RunJobs. This uses an internal job queue and drain function.
  2. Subclass and implement the JS::JobQueue type, storing and invoking your own jobs. An embedding might want to do this if they wanted to add their own jobs, or had particular needs for the shape of jobs and data carried alongside them.

The goal of this blog post is to indicate that SpiderMonkey's handling of Promise jobs is changing over the next little while, and explain a bit of why.

If you've chosen to use the internal job queue, almost nothing should change for your embedding. If you've provided your own job queue, read on:

What's Changing

  1. The actual type of a job from the JS engine is changing to be opaque.
  2. The responsibility for actually storing the Promise jobs is moving from the embedding, even in the case of an embedding provided JobQueue.
  3. As a result of (1), the interface to run a job from the queue is also changing.

I'll cover this in a bit more detail, but a good chunk of the interface discussed is in MicroTask.h (this link is to a specific revision because I expect the header to move).

For most embeddings the changes turn out to be very mechanical. If you have specific challenges with your embedding please reach out.

Job Type

The type of a JS Promise job has been a JSFunction, and thus invoked with JS::Call. The job type is changing to an opaque type. The external interface to this type will be JS::Value (typedef'd as JS::GenericMicroTask).

This means that if you're an embedder who had been storing your own tasks in the same queue as JS tasks you'll still be able to, but you'll need to use the queue access APIs in MicroTask.h. A queue entry is simply a JS::Value and so an arbitrary C address can be stored in it as a JS::PrivateValue.

Jobs now are split into two types: JSMicroTasks (enqueued by the JS engine) and GenericMicroTasks (possibly JS engine provided, possibly embedding provided).

Storage Responsibility

It used to be that if an embedding provided its own JobQueue, we'd expect it to store the jobs and trace the queue. Now the queue lives inside the engine, so the model is changing to one where the embedding must ask the JS engine to store any jobs it produces outside of Promises if it would like to share the job queue.

Running Micro Tasks

The basic loop of microtask execution now looks like this:


JS::Rooted<JSObject*> executionGlobal(cx);
JS::Rooted<JS::GenericMicroTask> genericTask(cx);
JS::Rooted<JS::JSMicroTask> jsTask(cx);

while (JS::HasAnyMicroTasks(cx)) {
  genericTask = JS::DequeueNextMicroTask(cx); 

  if (JS::IsJSMicroTask(genericTask)) {
    jsTask = JS::ToMaybeWrappedJSMicroTask(genericTask);
    executionGlobal = JS::GetExecutionGlobalFromJSMicroTask(jsTask);

    {
      AutoRealm ar(cx, executionGlobal);
      if (!JS::RunJSMicroTask(cx, jsTask)) {
        // Handle job execution failure in the 
        // same way JS::Call failure would have been
        // handled
      }
    }

    continue;
  }

  // Handle embedding jobs as appropriate. 
}

The abstract separation of the execution global is required to handle cases with many compartments and complicated realm semantics (aka a web browser).

An example

In order to see roughly what the changes would look like, I attempted to patch GJS, the GNOME JS embedding which uses SpiderMonkey.

The patch is here. It doesn't build due to other incompatibilities I found, but this is the rough shape of a patch for an embedding. As you can see, it's fairly self contained with not too much work to be done.

Why Change?

In a word, performance. The previous form of Promise job management is very heavyweight with lots of overhead, causing performance to suffer.

The changes made here allow us to make SpiderMonkey quite a bit faster for dealing with Promises, and unlock the potential to get even faster.

How do the changes help?

Well, perhaps the most important change here is making the job representation opaque. What this allows us to do is use pre-existing objects as stand-ins for the jobs. This means that rather than having to allocate a new object for every job (which is costly), we can sometimes allocate nothing at all, simply enqueuing an existing object with enough information to run the job.

Owning the queue will also allow us to choose the most efficient data structure for JS execution, potentially changing opaquely in the future as we find better choices.

Empirically, changing from the old microtask queue system to the new in Firefox led to an improvement of up to 45% on Promise heavy microbenchmarks.

Is this it?

I do not think this is the end of the story for changes in this area. I plan further investment. Aspirationally, I would like this all to be stabilized by the next ESR release, Firefox 153, which will ship to beta in June, but only time will tell what we can get done.

Future changes I can predict are things like

  1. Renaming JS::JobQueue which is now more of a 'jobs interface'
  2. Renaming the MicroTask header to be less HTML specific

However, I can also imagine making more changes in the pursuit of performance.

What's the bug for this work?

You can find most of the work related to this under Bug 1983153 (sm-µ-task)

An Apology

My apologies to those embedders who will have to do some work during this transition period. Thank you for sticking with SpiderMonkey!

15 Jan 2026 5:00pm GMT

14 Jan 2026


The Mozilla Blog: How founders are meeting the moment: Lessons from Mozilla Ventures’ 2025 portfolio convening

Mozilla Ventures Convening 2025 Report book cover with green geometric design on black background

At Mozilla, we've long believed that technology can be built differently - not only more openly, but more responsibly, more inclusively, and more in service of the people who rely on it. As AI reshapes nearly every layer of the internet, those values are being tested in real time.

Our 2025 Mozilla Ventures Portfolio Convening Report captures how a new generation of founders is meeting that moment.

At the Mozilla Festival 2025 in Barcelona, from Nov. 7-9, we brought together 50 founders from 30 companies across our portfolio to grapple with some of the most pressing questions in technology today: How do we build AI that is trustworthy and governable? How do we protect privacy at scale? What does "better social" look like after the age of the global feed? And how do we ensure that the future of technology is shaped by people and communities far beyond today's centers of power?

Over three days of panels, talks, and hands-on sessions, founders shared not just what they're building, but what they're learning as they push into new terrain. What emerged is a vivid snapshot of where the industry is heading - and the hard choices required to get there.

Open source as strategy, not slogan

A major theme emerging across conversations with our founders was that open source is no longer a "nice to have." It's the backbone of trust, adoption, and long‑term resilience in AI, and a critical pillar for the startup ecosystem. But these founders aren't naïve about the challenges. Training frontier‑scale models costs staggering sums, and the gravitational pull of a few dominant labs is real. Yet companies like Union.ai, Jozu, and Oumi show that openness can still be a moat - if it's treated as a design choice, not a marketing flourish.

Their message is clear: open‑washing won't cut it. True openness means clarity about what's shared -weights, data, governance, standards - and why. It means building communities that outlast any single company. And it means choosing investors who understand that open‑source flywheels take time to spin up.

Community as the real competitive edge

Across November's sessions, founders returned to a simple truth: community is the moat. Flyte's growth into a Linux Foundation project, Jozu's push for open packaging standards, and Lelapa's community‑governed language datasets all demonstrate that the most durable advantage isn't proprietary code - it's shared infrastructure that people trust.

Communities harden technology, surface edge cases, and create the kind of inertia that keeps systems in place long after competitors appear. But they also require care: documentation, governance, contributor experience, and transparency. As one founder put it, "You can't build community overnight. It's years of nurturing."

Ethics as infrastructure

One of the most powerful threads came from Lelapa AI, which reframes data not as raw material to be mined but as cultural property. Their licensing model, inspired by Māori data sovereignty, ensures that African languages - and the communities behind them - benefit from the value they create. This is openness with accountability, a model that challenges extractive norms and points toward a more equitable AI ecosystem.

It's a reminder that ethical design isn't a layer on top of technology - it's part of the architecture.

The real competitor: fear

Founders spoke candidly about the biggest barrier to adoption: fear. Enterprises default to hyperscalers because no one gets fired for choosing the biggest vendor. Overcoming that inertia requires more than values. It requires reliability, security features, SSO, RBAC, audit logs - the "boring" but essential capabilities that make open systems viable in real organizations.

In other words, trust is built not only through ideals but through operational excellence.

A blueprint for builders

Across all 16 essays, a blueprint started to emerge for founders and startups committed to building responsible technology and open source AI:

Taken together, the 16 essays in this report point to something larger than any single technology or trend. They show founders wrestling with how AI is governed, how trust is earned, how social systems can be rebuilt at human scale, and how innovation looks different when it starts from Lagos or Johannesburg instead of Silicon Valley.

The future of AI doesn't have to be centralized, extractive or opaque. The founders in this portfolio are proving that openness, trustworthiness, diversity, and public benefit can reinforce one another - and that competitive companies can be built on all four.

We hope you'll dig into the report, explore the ideas these founders are surfacing, and join us in backing the people building what comes next.

The post How founders are meeting the moment: Lessons from Mozilla Ventures' 2025 portfolio convening appeared first on The Mozilla Blog.

14 Jan 2026 5:00pm GMT

This Week In Rust: This Week in Rust 634

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

[ES] Command Pattern in Rust: When intent doesn't need to be an object

Miscellaneous

Crate of the Week

This week's crate is diesel-guard, a linter against dangerous Postgres migrations.

Thanks to Alex Yarotsky for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

539 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Fairly quiet week, most changes due to new features which naturally carry some overhead for existing programs. Overall though a small improvement.

Triage done by @simulacrum. Revision range: 7c04f5d2..840245e9

3 Regressions, 1 Improvement, 4 Mixed; 2 of them in rollups. 31 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Compiler Team (MCPs only)

Rust

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Leadership Council, Language Team, Language Reference or Unsafe Code Guidelines. Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2026-01-14 - 2026-02-11 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I have written in dozens of computer languages, including specialized ones that were internal to Pixar (including one I designed). I spent decades writing C and C++. I wrote bit-slice microcode, coded for SIMD before many folks outside of Pixar had it.

I wrote the first malloc debugger that would stop your debugger at the source code line that was the problem. Unix workstation manufacturers had to do an unexpected release when this revealed all of the problems in their C libraries.

I am a better programmer in Rust for anything low-level or high-performance. It just keeps me from making an entire class of mistakes that were too easy to make in any language without garbage-collection.

Over the long term, anything that improves quality is going to win. There is a lot of belly-aching by folks who are too in love with what they've been using for decades, but it is mostly substance-free. Like people realizing that code marked "unsafe" is, surprise, unsafe. And that unsafe can be abused.

- Bruce Perens on LinkedIn

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

14 Jan 2026 5:00am GMT

The Rust Programming Language Blog: What does it take to ship Rust in safety-critical?

This is another post in our series covering what we learned through the Vision Doc process. In our first post, we described the overall approach and what we learned about doing user research. In our second post, we explored what people love about Rust. This post goes deep on one domain: safety-critical software.

When we set out on the Vision Doc work, one area we wanted to explore in depth was safety-critical systems: software where malfunction can result in injury, loss of life, or environmental harm. Think vehicles, airplanes, medical devices, industrial automation. We spoke with engineers at OEMs, integrators, and suppliers across automotive (mostly), industrial, aerospace, and medical contexts.

What we found surprised us a bit. The conversations kept circling back to a single tension: Rust's compiler-enforced guarantees support much of what Functional Safety Engineers and Software Engineers in these spaces spend their time preventing, but once you move beyond prototyping into the higher-criticality parts of a system, the ecosystem support thins out fast. There is no MATLAB/Simulink Rust code generation. There is no OSEK or AUTOSAR Classic-compatible RTOS written in Rust or with first-class Rust support. The tooling for qualification and certification is still maturing.

Quick context: what makes software "safety-critical"

If you've never worked in these spaces, here's the short version. Each safety-critical domain has standards that define a ladder of integrity levels: ISO 26262 in automotive, IEC 61508 in industrial, IEC 62304 in medical devices, DO-178C in aerospace. The details differ, but the shape is similar: as you climb the ladder toward higher criticality, the demands on your development process, verification, and evidence all increase, and so do the costs.1

This creates a strong incentive for decomposition: isolate the highest-criticality logic into the smallest surface area you can, and keep everything else at lower levels where costs are more manageable and you can move faster.

We'll use automotive terminology in this post (QM through ASIL D) since that's where most of our interviews came from, but the patterns generalize. These terms represent increasing levels of safety-criticality, with QM being the lowest and ASIL D being the highest. The story at low criticality looks very different from the story at high criticality, regardless of domain.

Rust is already in production for safety-critical systems

Before diving into the challenges, it is worth noting that Rust is not just being evaluated in these domains. It is deployed and running in production.

We spoke with a principal firmware engineer working on mobile robotics systems certified to IEC 61508 SIL 2:

"We had a new project coming up that involved a safety system. And in the past, we'd always done these projects in C using third party stack analysis and unit testing tools that were just generally never very good, but you had to do them as part of the safety rating standards. Rust presented an opportunity where 90% of what the stack analysis stuff had to check for is just done by the compiler. That combined with the fact that now we had a safety qualified compiler to point to was kind of a breakthrough." -- Principal Firmware Engineer (mobile robotics)

We also spoke with an engineer at a medical device company deploying IEC 62304 Class B software to intensive care units:

"All of the product code that we deploy to end users and customers is currently in Rust. We do EEG analysis with our software and that's being deployed to ICUs, intensive care units, and patient monitors." -- Rust developer at a medical device company

"We changed from this Python component to a Rust component and I think that gave us a 100-fold speed increase." -- Rust developer at a medical device company

These are not proofs of concept. They are shipping systems in regulated environments, going through audits and certification processes. The path is there. The question is how to make it easier for the next teams coming through.

Rust adoption is easiest at QM, and the constraints sharpen fast

At low criticality, teams described a pragmatic approach: use Rust and the crates ecosystem to move quickly, then harden what you ship. One architect at an automotive OEM told us:

"We can use any crate [from crates.io] [..] we have to take care to prepare the software components for production usage." -- Architect at Automotive OEM

But at higher levels, third-party dependencies become difficult to justify. Teams either rewrite, internalize, or strictly constrain what they use. An embedded systems engineer put it bluntly:

"We tend not to use 3rd party dependencies or nursery crates [..] solutions become kludgier as you get lower in the stack." -- Firmware Engineer

Some teams described building escape hatches, abstraction layers designed for future replacement:

"We create an interface that we'd eventually like to have to simplify replacement later on [..] sometimes rewrite, but even if re-using an existing crate we often change APIs, write more tests." -- Team Lead at Automotive Supplier (ASIL D target)

Even teams that do use crates from crates.io described treating that as a temporary accelerator, something to track carefully and remove from critical paths before shipping:

"We use crates mainly for things in the beginning where we need to set up things fast, proof of concept, but we try to track those dependencies very explicitly and for the critical parts of the software try to get rid of them in the long run." -- Team lead at an automotive software company developing middleware in Rust

In aerospace, the "control the whole stack" instinct is even stronger:

"In aerospace there's a notion of we must own all the code ourselves. We must have control of every single line of code." -- Engineering lead in aerospace

This is the first big takeaway: a lot of "Rust in safety-critical" is not just about whether Rust compiles for a target. It is about whether teams can assemble an evidence-friendly software stack and keep it stable over long product lifetimes.

The compiler is doing work teams used to do elsewhere

Many interviewees framed Rust's value in terms of work shifted earlier and made more repeatable by the compiler. This is not just "nice," it changes how much manual review you can realistically afford. Much of what was historically process-based enforcement through coding standards like MISRA C and CERT C becomes a language-level concern in Rust, checked by the compiler rather than external static analysis or manual review.

"Roughly 90% of what we used to check with external tools is built into Rust's compiler." -- Principal Firmware Engineer (mobile robotics)

We heard variations of this from teams dealing with large codebases and varied skill levels:

"We cannot control the skill of developers from end to end. We have to check the code quality. Rust by checking at compile time, or Clippy tools, is very useful for our domain." -- Engineer at a major automaker

Even on smaller teams, the review load matters:

"I usually tend to work on teams between five and eight. Even so, it's too much code. I feel confident moving faster, a certain class of flaws that you aren't worrying about." -- Embedded systems engineer (mobile robotics)

Closely related: people repeatedly highlighted Rust's consistency around error handling:

"Having a single accepted way of handling errors used throughout the ecosystem is something that Rust did completely right." -- Automotive Technical Lead

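A minimal sketch of that single idiom (a hypothetical function, standard library only): fallible calls return Result, and the ? operator propagates the error to the caller instead of letting it be silently dropped:

    use std::num::ParseIntError;

    // Every fallible step returns Result; `?` hands the error upward.
    fn parse_reading(raw: &str) -> Result<u32, ParseIntError> {
        let value: u32 = raw.trim().parse()?;
        Ok(value * 2)
    }

    fn main() {
        match parse_reading(" 21 ") {
            Ok(v) => println!("scaled reading: {v}"),
            Err(e) => eprintln!("rejected input: {e}"),
        }
    }
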
For teams building products with 15-to-20-year lifetimes and "teams of teams," compiler-enforced invariants scale better than "we will just review harder."

Teams want newer compilers, but also stability they can explain

A common pattern in safety-critical environments is conservative toolchain selection. But engineers pointed out a tension: older toolchains carry their own defect history.

"[..] traditional wisdom is that after something's been around and gone through motions / testing then considered more stable and safer [..] older compilers used tend to have more bugs [and they become] hard to justify" -- Software Engineer at an Automotive supplier

Rust's edition system was described as a real advantage here, especially for incremental migration strategies that are common in automotive programs:

"[The edition system is] golden for automotive, where incremental migration is essential." -- Software Engineer at major Automaker

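A sketch of why (with hypothetical crate names): editions are declared per crate, and crates on different editions link together, so a large program can migrate one component at a time:

    # legacy_io/Cargo.toml -- still on the 2018 edition
    [package]
    name = "legacy_io"
    version = "0.1.0"
    edition = "2018"

    # diagnostics/Cargo.toml -- already migrated, depends on the older crate
    [package]
    name = "diagnostics"
    version = "0.1.0"
    edition = "2021"

    [dependencies]
    legacy_io = { path = "../legacy_io" }
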
In practice, "stability" is also about managing the mismatch between what the platform supports and what the ecosystem expects. Teams described pinning Rust versions, then fighting dependency drift:

"We can pin the Rust toolchain, but because almost all crates are implemented for the latest versions, we have to downgrade. It's very time-consuming." -- Engineer at a major automaker

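The pinning itself is straightforward: a rust-toolchain.toml checked into the repository fixes the compiler for every developer and CI job, which makes an upgrade an explicit, reviewable change (the version number here is only illustrative):

    # rust-toolchain.toml
    [toolchain]
    channel = "1.75.0"
    components = ["clippy", "rustfmt"]

The drift the quote describes comes from the other side: dependencies whose newer releases assume a newer compiler than the one pinned.
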
For safety-critical adoption, "stability" is operational. Teams need to answer questions like: What does a Rust upgrade change, and what does it not change? What are the bounds on migration work? How do we demonstrate we have managed upgrade risk?

Target support matters in practical ways

Safety-critical software often runs on long-lived platforms and RTOSs. Even when "support exists," there can be caveats. Teams described friction around targets like QNX, where upstream Rust support exists but with limitations (for example, QNX 8.0 support is currently no_std only). [2]

This connects to Rust's target tier policy: the policy itself is clear, but regulated teams still need to map "tier" to "what can I responsibly bet on for this platform and this product lifetime."

"I had experiences where all of a sudden I was upgrading the compiler and my toolchain and dependencies didn't work anymore for the Tier 3 target we're using. That's simply not acceptable. If you want to invest in some technology, you want to have a certain reliability." -- Senior software engineer at a major automaker

core is the spine, and it sets expectations

In no_std environments, core becomes the spine of Rust. Teams described it as both rich enough to build real products and small enough to audit.

A lot of Rust's safety leverage lives there: Option and Result, slices, iterators, Cell and RefCell, atomics, MaybeUninit, Pin. But we also heard a consistent shape of gaps: many embedded and safety-critical projects want no_std-friendly building blocks (fixed-size collections, queues) and predictable math primitives, but do not want to rely on "just any" third-party crate at higher integrity levels.

"Most of the math library stuff is not in core, it's in std. Sin, cosine... the workaround for now has been the libm crate. It'd be nice if it was in core." -- Principal Firmware Engineer (mobile robotics)

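A minimal no_std sketch of that workaround (assuming a dependency on the libm crate): core supplies the constants and types, while the trigonometric function comes from libm because std's float methods are unavailable:

    #![no_std]

    use core::f32::consts::PI;

    // `sin` for f32 lives behind std; in a no_std build the libm crate's
    // software implementation (assumed dependency: libm = "0.2") fills the gap.
    pub fn phase_offset(angle: f32) -> f32 {
        libm::sinf(angle + PI / 2.0)
    }
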
Async is appealing, but the long-run story is not settled

Some safety-critical-adjacent systems are already heavily asynchronous: daemons, middleware frameworks, event-driven architectures. That makes Rust's async story interesting.

But people also expressed uncertainty about ecosystem lock-in and what it would take to use async in higher-criticality components. One team lead developing middleware told us:

"We're not sure how async will work out in the long-run [in Rust for safety-critical]. [..] A lot of our software is highly asynchronous and a lot of our daemons in the AUTOSAR Adaptive Platform world are basically following a reactor pattern. [..] [C++14] doesn't really support these concepts, so some of this is lack of familiarity." -- Team lead at an automotive software company developing middleware in Rust

And when teams look at async through an ISO 26262 lens, the runtime question shows up immediately:

"If we want to make use of async Rust, of course you need some runtime which is providing this with all the quality artifacts and process artifacts for ISO 26262." -- Team lead at an automotive software company developing middleware in Rust

Async is not "just a language feature" in safety-critical contexts. It pulls in runtime choices, scheduling assumptions, and, at higher integrity levels, the question of what it would mean to certify or qualify the relevant parts of the stack.
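
A small sketch of why that is (using the futures crate's trivial executor as a stand-in, an assumed dependency): the async fn itself does nothing until some executor polls it, so whichever runtime you pick becomes part of the component that needs qualification evidence:

    async fn read_sensor() -> u32 {
        42 // stand-in for an awaited I/O operation
    }

    fn main() {
        let pending = read_sensor(); // nothing has executed yet
        // Some executor has to drive the future to completion; which one, and
        // with what ISO 26262 artifacts, is exactly the open question above.
        let value = futures::executor::block_on(pending);
        assert_eq!(value, 42);
    }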

Recommendations

Find ways to help the safety-critical community support their own needs. Open source helps those who help themselves. The Ferrocene Language Specification (FLS) shows this working well: it started as an industry effort to create a specification suitable for safety-qualification of the Rust compiler, companies invested in the work, and it now has a sustainable home under the Rust Project with a team actively maintaining it. [3]

Contrast this with MC/DC coverage support in rustc. Earlier efforts stalled due to lack of sustained engagement from safety-critical companies. [4] The technical work was there, but without industry involvement to help define requirements, validate the implementation, and commit to maintaining it, the effort lost momentum. A major concern was that the MC/DC code added maintenance burden to the rest of the coverage infrastructure without a clear owner. Now there is renewed interest in doing this the right way: companies are working through the Safety-Critical Rust Consortium to create a Rust Project Goal in 2026 to collaborate with the Rust Project on MC/DC support. The model is shared ownership of requirements, with primary implementation and maintenance done by companies with a vested interest in safety-critical Rust, in a way that does not impede maintenance of the rest of the coverage code.

The remaining recommendations follow this pattern: the Safety-Critical Rust Consortium can help the community organize requirements and drive work, with the Rust Project providing the deep technical knowledge of Rust Project artifacts needed for successful collaboration. The path works when both sides show up.

Establish ecosystem-wide MSRV conventions. The dependency drift problem is real: teams pin their Rust toolchain for stability, but crates targeting the latest compiler make this difficult to sustain. An LTS release scheme, combined with encouraging libraries to maintain MSRV compatibility with LTS releases, could reduce this friction. This would require coordination between the Rust Project (potentially the release team) and the broader ecosystem, with the Safety-Critical Rust Consortium helping to articulate requirements and adoption patterns.
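
The mechanical half already exists: a library can declare its minimum supported Rust version in Cargo.toml, and cargo will refuse to build it on an older toolchain with a clear message. What is missing is an ecosystem-wide convention (or LTS anchor) for which versions libraries target; the snippet below is only illustrative:

    [package]
    name = "sensor_codec"   # hypothetical crate
    version = "0.3.1"
    edition = "2021"
    rust-version = "1.70"   # MSRV: the oldest compiler this crate promises to build with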

Turn "target tier policy" into a safety-critical onramp. The friction we heard is not about the policy being unclear, it is about translating "tier" into practical decisions. A short, target-focused readiness checklist would help: Which targets exist? Which ones are no_std only? What is the last known tested OS version? What are the top blockers? The raw ingredients exist in rustc docs, release notes, and issue trackers, but pulling them together in one place would lower the barrier. Clearer, consolidated information also makes it easier for teams who depend on specific targets to contribute to maintaining them. The Safety-Critical Rust Consortium could lead this effort, working with compiler team members and platform maintainers to keep the information accurate.

Document "dependency lifecycle" patterns teams are already using. The QM story is often: use crates early, track carefully, shrink dependencies for higher-criticality parts. The ASIL B+ story is often: avoid third-party crates entirely, or use abstraction layers and plan to replace later. Turning those patterns into a reusable playbook would help new teams make the same moves with less trial and error. This seems like a natural fit for the Safety-Critical Rust Consortium's liaison work.

Define requirements for a safety-case friendly async runtime. Teams adopting async in safety-critical contexts need runtimes with appropriate quality and process artifacts for standards like ISO 26262. Work is already happening in this space. [5] The Safety-Critical Rust Consortium could lead the effort to define what "safety-case friendly" means in concrete terms, working with the async working group and libs team on technical feasibility and design.

Treat interop as part of the safety story. Many teams are not going to rewrite their world in Rust. They are going to integrate Rust into existing C and C++ systems and carry that boundary for years. Guidance and tooling to keep interfaces correct, auditable, and in sync would help. The compiler team and lang team could consider how FFI boundaries are surfaced and checked, informed by requirements gathered through the Safety-Critical Rust Consortium.

"We rely very heavily on FFI compatibility between C, C++, and Rust. In a safety-critical space, that's where the difficulty ends up being, generating bindings, finding out what the problem was." -- Embedded systems engineer (mobile robotics)

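As a hedged sketch of what that boundary looks like (hypothetical function names): Rust exposes C-ABI symbols that existing C/C++ code calls, tools like bindgen and cbindgen keep the headers in sync, and the unsafe edges are exactly where review effort concentrates:

    // Safe wrapper with a stable C ABI.
    #[no_mangle]
    pub extern "C" fn filter_sample(raw: i32, gain: i32) -> i32 {
        raw.saturating_mul(gain)
    }

    /// # Safety
    /// `samples` must point to `len` valid, initialized i32 values.
    #[no_mangle]
    pub unsafe extern "C" fn sum_samples(samples: *const i32, len: usize) -> i64 {
        core::slice::from_raw_parts(samples, len)
            .iter()
            .map(|&s| i64::from(s))
            .sum()
    }
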
Conclusion

To sum up the main points in this post: Rust is already shipping in regulated, safety-critical products; third-party dependencies get harder to justify as integrity levels rise, so teams constrain, wrap, or replace them; the compiler absorbs checks that used to live in coding standards and manual review; toolchain, edition, and target stability are operational concerns over long product lifetimes; core carries most of the weight in no_std environments, with a few well-known gaps; and the long-run story for async in safety-critical contexts is not yet settled.

We make six recommendations: find ways to help the safety-critical community support their own needs, establish ecosystem-wide MSRV conventions, create target-focused readiness checklists, document dependency lifecycle patterns, define requirements for safety-case friendly async runtimes, and treat C/C++ interop as part of the safety story.

Get involved

If you're working in safety-critical Rust, or you want to help make it easier, check out the Rust Foundation's Safety-Critical Rust Consortium and the in-progress Safety-Critical Rust coding guidelines.

Hearing concrete constraints, examples of assessor feedback, and what "evidence" actually looks like in practice is incredibly helpful. The goal is to make Rust's strengths more accessible in environments where correctness and safety are not optional.

  1. If you're curious about how rigor scales with cost in ISO 26262, this Feabhas guide gives a good high-level overview.

  2. See the QNX target documentation for current status.

  3. The FLS team was created under the Rust Project in 2025. The team is now actively maintaining the specification, reviewing changes and keeping the FLS in sync with language evolution.

  4. See the MC/DC tracking issue for context. The initial implementation was removed due to maintenance concerns.

  5. Eclipse SDV's Eclipse S-CORE project includes an Orchestrator written in Rust for their async runtime, aimed at safety-critical automotive software.

14 Jan 2026 12:00am GMT

Tarek Ziadé: The Economics of AI Coding: A Real-World Analysis

My whole stream over the past few months has been about AI coding. From skeptical engineers who say it creates unmaintainable code, to enthusiastic (or scared) engineers who say it will replace us all, the discourse is polarized. But I've been more interested in a different question: what does AI coding actually cost, and what does it actually save?

I recently had Claude help me with a substantial refactoring task: splitting a monolithic Rust project into multiple workspace repositories with proper dependency management. The kind of task that's tedious, error-prone, and requires sustained attention to detail across hundreds of files. When it was done, I asked Claude to analyze the session: how much it cost, how long it took, and how long a human developer would have taken.

The answer surprised me. Not because AI was faster or cheaper (that's expected), but because of how much faster and cheaper.

The Task: Repository Split and Workspace Setup

The work involved splitting the code into separate workspace repositories, migrating files and import paths, setting up the Cargo, PyO3/Maturin, and Makefile build plumbing, rewriting the GitHub Actions workflows, updating documentation, and re-running the test suites until everything passed (the detailed breakdown is below).

This is real work. Not a toy problem, not a contrived benchmark. The kind of multi-day slog that every engineer has faced: important but tedious, requiring precision but not creativity.
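
To make the shape of the task concrete, here is a sketch of the kind of layout such a split typically ends in; the crate names and versions are hypothetical, not my actual repositories:

    # Top-level Cargo.toml of one of the new workspaces
    [workspace]
    members = ["core", "python-bindings", "cli"]
    resolver = "2"

    # python-bindings/Cargo.toml then pulls in the sibling crate by path,
    # with PyO3/Maturin handling the Python packaging side:
    # [dependencies]
    # app-core = { path = "../core" }
    # pyo3 = { version = "0.22", features = ["extension-module"] }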

The Numbers

AI Execution Time

Total: approximately 3.5 hours across two sessions

AI Cost

Total tokens: 72,146 tokens

Estimated marginal cost: approximately $4.95

This is the marginal execution cost for this specific task. It doesn't include my Claude subscription, the time I spent iterating on prompts and reviewing output, or the risk of having to revise or fix AI-generated changes. For a complete accounting, you'd also need to consider those factors, though for this task they were minimal.

Human Developer Time Estimate

Conservative estimate: 2-3 days (16-24 hours)

This is my best guess based on experience with similar tasks, but it comes with uncertainty. A senior engineer deeply familiar with this specific codebase might work faster. Someone encountering similar patterns for the first time might work slower. Some tasks could be partially templated or parallelized across a team.

Breaking down the work:

  1. Planning and research (2-4 hours): Understanding codebase structure, planning dependency strategy, reading PyO3/Maturin documentation
  2. Code migration (4-6 hours): Copying files, updating all import statements, fixing compilation errors, resolving workspace conflicts
  3. Build system setup (2-3 hours): Writing Makefile, configuring Cargo.toml, setting up pyproject.toml, testing builds
  4. CI/CD configuration (2-4 hours): Writing GitHub Actions workflows, testing syntax, debugging failures, setting up matrix builds
  5. Documentation updates (2-3 hours): Updating multiple documentation files, ensuring consistency, writing migration guides
  6. Testing and debugging (3-5 hours): Running test suites, fixing unexpected failures, verifying tests pass, testing on different platforms
  7. Git operations and cleanup (1-2 hours): Creating branches, writing commit messages, final verification

Even if we're generous and assume a very experienced developer could complete this in 8 hours of focused work, the time and cost advantages remain substantial. The economics don't depend on the precise estimate.

The Bottom Line

These numbers compare execution time and per-task marginal costs. They don't capture everything (platform costs, review time, long-term maintenance implications), but they illustrate the scale of the difference for this type of systematic refactoring work.

Why AI Was Faster

The efficiency gains weren't magic. They came from specific characteristics of how AI approaches systematic work:

No context switching fatigue. Claude maintained focus across three repositories simultaneously without the cognitive load that would exhaust a human developer. No mental overhead from jumping between files, no "where was I?" moments after a break.

Instant file operations. Reading and writing files happens without the delays of IDE loading, navigation, or search. What takes a human seconds per file took Claude milliseconds.

Pattern matching without mistakes. Updating thousands of import statements consistently, without typos, without missing edge cases. No ctrl-H mistakes, no regex errors that you catch three files later.

Parallel mental processing. Tracking multiple files at once without the working memory constraints that force humans to focus narrowly.

Documentation without overhead. Generating comprehensive, well-structured documentation in one pass. No switching to a different mindset, no "I'll document this later" debt.

Error recovery. When workspace conflicts or dependency issues appeared, Claude fixed them immediately without the frustration spiral that can derail a human's momentum.

Commit message quality. Detailed, well-structured commit messages generated instantly. No wrestling with how to summarize six hours of work into three bullet points.

What Took Longer

AI wasn't universally faster. Two areas stood out:

Initial codebase exploration. Claude spent time systematically understanding the structure before implementing. A human developer might have jumped in faster with assumptions (though possibly paying for it later with rework).

User preference clarification. Some back-and-forth on git dependencies versus crates.io, version numbering conventions. A human working alone would just make these decisions implicitly based on their experience.
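
For context, the two options we went back and forth on look like this in Cargo.toml (the names and URL are placeholders):

    [dependencies]
    # Published release from crates.io: versioned and reproducible.
    app-core = "0.3"

    # Or a git dependency: tracks a repository/branch until a release is cut.
    # app-core = { git = "https://github.com/example/app-core", branch = "main" }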

These delays were minimal compared to the overall time savings, but they're worth noting. AI coding isn't instantaneous magic. It's a different kind of work with different bottlenecks.

The Economics of Coding

Let me restate those numbers because they still feel surreal: roughly five dollars of marginal cost and three and a half hours of execution, against an estimated sixteen to twenty-four hours of solo human work.

For this type of task, these are order-of-magnitude improvements over solo human execution. And they weren't achieved through cutting corners or sacrificing immediate quality. The tests passed, the documentation was comprehensive, the commits were well-structured, the code compiled cleanly.

That said, tests passing and documentation existing are necessary but not sufficient signals of quality. Long-term maintainability, latent bugs that only surface later, or future refactoring friction are harder to measure immediately. The code is working, but it's too soon to know if there are subtle issues that will emerge over time.

This creates strange economics for a specific class of work: systematic, pattern-based refactoring with clear success criteria. For these tasks, the time and cost reductions change how we value engineering effort and prioritize maintenance work.

I used to avoid certain refactorings because the payoff didn't justify the time investment. Clean up import statements across 50 files? Update documentation after a restructure? Write comprehensive commit messages? These felt like luxuries when there was always more pressing work.

But at $5 marginal cost and 3.5 hours for this type of systematic task, suddenly they're not trade-offs anymore. They're obvious wins. The economics shift from "is this worth doing?" to "why haven't we done this yet?"

What This Doesn't Mean

Before the "AI will replace developers" crowd gets too excited, let me be clear about what this data doesn't show:

This was a perfect task for AI. Systematic, pattern-based, well-scoped, with clear success criteria. The kind of work where following existing patterns and executing consistently matters more than creative problem-solving or domain expertise.

AI did not decide what the end state should look like, weigh the business context, or supply the domain judgment behind those choices; that direction came from me.

The task was pure execution. Important execution, skilled execution, but execution nonetheless. A human developer would have brought the same capabilities to the table, just slower and at higher cost.

Where This Goes

I keep thinking about that 85-90% time reduction for this specific type of task. Not simple one-liners where AI already shines, but systematic maintenance work with high regularity, strong compiler or test feedback, and clear end states.

Tasks with similar characteristics might include large-scale import and dependency cleanups, build-system and CI migrations, documentation updates after a restructure, and other well-scoped refactorings where the compiler and test suite tell you when you're done.

Many maintenance tasks are messier: ambiguous semantics, partial test coverage, undocumented invariants, organizational constraints. The economics I observed here don't generalize to all refactoring work. But for the subset that is systematic and well-scoped, the shift is significant.

All the work that we know we should do but often defer because it doesn't feel like progress. What if the economics shifted enough for these specific tasks that deferring became the irrational choice?

I'm not suggesting AI replaces human judgment. Someone still needs to decide what "good" looks like, validate the results, understand the business context. But if the execution of systematic work becomes 10x cheaper and faster, maybe we stop treating certain categories of technical debt like unavoidable burdens and start treating them like things we can actually manage.

The Real Cost

There's one cost the analysis didn't capture: my time. I wasn't passive during those 3.5 hours. I was reading Claude's updates, reviewing file changes, answering questions, validating decisions, checking test results.

I don't know exactly how much time I spent, but it was less than the 3.5 hours Claude was working. Maybe 2 hours of active engagement? The rest was Claude working autonomously while I did other things.

So the real comparison isn't 3.5 AI hours versus 16-24 human hours. It's 2 hours of human guidance plus 3.5 hours of AI execution versus 16-24 hours of human solo work. Still a massive win, but different from pure automation.

This feels like the right model: AI as an extremely capable assistant that amplifies human direction rather than replacing human judgment. The economics work because you're multiplying effectiveness, not substituting one for the other.

Final Thoughts

Five dollars marginal cost. Three and a half hours. For systematic refactoring work that would have taken me days and cost hundreds or thousands of dollars in my time.

These numbers make me think differently about certain kinds of work. About how we prioritize technical debt in the systematic, pattern-based category. About what "too expensive to fix" really means for these specific tasks. About whether we're approaching some software maintenance decisions with outdated economic assumptions.

I'm still suspicious of broad claims that AI fundamentally changes how we work. But I'm less suspicious than I was. When the economics shift this dramatically for a meaningful class of tasks, some things that felt like pragmatic trade-offs start to look different.

The tests pass. The documentation is up to date. And I paid less than the cost of a fancy coffee drink.

Maybe the skeptics and the enthusiasts are both right. Maybe AI doesn't replace developers and maybe it does change some things meaningfully. Maybe it just makes certain kinds of systematic work cheap enough that we can finally afford to do them right.

What About Model and Pricing Changes?

One caveat worth noting: these economics depend on Claude Sonnet 4.5 at January 2026 pricing. Model pricing can change, model performance can regress or improve with updates, tool availability can shift, and organizational data governance constraints might limit what models you can use or what tasks you can delegate to them.

For individuals and small teams, this might not matter much in the short term. For larger organizations making long-term planning decisions, these factors matter. The specific numbers here are a snapshot, not a guarantee.


14 Jan 2026 12:00am GMT

13 Jan 2026

feedPlanet Mozilla

Firefox Nightly: Phasing Out the Older Version of Firefox Sidebar in 2026

Over a year ago, we introduced an updated version of the sidebar that offers easy access to multiple tools - bookmarks, history, tabs from other devices, and a selection of chatbots - all in one place. As the new version has gained popularity and we plan our future work, we have decided to retire the older version in 2026.

Old sidebar version

Updated sidebar version

We know that changes like this can be disruptive - especially when they affect established workflows you rely on every day. While use of the older version has been declining, it remains a familiar and convenient tool for many - especially long-time Firefox users who have built workflows around it.

Unfortunately, supporting two versions means dividing the time and attention of a very small team. By focusing on a single updated version, we can fix issues more quickly, incorporate feedback more efficiently, and deliver new features more consistently for everyone. For these reasons, in 2026, we will focus on improving the updated sidebar to provide many of the conveniences of the older version, then transition everyone to the updated version.

Here's what to expect:

Our goal is to make our transition plans transparent and implement suggested improvements that are feasible within the new interaction model, while preserving the speed and flexibility that long-time sidebar users value. Several implemented and planned improvements to the updated sidebar were informed by your feedback, and we expect that to continue throughout the transition:

If you'd like to share what functionality you've been missing in the new sidebar and what challenges you've experienced when you tried to adopt it, please share your thoughts in this Mozilla Connect thread or file a bug in Bugzilla's Sidebar component, so your feedback can continue shaping Firefox.

13 Jan 2026 10:57pm GMT