23 Apr 2026
Planet Mozilla
Mozilla Addons Blog: WebExtensions API Changes (Firefox 149-152)
Intro
Hey everyone, we've been working on some exciting changes, and want to share them with you.
But first, let me introduce myself. I am Christos, the new Sr. Developer Relations engineer in Add-ons, and I'm excited to write my first post on the Add-ons engineering blog.
Deprecations and changes
To start, I'm looking at a few behaviors that are changing or going away: content script execution in extension contexts, file access being decoupled from host permissions, and the automatic CSS filter applied to pageAction SVG icons.
executeScript / registerContentScript in moz-extension documents
Deprecated: Firefox 149 Removed: Firefox 152
Starting in Firefox Nightly 149 and scheduled for Firefox 152, the scripting and tabs injection APIs no longer inject into moz-extension:// documents. This change brings the API in line with broader efforts to discourage string-based code execution in extension contexts, alongside the default CSP that restricts script-src to extension URLs and the removal of remote source allowlisting in MV3 (bug 1581608).
Firefox emits a warning when this restriction is triggered, so you are aware of and can address any such usage in your extensions. This is an example of the warning message:
Content Script execution in moz-extension document has been deprecated and it has been blocked
To work around this change, you can:
- Import scripts directly in the extension page's HTML.
- Use module imports or standard <script> tags in extension documents.
- Restructure code to avoid dynamic code execution patterns. An extension can run code in its documents dynamically by registering a runtime.onMessage listener in the document's script, then sending a message to trigger execution of the required code.
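The last option can be sketched as a dispatch table. This is a minimal illustration, not code from any particular extension; the handler names are hypothetical, and in a real extension document you would wire the dispatch function up through browser.runtime.onMessage:

```javascript
// Map of named actions the document's script is willing to run.
// Handlers are ordinary functions, so no string-based eval is needed.
const handlers = {
  refreshUi: () => "ui-refreshed",
  clearCache: () => "cache-cleared",
};

// Dispatch a message of the form { action: "refreshUi" } to its handler.
function dispatch(message) {
  const handler = handlers[message.action];
  if (!handler) {
    throw new Error(`Unknown action: ${message.action}`);
  }
  return handler();
}

// In a real extension document you would register this once, e.g.:
// browser.runtime.onMessage.addListener((msg) => dispatch(msg));
```

Other parts of the extension can then trigger code in the document by sending a message naming the action, rather than passing a code string.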
File access becomes opt-in
Target: Firefox 152
Extensions requesting file://*/ or <all_urls> currently trigger the "Access your data for all websites" permission message, and when granted, can run content scripts in file:-URLs. From Firefox 152, file access in extensions requires an opt-in for all extensions, including those already installed (bug 2034168).
pageAction SVG icon CSS filter (automatic color scheme)
Removed: Firefox 152
Firefox has been automatically applying a greyscale and brightness CSS filter to pageAction (address bar button) SVG icons when a dark theme is active. This was intended to improve contrast, but it actually reduced contrast for multi-color icons and caused poor visibility for some extensions, such as Firefox Multi-Account Containers.
For icons that adapt to light and dark color schemes, you can now use an @media (prefers-color-scheme: dark) rule inside the SVG icon, or use the MV3 action manifest key and specify theme_icons.
Here is an example of how to use a `prefers-color-scheme` media query in a pageAction SVG icon to control how the icon adapts to dark mode:
manifest.json
"page_action": {
"default_icon": "icons/icon.svg"
}
icons/icon.svg
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" width="16" height="16">
<style>
:root { color: black; }
@media (prefers-color-scheme: dark) { :root { color: white; } }
</style>
<path fill="currentColor" d="M2 2h12v12H2z"/>
</svg>
Use of prefers-color-scheme media queries is also allowed in MV2 browserAction and MV3 action SVG icons as an alternative to the theme_icons manifest properties.
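For comparison, here is a sketch of the theme_icons alternative in an MV2 browserAction manifest; the file names are placeholders:

```json
"browser_action": {
  "default_icon": "icons/default.svg",
  "theme_icons": [
    {
      "light": "icons/light.svg",
      "dark": "icons/dark.svg",
      "size": 32
    }
  ]
}
```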
There are additional examples on the Mozilla Developer Network showing how to test your extension's pageAction icon with and without the implicit CSS filter.
New APIs & Capabilities
Now to the new stuff. Here, you get the ability to use popups without user activation, initial support for the new tab split view feature, and WebAuthn RP ID assertion.
openPopup without user activation (Firefox Desktop)
Available: Firefox 149 Desktop
action.openPopup() and browserAction.openPopup() no longer require a user gesture on Firefox Desktop. You can open your extension's popup programmatically, e.g., in response to a native-messaging event, an alarm, or a background-script condition.
This change is part of the ongoing cross-browser alignment work in the WebExtensions Community Group to harmonize popup behavior across engines.
Example
Before (Firefox < 149): must hang off a user gesture, e.g., a context menu click:
browser.menus.create({
id: "nudge",
title: "Open popup",
contexts: ["all"],
});
browser.menus.onClicked.addListener((info) => {
if (info.menuItemId === "nudge") {
browser.action.openPopup(); // user clicked the menu → allowed
}
});
After (Firefox ≥ 149) - same intent, no user gesture needed, fires from a timer:
browser.alarms.create("nudge", { delayInMinutes: 1 });
browser.alarms.onAlarm.addListener((alarm) => {
if (alarm.name === "nudge") {
browser.action.openPopup(); // works without a click
}
});
It's the same call with the same result, but only the trigger changes from a user-action handler to any background event.
splitViewId in the tabs API
Available: Firefox 149
Firefox 149 introduces a new read-only splitViewId property on the tabs.Tab object to expose Firefox's new split view feature (where two tabs are displayed side by side in one window). Split views are treated as one unit in the browser, and the WebExtensions APIs treat them the same way.
In Firefox 150, extensions can swap tabs within a split view. This update also fixes a confusing issue where reversing the tab order through the user interface caused the tabs.onMoved event to fire with inaccurate values. Additionally, Firefox introduces unsplitting behavior for extensions: when tabs.move() is called with split-view tabs positioned non-adjacently in the target arrangement, Firefox now removes the split view rather than keeping the tabs locked together.
Here is an example of using the new splitViewId property.
// Log whenever a tab joins or leaves a split view.
browser.tabs.onUpdated.addListener((tabId, changeInfo) => {
if (!("splitViewId" in changeInfo)) return;
if (changeInfo.splitViewId === browser.tabs.SPLIT_VIEW_ID_NONE) {
console.log(`Tab ${tabId} left its split view`);
} else {
console.log(`Tab ${tabId} joined split view ${changeInfo.splitViewId}`);
}
});
// Firefox desktop also supports a filter to limit onUpdated events:
// }, { properties: ["splitViewId"] });
Firefox 151 enables extensions to move split views in tab groups. More improvements are coming, such as the ability to create split views from extensions (bug 2016928).
WebAuthn RP ID assertion
Available: Firefox 150
Previously, web extensions couldn't use WebAuthn credentials registered on their company's website or mobile apps. When extensions tried to set a custom Relying Party ID (RP ID) in navigator.credentials.create() or navigator.credentials.get(), Firefox rejected it with "SecurityError: The operation is insecure."
With Firefox 150, extensions can now assert a WebAuthn RP ID for any domain they have host permissions for when calling navigator.credentials.create() or navigator.credentials.get(). This applies to both the publicKey.rp.id field during credential creation and the publicKey.rpId field during authentication.
A critical detail for server-side validation: When relying party servers validate credentials created by extensions, they must account for different origin formats across browsers. In Chrome, the origin follows the pattern chrome-extension://extensionid, which matches the extension's location.origin. Firefox 150 introduces a new stable origin format: moz-extension://hash, where the hash is a 64-character SHA-256 representation of the extension ID (using characters a-p to represent hex values). Importantly, this hash-based origin is the same for all users, unlike Firefox's existing UUID-based moz-extension:// URLs used for extension documents.
To extract the origin from a credential for validation:
let clientData = JSON.parse(
  new TextDecoder().decode(publicKeyCredential.response.clientDataJSON)
);
console.log(clientData.origin);
For more details, see Use Web Authn API in web extensions on MDN.
Summary
| Change | Type | Firefox Version |
|---|---|---|
| executeScript / registerContentScript in moz-extension documents | Deprecation → Removal | Deprecated 149, removed 152 |
| File access opt-in | Change | 152 |
| pageAction SVG CSS filter | Removal | 152 |
| openPopup() without user activation | New capability | 149 (Desktop only) |
| splitViewId on tabs.Tab | New API | 149 |
| WebAuthn RP ID assertion | New capability | 150 |
Need more?
You can always find detailed information about WebExtensions API and Add-ons updates in the MDN release notes, e.g., for Firefox 149 and Firefox 150.
For any help or questions navigating any changes, don't hesitate to post your topic on the Add-ons Discourse.
The post WebExtensions API Changes (Firefox 149-152) appeared first on Mozilla Add-ons Community Blog.
23 Apr 2026 9:30pm GMT
Thunderbird Blog: Mobile Progress Report – April 2026

It's been a very busy couple of months as we've reworked processes & priorities and established a roadmap for both iOS and Android. We are determining how best we can coordinate with the community, and think that our roadmap for the year has a good balance of fixes and features. Today, I want to talk about our contributors and pull requests, Notifications in the Android app, progress in the iOS app, and an overview of our roadmap for both apps this year.
Contributors & Pull Requests
We are so grateful for the support and code contributions of our many community members, whether they are building items on our roadmap, improving the user experience, or, of course, translating. As we work on our roadmap priorities, we will make time to review PRs and discuss them weekly, prioritizing those that help solve issues and bugs or align with our roadmap items. Please be patient with our pull request pipeline; when working with the community, we typically try to react very quickly.
Roadmap
For Android, we've chosen the items on our roadmap because we think these will be the highest-impact features and bring the most value to everyone. Our focus this year is to simplify and modernize the Android codebase. This means reworking some of the architecture. This will be super helpful for us to move more quickly and will reduce complex bugs. The app has an older codebase, and like many older ones, it has its challenges. We have three full-time Android engineers and several community contributors, and we hope to better position ourselves to move quickly. At a high level, Android is focusing on the rearchitecture, a better Message List experience, and Message Reader screens. We are also simplifying how users can connect to Thunder Mail as we open it up.
Notifications
One thing that is at the top of my mind right now, too, is Push Notifications, specifically changes that Google has made to background processes, which affect our Notifications. We are looking into what we can do to solve this, so know that it has become a top priority for us. I've been asked, "Why is it so hard for Thunderbird to get Push Notifications right?" and I wanted to speak to some of the challenges we have. Most apps' Notifications are triggered by their own web services, which then send Notifications through Apple or Google, who pass them to users. But email is different. In an email client, we typically don't own our own backend services; other companies do (Microsoft, Google, Hotmail, Yahoo, Proton, etc.). And they can each have their own flavor of IMAP (how we fetch emails) and no standard Push Notification implementation.
So we have a workaround: polling those providers every X minutes for new emails and triggering local notifications. But we can't hook into a native Push Notification process the way your banking app does, for example. This applies to the IMAP implementation; the JMAP implementation (think modern email protocols) has something in place we can more readily consume. Another challenge is how battery life is affected by how often we poll the providers, and we need specific permissions from Google to run this process in the background. Those permissions changed recently, which is why Notifications are having issues.
I've simplified some pieces here, but hopefully that gives you an idea of some of the complexity and tradeoffs that we are working with. With all of that said, this is very important to us, and is our users' biggest pain point. It is becoming our biggest need for a fix. I'll give an update on where that sits within the roadmap next progress report when we have explored what solutions we can provide.
iOS Progress
For the iOS roadmap, everything is moving along well. We have been wrapping up most of our IMAP & SMTP tickets, and we are moving into the Account Data pieces to manage accounts and authorizations. We will also be having a new member join us in the next couple of weeks, and are still looking for a Staff iOS engineer. This will add some speed, but we've made good progress in getting the inner pieces together - what I consider the most complex parts. As we move to more standard mobile backend pieces and more standard UI, we leave the world of unknown unknowns, and will be picking up steam.
At a high level, our iOS roadmap is to build out these screens:
- Account Setup and Drawer
- Messages: List, Reader, Compose, Search
And have these pieces in place:
- IMAP
- SMTP
- MIME
- OAuth
- Encryption
- Email Composition
And our target is still end of the year for the iOS release.
Thank You!
Again we are so grateful to you, our community, for your support, and we are excited for this next quarter as we start to see the fruits of our labors.
The post Mobile Progress Report - April 2026 appeared first on The Thunderbird Blog.
23 Apr 2026 11:00am GMT
Wil Clouser: Firefox Sync adds official PostgreSQL support
The Sync Storage team has landed official PostgreSQL support for Firefox Sync.
Historically, Sync has only officially supported Google Spanner as a storage backend, with MySQL working unofficially. That has been a pretty high barrier to entry for people self-hosting their own services.
With PostgreSQL support, we hope to make self-hosting more approachable and continue supporting people who want the agency of hosting their data on infrastructure they control.
There is updated documentation for running it with Docker, including a one-shot docker compose setup:
https://mozilla-services.github.io/syncstorage-rs/how-to/how-to-run-with-docker.html
Mozilla is publishing Docker images for the PostgreSQL build here:
https://ghcr.io/mozilla-services/syncstorage-rs/syncstorage-rs-postgres
If you've been interested in self-hosting Sync but were put off by the storage requirements, take another look. If you run into bugs or have feedback, please file issues here:
https://github.com/mozilla-services/syncstorage-rs/issues
23 Apr 2026 7:00am GMT
Jonathan Almeida: Gmail filters based on X-Phabricator-Stamps header
I want Phabricator emails to have a Gmail label so I can know which patches had me as a reviewer that then had follow-up comments from other folks.
This is useful for me when I review a patch and then I need to respond back to discussions in a more timely manner in comment threads that I've created.
It's difficult to do this today, unlike with Bugzilla Gmail filters, because there are fewer identifiers that the more simplistic Gmail filter parameters can work with.
Today I learnt that there is an X-Phabricator-Stamps header in those Phabricator emails that lets you identify yourself as a reviewer on a patch. Using that information, I wrote the Google Apps Script below to run every minute while avoiding re-processing the same email twice.
A couple of variables were added at the top, and some console.log calls are sprinkled around for my own debugging.
var REVIEWER = "jonalmeida";
var LABEL_NAME = "Phabricator/Comments";
var BODY_MATCH = "commented on this revision.";
var SENDER = "phabricator@mozilla.com";
/**
* Run once manually to install the per-minute trigger.
*/
function install() {
uninstall();
ScriptApp.newTrigger('processInbox')
.timeBased()
.everyMinutes(1)
.create();
}
/**
* Run once manually to remove the trigger.
*/
function uninstall() {
ScriptApp.getProjectTriggers().forEach(function(t) {
ScriptApp.deleteTrigger(t);
});
PropertiesService.getScriptProperties().deleteProperty('lastRun');
}
/**
* Every run, we try to avoid processing the same email twice because
* there is no API trigger to run a script on every new email received.
*/
function processInbox() {
var props = PropertiesService.getScriptProperties();
var lastRun = parseInt(props.getProperty('lastRun') || '0', 10);
var now = Math.floor(Date.now() / 1000);
// On first run, look back 2 minutes
if (lastRun === 0) {
lastRun = now - 120;
}
var label = GmailApp.getUserLabelByName(LABEL_NAME);
if (!label) {
label = GmailApp.createLabel(LABEL_NAME);
}
console.log("last run: " + lastRun);
var threads = GmailApp.search("from:" + SENDER + " after:" + lastRun);
console.log("threads to process: " + threads.length);
for (var i = 0; i < threads.length; i++) {
var thread = threads[i];
var messages = thread.getMessages();
console.log("messages to process: " + messages.length);
for (var j = 0; j < messages.length; j++) {
if (hasReviewerStamp(messages[j])) {
thread.addLabel(label);
console.log(thread.getFirstMessageSubject());
break;
}
}
}
props.setProperty('lastRun', String(now));
}
function hasReviewerStamp(message) {
var raw = message.getRawContent();
var match = raw.match(/^X-Phabricator-Stamps:\s*(.+)$/m);
if (!match) {
return false;
}
var stamps = match[1].trim().split(/\s+/);
return (stamps.indexOf("reviewer(@" + REVIEWER + ")") > -1) && raw.indexOf(BODY_MATCH) > -1;
}
/**
* For debugging - see the list of labels you can search which
* differs from what is used in the Gmail UI filter.
*/
function listAllLabels() {
console.log("All labels");
var labels = GmailApp.getUserLabels();
for (var i = 0; i < labels.length; i++) {
console.log(labels[i].getName());
}
}
23 Apr 2026 12:00am GMT
22 Apr 2026
Mozilla Data YouTube Channel: Towards a Telemetry Taxonomy
Leif Oines talks about an effort to define a more complete taxonomy for Mozilla's data.
22 Apr 2026 10:41pm GMT
This Week In Rust: This Week in Rust 648
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
Foundation
- RustConf 2026 schedule and registration are live! Early bird ticket prices are available through April 29.
Project/Tooling Updates
- axum-harness: agent-native backend architecture template for Axum - semantic-first, topology-late, multi-agent harness
- lean-decimal: 2~6X faster than `rust_decimal`
- Building Semantic Version Control in Rust
- Oxanus v1.0 - Job processing library
- flodl 0.5.2: HuggingFace, in Rust
- One Sized trait does not fit all
- tinyboot v0.4.0 Released - The API is Stable
- Slint 1.16 Released
- Danube Messaging adds Key-Shared subscriptions
- Announcing mtp-mount: pure-Rust FUSE mount for MTP devices
- wrkflw v0.8.0 - Validate and Run GitHub Actions locally.
Observations/Thoughts
- Cryptographic Right Answers: Post Quantum and Rust Edition
- Learning rust through an LLM to develop a TUI RSS reader (and what I tell my students)
- What Happens When You Build an Inode-Style Vector in Rust
- Ownership & Borrowing versus Reference Counting
- The Edge of Safe Rust
- [video] Third Online Func Prog Sweden 2026
Rust Walkthroughs
- [video] Build a Full Stack Twitter Clone web application in Rust (Axum & Leptos)
- The Impatient Programmer's Guide to Bevy and Rust: Chapter 12 - Let There Be Networking
- [video] RustCurious lesson 6: Enums and Polymorphism
Crate of the Week
This week's crate is farben, a German-named macro crate for terminal colors.
Thanks to Nik Revenco for the suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.
Let us know if you would like your feature to be tracked as a part of this list.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
- rust-cookbook - Add Asynchronous section with tokio runtime recipes (other high impact examples)
- wacp-platform - Fix test-only clippy drifts in `wacp-runtime/tests.rs` + `console-db/queries/tests.rs` (other good first issues)
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- EuroRust | 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17
- NDC Techtown | 2026-05-03 | Kongsberg, Norway | 2026-09-21 to 23.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
542 pull requests were merged in the last week
Compiler
- don't hash `DelayedLints`
- refactor `FnDecl` and `FnSig` non-type fields into a new wrapper type
- suggest removing `&` when awaiting a reference to a future
- suggest returning a reference for unsized place from a closure
Library
- abort in core
- constify `Index(Mut)`, `Deref(Mut)` for `Vec`
- core/num: implement feature `integer_cast_extras`
- `core::unicode`: replace `Cased` table with `Lt`
- libtest: use binary search for `--exact` test filtering
- move `std::io::ErrorKind` to `core::io`
Rustdoc
- fix `redundant_explicit_links` incorrectly firing (or not firing) under certain scenarios
- preserve `doc(cfg)` on locally re-exported type aliases
Clippy
- add MSRV check for `manual_noop_waker`
- add `useless_borrows_in_formatting` lint
- do not propose to refactor when no variant constructor is used
- do not trigger `let_and_return` on `let else`
- extend `byte_char_slices` to cover arrays
- extend `zst_offset` lint to detect `NonNull<T>` offset calculations
- fix a case where `collapsible_match` suggested a transformation that changes runtime behavior
- fix `cloned_ref_to_slice_refs` false negative on `to_owned()`
- fix `expect_fun_call` suggests wrongly for string slicing
- fix `for_kv_map` false negative when using `iter` and `iter_mut`
- parenthesize `AssocOp::Cast` in suggestion when replacement operator is `<` to avoid parse error
- `useless_conversion`: do not lint `(a..b).into_iter()` (for edition migration)
Rust-Analyzer
- completion: reduce relevance for deprecated items
- remove duplicate lints
- allow crate authors to declare that their trait prefers to be imported `as _`
- do not complete unstable items that use an internal feature
- exclude refs (find all refs) from deps and stdlib
- support extract variable in macro call
- add parentheses on record expr for `replace_let_with_if_let`
- adjust name of `extract_type_alias`
- allow ambiguity in assoc type shorthand if they resolve to the same assoc type, between supertraits this time
- port call expr type checking and closure upvar inference from rustc
- respect `#[deprecated]` attr when deciding if a `ModuleDef` completion is deprecated
- some fixes for `upvars_mentioned()`
- use `ProofTreeVisitor` for unsized coercion
- parse `type const` items
- perf: do not check solver's cache validity on every access
- sync function call args check fudging with rustc
Rust Compiler Performance Triage
This week was a bit all over the place, but the largest regressions were either already fixed or they are being investigated. There were also a couple of nice perf. wins.
Triage done by @Kobzol. Revision range: dab8d9d1..9ab01ae5
Summary:
| (instructions:u) | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.7% | [0.2%, 4.6%] | 39 |
| Regressions ❌ (secondary) | 0.6% | [0.2%, 1.4%] | 31 |
| Improvements ✅ (primary) | -0.6% | [-4.8%, -0.1%] | 70 |
| Improvements ✅ (secondary) | -0.7% | [-4.1%, -0.0%] | 93 |
| All ❌✅ (primary) | -0.1% | [-4.8%, 4.6%] | 109 |
3 Regressions, 4 Improvements, 6 Mixed; 4 of them in rollups
41 artifact comparisons made in total
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- Error on invalid macho section specifier
- Allow trailing `self` in more contexts
- Add FCW to disallow `$crate` in macro matcher
- Lint unused pub items in binary crates
- const-stabilize `char::is_control()`
No Items entered Final Comment Period this week for Language Reference, Language Team, Leadership Council, Rust RFCs or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
- Add contribution policy for AI-generated work
- Bounded Trait Casting
- Support heterogeneous try blocks (`try_blocks_heterogeneous`) RFC
Upcoming Events
Rusty Events between 2026-04-22 - 2026-05-20 🦀
Virtual
- 2026-04-22 | Virtual (Girona, ES) | Rust Girona
- 2026-04-23 | Virtual (Amsterdam, NL) | Bevy Game Development
- 2026-04-23 | Virtual (Berlin, DE) | Rust Berlin
- 2026-04-24 | Virtual (Nairobi, KE) | RustaceansKenya
- 2026-04-28 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-04-28 | Virtual (London, UK) | Women in Rust
- 2026-04-28 | Virtual (Tel Aviv-yafo, IL) | Code Mavens 🦀 - 🐍 - 🐪
- 2026-04-29 | Virtual (Girona, ES) | Rust Girona
- 2026-05-01 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2026-05-02 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2026-05-03 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-05-06 | Virtual (Cardiff, GB) | Rust and C++ Cardiff
- 2026-05-06 | Virtual (Girona, ES) | Rust Girona
- 2026-05-06 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2026-05-07 | Virtual (Berlin, DE) | Rust Berlin
- 2026-05-07 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2026-05-12 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-05-12 | Virtual (London, UK) | Women in Rust
- 2026-05-13 | Virtual (Girona, ES) | Rust Girona
- 2026-05-17 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-05-19 | Virtual (Washington, DC, US) | Rust DC
- 2026-05-20 | Virtual (Girona, ES) | Rust Girona
- 2026-05-20 | Virtual (Vancouver, BC, CA) | Vancouver Rust
Asia
- 2026-05-13 | Malaysia, MY | Rust Meetup Malaysia
Europe
- 2026-04-23 | Aarhus, DK | Rust Aarhus
- 2026-04-23 | Paris, FR | Rust Paris
- 2026-04-24 - 2026-04-26 | Augsburg, DE | Rust Meetup Augsburg
- 2026-04-25 | Stockholm, SE | Stockholm Rust
- 2026-04-29 | Copenhagen, DK | Copenhagen Rust Community
- 2026-04-29 | Paris, FR | Paris Rustaceans
- 2026-04-30 | Berlin, DE | Rust Berlin
- 2026-04-30 | Manchester, GB | Rust Manchester
- 2026-05-02 | Augsburg, DE | Rust Munich and Rust Augsburg
- 2026-05-04 | Amsterdam, NH, NL | Rust Developers Amsterdam Group
- 2026-05-04 | Frankfurt, DE | Rust Rhein-Main
- 2026-05-05 | Olomouc, CZ | Rust Moravia
- 2026-05-07 | Edinburgh, GB | Rust and Friends
- 2026-05-13 | Girona, ES | Rust Girona
- 2026-05-14 | Switzerland, CH | PostTenebrasLab
- 2026-05-18 | Milano, MI, IT | Rust Language Milan
- 2026-05-19 | Aarhus, DK | Rust Aarhus
- 2026-05-19 | Amsterdam, NL | RustNL
- 2026-05-19 | Leipzig, SN, DE | Rust - Modern Systems Programming in Leipzig
- 2026-05-19 | London, UK | Women in Rust
North America
- 2026-04-20 - 2026-04-22 | Portland, OR | Tokio
- 2026-04-22 | Austin, TX, US | Rust ATX
- 2026-04-22 | New York, NY, US | Rust NYC
- 2026-04-22 | Portland, OR | Apache DataFusion Meetup
- 2026-04-23 | Los Angeles, CA, US | Rust Los Angeles
- 2026-04-25 | Boston, MA, US | Boston Rust Meetup
- 2026-04-28 | New York, NY, US | Rust NYC
- 2026-04-30 | Atlanta, GA, US | Rust Atlanta
- 2026-05-07 | Saint Louis, MO, US | STL Rust
- 2026-05-14 | Portland, OR, US | PDXRust
- 2026-05-14 | San Diego, CA, US | San Diego Rust
- 2026-05-19 | San Francisco, CA, US | San Francisco Rust Study Group
- 2026-05-20 | San Francisco, CA, US | Bay Area Rust Meetup
Oceania
- 2026-05-14 | Melbourne, AU | Rust Melbourne
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
in Rust we pay the price of composition up-front
Thanks to Nadieril for the self-suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
22 Apr 2026 4:00am GMT
Mozilla Performance Blog: Telemetry Alerting: How It Works
We recently released the telemetry alerting beta and announced it in a blog post here! This post dives into the details of how it works across Treeherder and MozDetect. At a high level, MozDetect handles change point detection for telemetry probes, and Treeherder stores the detections and produces the emails and bugs for them.
MozDetect
All of the existing, and any future, change point detection techniques used for telemetry alerting are built in MozDetect. Having these live outside of Treeherder gives a low barrier to entry for adding new features and testing existing ones without having to set up everything needed for alerting in Treeherder. It's built as a Python module that is run through uv, which makes it very easy for anyone to run the code thanks to uv's excellent Python version and dependency management. How to work with the code in this repository is outlined here, along with how to add your own techniques to it (note that access to mozdata through gcloud is required for this).
Detectors are split into two parts: (i) a detector that performs a comparison between two groups, and (ii) a detector that performs detection on a time series (using the detector from (i)). Our default detection technique, called cdf_squared, lives here. The timeseries_detector_name is the name that will be used to access the detector from the telemetry probe side through the change_detection_technique field. The only method that absolutely needs to be implemented by these is the detect_changes method, and it must return a list of Detection objects. These detection objects contain all the necessary information for producing an alert. There is also an optional_detection_info field that can contain additional things like attachments that would be added to Bugzilla bugs, and additional_data that can hold JSON data for storage in the DB. The cumulative distribution function (CDF) squared technique uses these to store the CDF before and after the detection along with a graph of these as an attachment for the Bugzilla bug.
Example of a CDF graph that is provided in bugs.
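As a concrete illustration of this contract, here is a toy detector in the same shape: a `Detection` dataclass plus a class exposing `timeseries_detector_name` and `detect_changes`. The names are taken from the description above, but everything else is an invented sketch, not MozDetect's actual code.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for MozDetect's Detection object: the location of
# the change plus the optional extras described above.
@dataclass
class Detection:
    location: int                   # index in the series where the change was found
    direction: str                  # e.g. "up" or "down"
    optional_detection_info: dict = field(default_factory=dict)
    additional_data: dict = field(default_factory=dict)

class ThresholdDetector:
    """Toy time-series detector: flags any day-over-day jump above a threshold."""

    # Assumed registration hook: the name a probe would reference through
    # its change_detection_technique field.
    timeseries_detector_name = "toy_threshold"

    def __init__(self, threshold: float):
        self.threshold = threshold

    def detect_changes(self, series: list[float]) -> list[Detection]:
        detections = []
        for i in range(1, len(series)):
            delta = series[i] - series[i - 1]
            if abs(delta) > self.threshold:
                detections.append(Detection(
                    location=i,
                    direction="up" if delta > 0 else "down",
                    additional_data={"delta": delta},
                ))
        return detections
```

A real detector would wrap the two-group comparison from (i), but the required contract is the same: `detect_changes` returns a list of `Detection` objects.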
CDF Squared Detection Technique
The CDF squared technique detects changes in time-series histogram data by comparing CDFs between consecutive windows. It takes two CDFs, each representing the distribution of measurements over a time window, and computes the sum of squared differences between the two CDFs at each bin. The sign of the summed linear difference is then used to assign a direction to the squared difference score so that the output encodes whether the distribution moved to higher values (right shift) or lower values (left shift).
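The base comparison can be sketched in a few lines of Python. This is a minimal illustration of the signed squared-difference idea described above, not the actual `cdf_squared` implementation:

```python
import math

def cdf_squared_score(cdf_before: list[float], cdf_after: list[float]) -> float:
    """Signed squared-difference score between two CDFs over the same bins.

    The magnitude is the sum of squared per-bin differences; the sign is
    taken from the summed linear difference, encoding shift direction.
    """
    diffs = [a - b for b, a in zip(cdf_before, cdf_after)]
    linear = sum(diffs)
    squared = sum(d * d for d in diffs)
    # A distribution that moves toward higher values pushes its CDF down
    # at every bin, so the sign of `linear` distinguishes left/right shifts.
    return math.copysign(squared, linear) if linear != 0 else 0.0
```

For example, comparing `[0.2, 0.5, 0.8, 1.0]` against a right-shifted `[0.1, 0.3, 0.6, 1.0]` yields a negative score under this sign convention, while identical CDFs score zero.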
For time-series detection, this base comparison is applied in a rolling fashion across the full history of data. Each day's 7-day smoothed CDF is compared against the next one, producing a continuous signal of squared CDF differences over time. A Butterworth low-pass filter is then applied to that signal to remove high-frequency noise while preserving genuine trend changes. Finally, scipy's find_peaks function is used to locate statistically significant peaks and valleys in the filtered signal using a dynamic alert threshold based on the historical data. Information is extracted from those areas and then used to build the detection information needed for the alert generation process.
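Leaving out the 7-day smoothing and the Butterworth filter, the rolling stage might look like the sketch below. The median-based dynamic threshold here is purely an illustrative assumption; the real pipeline applies scipy's `find_peaks` to the filtered signal instead.

```python
def rolling_cdf_scores(daily_cdfs):
    """Score each consecutive pair of (already smoothed) daily CDFs."""
    scores = []
    for before, after in zip(daily_cdfs, daily_cdfs[1:]):
        diffs = [a - b for b, a in zip(before, after)]
        linear = sum(diffs)
        squared = sum(d * d for d in diffs)
        scores.append(squared if linear >= 0 else -squared)
    return scores

def flag_changes(scores, multiplier=3.0):
    """Flag indices whose |score| exceeds a data-driven threshold.

    Illustrative only: a multiple of the median absolute score stands in
    for the real pipeline's filtering and peak finding.
    """
    magnitudes = sorted(abs(s) for s in scores)
    median = magnitudes[len(magnitudes) // 2]
    threshold = multiplier * max(median, 1e-12)
    return [i for i, s in enumerate(scores) if abs(s) > threshold]
```

Feeding in a series of stable CDFs followed by shifted ones produces a single spike in the score signal at the day of the shift, which is exactly what the peak finding step then turns into a detection.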
Alerting
Our alerting tooling lives in the Treeherder codebase. It's run through our PerfSheriff Bot (called Sherlock) and runs once per day. When a detection is produced from MozDetect, a telemetry alert is added to the database and then the TelemetryAlertManager is called to handle it. The manager's tasks are split into 6 ordered phases:
- Update alerts with changes from Bugzilla. This step ensures that any changes that happen in the bugs filed are mirrored into our database. Currently, we only track resolution changes here.
- Comment on existing bugs. This step is for updating existing bugs with information from new alerts. This step is not currently being used. In the future, this could be used to inform probe owners that a probe which doesn't produce bugs has produced an alert in the same time range.
- File new bugs for alerts. This step handles filing bugs for any new alerts on probes set up for producing bugs.
- Modify existing bugs with new alerts. This step handles any modifications needed to existing bugs based on the new bugs that were created. Currently, the "See Also" field is modified for existing bugs to include the new bugs.
- Produce emails for new alerts. This step handles producing emails for any alerts set up to produce emails.
- Housekeeping. This step retries any failures from the steps above, whether from the current run or past runs. Currently, it's used to retry bug modifications and email sending when we encounter a failure there. It excludes retrying bug filing, since in that case we delete the alert and retry it the next time the alert is generated.
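The ordered phases above can be pictured as a fixed daily pipeline. The method names in this sketch are invented stand-ins for the descriptions above, not the real TelemetryAlertManager API:

```python
class TelemetryAlertManagerSketch:
    """Runs the six alert-handling phases in a fixed order, once per day."""

    def __init__(self):
        self.completed = []

    # Each phase is a stub that records its own name; the real manager
    # talks to Bugzilla, the database, and the email service instead.
    def update_alerts_from_bugzilla(self): self.completed.append("update")
    def comment_on_existing_bugs(self):    self.completed.append("comment")
    def file_new_bugs(self):               self.completed.append("file")
    def modify_existing_bugs(self):        self.completed.append("modify")
    def produce_emails(self):              self.completed.append("email")
    def housekeeping(self):                self.completed.append("housekeeping")

    def run_daily(self):
        # Order matters: housekeeping runs last so it can retry failures
        # from the earlier phases of this run and of past runs.
        phases = [
            self.update_alerts_from_bugzilla,
            self.comment_on_existing_bugs,
            self.file_new_bugs,
            self.modify_existing_bugs,
            self.produce_emails,
            self.housekeeping,
        ]
        for phase in phases:
            phase()
        return self.completed
```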
After the housekeeping step, the manager is done for the day and runs again on the next day to handle any updates and new alerts. Contrary to how alerting works for performance tests in CI, this process is fully automated and requires no human input at any point.
Setting up telemetry probes for alerting happens on the mozilla-central side in the probe schema, using the new monitor field in the metadata section (example for email alerts, example for bug alerts). The telemetry alerting documentation has information about how to do this. We then use an index.json file from the telemetry dictionary to gather all the probes that should be alerting. The information there is supplemented later in the pipeline with more granular details, such as the time unit used for the probe, so that we can better format the Bugzilla bug table.
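The gathering step amounts to filtering the index for probes that opt in via `monitor`. The entry shape below is a guess for illustration; the telemetry dictionary's real index.json schema may differ:

```python
def probes_to_monitor(index: dict) -> dict:
    """Collect probes whose metadata opts into monitoring.

    In this sketch, `monitor: True` is treated as email-only alerting,
    while a mapping with `alert: True` requests Bugzilla bugs, mirroring
    the two setups described in the announcement post.
    """
    selected = {}
    for name, meta in index.items():
        monitor = meta.get("metadata", {}).get("monitor")
        if monitor is True:
            selected[name] = "email"
        elif isinstance(monitor, dict) and monitor.get("alert"):
            selected[name] = "bug"
    return selected
```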
Once a telemetry probe is set up for alerting and is found by our system, the owners (those listed in the email notification fields) will begin either receiving emails or having bugs produced for them. These alerts can also be viewed by everyone on this dashboard.
Example of an alert being viewed in the dashboard.
Acknowledgements
Getting the project to this point involved work from people across multiple teams here at Mozilla. Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder-related changes.
If you hit any issues with the telemetry alerting system, or have any suggestions, feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or #perftest on Matrix.
22 Apr 2026 12:40am GMT
21 Apr 2026
Planet Mozilla
Mozilla Data YouTube Channel: Data Incident Process
Mike Droettboom talks about Data @ Mozilla's process for handling incidents.
21 Apr 2026 11:46pm GMT
Mozilla Performance Blog: Telemetry Alerting Beta Announcement
We're happy to announce that the Telemetry Alerting beta is now open to everyone!
Monitoring for changes in telemetry probes that you own can be difficult to do on a regular and continuous basis. With telemetry alerting, that changes today! You can now quickly set up your timing distribution probes for automated monitoring on Windows with notifications through email or a Bugzilla bug.
To get started, if you only need email alerts, simply add monitor: True to the metadata section of your probe (example).
Example of an email alert.
If you would prefer to receive Bugzilla bugs when a change is detected, set the monitor field like so (example):
monitor:
  alert: True
  lower_is_better: True/False # Optional
  bugzilla_notification_emails:
    - <YOUR-BUGZILLA-EMAIL-HERE>
Example of an alert bug.
More information about telemetry alerting and how to set up a probe can be found here in the documentation. There's also a dashboard that can show you all of the existing telemetry alerts along with some detection information. For now, we only support change detection on Windows for `timing_distribution` probes (see here for other desktop platforms and Android).
Please note that this is an open beta and we are actively looking for feedback on this system. If you hit any issues, or have any suggestions, feel free to file a bug in the Testing :: Performance component or reach out to us in either #perf-help on Slack or #perftest on Matrix.
Special thanks to Eduardo Filho for his support on the telemetry probe side, to Bas Schouten for his guidance and work on the CDF Squared detection technique, and to Andrej Glavic and Beatrice Acasandrei for their help in reviewing the Treeherder changes.
For a more detailed look at how this works, see this blog post.
21 Apr 2026 7:58pm GMT
The Mozilla Blog: What’s new in Firefox mobile: Less clutter, more control and a free built-in VPN

Mobile browsing hasn't kept up with how people actually use their phones.
Right now, even basic tasks can feel harder than they should. Finding what you need can mean scrolling through ads and filler content, keeping track of too many tabs, or thinking twice about how private your connection is.
A mobile browser should do more - and we're raising the bar. Firefox is rolling out a set of updates that build on our most popular desktop features and adapt them for how you browse on the go. Here's what's out now, and what's coming next.
Get the key points with Shake to Summarize

When you're following a recipe, reading a product review, or deciding whether a long article is worth your time, getting to the useful part can take longer than it should.
With Shake to Summarize, you can shake or tap your phone to generate a quick summary of the page. It's currently available for iOS users in English, and we're expanding availability to all iOS users in German, French, Spanish, Portuguese, Italian and Japanese starting with Firefox 150 on April 21. We'll also soon be making Shake to Summarize available to Android users in English, so they too can get to the key points of any article in seconds.
Take control of how AI shows up
AI features are becoming a more common part of browsers - but not everyone wants the same experience. Firefox gives you a say in how they're used. With AI Controls, you can turn AI features off entirely, enable only the ones you want, or adjust things over time. Rolling out on Android and iOS beginning May 21.
Stay protected with a free, built-in VPN
Firefox's free built-in VPN covers up to 50 gigabytes of your browsing in Firefox each month, across desktop and mobile devices. It adds a layer of protection to your browsing activity by masking your IP address - especially useful when you're on public Wi-Fi. Unlike many "free VPNs" that rely on ads or selling user data to generate revenue, Firefox is built with a different model: no selling your browsing data, no injecting ads into your traffic. Instead, we offer a limited amount of browser-level protection for free, alongside Mozilla VPN, our paid, unlimited, full-device VPN service. Rolling out on Android soon.
Keep your tabs organized with Tab Groups
Tab Groups have been among the most-requested mobile features from our Mozilla community, and they're coming on mobile soon. You'll be able to group related tabs to stay organized, whether you're comparing restaurants, planning a trip or saving articles to read later.
We're also building toward smart groupings, where Firefox can automatically suggest tab groups for you. Rolling out on Android soon.
More updates, built around how you browse on mobile
Your phone comes with a browser. That doesn't mean it has to stay your default.
"Firefox exists to give people a better way to experience the web, and that has to be just as true on mobile as it is on desktop," said Ajit Varma, head of Firefox. "For many people, their phone is their primary way of getting online, and they deserve a browser that's fast, intuitive and built around their needs. That's why we're investing in mobile more than ever before. We're building for the millions of people who choose Firefox every day, and giving even more people a reason to do the same."
Firefox is building a mobile experience designed around how people browse - with tools that help you move faster, stay organized and stay in control.
These updates begin rolling out in April with more on the way.

Take Firefox with you
Download Firefox mobile
The post What's new in Firefox mobile: Less clutter, more control and a free built-in VPN appeared first on The Mozilla Blog.
21 Apr 2026 7:36pm GMT
The Mozilla Blog: The zero-days are numbered

Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.
As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week's release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.
As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up.
Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn't finished, but we've turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.
Until now, the industry has largely fought security to a draw. Vendors of critical internet-exposed software like Firefox take security extremely seriously and have teams of people who get out of bed every morning thinking about how to keep users safe. Nevertheless, we've all long quietly acknowledged that bringing exploits to zero was an unrealistic goal. Instead, we aimed to make them so expensive that only actors with functionally unlimited budgets can afford them, and that the cost of burning such an expensive asset disincentivizes those actors against casual use.
This is because security to date has been offensively-dominant: the attack surface isn't infinite, but it's large enough to be difficult to defend comprehensively with the tools we've had available. This gives attackers an asymmetric advantage, since they only need to find one chink in the armor.
We use defense-in-depth to apply multiple layers of overlapping defenses, but no layer is bulletproof. Firefox runs each website in a separate process sandbox, but attackers try to combine bugs in the rendering code with bugs in the sandbox to escape to a more privileged context. We've led the industry in building and adopting Rust, but we still can't afford to stop everything to rewrite decades of C++ code, especially since Rust only mitigates certain (very common) classes of vulnerabilities.
We pair defense-in-depth engineering with an internal red team tasked with staying on the leading edge of automated analysis techniques. Until recently, these have largely been dynamic analysis techniques like fuzzing. Fuzzing is quite fruitful in practice, but some parts of the code are harder to fuzz than others, leading to uneven coverage.
Elite security researchers find bugs that fuzzers can't largely by reasoning through the source code. This is effective, but time-consuming and bottlenecked on scarce human expertise. Computers were completely incapable of doing this a few months ago, and now they excel at it. We have many years of experience picking apart the work of the world's best security researchers, and Mythos Preview is every bit as capable. So far we've found no category or complexity of vulnerability that humans can find that this model can't.
This can feel terrifying in the immediate term, but it's ultimately great news for defenders. A gap between machine-discoverable and human-discoverable bugs favors the attacker, who can concentrate many months of costly human effort to find a single bug. Closing this gap erodes the attacker's long-term advantage by making all discoveries cheap.
Encouragingly, we also haven't seen any bugs that couldn't have been found by an elite human researcher. Some commentators predict that future AI models will unearth entirely new forms of vulnerabilities that defy our current comprehension, but we don't think so. Software like Firefox is designed in a modular way for humans to be able to reason about its correctness. It is complex, but not arbitrarily complex1.
The defects are finite, and we are entering a world where we can finally find them all.
1 There's a risk that codebases begin to surpass human comprehension as a result of more AI in the development process, scaling bug complexity along with (or perhaps faster than) discovery capability. Human-comprehensibility is an essential property to maintain, especially in critical software like browsers and operating systems.
The post The zero-days are numbered appeared first on The Mozilla Blog.
21 Apr 2026 6:29pm GMT
Niko Matsakis: Symposium: community-oriented agentic development
I'm very excited to announce the first release of the Symposium project as well as its inclusion in the Rust Foundation's Innovation Lab. Symposium's goal is to let everyone in the Rust community participate in making agentic development better. The core idea is that crate authors should be able to vend skills, MCP servers, and other extensions, in addition to code. The Symposium tool then installs those extensions automatically based on your dependencies. After all, who knows how to use a crate better than the people who maintain it?
If you want to read more details about how Symposium works, I refer you to the announcement post from Jack Huey on the main Symposium blog. This post is my companion post, and it is focused on something more personal - the reasons that I am working on Symposium.
I believe in extensibility everywhere
The short version is that I believe in extensibility everywhere. Right now, the Rust language does a decent job of being extensible: you can write Rust crates that offer new capabilities that feel built-in, thanks to proc-macros, traits, and ownership. But we're just getting started at offering extensibility in other tools, and I want us to hurry up!
I want crate authors to be able to supply custom diagnostics. I want them to be able to supply custom lints. I want them to be able to supply custom optimizations. I want them to be able to supply custom IDE refactorings. And, as soon as I started messing around with agentic development, I wanted extensibility there too.
Symposium puts crate authors in charge
The goal of Symposium is to give crate authors, and the broader Rust community, the ability to directly influence the experience of people writing Rust code with agents. Rust is a really popular target language for agents because the type system provides strong guardrails and it generates efficient code - and I predict it's only going to become more popular.
Despite Rust's popularity as an agentic coding target, the Rust community right now are basically bystanders when it comes to the experience of people writing Rust with agents; I want us to have a means of influencing it directly.
Enter Symposium. With Symposium, crate authors can package up skills and other extensions, and Symposium will automatically make them available to your agent. Symposium also takes care of bridging the small-but-very-real gaps between agents (e.g., each has its own hook format, and some use .agents/skills while others use .claude/skills, etc.).
Example: the assert-struct crate
Let me give you an example. Consider the assert-struct crate, recently created by Carl Lerche. assert-struct lets you write convenient assertions that test the values of specific struct fields:
assert_struct!(val, _ {
items: [1, 2, ..],
tags: #("a", "b", ..),
..
});
The problem: agents don't know about it
This crate is neat, but of course, no models are going to know how to use it - it's not part of their training set. They can figure it out by reading the docs, but that's going to burn more tokens (expensive, slow, consumes carbon), so that's not a great idea.
You could teach the agent how to use it…
In practice what people do today is to add skills to their project - for example, in his toasty crate, Carl has a testing skill that also shows how to use assert-struct. But it seems silly for everybody who uses the crate to repeat that content.
…but wouldn't it be better if the crate could teach the agent itself?
With Symposium, teaching your agent how to use your dependencies should not be necessary. Instead, your crates can publish their own skills or other extensions.
The way this works is that the assert-struct crate defines the skill once, centrally, in its own repository1. Then there is a separate file in Symposium's central recommendations repository with a pointer to the assert-struct repository. Any time the assert-struct repository updates that skill, the updates are automatically synchronized for you. Neat! (You can also embed skills directly in the recommendations repository, but then updating them requires a PR to that repo.)
Frequently asked questions
How do I add support for my crate to Symposium?
It's easy! Check out the docs here:
https://symposium.dev/crate-authors/supporting-your-crate.html
What kind of extensions does Symposium support?
Skills, hooks, and MCP Servers, for now.
Why does Symposium have a centralized repository?
Currently we allow skill content to be defined in a decentralized fashion, but we require that a plugin be added to our central recommendations repository. This is a temporary limitation. We eventually expect to allow crate authors to add skills and plugins in a fully decentralized fashion.
We chose to limit ourselves to a centralized repository early on for three reasons:
- Even when decentralized support exists, a centralized repository will be useful, since there will always be crates that choose not to provide that support.
- Having a central list of plugins will make it easy to update people as we evolve Symposium.
- Having a centralized repository will help protect against malicious skills while we look for other mechanisms, since we can vet the crates that are added and easily scan their content.
What if I want to add skills for crates private to my company? I don't want to put those in the central repository!
No problem, you can add a custom plugin source.
Are you aware of the negative externalities of LLMs?
I am, very much so. I feel like a lot of the uses of LLMs we see today are not great: chat bots hijack conversational and social cues to earn trust that they don't deserve, and reconfirm people's biases instead of challenging their ideas. And I'm worried about the environmental cost of data centers and the way companies have retreated from their climate goals. And I don't like how centralized models concentrate economic power.2 So yeah, I see all that. And I also see how LLMs enable people to build things that they couldn't build before and help make previously intractable problems soluble - and that includes more and more people who never thought of themselves as programmers3. My goal with Symposium and other projects is to be part of the solution, finding ways to leverage LLMs that are net positive: opening doors, not closing them.
Extensibility: because everybody has something to offer
Fundamentally, the reason I am working on Symposium is that I believe everybody has something unique to offer. I see the appeal of strongly opinionated systems that reflect the brilliant vision of a particular person. But to me, the most beautiful systems are the ones that everybody gets to build together4. This is why I love open source. This is why I love emacs5. It's why I love VSCode's extension system, which has so many great gems6.
To me, Symposium is a double win in terms of empowerment. First, it makes agents extensible, which is going to give crate authors more power to support their crates. But it also helps make agentic programming better, which I believe will ultimately open up programming to a lot more people. And that is what it's all about.
-
Actually as of this posting, the assert-struct skill is embedded directly in the recommendations repo. But I opened a PR to put it on assert-struct and I'll port it over once it lands. ↩︎
-
I'm very curious to do more with open models. ↩︎
-
Within Amazon, it's been amazing to watch how many people who never thought of themselves as software developers are starting to build software. Considering the challenges the software industry has with representation, I find this very encouraging. Diverse teams are stronger, better teams! ↩︎
-
None of this is to say I don't believe in good defaults; there's a reason I use Zed and VSCode these days, and not emacs, much as I love it in concept. ↩︎
-
OMG. One of my college friends wrote this amazing essay some time back on emacs. Next time you're doomscrolling on the toilet or whatever, pop over to this essay instead. Fair warning: it's long, so it'll take you a while to read, but I think it nails what people love about emacs. ↩︎
-
These days I'm really enjoying Zed, but I have to say, I really miss kahole/edamagit! Which of course is inspired by the magit emacs package. ↩︎
21 Apr 2026 4:24pm GMT
Firefox Developer Experience: Firefox WebDriver Newsletter 150
WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we've done as part of the Firefox 150 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.
In Firefox 150, Khalid AlHaddad contributed several improvements:
- Added a new test to check that viewport dimensions are correct immediately after `browsingContext.create` resolves.
- And more test improvements:
  - Asynchronous tests now consistently use pytest asyncio markers.
  - Introduced a new fixture to install WebExtensions and automatically uninstall them at the end of the test.
  - Updated the helper for waiting on BiDi events to use a timeout multiplier, and migrated it to a fixture.
WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help getting started!
General
- Fixed an issue where pending downloads could block browser shutdown due to a confirmation prompt. The prompt is now dismissed automatically.
WebDriver BiDi
- Added the `emulation.setNetworkConditions` command, which supports the `type: offline` condition at the moment. Using this, you can emulate offline mode either on specific browsing contexts, on user contexts (a.k.a. containers) or globally.
- Improved handling of non-UTF-8 header values across `network` module commands and events. These are now correctly serialized as `BytesValue`.
- Fixed an issue where download events triggered by responses with a "Content-Disposition" header were missing the `navigation` property when initiated from a link with `target="_blank"`.
- Updated the `log.entryAdded` event so it is only emitted for console API calls that produce a visible output in developer tools (see also the console specification: using the printer). Calls such as `console.clear` or `console.time` no longer trigger an event.
- Fixed a race condition in `browsingContext.setViewport` which could cause timeouts when multiple contexts were created in parallel.
- Improved `browsingContext.locateNodes` to allow retrieval of the HTML element (`documentElement`) of a page when using the `css` locator.
Marionette
- Fixed the `WebDriver:getShadowRoot` command to no longer return user-agent shadow roots.
21 Apr 2026 2:01pm GMT
16 Apr 2026
Planet Mozilla
The Rust Programming Language Blog: Announcing Rust 1.95.0
The Rust team is happy to announce a new version of Rust, 1.95.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.95.0 with:
$ rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.95.0.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
What's in 1.95.0 stable
cfg_select!
Rust 1.95 introduces a cfg_select! macro that acts roughly like a compile-time match on cfgs. This fulfills the same purpose as the popular cfg-if crate, although with a different syntax. cfg_select! expands to the right-hand side of the first arm whose configuration predicate evaluates to true. Some examples:
cfg_select! {
    unix => {
        fn log_target() { println!("running on unix"); }
    }
    target_pointer_width = "32" => {
        fn log_target() { println!("running on a 32-bit target"); }
    }
    _ => {
        fn log_target() { println!("running somewhere else"); }
    }
}

let is_windows_str = cfg_select! {
    windows => { "on Windows" }
    _ => { "not on Windows" }
};
if-let guards in matches
Rust 1.88 stabilized let chains. Rust 1.95 brings that capability into match expressions, allowing for conditionals based on pattern matching.
match value {
    Some(text) if let Ok(number) = text.parse::<i32>() => {
        println!("parsed number: {number}");
    }
    Some(_) | None => {
        println!("no usable number");
    }
}
Note that the compiler will not currently consider the patterns matched in if let guards as part of the exhaustiveness evaluation of the overall match, just like if guards.
Stabilized APIs
- MaybeUninit<[T; N]>: From<[MaybeUninit<T>; N]>
- MaybeUninit<[T; N]>: AsRef<[MaybeUninit<T>; N]>
- MaybeUninit<[T; N]>: AsRef<[MaybeUninit<T>]>
- MaybeUninit<[T; N]>: AsMut<[MaybeUninit<T>; N]>
- MaybeUninit<[T; N]>: AsMut<[MaybeUninit<T>]>
- [MaybeUninit<T>; N]: From<MaybeUninit<[T; N]>>
- Cell<[T; N]>: AsRef<[Cell<T>; N]>
- Cell<[T; N]>: AsRef<[Cell<T>]>
- Cell<[T]>: AsRef<[Cell<T>]>
- bool: TryFrom<{integer}>
- AtomicPtr::update
- AtomicPtr::try_update
- AtomicBool::update
- AtomicBool::try_update
- AtomicIn::update
- AtomicIn::try_update
- AtomicUn::update
- AtomicUn::try_update
- cfg_select!
- mod core::range
- core::range::RangeInclusive
- core::range::RangeInclusiveIter
- core::hint::cold_path
- <*const T>::as_ref_unchecked
- <*mut T>::as_ref_unchecked
- <*mut T>::as_mut_unchecked
- Vec::push_mut
- Vec::insert_mut
- VecDeque::push_front_mut
- VecDeque::push_back_mut
- VecDeque::insert_mut
- LinkedList::push_front_mut
- LinkedList::push_back_mut
- Layout::dangling_ptr
- Layout::repeat
- Layout::repeat_packed
- Layout::extend_packed
These previously stable APIs are now stable in const contexts:
Destabilized JSON target specs
Rust 1.95 removes support on stable for passing a custom target specification to rustc. This should not affect any Rust users using a fully stable toolchain, as building the standard library (including just core) already required using nightly-only features.
We're also gathering use cases for custom targets on the tracking issue as we consider whether some form of this feature should eventually be stabilized.
Other changes
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.95.0
Many people came together to create Rust 1.95.0. We couldn't have done it without all of you. Thanks!
16 Apr 2026 12:00am GMT
15 Apr 2026
Planet Mozilla
Mozilla Localization (L10N): Localizer Spotlight: Baurzhan
About you
My name is Baurzhan Muftakhidinov. I'm from Kazakhstan. I speak Kazakh, Russian, and English, and I have been contributing to Mozilla localization for more than 18 years.
From Linux Curiosity to Mozilla Localization
Q: How did you get involved in localization, and what drew you to Mozilla?
A: I came to Mozilla through Linux during my student years. I became interested in Linux at university, and very quickly I noticed how closely the open source world was connected: where there was Linux, Firefox was usually nearby.
When installing Linux distributions, one of the first things I noticed was language support. Many languages were available, but Kazakh was often missing or only partially supported. That made me ask a simple question: why is that, and what can be done about it?
Through Ubuntu's CD distribution program, I discovered Launchpad and began translating Firefox there. Around the same time, through a local Linux forum, I connected with Timur Timirkhanov, who already had experience with Mozilla localization. He helped me understand Mozilla's processes, pointed me to packages that needed translation, and opened a locale registration ticket for Kazakh in Bugzilla.
Soon after, Dauren Sarsenov joined, and in the beginning it was mainly the two of us working on Firefox. When Kazakh first appeared in a Firefox beta in spring 2009, we were extremely proud. It felt like a real milestone - not just translating isolated strings, but seeing a major global product appear in Kazakh.
For me, that was bigger than one browser. At the time, we were dreaming about a fully usable open source desktop in Kazakh, and Mozilla localization became one important part of that larger goal. What started as curiosity became a long-term commitment: making technology more accessible in Kazakh and proving that our language belongs in modern software.
Q: Which Mozilla products are closest to you? Do you use them regularly?
A: Firefox is definitely the product closest to me because I use it every day - both desktop and mobile. It never feels like I am translating something distant from my real life. I see the interface, the wording choices, and the practical impact of localization almost daily.
What makes Firefox especially meaningful is that it is both symbolic and practical. Symbolically, it showed that Kazakh could be present in one of the most important pieces of everyday software. Practically, it gave users a browser they could use in their own language. A browser is the gateway to the internet, so localizing Firefox means much more than translating one application.
I also use Thunderbird from time to time and visit MDN quite often. Even when I am not translating, I interact with Mozilla products as a user, so there is always a natural connection between volunteer work and daily habits.
People around me know me through Firefox localization more than through anything else. Very often I am simply "the person who translated Firefox into Kazakh." That says a lot about how visible Firefox has been.
Promoting Kazakh Localization and Building an Ecosystem
Q: How have you promoted Kazakh-localized software?
A: Most of my promotion work has been grassroots. In earlier years, I shared updates on Linux and open source forums, especially communities already interested in free software. Even when people were not personally interested in contributing, many showed strong support and encouragement. That confirmed that localization mattered beyond just the translation team.
One of my bigger efforts was creating a Debian-based Linux distribution from 2012 to 2015 called Kazsid. I built it partly to test how Kazakh localization worked across multiple applications in a real desktop environment. I included programs that already had Kazakh translations - Firefox, LibreOffice, desktop environments, and other tools - set Kazakh as the default language, and tested how everything worked together.
I shared the builds on forums, and some people downloaded and tried them. It was one of the most practical ways I encouraged interest in Linux and localized software.
Later, as translations matured upstream, maintaining a separate distribution was no longer necessary. That was actually a positive sign - users could install standard distributions and get the same localized experience.
Today I post updates on LinkedIn. It helps maintain visibility, even if it does not often bring in new contributors.
Working Independently - and Working Systematically
Q: What does the Kazakh localization community look like today?
A: At the moment, I am effectively the only active contributor across several major open source localization efforts in Kazakh, including Mozilla products, LibreOffice, GNOME, Xfce, and others.
In the early years, several people made meaningful contributions, but most eventually moved on. Timur helped significantly, especially in the earlier stages and in understanding Mozilla's processes, and I still occasionally consult trusted people when I need a second opinion.
The challenge for smaller languages is not only starting a translation but maintaining it over the long term. From early on, I was not thinking about one application. My goal was broader: to help create a real open source desktop experience in Kazakh. A browser translated into Kazakh is important, but a full ecosystem is even more meaningful. Sustainability is the hardest part.
Q: How do you approach quality when you are the main translator?
A: Direct user feedback is rare, so QA depends largely on my own testing, judgment, and tooling.
I test software in real use, especially Firefox. In earlier years, I also used Nightly builds. Before settling on new terminology, I check dictionaries and reference materials. I consult fluent speakers when needed, and sometimes I discuss wording with my wife to see how natural it sounds.
My principle is that translations should feel clear and alive, not mechanically imported. I studied in Kazakh and remember the terms we were actually taught in IT-related subjects, and that background matters to me.
Because of my scripting background, I have written small tools in Python to help verify translations, track terminology, and maintain consistency. QA is not just "reading it once and hoping for the best." It is a combination of linguistic judgment, real usage, consultation, and automated checking.
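A terminology-consistency check of the kind described above could be sketched as follows. This is an illustrative example only, not the author's actual tooling: the glossary entries, function name, and data shapes are all assumptions. The idea is to flag translated strings whose English source contains a glossary term but whose Kazakh translation does not use the agreed rendering.

```python
# Minimal sketch of a terminology-consistency check for translation pairs.
# The glossary entries and all names here are hypothetical, for illustration.

GLOSSARY = {
    "bookmark": "бетбелгі",
    "tab": "бет",
}

def find_inconsistencies(pairs, glossary):
    """Return (source, translation, term) triples where the English source
    contains a glossary term but the translation lacks the agreed rendering."""
    issues = []
    for source, translation in pairs:
        src_lower = source.lower()
        trans_lower = translation.lower()
        for term, expected in glossary.items():
            if term in src_lower and expected not in trans_lower:
                issues.append((source, translation, term))
    return issues

pairs = [
    ("Open a new tab", "Жаңа бет ашу"),        # uses the glossary rendering
    ("Add bookmark", "Таңдаулыға қосу"),       # deviates from the glossary
]
for source, translation, term in find_inconsistencies(pairs, GLOSSARY):
    print(f"'{source}' -> '{translation}': expected rendering of '{term}'")
```

A real checker would parse actual translation files (for example, gettext `.po` or Fluent `.ftl`) and handle Kazakh morphology, where suffixes change a term's surface form; simple substring matching is only a starting point.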
More recently, I have been exploring how AI can assist localization. By testing translations through tools like the Google Gemini API and guiding terminology carefully, I have been able to close significant translation gaps. For Kazakh, newer models understand context much better than traditional machine translation systems. AI does not replace judgment, but it can make the work faster and more effective.
Professional Background
Q: How does your professional background influence your localization work?
A: My background is partly technical and partly analytical. I studied IT, worked as a Linux system administrator, and later moved into data analysis and GIS.
Those technical skills helped significantly. Automation makes a long-term localization effort much more manageable, especially when one person is doing most of the work.
Localization has strengthened my discipline and consistency. It requires patience and regular effort. Over time, I developed an instinct for terminology and phrasing - whether a term feels natural or artificial in context.
A Few Personal Notes
I have loved reading since I was four years old. My favorite genres are science fiction and popular science. Reading is still how I recharge.
I have lived in several cities in Kazakhstan, so I sometimes joke that I am a true nomad.
My family has always been supportive of my open source work. And when I run into a particularly difficult translation, I can still discuss it with my wife and get a fresh perspective.
15 Apr 2026 10:38pm GMT
Firefox Tooling Announcements: Happy BMO Push Day! (20260415.1)
The following changes have been pushed to bugzilla.mozilla.org:
- Bug 2023761 - [GITHUB] Allow use of individual api keys for pull requests and push comments instead of single shared secret
- Bug 2012634 - "Phabricator Revisions" table overflows on X axis on mobile
- Bug 2028222 - Pasting multi-line text after selecting multi-line text does not overwrite, but applies markup for link
- Bug 2029522 - CI workflow uses deprecated docker-compose v1 and actions/checkout@v3
- Bug 2031520 - Missing space in "Throw away my changes, andrevisit bug NNN" message (when marking a bug as a duplicate of a hidden bug)
- Bug 2030581 - REST API: PUT /rest/bug/attachment/{id} does not pass is_markdown when adding comment
- Bug 2018260 - "Fields You Can Search On" is blocking people from making it through quicksearch.html doc
- Bug 2028240 - Cloned security bugs should default to being secure
- Bug 2031007 - When linking a Github pull request to a BMO bug, the attachment filename should contain the repository name in addition to the pull request ID
Discuss these changes in the BMO Matrix Room
1 post - 1 participant
15 Apr 2026 9:29pm GMT

