My name is Oliver Chan, though I am mostly known by my username Olvcpr423. I'm from China, and I speak Mandarin and Cantonese. I have been contributing to Mozilla localization in Simplified Chinese since 2020.
Getting Started
Q: How did you first get involved in localization, and what led you to Mozilla?
A: My localization journey actually began with Minecraft back in 2018, when I was 13. I was an avid player of this globally popular game. Similar to Mozilla, its developer uses a crowdsourcing platform to let players localize the game themselves. I joined the effort and quickly realized that I had a strong interest in translation. More importantly, I found myself eager to use my skills to help bridge language gaps, so that more people could enjoy content from different languages easily.
Firefox was the first Mozilla product I ever used. I started using it relatively late, in 2020, and my connection with Firefox began thanks to my uncle. Although I was aware that Firefox had a long history, I didn't yet understand what made it special. I gradually learned about its unique features and position as I explored further, and from then on, Firefox became my primary browser.
Later that same year, I noticed a typo while using Firefox and suggested a fix on Pontoon (I honestly can't recall how I found Pontoon at the time). That small contribution marked the beginning of my journey as a Mozilla localizer. I believe many people's localization journeys also start by correcting a single typo.
Working on Mozilla Products
Q: Which Mozilla projects do you enjoy working on the most, and why?
A: Firefox, absolutely. For one thing, it's my favorite piece of software, which makes working on it personally meaningful. More importantly, Firefox has a massive Chinese user base, which gives me a strong sense of responsibility to provide the best possible language support for my fellow speakers. On top of that, Firefox's mission as the last independent browser gives me extra motivation when working on its localization.
Aside from Firefox, Common Voice has been the most impactful project I've localized for Mozilla. It collects voices from a diverse range of speakers to build a publicly available voice dataset, which I think is especially valuable in this era. And honestly, working on the text for a voice-collection platform is a wonderful experience, isn't it? 😀
Thunderbird is another project I find especially rewarding. It is popular on Linux, and localizing it means supporting many users who rely on it for everyday communication, which I consider vital work.
Q: How does regularly using these products influence how you approach localization?
A: Regular usage is essential for localization teams (like us) that lack dedicated LQA processes and personnel. Without routinely using the product, it's easy to overlook issues that only become apparent in context, such as translations that don't fit the context or layout problems.
Since we also lack a centralized channel to gather feedback from the broader community, we have to do our best to identify as many issues as we can ourselves. We also actively monitor social media and forums for user complaints related to localization. In addition, whenever I come across a screenshot of an unfamiliar interface, I take it as an opportunity to check for potential issues.
Community & Collaboration
Q: How does the Chinese localization community collaborate in practice?
A: In practice, besides myself, there is only one other active member on Pontoon for our locale. While the workload is still manageable, we do need to seriously think about recruiting new contributors and planning for succession to ensure sustainability.
That said, our community is larger than what you see on Pontoon alone. We have a localization group chat where many members stay connected. Although they may not actively contribute to Pontoon - some work on SUMO or MDN, some are regular users, while others are less active nowadays - I can always rely on them for insightful advice whenever I encounter tricky issues or need to make judgment calls. Oftentimes, we make collective decisions on key terminology and expressions to reflect community consensus.
Q: How do you coordinate translation, review, and testing when new strings appear?
A: Recently, our locale hit 60,000 strings - a milestone well worth celebrating. Completing the translation of such a massive volume has been a long-term effort, built on nearly two decades of steady, cumulative work by successive contributors. I'd like to take this opportunity to thank each and every one of them.
As for coordination, we don't divide work by product - partly because all products already have a high completion level, and the number of products and new strings is still manageable. In practice, we treat untranslated strings a bit like Whac-A-Mole: whenever new strings appear, anyone available just steps in to translate them. Testing is also a duty we all share.
For review, we follow a cross-review principle. We avoid approving our own suggestions and instead leave them for peers to review. This helps reduce errors and encourages discussion, ensuring we arrive at the best possible translations.
Q: Did anyone mentor you when you joined the community, and how do you support new contributors today?
A: When I first joined Mozilla localization, I wasn't familiar with the project's practices or consensus. The locale manager 你我皆凡人 helped me greatly by introducing them. For several years, they were almost the only active proofreader for our locale, and I'd like to take this opportunity to pay tribute to their long-term dedication.
Today, when reviewing suggestions from newcomers, if a translation doesn't yet meet the approval standard, I try my best to explain the issues through comments and encourage them to keep contributing, rather than simply rejecting their work - which could easily discourage them and dampen their enthusiasm.
Q: What do you think is most important for keeping the community sustainable over time?
A: It's all about the people. Without people, there is no community. We need fresh blood to ensure we don't face a succession crisis. At the moment, recruiting from within the Mozilla ecosystem (MDN or SUMO) is the most immediate approach, but I won't give up on trying to draw in more people from the broader community.
Continuity of knowledge is also important. We mentor newcomers so they understand how the project works, along with its best practices and historical context. Documentation becomes necessary as time passes or the community grows; it ensures knowledge is preserved over time and prevents "institutional amnesia" as people come and go.
Background, Skills & Personal Lens
Q: What's your background outside localization, and how does it shape your approach to translation?
A: I'm currently a student majoring in accounting. While accounting and software localization may seem worlds apart, I believe they share similar characteristics. The IFRS (International Financial Reporting Standards) identifies six qualitative characteristics of accounting information, and with a slight reinterpretation, I find that they also apply surprisingly well to localization and translation. For example:
Relevance: translations should help users use the product smoothly and as expected
Faithful representation: translations should reflect the original meaning and nuance, without being constrained by literal form
Verifiability: translations should be reasonable to any knowledgeable person
Timeliness: translations should be delivered promptly
Understandability: translations should be easy to comprehend
Comparability: translations should stay consistent with existing strings and industry standards
On a personal level, I developed qualities like prudence and precision through localization long before I started my degree, which gave me a head start in accounting. In turn, what I've learned through my studies has helped me perform even better in localization. It's a somewhat interesting interplay.
Q: Besides translation, what else have you gained through localization?
A: I knew very little about Web technologies before I started localizing for Mozilla. Through working on Firefox localization, I gradually developed a solid understanding of Web technologies and gained deeper insight into how the Web works.
Fun Facts
Q: Any fun or unexpected facts you'd like to share about yourself?
A: My connection with Firefox began thanks to my uncle. One day, he borrowed my computer and complained that Firefox wasn't installed - it had always been his go-to browser. So I decided to give it a try and installed it on my machine. That was how my journey with Firefox began.
I love watching anime, especially Bocchi the Rock! and the band Kessoku Band featured in the series. I also enjoy listening to Anisongs and Vocaloid music, particularly songs voiced by Hatsune Miku and Luo Tianyi. And while I enjoy watching football matches, I'm not very good at playing football myself!
This year I was lucky again and was able to attend FOSDEM. It turned out to be more of a social conference than a technical one for me this year: I had a bunch of really great conversations with peers and users of Firefox. I was there to staff the Mozilla booth. The idea was to engage people and have them fill out a bingo card; in exchange, they might go home with a T-shirt, a baseball cap, or a pair of socks. Most of the people I saw came by on Saturday afternoon and Sunday morning. Some people complained about AI, but not as many as I was expecting. Explaining why, and that https://techcrunch.com/2026/02/02/firefox-will-soon-let-you-block-all-of-its-generative-ai-features/ would soon be available, helped them understand and feel they could keep Firefox as their main browser. Our sticker stock melted like snow in the sun. The people from mozilla.ai had some pretty interesting discussions with users who came by the booth.
When the FOSDEM schedule was published, I was excited to see that the Mozilla room had been renamed the Web Browser room. Inclusion done the right way, and the best way to push for an open web. That dev room was located in the room that had historically served the Mozilla community back in 2004/2005/2006/2007 ... Unfortunately, I woke up 30 minutes past midnight on Saturday and was unable to get back to sleep. The sessions I had intended to watch fell exactly at the time when tiredness hit and all I wanted was sleep. The same was true for the other room I was interested in: the BSD dev room.
Last but not least, as I had helped organize the Search dev room, a very nice recap was posted on LinkedIn. I was the MC in that room. It was a lot of fun and I learned a lot.
This year the conference was a social event for me. I met plenty of old (or not so old) friends: I counted 33 people, not counting my previous manager and her daughter, and I know I missed at least 3 more. I had very nice conversations with many of them; it really was a pleasure to meet and interact.
The highlight of this FOSDEM was seeing the Sun SPARCstation 4 on one of the stands.
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
Oxidize Conference | CFP open until 2026-03-23 | Berlin, Germany | 2026-09-14 - 2026-09-16
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Overall a positive week for instruction counts (~1% improvement on check/debug/opt/doc builds). Cycle counts and memory usage remain broadly unchanged across the week though.
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
In C++, the muscle memory you develop over time is avoidant. You learn not to do certain things. It's a negative memory, not in a pejorative sense, but in the sense that you have to remember what not to do rather than what to do: a list of patterns to avoid, of traps to dodge. And this list keeps growing, because the language doesn't prevent you from falling into the traps, you just have to remember they exist.
In Rust, muscle memory is constructive. You learn patterns that are inherently correct. You don't have to remember what to avoid because the compiler won't let you do it. Instead of thinking "I must remember not to leave the door open", you learn to build a door that closes by itself.
If you were running into any problems last week opening links from other applications, specifically with Firefox being foregrounded but not opening the URL, this should now be fixed in Nightly and Beta (bug 2010535, fixed by Mossop). Please file a bug if you've updated your browser and the bug is still happening.
Dão & Moritz continued their work on the new separate search bar, adding an 'x' button to clear the input, respecting the browser.search.openintab preference, and matching the search history behavior of the legacy bar. This new version of the separate search bar is enabled by default in Nightly.
Rob Wu investigated and fixed an issue that can prevent langpacks from being staged successfully as part of Firefox version updates (landed in 148, and will be included in a 147 dot release) - Bug 2006489
Greg Stoll introduced a proper localized strings mapping table for Add-on Gated Site Permissions, a change needed as part of the work on the WebSerial DOM API - Bug 1826747
WebExtensions Framework
Rachel Rusiecki contributed a nice cleanup of the WebExtensions internals by removing the extensions.manifestV3.enabled rollout pref - Bug 1804921
Emilio investigated and fixed a drag and drop issue hit by WebExtensions action popup pages, regression introduced in Firefox 146 (by Bug 1933181 ) and fixed in Firefox 148 and 147 - Bug 2007274
WebExtension APIs
Piro (TreeStyleTab extension developer) contributed a fix for an unexpected browser.tabs.create rejection hit when openerTabId is the tab id of a discarded tab - Bug 1762249
Fixed an issue where the extension's event page could be suspended while a downloads.download API call was waiting for user input through the file chooser - Bug 2005953, Bug 2005963
Fixed an issue hit by the tabs API (and TreeStyleTab as a side effect) on builds where sync-about-blank is enabled (currently only Nightly builds) - Bug 2004525
Fixed an issue where data set through browser.session.setTabValue was not preserved when the tab is moved between windows - Bug 2002643
Fixed an issue with declarativeNetRequest initialization at startup when an extension using declarativeNetRequest does not have any static DNR rules declared in its manifest - Bug 2006233
Arai contributed changes needed to allow declarativeNetRequest rules to apply successfully to cached web request resources - Bug 1949623
AI Window
Assistant response markdown rendering with ProseMirror - Bug 2001504
Alexandra Borovova updated the reset behavior of the emulation.setGeolocationOverride and emulation.setScreenOrientationOverride commands to align with the spec changes. With this update, when calling these commands to reset the override for, e.g., a browsing context, only that override is reset; if an override is set for a user context related to this browsing context, that override is applied instead.
Alexandra Borovova fixed user prompts open and close events to reference the correct context ID in case prompts are being opened from iframes on desktop and Android.
eslint-env comments are being removed as ESLint v9 does not support them (use eslint-file-globals.config.mjs instead). ESLint v10 (currently in rc) will raise errors for them.
More eslint-plugin-jsdoc rules have been enabled across the whole tree. These are the ones relating to valid-jsdoc. A few remain, but will need work by teams to fix the failures.
Linux users with new installs of Firefox were experiencing an issue where newtab was appearing blank (among other bugs). This appears to be related to content sandboxing and the XDG base directory support that was recently added for Linux builds. Emilio Cobos Álvarez is working on a fix in this bug.
In the meantime, we've disabled all train-hop XPIs on Beta and Release for Linux builds. They will fall back to the built-in versions of New Tab instead.
Finn has been working through his onboarding bug list:
bug 1947638, switching about:preferences to open the profile selector window in a dialog, not a subdialog
bug 1950247, improve a11y by making headings on the edit profile page actually headings
bug 2001276, excluding the ignoredSharedPrefs list when creating a new profile
Mossop also continued making behavior more consistent across the toolkit profile service and the selectable profile service, fixed bug 2004345 - ensuring that, if a toolkit profile has a selectable profile group, we don't allow that toolkit profile to be deleted from about:profiles. Instead we warn the user.
Search and Navigation
Address Bar
Moritz fixed a multi-second jank when dragging large text over tabs or the address bar.
Drew and Daisuke are starting the work on standardizing the UI for the various address bar result types (in-flight: 2010176, 2010177, 2010184) and their result menus (2010168, 2010171, 2010172).
Advance notice that this week we are planning on landing a change to the search service to change it from an XPCOM service to a JavaScript singleton. This is part of the work to remove the XPCOM interfaces, as the service hasn't been accessed from C++ for a while.
This will help reduce development overhead of needing to do full builds for interface changes.
Other interesting fixes
The browser.urlbar.switchTabs.searchAllContainers preference has been removed.
The ESC key should now not save modified data in the Edit Bookmark dialog when accessed from the star icon.
UX Fundamentals
We're disabling the felt-privacy error pages in Firefox 148 while we sort out a few small issues. We're going to try to get this into Firefox 149 with the new error page UI for all errors.
AI controls showing the option to block AI enhancements.
AI is changing the web, and people want very different things from it. We've heard from many who want nothing to do with AI. We've also heard from others who want AI tools that are genuinely useful. Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls.
Starting with Firefox 148, which rolls out on Feb. 24, you'll find a new AI controls section within the desktop browser settings. It provides a single place to block current and future generative AI features in Firefox. You can also review and manage individual AI features if you choose to use them. This lets you use Firefox without AI while we continue to build AI features for those who want them.
One place to manage your AI preferences
Firefox offers AI features to enhance everyday browsing. These features are optional, and they're easy to turn on or off.
At launch, AI controls let you manage these features individually:
Translations, which help you browse the web in your preferred language.
Alt text in PDFs, which adds accessibility descriptions to images in PDF pages.
AI-enhanced tab grouping, which suggests related tabs and group names.
Link previews, which show key points before you open a link.
AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.
You can choose to use some of these and not others. If you don't want to use AI features from Firefox at all, you can turn on the Block AI enhancements toggle. When it's toggled on, you won't see pop-ups or reminders to use existing or upcoming AI features.
Once you set your AI preferences in Firefox, they stay in place across updates. You can also change them whenever you want.
Firefox AI controls overview.
The browser that gives you a say
AI controls give you more say in how you move across the web.
We believe choice is more important than ever as AI becomes a part of people's browsing experiences. What matters to us is giving people control, no matter how they feel about AI.
If you'd like to try AI controls early, they'll be available first in Firefox Nightly. We'd love to hear what you think on Mozilla Connect.
Performance issues in Python often don't look like bugs.
They don't crash, they don't fail tests, and they don't stand out in code review. They just quietly turn into cliffs when the input size grows.
This post is about one such performance fix in transformers, what it revealed, and a small experiment that came out of it: LoopSleuth, a local LLM-powered complexity scanner.
It Started With a Tokenizer Converter
While working on transformers, I fixed a performance issue in convert_slow_tokenizer.py that took a tokenizer conversion step from 4 minutes down to ~1 second when running on very large vocabularies (100k+ tokens).
The Test That Surfaced It
This started when CI flagged test_voxtral_tokenizer_converts_from_tekken as the slowest test in the suite.
The test loads mistralai/Voxtral-Mini-3B-2507 and forces the fallback path to TokenizersBackend.
That fallback triggers the slow→fast tokenizer conversion step - and that conversion was doing repeated .index() lookups inside a sort key, turning large vocabularies into a performance cliff.
The root cause was a classic scaling trap.
The Original Pattern
# BEFORE (simplified excerpt)
for rank, token in enumerate(bpe_ranks):
    local = sorted(
        local,
        key=lambda x: (
            bpe_ranks.index(x[0]),
            bpe_ranks.index(x[1]),
        ),
    )
(Simplified excerpt - the key issue is the repeated .index() inside the sort key.)
At first glance this looks harmless.
But list.index() is O(n).
And the real killer is that it happens inside a sorted() key function.
Sorting local means computing the key for every element, and each key performs two linear searches through bpe_ranks: sorted() calls the key function once per element (O(m)), and each key calls .index() twice (O(n)), so the total becomes O(m·n) - often a scaling trap when m and n are both large.
The Fix
# AFTER (reduces key computation from O(n) to O(1))
token_to_rank = {token: rank for rank, token in enumerate(bpe_ranks)}
for rank, token in enumerate(bpe_ranks):
    local = sorted(
        local,
        key=lambda x: (
            token_to_rank[x[0]],
            token_to_rank[x[1]],
        ),
    )
The optimization is simple:
replace repeated linear searches with constant-time dictionary lookups
This doesn't eliminate all sorting work (the outer loop still sorts repeatedly), but it removes the quadratic lookup cost that was dominating runtime.
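To make the cost difference concrete, here is a minimal, self-contained timing sketch. It is not the transformers code; the sizes and names (ranks, pairs) are made up for illustration, and only the standard library is used.

import random
import time

n = 50_000
ranks = [f"tok{i}" for i in range(n)]                      # stand-in for bpe_ranks
pairs = [(random.choice(ranks), random.choice(ranks)) for _ in range(2_000)]

t0 = time.perf_counter()
sorted(pairs, key=lambda x: (ranks.index(x[0]), ranks.index(x[1])))   # two O(n) scans per key
t1 = time.perf_counter()

rank_of = {tok: r for r, tok in enumerate(ranks)}          # built once, O(n)
sorted(pairs, key=lambda x: (rank_of[x[0]], rank_of[x[1]]))           # O(1) lookups per key
t2 = time.perf_counter()

print(f".index() key: {t1 - t0:.2f}s   dict key: {t2 - t1:.4f}s")

The first sorted() call grows roughly with len(ranks) × len(pairs); the second stays essentially flat as either input grows.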
The takeaway wasn't just "use dicts" - it was that asymptotic traps often hide in perfectly valid Python idioms.
Could This Have Been Caught Automatically?
After landing that fix, I kept wondering:
How many other places in the codebase have the exact same pattern?
This wasn't a correctness issue:
everything worked
tests passed
the slowdown only appeared at scale
And none of the linting tools I normally rely on flagged it.
Ruff's PERF rules catch obvious constructs like unnecessary list copies, but they don't reason about .index() inside a sort key.
In theory, a linter could detect patterns like the following (a toy illustration appears right after this list):
repeated .index() inside loops
.index() inside sort keys
nested iteration over growing structures
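For illustration only, here are two hypothetical functions showing the patterns above; neither comes from transformers:

def dedupe_slow(items):
    result = []
    for item in items:
        if item not in result:          # linear membership check inside a loop: O(n^2) overall
            result.append(item)
    return result

def order_by_reference(values, reference):
    # .index() inside a sort key: every key computation scans `reference` from the start
    return sorted(values, key=lambda v: reference.index(v))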
But most rule-based linters avoid making strong claims about asymptotic complexity.
That's a reasonable trade-off: linters are fast, deterministic, and low-noise - but they often miss scaling issues unless you add very specific custom rules.
This is where I started wondering whether an LLM could help fill the gap.
Scanning Transformers With Claude
As an experiment, I ran Claude Code over the repository with one question:
Find quadratic complexity patterns similar to the tokenizer converter bug.
The result was surprisingly useful.
It scanned ~3,000 Python functions across the codebase in a few minutes and flagged ~20 instances of the same anti-pattern:
.index() inside loops
.index() inside sort keys
nested iteration patterns with superlinear blow-up at scale
About half were genuine hot-path candidates; others were technically quadratic but not performance-critical in practice.
Instead of running a massive model in the cloud, I wanted to know:
could a small local model catch these patterns?
could we build something closer to a linter?
could we automate complexity review?
That's how I ended up hacking together a small prototype I called LoopSleuth.
Why Rust + llama.cpp?
My first instinct was to build this as a Python script on top of transformers itself.
But I wanted this experiment to be:
fast startup time
easy CI binary distribution
no Python runtime dependency
easy to integrate into tooling
A single static binary makes it easy to drop into CI, like Ruff.
And honestly, I also wanted an excuse to explore the Rust ecosystem that powers tools like Ruff and Ty.
So LoopSleuth is written in Rust and uses:
rustpython-parser to extract functions
llama.cpp bindings for local inference
In practice, a small model like Qwen2.5-Coder 3B (Q4) already gives surprisingly good results for this narrow task.
LoopSleuth: A Small Complexity Scanner
LoopSleuth is a CLI tool that:
parses Python modules
extracts functions (each function is analyzed in isolation: signature + body, without full module context)
sends each function to a local LLM
asks a focused question:
Does this contain patterns that may scale quadratically?
If the model answers "QUADRATIC", it also asks for an optimization suggestion.
This framing treats complexity as a heuristic warning (like a linter) rather than a mathematical proof.
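LoopSleuth does this extraction step with rustpython-parser in Rust; as a rough approximation, here is what per-function isolation could look like in plain Python using only the stdlib ast module (the function name and structure are illustrative, not LoopSleuth's actual code):

import ast
from pathlib import Path

def extract_functions(path):
    """Return the source text of each function in a module, to be analyzed in isolation."""
    source = Path(path).read_text()
    tree = ast.parse(source)
    snippets = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            snippets.append(ast.get_source_segment(source, node))
    return snippets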
How It Works
The prompt is deliberately simple and constrained:
Classify this function as OK or QUADRATIC.
Look for list.index(), nested loops, or linear operations inside loops.
Return only one word: OK or QUADRATIC.
This makes the model focus on structural patterns rather than trying to perform full dataflow analysis, and the constrained output format makes parsing reliable.
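The tool itself drives llama.cpp through Rust bindings, but the same loop can be sketched in Python with the llama-cpp-python package. The model path, token limit, and helper name below are assumptions made for illustration, not LoopSleuth's actual code:

from llama_cpp import Llama

PROMPT = (
    "Classify this function as OK or QUADRATIC.\n"
    "Look for list.index(), nested loops, or linear operations inside loops.\n"
    "Return only one word: OK or QUADRATIC.\n\n{code}\n"
)

# e.g. a quantized Qwen2.5-Coder 3B GGUF file; the path is illustrative
llm = Llama(model_path="qwen2.5-coder-3b-q4.gguf", n_ctx=4096, verbose=False)

def classify(code):
    out = llm(PROMPT.format(code=code), max_tokens=4, temperature=0.0)
    answer = out["choices"][0]["text"].strip().upper()
    return "QUADRATIC" if "QUADRATIC" in answer else "OK"

Pinning temperature to 0 and capping the output at a few tokens keeps the answer deterministic and trivial to parse.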
Because it's a CLI, it can be used in a few practical ways:
as a local complexity scanner during development
as a lightweight pre-pass before calling a large cloud model (reducing token usage)
as a GitHub Action on pull requests to catch patches that introduce quadratic behavior
Why Not Just Use Existing Linters?
Before building anything, I tried the usual suspects.
Tools like Ruff, Pylint, and performance-focused plugins can catch a lot:
Pylint warns about string concatenation in loops (consider-using-join)
Ruff has PERF rules inspired by Perflint
But none of the linters I tried really caught the pattern that triggered this whole experiment:
repeated .index() lookups inside loops
.index() inside sort key functions
nested iteration patterns that only become problematic at scale
These tools are excellent at enforcing specific rules, but they generally don't try to answer:
"Does this function scale quadratically with input size?"
That gap is what made the LLM approach interesting to explore.
A Quick Comparison
One thing I wanted to sanity-check early was whether existing linters would catch the same issues.
So I built a small test file with a handful of intentionally quadratic functions (nested loops, .remove() in loops, string concatenation, etc.) and ran:
LoopSleuth
Ruff (with --select ALL)
Pylint
The results were pretty stark:
Tool         Detects .index() in loop?   Reports complexity?
Ruff         ❌                           ❌
Pylint       ❌                           ❌
LoopSleuth   ✅                           ✅ (heuristic)
LoopSleuth flagged all 5 quadratic functions; Ruff and Pylint flagged plenty of style and quality issues, but neither directly reported algorithmic complexity problems.
This isn't really a criticism of those tools - they're simply not designed for that job.
To be clear, there may be ways to approximate some of these checks with custom rules or plugins, and linters remain the first line of defense for code quality.
LoopSleuth is just exploring a different axis: scaling behavior.
Still an Experiment
LoopSleuth is not a replacement for linters.
It's a small experiment.
Traditional linters like Ruff or Pylint excel at catching specific code smells. But most scaling bugs don't come from a single construct. They come from composition:
nested iteration
repeated membership checks
linear operations inside loops
Rule-based linters struggle to infer:
"this .index() is inside a hot path"
"this loop is over the same input size"
"this becomes O(n²) at scale"
LLMs, even small ones, can often reason about these patterns more directly.
That said, LoopSleuth runs against isolated Python functions one by one, which means it doesn't yet understand:
cross-function context
runtime sizes
whether a loop is actually hot in practice
Limitations
Like any heuristic tool, LoopSleuth has trade-offs:
False positives:
small fixed-size loops that never scale
code in non-hot paths
patterns that look quadratic but have early exits
False negatives:
complexity hidden across function calls
indirect iteration patterns
subtle algorithm choices
The accuracy depends heavily on prompt design and model choice.
Important: LoopSleuth is a screening tool, not a replacement for profiling or benchmarking. It flags patterns that may cause issues, but only real measurements can confirm actual performance problems.
More broadly, I'm interested in whether this approach can extend beyond complexity analysis to other classes of performance issues.
One direction would be to build a small library of prompts for:
repeated tensor conversions
hidden CPU/GPU sync points
accidental re-tokenization
And in an ideal world, we could fine-tune a small model (like Qwen2.5-Coder 3B) to specialize on this kind of performance reasoning.
What's Next
If this experiment proves useful, here are some directions worth exploring:
AST-based prefiltering to skip obviously safe functions (a rough sketch follows this list)
Caching inference results to avoid re-analyzing unchanged code
Training on real perf bugs from issue trackers and PRs
GitHub Actions integration to catch regressions in CI
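As a hedged sketch of what that AST prefilter might look like (a purely structural check, written for illustration and not part of LoopSleuth today):

import ast

def might_be_quadratic(func):
    """Cheap structural screen: nested loops, or a linear list method called inside a loop."""
    for outer in ast.walk(func):
        if not isinstance(outer, (ast.For, ast.While)):
            continue
        for inner in ast.walk(outer):
            if inner is outer:
                continue
            if isinstance(inner, (ast.For, ast.While)):
                return True                      # nested loops
            if (isinstance(inner, ast.Call)
                    and isinstance(inner.func, ast.Attribute)
                    and inner.func.attr in {"index", "remove", "count"}):
                return True                      # e.g. list.index() inside a loop
    return False

Functions that pass this screen as safe could skip LLM inference entirely, keeping the scan fast on large codebases.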
Right now LoopSleuth is a proof of concept, but these extensions could make it practical for real codebases.
Conclusion
LoopSleuth started as a simple question:
Could we catch quadratic complexity bugs automatically?
The answer is: not perfectly.
But even a small local model can spot surprising amounts of hidden O(n²) behavior.
And as codebases grow - especially ones like transformers - performance traps scale with them.
LoopSleuth is a small experiment toward making complexity visible earlier.
If you have examples of hidden scaling bugs or want to contribute detection patterns, I'd love to collect them as test cases. Feel free to try it locally or open an issue.
Welcome to the Q4 2025 edition of the Firefox Security & Privacy Newsletter.
Security and privacy are foundational to Mozilla's manifesto and central to how we build Firefox. In this edition, we highlight key security and privacy work from Q4 2025, organized into the following areas:
Firefox Product Security & Privacy - new security and privacy features and integrations in Firefox
Core Security - platform-level security and hardening efforts
Community Engagement - updates from our security research and bug bounty community
Web Security & Standards - advancements that help websites better protect their users from online threats
Preface
Note: Some of the bugs linked below might not be accessible to the general public and are restricted to specific work groups. We de-restrict fixed security bugs only after a grace period, once the majority of our user population has received Firefox updates. If a link does not work for you, please accept this as a precaution for the safety of all Firefox users.
Firefox Product Security & Privacy
Functional Privacy. Firefox empowers users with control and choice - including the option for maximum privacy protections. Yet our commitment lies in targeting online tracking by default in ways that ensure the web continues to function accurately and smoothly. With focus on this important balance, our protections have blocked more than 1 trillion tracking attempts, while reported site compatibility issues were driven down to an all-time low of 500, compared to 1,100 in Q1 of 2025.
Improved page redirect prevention: Firefox now blocks top-level redirects from iframes. This new prevention mechanism aligns Firefox behaviour with other browsers and protects users against so-called malvertising attacks.
Improved protections against navigational cross-site tracking: Navigational tracking is used to track users across different websites using browser navigations. Bounce tracking is a type of navigational tracking that "bounces" user navigations through an intermediary tracking site. Firefox's Bounce Tracking Protection already protects against this tracking vector. And Firefox 145 uplevels this by also eliminating cache access for these intermediate redirect pages.
Global Privacy Control (GPC): Following Firefox's lead as the first major browser to do this, Thunderbird has now also replaced the legacy "Do Not Track" (DNT) signal with Global Privacy Control (GPC). This new control has the much-needed legal footing to clearly communicate a user's "do-not-sell-or-share preference", and other browsers are expected to follow soon.
Warning prompts for digital identity requests: When a webpage attempts to open a digital wallet app using custom URL schemes such as openid4vp, mdoc, mdoc-openid4vp, or haip, Firefox on Desktop and Android (Firefox 145 and newer) now displays clear warning prompts that explain what's happening and give users control.
Core Security
Certificate Transparency (CT) on Android: Certificate Transparency enables rapid detection of unauthorized or fraudulent SSL/TLS certificates. CT has been available in Firefox Desktop since Firefox 136 and is now also available on Android starting with Firefox 145.
Post-Quantum Cryptography (ML-KEM): ML-KEM is a next-generation public-key cryptosystem designed to resist attacks from large-scale quantum computers. Post-quantum (PQ) cryptography with ML-KEM support shipped in Firefox 132 for Desktop. Support is now also available on Android starting with Firefox 145 and in WebRTC starting with Firefox 146.
Community Engagement
Mozilla and Firefox at the 39th Chaos Communication Congress (39C3): Teams from Firefox Security, Privacy, Networking, Thunderbird, and Public Policy collaborated to raise awareness of their work and gather direct community feedback. A clear highlight was the popularity of our swag, with our folks distributing 1,000 Fox Ears. The high level of engagement was further sustained by a dedicated community meetup and an impromptu AMA session, which drew attention from over 100 people.
Firefox Bug Bounty Hall of Fame: We just updated the Hall of Fame, which credits all of the skillful security researchers that strive to keep Firefox secure as of Q4 2025. If you also want to contribute to Firefox security, please look at our Bug Bounty pages.
Web Security & Standards
Integrity-Policy: Firefox 145 has added support for the Integrity-Policy response header. The header allows websites to ensure that only scripts with an integrity attribute will load. Errors will be logged to the console, with support for the Reporting API coming in early 2026.
Compressed Elliptic Curve Points in WebCrypto: Firefox 146 adds support for compressed elliptic curve points in WebCrypto. This reduces public key sizes by nearly half, saving bandwidth and storage, while still allowing the full point to be reconstructed mathematically. With this addition, Firefox now leads in WebCrypto web platform test coverage.
Going Forward
As a Firefox user, you automatically benefit from the security and privacy improvements described above through Firefox's regular automatic updates. If you're not using Firefox yet, you can download it to enjoy a fast, secure browsing experience - while supporting Mozilla's mission of a healthy, safe, and accessible web for everyone.
We'd like to thank everyone who helps make Firefox and the open web more secure and privacy-respecting.
Working on the mobile Firefox team gives you the opportunity to touch many different parts of the browser space. You often need to test the interaction between web content and the application's integration with another component: say, for example, a site registering for a WebPush subscription and Firefox using Firebase Cloud Messaging to deliver the encrypted message to the end user. Hunting around for an example to validate that everything is fine and dandy takes time.
Sometimes a simple test site for your use case is helpful for initial validation or comparison against other browsers.
Below is a list of tests that I've used in the past (in no particular order):
Push notifications requires a server to send a notification to the client (not the same as a WebNotification), so you can use this WebPush test site for validating just that.
There are Too Many™ different prompt and input element types. The MDN docs have the best collection of all of them.
Forms and Autocomplete
There are various form types and various heuristics to trigger completion options, so they deserve their own section. The more (test sites) the merrier!
Sign-up and login forms behave differently, so they are handy to test separately. For example, autofilling a generated password is useful on a registration form but not on a login one.
Make your own
If you need to make your own, try to write out the code yourself so you can understand the reduced test case. If it's not straightforward, try using the Testcase Reducer by Thomas Wisniewski.
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
This week saw a very nice win from doing overall less work in the compiler (https://github.com/rust-lang/rust/pull/151382). There were a few regressions, but only in artificial stress tests; we are keeping an eye on them.
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
AI is here, and has started to define how we search, create, communicate - and how the web itself works. Some of you love AI, but want it to work better for yourselves and society. Some of you hate it, and don't want any of it.
We get it.
We also know, as Mozilla, that the future is being decided now. The big tech players are racing to lock down and control AI, and make sure it works on their terms, not ours.
Updates on what's new and coming with our core products, Firefox and Thunderbird.
A look at how Mozilla is investing in open source AI and privacy-preserving tech.
A snapshot of our financials, and how we allocate resources to balance mission and money.
Stories from people across Mozilla and our community who are building tools, products, and movements that push AI in a better direction
And, a commitment to giving you a choice in everything we do - including the option to say no to AI altogether.
All of this is guided by Mozilla's double bottom line: advancing the public interest and building sustainable businesses. This model lets us invest patiently, say no to extractive approaches, and support ecosystems that would otherwise struggle to exist.
A vision for what comes next
The future of AI - and the future of the web - is ours to define. We want that future to be one where humanity thrives and technology helps out.
If you believe the future of AI should be human-centered, transparent, and open, we invite you to explore the report, share it with your community, and build that future with us.
Huge thanks to :arai for working on dark mode support! It's currently not enabled by default but will be soon. It can be enabled through window.toggleDarkMode().
[arai-a] Add a menu to copy the Marker Table as text (#5732)
[arai-a] Do not apply sticky tooltip on double click (#5754)
[Markus Stange] Allow seeing different assembly code for the same function (#5349)
[fatadel] Align double-click behavior of stack chart with flame graph (#5782)
[Markus Stange] Add a Focus Self transform (#5774)
[Markus Stange] Fix "scroll to hotspot" functionality in the source view + assembly view (#5759)
[Nazım Can Altınova] Enable the Turkish locale in production (#5786)
Who will build the next version of the web? Mozilla wants to make it more likely that it's you. We are committing time and resources to bring experienced builders into Mozilla for a short, programmed period, to work with our New Products leaders to build tools and products for the next version of the web.
A different program from a different kind of company
Our mission at Mozilla is to ensure the internet is a global public resource, open and accessible to all. We know that there are a lot of gifted, experienced and thoughtful technologists, designers, and builders who care as deeply about the internet as we do - but seek a different environment to explore what's possible than what they might find across the rest of the tech industry.
Pioneers is intentionally structured to make it possible for those who don't typically get the opportunity to create new products to participate. The program is paid, flexible (i.e. you can do it part-time if needed), and bounded. We're not asking you to gamble your livelihood in order to explore how we can improve the internet.
This matters to me
My own career advanced the most dramatically in moments when change was piling on top of change and most people couldn't grasp the compounding effects of these shifts. That's why I stepped up to start an independent blogging company back in 2002 (Gizmodo) and again in 2004 (Engadget).
It's also why, a lifetime later, I joined Mozilla to lead New Products, where I've had the good fortune of supporting the development of meaningful new Mozilla tools like Solo, Tabstack, 0DIN, and an enterprise version of Firefox.
Changing the game
We've designed Pioneers to make space for technologists - professionals comfortable working across code, product, and systems - to collaborate with Mozilla on foundational ideas for AI and the web in a way that reflects these shared values.
We're looking for people to work with; this is not a contest for ideas, and you don't apply with a pitch deck. Our vision:
Pioneers are paid. Participants receive compensation for their time and work.
It's flexible, designed so participants can be in the program and continue to work on existing commitments. You don't have to put your life on hold.
It's hands-on. Builders work closely with Mozilla leaders to prototype and pressure-test concepts.
It's bounded. The program is time-limited and focused, with clear expectations on both sides.
It's real. Some ideas will move forward inside Mozilla. Some will not - and they'll still be valuable. If it makes sense, there will be an opportunity for you to join Mozilla full-time to bring your concept to market.
Applications are open Monday, Jan. 26 and close Monday, Feb. 16, 2026.
Pioneers isn't an accelerator, and it isn't a traditional residency. It's a way to explore foundational ideas for AI and the web in a high-trust environment, with the possibility of continuing that work at Mozilla.
If this sounds like the kind of work you want to do, we want to hear from you. Hopefully, by reading to the end of this post, you're either thinking of applying yourself - or know someone who should. I encourage you to check out (and share) Mozilla Pioneers, thanks!
Shout-out to new contributor Lorenz A, who fixed almost 70 bugs over the past few weeks! Most of this work was modernizing some of our DevTools code to use ES6 classes (example)
Split View has been enabled by default in Nightly! You can right click on a tab to add it to a split view, and from there select the other tab you'd like to view in the split. Or, multi-select 2 tabs with Ctrl/Cmd, and choose "Open in Split View" from the tab context menu
@rejects for indicating whether an async (or promise-returning) function may reject. This is not standard JSDoc, and TypeScript doesn't have an equivalent, so for now this is an alternative way we can use to at least document the expectations.
Quick update this week - OS Integration intern Nishu is traveling a long road to add support for storing profiles in the secure macOS App Group container (bug 1932976); over the break she fixed
Daisuke fixed multiple address bar bugs, including broken "switch to [tab group]" behaviour, persisted search terms, and a missing unified search button in private new tabs (2002936, 1968218, 1961568)
Jeremy Swinarton aligned the tab note editor to spec in Tab note content textarea spec, refining textarea sizing, focus/blur save behavior, and keyboard shortcuts for consistent editing and better a11y across platforms.
Stephen Thompson added a one-click entry point in hover previews via Add note button to tab hover preview, surfacing Tab Notes in the preview tooltip (behind notes and hover-preview prefs) with full keyboard focusability and theme-aware iconography.
Stephen Thompson hooked History API updates in Update canonical URL for tab note on pushState to recompute the canonical URL on pushState/replaceState/popstate, preventing stale or misplaced notes during SPA navigations.
Last year brought a wealth of new features and fixes to Firefox on Linux. Besides numerous improvements and bug fixes, I want to highlight some major achievements: HDR video playback support, reworked rendering for fractionally scaled displays, and asynchronous rendering implementation. All this progress was enabled by advances in the Wayland compositor ecosystem, with new features implemented by Mutter and KWin.
HDR
The most significant news on the Wayland scene is HDR support, tracked by Bug 1642854. It's disabled by default but can be enabled in recent Wayland compositors using the gfx.wayland.hdr preference at about:config (or by gfx.wayland.hdr.force-enabled if you don't have an HDR display).
HDR mode uses a completely different rendering path, similar to the rendering used on Windows and macOS. It's called native rendering or composited rendering, and it places specific application layers directly into the Wayland compositor as subsurfaces.
The first implementation was done by Robert Mader (presented at FOSDEM), and I unified the implementation for HDR and non-HDR rendering paths as a new WaylandSurface object.
The Firefox application window is actually composited from multiple subsurfaces layered together. This design allows HDR content like video frames to be sent directly to the screen while the rest of the application (controls and HTML page) remains in SDR mode. It also enables power-efficient rendering when video frames are decoded on the graphics card and sent directly to the screen (zero-copy playback). In fullscreen mode, this rendering is similar to mpv or mplayer playback and uses minimal power resources.
I also received valuable feedback from AMD engineers who suggested various improvements to HDR playback. We removed unnecessary texture creation over decoded video frames (they're now displayed directly as wl_buffers without any GL operations) and implemented wl_buffer recycling as mpv does.
For HDR itself (since composited rendering is available for any video playback), Firefox on Wayland uses the color-management-v1 protocol to display HDR content on screen, along with BT.2020 video color space and PQ color transfer function. It uses 10-bit color vectors, so you need VP9 version 2 to decode it in hardware. Firefox also implements software decoding and direct upload to dmabuf frames as a fallback.
The basic HDR rendering implementation is complete, and we're now in the testing and bug-fixing phase. Layered rendering is quite tricky as it involves rapid wl_surface mapping/unmapping and quick wl_buffer switches, which are difficult to handle properly. HDR rendering of scaled surfaces is still missing; we need fractional-scale-v2 for this (see below), which allows positioning scaled subsurfaces directly in device pixels. We also need to test composited/layered rendering for regular web page rendering to ensure it doesn't drain your battery. You're very welcome to test it and report any bugs you find.
Fractional scale
The next major work was done for fractional scale rendering, which shipped in Firefox 147.0. We updated the rendering pipeline and widget sizing to support fractionally scaled displays (scales like 125%, etc.). This required reworking the widget size code to strictly upscale window/surface sizes and coordinates and never downscale them, as downscaling introduces rounding errors.
Another step was identifying the correct rounding algorithm for Wayland subsurfaces and implementing it. Wayland doesn't define rounding for it, only for toplevel windows, so we're in a gray area here. I was directed to Stable rounding by Michel Daenzer. It's used by Mutter and Sway so Firefox implements it for those two compositors while using a different implementation for KWin. This may be updated to use the fractional-scale-v2 protocol when it becomes available.
Fractional scaling is enabled by default, and you should see crisp and clear output regardless of your desktop environment or screen scale.
Asynchronous rendering
Historically, Firefox disabled and re-enabled the rendering pipeline for scale changes, window create/destroy events, and hide/show sequences. This stems from Wayland's architecture, where a Wayland surface is deleted when a window becomes invisible or is submitted to the compositor with mismatched size/scale (e.g., 111 pixels wide at 200% scale).
Such rendering disruptions cause issues with multi-threaded rendering: they need to be synchronized among threads, and we must ensure surfaces with the wrong scale aren't sent to the screen, as this leads to application crashes due to protocol errors.
Firefox 149.0 (recent nightly) has a reworked Wayland painting pipeline (Bug 1739232) for both EGL and software rendering. Scale management was moved from wl_buffer fixed scale to wp_viewport, which doesn't cause protocol errors when size/scale doesn't match (producing only blurred output instead of crashes).
We also use a clever technique: the rendering wl_surface / wl_buffer / EGLWindow is created right after window creation and before it's shown, allowing us to paint to it offscreen. When a window becomes visible, we only attach the wl_surface as a subsurface (making it visible) and remove the attachment when it's hidden. This allows us to keep painting and updating the backbuffer regardless of the actual window status, and the synchronized calls can be removed.
This brings speed improvements when windows are opened and closed, and Linux rendering is now synchronized with the Windows and macOS implementations.
… and more
Other improvements include a screen lock update for audio playback, which allows the screen to dim but prevents sleep when audio is playing. We also added asynchronous Wayland object management to ensure we cleanly remove Wayland objects without pending callbacks, along with various stability fixes.
And there are even more challenges waiting for us Firefox Linux hackers:
Wayland session restore (session-restore-v1) to restore Firefox windows to the correct workspace and position.
Implement drag and drop for the Firefox main window, and possibly add a custom Wayland drag and drop handler to avoid Gtk3 limitations and race conditions.
Utilize the fractional-scale-v2 protocol when it becomes available.
Investigate using xdg-positioner directly instead of Gtk3 widget positioning to better handle popups.
Vulkan video support via the ffmpeg decoder to enable hardware decoding on NVIDIA hardware.
And of course, we should plan properly before we even start. Ready, Scrum, Go!