12 Sep 2025

Planet Mozilla

The Rust Programming Language Blog: crates.io phishing campaign

We received multiple reports of a phishing campaign targeting crates.io users (from the rustfoundation.dev domain name), mentioning a compromise of our infrastructure and asking users to authenticate to limit damage to their crates.

These emails are malicious and come from a domain name not controlled by the Rust Foundation (nor the Rust Project), seemingly with the purpose of stealing your GitHub credentials. We have no evidence of a compromise of the crates.io infrastructure.

We are taking steps to get the domain name taken down and to monitor for suspicious activity on crates.io. Do not follow any links in these emails if you receive them, and mark them as phishing with your email provider.

If you have any further questions please reach out to security@rust-lang.org and help@crates.io.

12 Sep 2025 12:00am GMT

11 Sep 2025

Planet Mozilla

Mozilla Thunderbird: State of the Thunder: Mozilla Connect Updates

Welcome back to the latest season of State of the Thunder! After a short break, we're back and ready to go. Michael Ellis, our Manager of Community Programs, is helping Alessandro with hosting duties. Along with members of the Thunderbird team and community, they're answering your questions and keeping everyone updated on our roadmap progress for our projects.

In this episode, we're talking about our initiatives for regular community feedback, tackling a variety of questions, and providing status updates on the top 20-ish Mozilla Connect Thunderbird suggestions.

Community Questions

Accidental Message Order Sorting

Question: Clearly the number one issue with Thunderbird that breaks for many of my clients is that if they accidentally click on a column header the sorting of the message is changed. "My messages are gone" is what I then hear all the time from my clients. It would be wonderful if the sorting of the message could be locked and not changed through such an easy operation, which often is invoked accidentally.

Answer: This is a great usability question and a complicated one. Alessandro recommends switching to Cards View, as it's harder to accidentally change. This is one of the reasons we implemented it! However, we can definitely explore options to lock the message order through enterprise policies. We would want to be mindful of users who want to change the order.

Michael discusses the option of a pop-up warning that could inform the user they're about to change the message sorting order. Increased friction through a pop-up, though, as Alessandro and Jesse Miksic from the design team both point out, can cause its own issues. But this is certainly something we'll look into more!

Move Focus Keyboard Shortcut

Question: Could there be consideration to add a keystroke to immediately move the focus to the list of messages in the currently open mailbox? Even better if keystrokes that would automatically do this for the inbox folder or the default account.

Answer: Alessandro notes Thunderbird already has this ability, but it's not super noticeable. The F6 key allows you to switch focuses between the main areas of the application. So we're approaching this problem from two directions: implementing tabular keyboard navigation and customizable shortcuts. We don't have an expected delivery date on the latter, but we plan to have a searchable keyboard shortcut hub. We know our interface can be a little daunting, and we're tackling it from multiple angles.

Option for Simplified Thunderbird?

Question: I work for a company which develops a Raspberry Pi-based computer made specifically for blind consumers. Thunderbird is installed on this device by default. Many of our users are not tech-savvy and just want a simple email client. I would love to have an easy method for removing some of the clutter, with the goal of having a UI with fewer controls. Currently, users often have to press the tab key many times just to move to the list of messages in their inbox. For some users, all they really want is the message list and the list of folders, with the menu bar open, and that's it. A bit like we once had with Outlook Express.

Answer: Alessandro and Ryan Sipes, our director, have talked about the need for a lighter version of Thunderbird a lot. This would help users who don't need all the power of Thunderbird, and just want to focus on their messages (not even their folders). However, Ryan doesn't want a separate version of Thunderbird we'd need to maintain, but to build a better usability curve into Thunderbird. Answering this question means having a Thunderbird that is simple by default, but more powerful and customizable if needed, without upsetting our current users.

Heather Ellsworth from the community team also supports the idea of a user preference for a lighter Thunderbird. At conferences and co-working spaces, she constantly hears the requests for a slightly simpler version of Thunderbird.

Thunderbird PPA

Question: I'm using Linux, one of the Ubuntu-derived flavors. And I have Thunderbird 128.14 ESR installed through the Mozilla Team PPA. I would love to know when the ESR version of 140 will be available in this PPA.

Answer: Heather, who works a lot with Linux packaging, takes this question. This PPA isn't an official distribution channel for Thunderbird, which leads to some confusion. Our official Linux packages are the Snap and Flatpak, and the tarball available on our website. A community member named Rico, whose handle is ricotz, maintains this PPA. In the PPA, you can click on his name to learn how to contact him for questions like this.

Top 20-ish Mozilla Connect Posts

If you've ever posted an idea to make Thunderbird better in a blog comment, social media post, or a SUMO (Mozilla Support) thread, you've probably been prompted to share your suggestion on Mozilla Connect. This helps us keep our community feedback in one place, which helps our team prioritize features the community wants!

Where we're falling short, however, is keeping the community updated on the progress of their suggestions. With a dedicated community team, this is something we can do better! Right now, we'd like to provide a quick status update on the top 20-ish Mozilla Connect posts related to Thunderbird.

Sync

We implemented this in the Daily build of the desktop app last year, using a staging environment for Firefox Sync. But Firefox Sync is called Firefox Sync because it's built for Firefox. Thunderbird profiles, in comparison, have a lot more data points. This meant we had to build something completely different.

As we started to spin up Thunderbird Pro, we decided it made more sense to have a Thunderbird account that would manage everything, including Sync. Unfortunately, this meant a lot of delays. So Sync is still on our radar, and we hope to have it next year, barring further complications.

GNOME Desktop Integration

Yes, we're working on this, starting with native desktop notifications. Ideally, we want to be integrated with more Linux desktop environments through expanded native APIs.

Color for Thunderbird Accounts

We already have it! You can access your account settings and customize the colors of each account.

Show full email address on mouse-over

Already have this too. If this doesn't happen, it's a bug, and we'd definitely appreciate a report at Bugzilla.

Don't save passwords as plain text, but rather integrate with the OS storage system

We're exploring this as part of both our increased native OS integrations and our work on strengthening security in Thunderbird.

Thunderbird should, by default, have all telemetry as an opt-in option, or have zero telemetry

We're already adopting opt-in telemetry for an upcoming release of Thunderbird for Android, and we want to make this the default for desktop in the future. While desktop is currently opt-out, Alessandro stresses we only have a few limited telemetry probes for desktop Thunderbird. And those probes can show how the majority of users are using the app and help us avoid bad UX choices.

Thunderbird for iPhone and iPad

In progress!

JMAP Support

Currently in the works for the upcoming iOS release, with plans for support on desktop and Android. Thundermail will also come with JMAP.

Firefox Translate

Exploring this is low on our list right now, both because of performance concerns and because we want to be very cautious with anything concerning machine learning, which includes translation.

Watch the Video (Also on Peertube)

Listen on the Thundercast!

Our Next State of the Thunder

Anxious to know the rest of the top 20 Mozilla Connect posts? Join us on Tuesday, September 16 at 3 PM Pacific (22:00 UTC)! Find out how to join on the TB Planning mailing list. We think this will be a great season and who knows, by the end of it, we may even have a jingle. See you next time!

The post State of the Thunder: Mozilla Connect Updates appeared first on The Thunderbird Blog.

11 Sep 2025 7:20pm GMT

10 Sep 2025

Planet Mozilla

This Week In Rust: This Week in Rust 616

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is GrimoireCSS, a CSS engine crafted in Rust, focusing on unmatched flexibility, reusable dynamic styling, and optimized performance for every environment.

Thanks to Dmitrii Shatokhin for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

390 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Overall, a fairly neutral week with relatively few changes affecting performance landing.

Triage done by @simulacrum. Revision range: 75ee9ffd..f13ef0d7

1 Regression, 5 Improvements, 3 Mixed; 4 of them in rollups. 33 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference, Leadership Council or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-09-10 - 2025-10-08 🦀

Virtual
Asia
Europe
North America
Oceania:

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Hello,

We are sorry you aren't happy with the state of the async in the current edition of Rust. The memory ownership intuition you were meant to develop when working with single-threaded and/or parallel execution turned to be too expensive to port into our zero-cost concurrency framework, reinvented from scratch for the ultimate benefit to no one in particular.

We aren't planning to do anything about it.

Rust Async Support - International Department

- 00100011 on rust-users

Thanks to Aleksander Krauze for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

10 Sep 2025 4:00am GMT

The Rust Programming Language Blog: Rust compiler performance survey 2025 results

Two months ago, we launched the first Rust Compiler Performance Survey, with the goal of helping us understand the biggest pain points of Rust developers related to build performance. It is clear that this topic is very important for the Rust community, as the survey received over 3 700 responses! We would like to thank everyone who participated in the survey, and especially those who described their workflows and challenges with an open answer. We plan to run this survey annually, so that we can observe long-term trends in Rust build performance and its perception.

In this post, we'll show some interesting results and insights that we got from the survey and promote work that we have already done recently or that we plan to do to improve the build performance of Rust code. If you would like to examine the complete results of the survey, you can find them here.

And now strap in, as there is a lot of data to explore! As this post is relatively long, here is an index of topics that it covers:

Overall satisfaction

To understand the overall sentiment, we asked our respondents to rate their satisfaction with their build performance, on a scale from 0 (worst) to 10 (best). The average rating was 6, with most people rating their experience with 7 out of 10:

[Chart: satisfaction]

To help us understand the overall build experience in more detail, we also analyzed all open answers (over a thousand of them) written by our respondents, to help us identify several recurring themes, which we will discuss in this post.

One thing that is clear from both the satisfaction rating and the open answers is that the build experience differs wildly across users and workflows, and it is not as clear-cut as "Rust builds are slow". We actually received many positive comments about users being happy with Rust build performance, and appreciation for it being improved vastly over the past several years to the point where it stopped being a problem.

People also liked to compare their experience with other competing technologies. For example, many people wrote that the build performance of Rust is not worse, or is even better, than what they saw with C++. On the other hand, others noted that the build performance of languages such as Go or Zig is much better than that of Rust.

While it is great to see some developers being happy with the state we have today, it is clear that many people are not so lucky, and Rust's build performance limits their productivity. Around 45% of respondents who answered that they are no longer using Rust said that at least one of the reasons why they stopped was long compile times.

In our survey we received a lot of feedback pointing out real issues and challenges in several areas of build performance, which is what we will focus on in this post.

Important workflows

The challenges that Rust developers experience with build performance are not always as simple as the compiler itself being slow. There are many diverse workflows with competing trade-offs, and optimizing build performance for them might require completely different solutions. Some approaches for improving build performance can also be quite unintuitive. For example, stabilizing certain language features could help remove the need for certain build scripts or proc macros, and thus speed up compilation across the Rust ecosystem. You can watch this talk from RustWeek about build performance to learn more.

It is difficult to enumerate all possible build workflows, but we at least tried to ask about workflows that we assumed are common and could limit the productivity of Rust developers the most:

[Chart: limiting-workflows]

We can see that all the workflows that we asked about cause significant problems to at least a fraction of the respondents, but some of them more so than others. To gain more information about the specific problems that developers face, we also asked a more detailed, follow-up question:

[Chart: problems]

Based on the answers to these two questions and other experiences shared in the open answers, we identified three groups of workflows that we will discuss next:

Incremental rebuilds

Waiting too long for an incremental rebuild after making a small source code change was by far the most common complaint in the open answers that we received, and it was also the most common problem that respondents said they struggle with. Based on our respondents' answers, this comes down to three main bottlenecks:

Several users have mentioned that they would like to see Rust perform hot-patching (such as the subsecond system used by the Dioxus UI framework or similar approaches used e.g. by the Bevy game engine). While these hot-patching systems are very exciting and can produce truly near-instant rebuild times for specialized use-cases, it should be noted that they also come with many limitations and edge-cases, and it does not seem that a solution that would allow hot-patching to work in a robust way has been found yet.

To gauge the typical rebuild latency, we asked our respondents to pick a single Rust project that they work on and which causes them to struggle with build times the most, and tell us how long they have to wait for it to be rebuilt after making a code change.

[Chart: rebuild-wait-time]

Even though many developers do not actually experience this latency after each code change, as they consume results of type checking or inline annotations in their code editor, the fact that 55% of respondents have to wait more than ten seconds for a rebuild is far from ideal.

If we partition these results based on answers to other questions, it is clear that the rebuild times depend a lot on the size of the project:

[Chart: rebuild-wait-time-code-size]

And to a lesser factor also on the number of used dependencies:

[Chart: rebuild-wait-time-dep-count]

We would love to get to a point where the time needed to rebuild a Rust project is dependent primarily on the amount of performed code changes, rather than on the size of the codebase, but clearly we are not there yet.

Type checking and IDE performance

Approximately 60% of respondents say that they use cargo terminal commands to type check, build or test their code, with cargo check being the most commonly used command performed after each code change:

[Chart: cargo-commands]

While the performance of cargo check does not seem to be as big of a blocker as e.g. incremental rebuilds, it also causes some pain points. One of the most common ones present in the survey responses is the fact that cargo check does not share the build cache with cargo build. This causes additional compilation to happen when you run e.g. cargo check several times to find all type errors, and when it succeeds, you follow up with cargo build to actually produce a built artifact. This workflow is an example of competing trade-offs, because sharing the build cache between these two commands by unifying them more would likely make cargo check itself slightly slower, which might be undesirable to some users. It is possible that we might be able to find some middle ground to improve the status quo though. You can follow updates to this work in this issue.

A related aspect is the latency of type checking in code editors and IDEs. Around 87% of respondents say that they use inline annotations in their editor as the primary mechanism of inspecting compiler errors, and around 33% of them consider waiting for these annotations to be a big blocker. In the open answers, we also received many reports of Rust Analyzer's performance and memory usage being a limiting factor.

The maintainers of Rust Analyzer are working hard on improving its performance. Its caching system is being improved to reduce analysis latency, its distributed builds are now optimized with PGO, which provided 15-20% performance wins, and work is underway to integrate the compiler's new trait solver into Rust Analyzer, which could eventually also result in increased performance.

More than 35% of users said that they consider the IDE and Cargo blocking one another to be a big problem. There is an existing workaround for this, where you can configure Rust Analyzer to use a different target directory than Cargo, at the cost of increased disk space usage. We realized that this workaround has not been documented in a very visible way, so we added it to the FAQ section of the Rust Analyzer book.

Clean and CI builds

Around 20% of participants responded that clean builds are a significant blocker for them. In order to improve their performance, you can try a recently introduced experimental Cargo and compiler option called hint-mostly-unused, which can in certain situations help improve the performance of clean builds, particularly if your dependencies contain a lot of code that might not actually be used by your crate(s).

One area where clean builds might happen often is Continuous Integration (CI). 1495 respondents said that they use CI to build Rust code, and around 25% of them consider its performance to be a big blocker for them. However, almost 36% of respondents who consider CI build performance to be a big issue said that they do not use any caching in CI, which we found surprising. One explanation might be that the generated artifacts (the target directory) are too large for effective caching and run into usage limits of CI providers, which is something that we saw mentioned repeatedly in the open answers section. We have recently introduced an experimental Cargo and compiler option called -Zembed-metadata that is designed to reduce the size of the target directories, and work is also underway to regularly garbage collect them. This might help with the disk space usage issue somewhat in the future.

One additional way to significantly reduce disk usage is to reduce the amount of generated debug information, which brings us to the next section.

Debug information

The default Cargo dev profile generates full debug information (debuginfo) both for workspace crates and also all dependencies. This enables stepping through code with a debugger, but it also increases disk usage of the target directory, and crucially it makes compilation and linking slower. This effect can be quite large, as our benchmarks show a possible improvement of 2-30% in cycle counts if we reduce the debuginfo level to line-tables-only (which only generates enough debuginfo for backtraces to work), and the improvements are even larger if we disable debuginfo generation completely [1].

However, if Rust developers debug their code after most builds, then this cost might be justified. We thus asked them how often they use a debugger to debug their Rust code:

[Chart: debugger]

Based on these results, it seems that the respondents of our survey do not actually use a debugger all that much [2].

However, when we asked people if they require debuginfo to be generated by default, the responses were much less clear-cut:

[Chart: required-debuginfo]

This is the problem with changing defaults: it is challenging to improve the workflows of one user without regressing the workflow of another. For completeness, here are the answers to the previous question partitioned on the answer to the "How often do you use a debugger" question:

[Chart: required-debuginfo-debugger]

It was surprising for us to see that around a quarter of respondents who (almost) never use a debugger still want to have full debuginfo generated by default.

Of course, you can always disable debuginfo manually to improve your build performance, but not everyone knows about that option, and defaults matter a lot. The Cargo team is considering ways of changing the status quo, for example by reducing the level of generated debug information in the dev profile, and introducing a new built-in profile designed for debugging.

Workarounds for improving build performance

Build performance of Rust is affected by many different aspects, including the configuration of the build system (usually Cargo) and the Rust compiler, but also the organization of Rust crates and used source code patterns. There are thus several approaches that can be used to improve build performance by either using different configuration options or restructuring source code. We asked our respondents if they are even aware of such possibilities, whether they have tried them and how effective they were:

[Chart: compile-time-improvement-mechanisms]

It seems that the most popular (and effective) mechanisms for improving build performance are reducing the number of dependencies and their activated features, and splitting larger crates into smaller crates. The most common way of improving build performance without making source code changes seems to be the usage of an alternative linker. The mold and LLD linkers in particular seem to be very popular:

[Chart: alternative-linker, plus a wordcloud of open answers]

We have good news here! The most popular Linux target, x86_64-unknown-linux-gnu, will start using the LLD linker in the next Rust stable release, resulting in faster link times by default. Over time, we will be able to evaluate how disruptive this change is to the overall Rust ecosystem, and whether we could e.g. switch to a different (even faster) linker.

Build performance guide

We were surprised by the relatively large number of users who were unaware of some approaches for improving compilation times, in particular those that are very easy to try and typically do not require source code changes (such as reducing debuginfo or using a different linker or codegen backend). Furthermore, almost 42% of respondents have not tried to use any mechanism for improving build performance whatsoever. While this is not totally unexpected, as some of these mechanisms require using the nightly toolchain or making non-trivial changes to source code, we think that one of the reasons is also simply that Rust developers might not know that these mechanisms are available. In the open answers, several people also noted that they would appreciate some sort of official guidance from the Rust Project about such mechanisms for improving compile times.

It should be noted that the mechanisms that we asked about are in fact workarounds that present various trade-offs, and these should always be carefully considered. Several people have expressed dissatisfaction with some of these workarounds in the open answers, as they find it unacceptable to modify their code (which could sometimes result e.g. in increased maintenance costs or worse runtime performance) just to achieve reasonable compile times. Nevertheless, these workarounds can still be incredibly useful in some cases.

The feedback that we received shows that it might be beneficial to spread awareness of these mechanisms in the Rust community more, as some of them can make a really large difference in build performance, but also to candidly explain the trade-offs that they introduce. Even though several great resources that cover this topic already exist online, we decided to create an official guide for optimizing build performance (currently work-in-progress), which will likely be hosted in the Cargo book. The aim of this guide is to increase the awareness of various mechanisms for improving build performance, and also provide a framework for evaluating their trade-offs.

Our long-standing goal is to make compilation so fast that similar workarounds will not be necessary anymore for the vast majority of use-cases. However, there is no free lunch: the combination of Rust's strong type system guarantees, its compilation model and its heavy focus on runtime performance often goes against very fast (re)build performance, and might require usage of at least some workarounds. We hope that this guide will help Rust developers learn about them and evaluate them for their specific use-case.

Understanding why builds are slow

When Rust developers experience slow builds, it can be challenging to identify where exactly the compilation process is spending time, and what the bottleneck could be. It seems that only very few Rust developers leverage tools for profiling their builds:

[Chart: profiling-tools]

This hardly comes as a surprise. There are currently not that many ways of intuitively understanding the performance characteristics of Cargo and rustc. Some tools offer only a limited amount of information (e.g. cargo build --timings), and the output of others (e.g. -Zself-profile) is very hard to interpret without knowledge of the compiler internals.

To slightly improve this situation, we have recently added support for displaying link times to the cargo build --timings output, to provide more information about the possible bottleneck in crate compilation (note this feature has not been stabilized yet).

Long-term, it would be great to have tooling that could help Rust developers diagnose compilation bottlenecks in their crates without them having to understand how the compiler works. For example, it could help answer questions such as "Which code had to be recompiled after a given source change" or "Which (proc) macros take the longest time to expand or produce the largest output", and ideally even offer some actionable suggestions. We plan to work on such tooling, but it will take time to manifest.

One approach that could help Rust compiler contributors understand why Rust (re)builds are slow "in the wild" is the opt-in compilation metrics collection initiative.

What's next

There are more interesting things in the survey results, for example how answers to selected questions differ based on the operating system used. You can examine the full results in the full report PDF.

We would like to thank once more everyone who has participated in our survey. It helped us understand which workflows are the most painful for Rust developers, and especially the open answers provided several great suggestions that we tried to act upon.

Even though the Rust compiler is getting increasingly faster every year, we understand that many Rust developers require truly significant improvements to improve their productivity, rather than "just" incremental performance wins. Our goal for the future is to finally stabilize long-standing initiatives that could improve build performance a lot, such as the Cranelift codegen backend or the parallel compiler frontend. One such initiative (using a faster linker by default) will finally land soon, but the fact that it took many years shows how difficult it is to make such large, cross-cutting changes to the compilation process.

There are other ambitious ideas for reducing (re)build times, such as avoiding unnecessary workspace rebuilds or e.g. using some form of incremental linking, but these will require a lot of work and design discussions.

We know that some people are wondering why it takes so much time to achieve progress in improving the build performance of Rust. The answer is relatively simple. These changes require a lot of work, domain knowledge (that takes a relatively long time to acquire) and many discussions and code reviews, and the pool of people that have time and motivation to work on them or review these changes is very limited. Current compiler maintainers and contributors (many of whom work on the compiler as volunteers, without any funding) work very hard to keep up with maintaining the compiler and keeping it working with the high-quality bar that Rust developers expect, across many targets, platforms and operating systems. Introducing large structural changes, which are likely needed to reach massive performance improvements, would require a lot of concentrated effort and funding.

[1] This benchmark was already performed using the fast LLD linker. If a slower linker was used, the build time wins would likely be even larger.

[2] Potentially because of the strong invariants upheld by the Rust type system, and partly also because the Rust debugging experience might not be optimal for many users, which is feedback that we received in the State of Rust 2024 survey.

10 Sep 2025 12:00am GMT

09 Sep 2025

Planet Mozilla

Mozilla Open Policy & Advocacy Blog: Mozilla Meetup: “The Future of Competition: How to Save the Open Web”

The promise of an open and competitive internet hangs in the balance. From the future of AI agents to the underappreciated role of browsers and browser engines, the technological landscape continues to evolve. Getting the regulatory and enforcement backdrop right is critical: from competition bills in Congress to the EU's DMA, the stakes for innovation, privacy and consumer choice have never been higher.




The post Mozilla Meetup: "The Future of Competition: How to Save the Open Web" appeared first on Open Policy & Advocacy.

09 Sep 2025 2:10pm GMT

The Mozilla Blog: On Firefox for iOS, summarize a page with a shake or a tap

On mobile, browsing often means quick checks on small screens, squeezed in between everything else you're doing. We built Shake to Summarize on iOS to give you a clear summary with one move. That way, you can get what you need more easily and keep going.

How it works

Whether you just want the recipe, need to know something fast, or want to see if a long read is worth the time, Shake to Summarize gives you the key takeaways in seconds. To activate it, you can shake your phone or trigger the summary with a tap.

The feature works on webpages with fewer than 5,000 words. (Learn more about content you can summarize here.)

Here's an example of a summary:

[Image: three smartphone screens showing a Firefox article about translation updates summarized with Apple Intelligence]

If you have an iPhone 15 Pro or later with iOS 26+, the summary is created on your device using Apple Intelligence. On other devices with earlier iOS versions, the page text is sent securely to Mozilla cloud-based AI, which creates the summary and sends it back.

Rollout starts Sept. 9

Shake to Summarize starts rolling out this week in the U.S. for English-language Firefox iOS users, then expands from there.

You'll see a prompt the first time you come across content that can be summarized, and you can turn the feature on or off in settings anytime.

Designed for user choice

Sometimes you want the whole story. Sometimes you just need the highlights. Firefox mobile gives you both and leaves the choice to you.

Let us know what you think once you give it a try.


The post On Firefox for iOS, summarize a page with a shake or a tap appeared first on The Mozilla Blog.

09 Sep 2025 10:00am GMT

08 Sep 2025

Planet Mozilla

The Mozilla Blog: Defending an open web: What the Google search ruling means for the future


Last week, Judge Amit Mehta issued a ruling in the remedies proceedings of the U.S. v. Google LLC search case. Among the issues addressed, one key aspect that stood out for us was the court's ruling on Google's search agreements.

The Court ordered changes to Google's search agreements to give browsers more flexibility. Under the court's decision, Google cannot restrict browsers from defaulting to or offering different search engines / generative AI services. Nor can it prevent browsers from promoting these services.

Crucially, the Court considered but ultimately rejected a proposed ban on search payments to small, independent browsers like Firefox. If the ban had been enforced, it would have made it harder for independent browsers to compete, effectively reducing competition in the browser market.

In his reasoning, Judge Mehta cited Mozilla's testimony, recognizing that banning payments to smaller browsers would harm innovation, competition, and consumers, and would threaten the pro-competitive role of Mozilla in the ecosystem. Ensuring that Mozilla's Gecko - the only independent browser engine left - can continue to compete with Google and Apple is vital for the future of the open web.

The court also required a range of data sharing remedies - narrowing the scope of those proposed by the Department of Justice and State Attorneys General, while broadening their access. As Mozilla has discovered first-hand through previous antitrust cases and the implementation of the EU Digital Markets Act, ensuring that such remedies are effective in restoring competition requires careful attention and monitoring. Careful thought must also be given to protecting user privacy and security.

It will also be critical to ensure that these data remedies avoid simply transferring power from one tech giant to another - particularly given the focus on facilitating greater search competition through AI providers.

This balance is something we've stressed throughout the trial. True competition in search starts with a healthy marketplace, one where small and large companies can compete on merit, where consumers have choice, and where the best new products, features, and ideas have a chance.

As this case continues to unfold, one thing won't change: Mozilla's commitment to an internet that's open, accessible, and built for the public good. We've historically met market and regulatory shifts with creativity and care. Each moment has helped us grow and discover new ways to live out our mission, and we're invigorated about the path forward.

The post Defending an open web: What the Google search ruling means for the future appeared first on The Mozilla Blog.

08 Sep 2025 6:26pm GMT

Wladimir Palant: A look at a P2P camera (LookCam app)

I've got my hands on an internet-connected camera and decided to take a closer look, having already read about security issues with similar cameras. What I found far exceeded my expectations: fake access controls, bogus protocol encryption, completely unprotected cloud uploads and firmware riddled with security flaws. One could even say that these cameras are Murphy's Law turned solid: everything that could be done wrong has been done wrong here. While there is considerable prior research on these and similar cameras that outlines some of the flaws, I felt that the combination of severe flaws is reason enough to publish an article of my own.

My findings should apply to any camera that can be managed via the LookCam app. This includes cameras meant to be used with less popular apps of the same developer: tcam, CloudWayCam, VDP, AIBoxcam, IP System. Note that the LookCamPro app, while visually very similar, is technically quite different. It also uses the PPPP protocol for low-level communication but otherwise doesn't seem to be related, and the corresponding devices are unlikely to suffer from the same flaws.

[Graphic: the LookCam logo in the middle, surrounded by five devices with their hidden camera locations marked: a radio clock, a power outlet, a light switch, a USB charger, a bulb socket]

There seems to be little chance that things will improve with these cameras. I have no way of contacting either the hardware vendors or the developers behind the LookCam app. In fact, it looks like masking their identity was done on purpose here. But even if I could contact them, the cameras lack an update mechanism for their firmware. So fixing the devices already sold is impossible.

I have no way of knowing how many of these cameras exist. The LookCam app is currently listed with almost 1.5 million downloads on Google Play however. An iPhone and a Windows version of the app are also available but no public statistics exist here.

Contents

The highlights

The camera cannot be easily isolated from unauthorized access. It can either function as a WiFi access point, where setting a WiFi password isn't possible, or connect to an existing network, in which case it will insist on being connected to the internet. If internet access is removed the camera will go into a reboot loop. So you have the choice of letting anybody in the vicinity access this camera or allowing it to be accessed from the internet.

The communication of this camera is largely unencrypted. The underlying PPPP protocol supports "encryption" which is better described as obfuscation, but the LookCam app almost never makes use of it. Not that it would be of much help, the proprietary encryption algorithms being developed without any understanding of cryptography. These rely on static encryption keys which are trivially extracted from the app but should be easy enough to deduce even from merely observing some traffic.

The camera firmware is riddled with buffer overflow issues which should be trivial to turn into arbitrary code execution. Protection mechanisms like DEP or ASLR might have been a hurdle but these are disabled. And while the app allows you to set an access password, the firmware doesn't really enforce it. So access without knowing the password can be accomplished simply by modifying the app to skip the password checks.

The only thing preventing complete compromise of any camera is the "secret" device ID which has to be known in order to establish a connection. And by "secret" I mean that device IDs can generally be enumerated but they are "secured" with a five-letter verification code. Unlike with some similar cameras, the algorithm used to generate the verification code isn't public knowledge yet. So somebody wishing to compromise as many cameras as possible would need to either guess the algorithm or guess the verification codes by trying out all possible combinations. I suspect that both approaches are viable.

And while the devices themselves have access passwords which a future firmware version could in theory start verifying, the corresponding cloud service has no authentication beyond knowledge of the device ID. So any recordings uploaded to the cloud are accessible even if the device itself isn't. Even if the camera owner hasn't paid for the cloud service, anyone could book it for them if they know the device ID. The cloud configuration is managed by the server, so making the camera upload its recordings doesn't require device access.

The hardware

Most cameras connecting to the LookCam app are being marketed as "spy cam" or "nanny cam." These are made to look like radio clocks, USB chargers, bulb sockets, smoke detectors, even wall outlets. Most of the time their pretended functionality really works. In addition they have an almost invisible pinhole camera that can create remarkably good recordings. I've seen prices ranging from US$40 to hundreds of dollars.

The marketing spin says that these cameras are meant to detect when your house is being robbed. Or maybe they allow you to observe your baby while it is in the next room. Of course, in reality people are far more inventive in their use of tiny cameras. Students discovered them for cheating in exams. Gamblers use them to get an advantage at card games. And then there is of course the matter of non-consensual video recordings. So next time you stay somewhere where you don't quite trust the host you might want to search for "LookCam" on YouTube, just to get an idea of how to recognize such devices.

The camera I had was based on the Anyka AK39Ev330 hardware platform, essentially an ARM CPU with an attached pinhole camera. Presumably, other cameras connecting to the LookCam app are similar, even though there are some provisions for hardware differences in the firmware. The device looked very convincing, its main giveaway being unexpected heat development.

All LookCam cameras I've seen were strictly noname devices, it is unclear who builds them. Given the variety of competing form factors I suspect that a number of hardware vendors are involved. Maybe there is one vendor producing the raw camera kit and several others who package it within the respective casings.

The LookCam app

The LookCam app can manage a number of cameras. Some people demonstrating the app on YouTube had around 50 of them, though I suspect that these are camera sellers and not regular users.

[Screenshot: LookCam app as seen in the example screenshot. A screen titled "My Device" lists cameras with stills, named like G000001NRLXW, with Video, Photo, Files and More options at the bottom]

While each camera can be given a custom name, its unique ID is always visible as well. For example, the first camera listed in the screenshot above has the ID GHBB-000001-NRLXW which the app shortens into G000001NRLXW. Here GHBB is the device prefix: LookCam supports a number of these but only BHCC, FHBB and GHBB seem to exist in reality (abbreviated as B, F and G respectively). 000001 is the device number, each prefix can theoretically support a million devices. The final part is a five-letter verification code: NRLXW. This one has to be known for the device connection to succeed, it makes enumerating device IDs more difficult.
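To make this structure concrete, here is a minimal Rust sketch of the decomposition, along with the brute-force arithmetic for the five-letter code (my own illustration, nothing from the app itself):

```rust
// Minimal sketch (not code from the app) of how a device ID like
// "GHBB-000001-NRLXW" decomposes into its three parts.
fn parse_device_id(id: &str) -> Option<(&str, u32, &str)> {
    let mut parts = id.split('-');
    let prefix = parts.next()?;               // "GHBB": device prefix
    let number = parts.next()?.parse().ok()?; // 000001: device number
    let code = parts.next()?;                 // "NRLXW": verification code
    (parts.next().is_none() && code.len() == 5).then_some((prefix, number, code))
}

fn main() {
    let (prefix, number, code) = parse_device_id("GHBB-000001-NRLXW").unwrap();
    println!("prefix={prefix} number={number} code={code}");

    // Five upper-case letters means 26^5 possible verification codes,
    // fewer than 12 million: a tiny space by brute-force standards.
    println!("code space: {}", 26u32.pow(5)); // 11881376
}
```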

Out of the box, the device is in access point mode: it provides a WiFi access point with the device ID used as wireless network name. You can connect to that access point, and LookCam will be able to find the camera via a network broadcast, allowing you to configure it. You might be inclined to leave the camera in access point mode but it is impossible to set a WiFi password. This means that anybody in the vicinity can connect to this WiFi network and access the camera through it. So there is no way around configuring the camera to connect to your network.

Once the camera is connected to your network the P2P "magic" happens. LookCam app can still find the camera via a network broadcast. But it can also establish a connection when you are not on the same network. In other words: the camera can be accessed from the internet, assuming that someone knows its device ID.

Exposing the camera to internet-based attacks might not be something that you want, with it being in principle perfectly capable of writing its recordings to an SD card. But if you deny it access to the internet (e.g. via a firewall rule) the camera will try to contact its server, fail, panic and reboot. It will keep rebooting until it receives a response from the server.

Also worth noting: the device ID is displayed on pretty much every screen of this app. So when users share screenshots or videos of the app (which they do often) they will inevitably expose the ID of their camera, allowing anyone in the world to connect to it. I've seen very few cases of people censoring the device ID, clearly most of them aren't aware that it is sensitive information. The LookCam app definitely isn't communicating that it is.

The PPPP protocol

The basics

How can LookCam establish a connection to the camera having only its device ID? The app uses the PPPP protocol developed by the Chinese company CS2 Network. Supposedly, in 2019 CS2 Network had 300 customers with 20 million devices in total. This company supplies its customers with a code library and the corresponding server code which the customers can run as a black box. The idea of the protocol is providing an equivalent of the TCP protocol which implicitly locates a device by its ID and connects to it.

[Slide from a CS2 Network sales pitch: TCP on the left, P2P on the right; the calls to establish a TCP connection and write data are mirrored by equivalent PPC_-prefixed function calls]

Side note: Whoever designed this protocol didn't really understand TCP. For example, they tried to replicate the fault tolerance of TCP. But instead of making retransmissions an underlying protocol feature there are dozens of different (not duplicated but really different) retransmission loops throughout the library. Where TCP tries to detect network congestions and back off the PPPP protocol will send even more retransmitted messages, rendering suboptimal connections completely unusable.

Despite being marketed as Peer-to-Peer (P2P) this protocol relies on centralized servers. Each device prefix is associated with a set of three servers, this being the protocol designers' idea of high-availability infrastructure. Devices regularly send messages to all three servers, making sure that these are aware of the device's IP address. When the LookCam app (client) wants to connect to a device, it also contacts all three servers to get the device's IP address.

[Slide from a CS2 Network sales pitch, titled "High Availability Architecture": Redundant P2P Servers; Flexible and Expandable Relay Servers]
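As a rough sketch of what the client side of this lookup amounts to, consider the following Rust snippet. The real protocol uses its own obfuscated binary messages, so the "QUERY" text and the reply handling here are purely illustrative stand-ins:

```rust
use std::net::UdpSocket;
use std::time::Duration;

// Hypothetical sketch of the client-side device lookup: ask all three
// P2P servers for the device's current address and take whichever
// answers first.
fn lookup_device(servers: [&str; 3], device_id: &str) -> std::io::Result<String> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.set_read_timeout(Some(Duration::from_secs(2)))?;

    for server in servers {
        socket.send_to(format!("QUERY {device_id}").as_bytes(), server)?;
    }

    let mut buf = [0u8; 512];
    let (len, from) = socket.recv_from(&mut buf)?; // first reply wins
    println!("reply from server {from}");
    Ok(String::from_utf8_lossy(&buf[..len]).into_owned())
}
```

Querying all three servers at once and taking whichever reply arrives first is presumably what the sales pitch means by high availability.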

The P2P part is the fact that device and client try to establish a direct connection instead of relaying all communication via a central server. The complicating factor here are firewalls which usually disallow direct connections. The developers didn't like established approaches like Universal Plug and Play (UPnP), probably because these are often disabled for security reasons. So they used a trick called UDP hole punching. This involves guessing which port the firewall assigned to outgoing UDP traffic and then communicating with that port, so that the firewall considers incoming packets a response to previously sent UDP packets and allows them through.

Does that always work? That's doubtful. So the PPPP protocol allows for relay servers to be used as fallback, forwarding traffic from and to the device. But this direct communication presumably succeeds often enough to keep the traffic on PPPP servers low, saving costs.
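For illustration, here is a minimal Rust sketch of the hole punching idea itself, independent of anything LookCam-specific and glossing over the port guessing that the real implementation has to do:

```rust
use std::net::UdpSocket;

// Minimal sketch of UDP hole punching. Both peers learn each other's
// public address:port from the rendezvous server, then fire packets at
// each other. Each outgoing packet opens a "hole" in the sender's own
// NAT/firewall, so the peer's packets start being treated as replies
// and are let through.
fn punch_hole(local: &str, peer_public: &str) -> std::io::Result<UdpSocket> {
    let socket = UdpSocket::bind(local)?;
    for _ in 0..5 {
        // Early packets may be dropped by the peer's NAT; they mainly
        // exist to open our side of the hole.
        socket.send_to(b"punch", peer_public)?;
    }
    let mut buf = [0u8; 64];
    let (_, peer) = socket.recv_from(&mut buf)?; // a peer packet got through
    socket.connect(peer)?; // stick to the punched path from now on
    Ok(socket)
}
```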

The FHBB and GHBB device prefixes are handled by the same set of servers, named the "mykj" network in the LookCam app internally. Same string appears in the name of the main class as well, indicating that it likely refers to the company developing the app. This seems to be a short form of "Meiyuan Keji," a company name that translates as "Dollar Technology." I couldn't find any further information on this company however.

The BHCC device prefix is handled by a different set of servers that the app calls the "hekai" network. The corresponding devices appear to be marketed in China only.

The "encryption"

With potentially very sensitive data being transmitted one would hope that the data is safely encrypted in transit. The TCP protocol outsources this task to additional layers like TLS. The PPPP protocol on the other hand has built-in "encryption," in fact even two different encryption mechanisms.

First there is the blanket encryption of all transmitted messages. The corresponding function is aptly named P2P_Proprietary_Encrypt and it is in fact a very proprietary encryption algorithm. To my untrained eye there are a few issues with it:

In addition to that, some messages get special treatment. For example, the MSG_REPORT_SESSION_READY message is generally encrypted via the P2P_Proprietary_Encrypt function with a key that is hardcoded in the CS2 library and has the same value in every app I checked.

Some messages employ a different encryption method. In case of the networks supported by LookCam it is only the MSG_DEV_LGN_CRC message (device registering with the server) that is used instead of the plaintext MSG_DEV_LGN message. As this message is sent by the device, the corresponding encryption key is only present in the device firmware, not in the application. I didn't bother checking whether the server would still accept the unencrypted MSG_DEV_LGN message.

The encryption function responsible here is PPPP_CRCEnc. No, this isn't a cyclic redundancy check (CRC). It's rather an encryption function that extends the plaintext by four bytes of padding. The decryptor will validate the padding, presumably that's the reason for the name.

Of course, this still doesn't make it an authenticated encryption scheme, yet the padding oracle attack is really the least of its worries. While there is a complicated selection approach, it effectively results in a sequence of bytes that the plaintext is XOR'ed with. Same sequence for every single message being encrypted in this way. Wikipedia has the following to say on the security of XOR ciphers:

By itself, using a constant repeating key, a simple XOR cipher can trivially be broken using frequency analysis. If the content of any message can be guessed or otherwise known then the key can be revealed.

Well, yes. That's what we have here.
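To illustrate how little stands in an attacker's way, here is a toy Rust model of such a scheme, together with the textbook known-plaintext attack against it. The keystream and padding constants are placeholders of my choosing, not the real values:

```rust
// Toy model of a PPPP_CRCEnc-style scheme: append four known padding
// bytes, then XOR with a fixed byte sequence that is identical for
// every message. Constants below are placeholders, not the real values.
const KEYSTREAM: &[u8] = b"NOT-THE-REAL-KEYSTREAM";
const PADDING: [u8; 4] = [1, 2, 3, 4];

fn xor_with_keystream(data: impl Iterator<Item = u8>) -> Vec<u8> {
    data.zip(KEYSTREAM.iter().cycle()).map(|(b, k)| b ^ k).collect()
}

fn crc_enc(plaintext: &[u8]) -> Vec<u8> {
    xor_with_keystream(plaintext.iter().chain(&PADDING).copied())
}

fn crc_dec(ciphertext: &[u8]) -> Option<Vec<u8>> {
    let decrypted = xor_with_keystream(ciphertext.iter().copied());
    let body_len = decrypted.len().checked_sub(4)?;
    // Checking four padding bytes is the only "integrity" check there is.
    (decrypted[body_len..] == PADDING).then(|| decrypted[..body_len].to_vec())
}

// The known-plaintext attack: one captured message whose content can be
// guessed reveals the keystream, which then decrypts everything else.
fn recover_keystream(ciphertext: &[u8], known_plaintext: &[u8]) -> Vec<u8> {
    ciphertext.iter().zip(known_plaintext).map(|(c, p)| c ^ p).collect()
}

fn main() {
    let secret = crc_enc(b"top secret video data");
    assert_eq!(crc_dec(&secret).as_deref(), Some(&b"top secret video data"[..]));

    // Attacker guesses another message's plaintext...
    let keystream = recover_keystream(&crc_enc(b"hello"), b"hello");
    // ...and can now strip the "encryption" from any message prefix.
    let broken: Vec<u8> = secret.iter().zip(&keystream).map(|(c, k)| c ^ k).collect();
    assert_eq!(broken, b"top s".to_vec()); // first bytes recovered already
}
```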

It's doubtful that any of these encryption algorithms can deter even a barely determined attacker. But a blanket encryption with P2P_Proprietary_Encrypt (which LookCam doesn't enable) would have three effects:

  1. Network traffic is obfuscated, making the contents of transmitted messages not immediately obvious.
  2. Vulnerable devices cannot be discovered on the local network using the script developed by Paul Marrapese. This script relies on devices responding to an unencrypted search request.
  3. P2P servers can no longer be discovered easily and won't show up on Shodan for example. This discovery method relies on servers responding to an unencrypted MSG_HELLO message.

The threat model

It is obvious that the designers of the PPPP protocol don't understand cryptography, yet for some reason they don't want to use established solutions either. It cannot even be about performance, because AES is supported in hardware on these devices. But why, for example, this strange choice of encrypting a particular message while keeping the encryption of highly private data optional? Turns out, this is due to the threat model used by the PPPP protocol designers.

Screenshot of a presentation slide containing yellow text: "Malicious hacker can make thousands of Fake Device by writing a software program (As you know, the cost may be less than 1 USD), however." It then continues with red text: "It may cause thousands pcs of your product to malfunction, thus cost hundred thousands." (Slide from a CS2 Network sales pitch)

As a CS2 Network presentation deck shows, their threat model isn't concerned about data leaks. The concern is rather denial-of-service attacks caused by registering fake devices. And that's why this one message enjoys additional encryption. Not that I really understand the concern here, since the supposed hacker would still have to generate valid device IDs somehow. And if they can do that - well, them bringing the server down should really be the least concern.

But wait, there is another security layer here!

Screenshot of a presentation slide titled "Encrypted P2P Server IP String." The text says: "The encrypted string is given to platform owner only. Without correct string, Fake Device can't use P2P API to reach P2P Server. The API require encrypt P2P Server IP String, but not raw IP String." (Slide from a CS2 Network sales pitch)

This is about the "init string" already mentioned in the context of encryption keys above. It also contains the IP addresses of the servers, mildly obfuscated. While these were "given to platform owner only," these are necessarily contained in the LookCam app:

Screenshot of a source code listing with four fields g_hekai_init_string, g_mykj_init_string, g_ppcs_init_string, g_rtos_init_string. All four values are strings consisting of upper-case letters.

Some other apps contain dozens of such init strings, allowing them to deal with many different networks. So the threat model of the PPPP protocol cannot imagine someone extracting the "encrypted P2P server IP string" from the app. It cannot imagine someone reverse engineering the (trivial) obfuscation used here. And it definitely cannot imagine someone reverse engineering the protocol, so that they can communicate with the servers via "raw IP string" instead of their obfuscated one. Note: The latter has happened on several documented occasions already, e.g. here.

These underlying assumptions become even more obvious on this slide:

Screenshot of a presentation slide titled "Worry about security?" The text says: "Super Device can not spy any data it Relayed (No API for this)" (Slide from a CS2 Network sales pitch)

Yes, the only imaginable way to read out network data is via the API of their library. With a threat model like this, it isn't surprising that the protocol makes all the wrong choices security-wise.

The firmware

Once a connection is established, the LookCam app and the camera will exchange JSON-encoded messages like the following:

{
  "cmd": "LoginDev",
  "pwd": "123456"
}

A paper from the University of Warwick already took a closer look at the firmware and discovered something surprising. The LookCam app will send a LoginDev command like the one above to check whether the correct access password is being used for the device. But sending this command is entirely optional, and the firmware will happily accept other commands without a "login"!

The LookCam app will also send the access password along with every other command, yet this password isn't checked by the firmware either. I tried adding a trivial modification to the LookCam app that made it ignore the result of the LoginDev command. This in fact bypassed the authentication completely, allowing me to access my camera despite a wrong password.
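
Here is the anti-pattern as a minimal Python sketch - the message handling and dispatcher are assumptions for illustration, not the actual firmware code:

import json

def execute(msg: dict) -> dict:
    # stand-in for the real dispatcher (DownloadFile, searchWiFiList, ...)
    return {"cmd": msg["cmd"], "result": "ok"}

def handle_message(raw: bytes, device_password: str = "123456") -> dict:
    msg = json.loads(raw)
    if msg["cmd"] == "LoginDev":
        # The password is checked here, but no session state is recorded...
        return {"result": 0 if msg.get("pwd") == device_password else -1}
    # ...so every other command runs whether or not a login ever happened.
    return execute(msg)

# A client that simply never sends LoginDev gets full access:
print(handle_message(b'{"cmd": "DownloadFile", "path": "/etc/passwd"}'))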

I could also confirm their finding that the DownloadFile command will read arbitrary files, allowing me to extract the firmware of my camera with the approach described in the paper. They even describe a trivial Remote Code Execution vulnerability which I also found in my firmware: the firmware often relies on running shell commands for tasks that could easily be done in its C code.

This clearly isn't the only Remote Code Execution vulnerability however. Here is some fairly typical code for this firmware:

char buf[256];                 /* fixed-size stack buffer */
char *cmd = cJSON_GetObjectItem(request, "cmd")->valuestring;
memset(buf, 0, sizeof(buf));
memcpy(buf, cmd, strlen(cmd)); /* copies as many bytes as the sender supplied, never checking sizeof(buf) */

This code copies a string (pointlessly, but that isn't the issue here). It completely fails to consider the size of the target buffer, going by the size of the incoming data instead. So any command larger than 255 bytes will cause a buffer overflow. And there is no stack canary here, while Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) are disabled, so nothing prevents this buffer overflow from being turned into Remote Code Execution.

Finally, I've discovered that the searchWiFiList command will produce the list of WiFi networks visible to the camera. These by themselves often allow a good guess as to where the camera is located. In combination with a geolocation service they will typically narrow down the camera's position to a radius of only a few dozen meters.

The only complication here: most geolocation services require not the network names but the MAC addresses of the access points. The MAC addresses aren't part of the response data, however. But searchWiFiList works by running the iwlist shell command and storing the complete output in the /tmp/wifi_scan.txt file. It reads this file but does not remove it. This means that the file can subsequently be downloaded via the DownloadFile command (which, as mentioned above, reads arbitrary files), and that file contains the full data, including the MAC addresses of all access points. So somebody who happened to learn the device ID can not only access the video stream but also find out where exactly this footage is being recorded.
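
To illustrate how little effort this takes, here is a small Python sketch extracting access point MAC addresses and network names from standard iwlist scan output, i.e. the format stored in /tmp/wifi_scan.txt:

import re

def parse_iwlist(text: str) -> list:
    aps, mac = [], None
    for line in text.splitlines():
        line = line.strip()
        match = re.match(r"Cell \d+ - Address: ([0-9A-F:]{17})", line)
        if match:
            mac = match.group(1)
        elif mac and line.startswith("ESSID:"):
            aps.append((mac, line.split(":", 1)[1].strip('"')))
            mac = None
    return aps

sample = 'Cell 01 - Address: 00:11:22:33:44:55\n  ESSID:"HomeNetwork"'
print(parse_iwlist(sample))  # [('00:11:22:33:44:55', 'HomeNetwork')]

The resulting MAC addresses can then be fed into any WiFi geolocation service.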

The camera I've been looking at is running firmware version 2023-11-22. Is there a newer version, maybe one that fixes the password checks or the already published Remote Code Execution vulnerability? I have no idea. If the firmware for these cameras is available somewhere online then I cannot find it. I've also been looking for some kind of update functionality in these devices. But there is only a generic script from the Anyka SDK which isn't usable for anyone other than maybe the hardware vendor.

The cloud

When looking at the firmware I noticed some code uploading 5 MiB data chunks to api.l040z.com (or apicn.l040z.com if you happen to own a BHCC device). Now uploading exactly 5 MiB is weird (this size is hardcoded) but inspecting the LookCam app confirmed it: this is cloud functionality, and the firmware regularly uploads videos in this way. At least it does that when cloud functionality is enabled.

First thing worth noting: while the cloud server uses regular HTTP rather than some exotic protocol, all connections to it are unencrypted. The firmware simply lacks a TLS library it could use, and so the server doesn't bother with supporting TLS. Meaning for example: if you happen to use their cloud functionality, your ISP had better be very trustworthy because it can see all the data your camera sends to the LookCam cloud. In fact, your ISP could even run its own "cloud server" and the camera would happily send your recorded videos to it.

Anyone dare a guess what the app developers mean by "financial-grade encryption scheme" here? Is it worse or better than military-grade encryption?

Screenshot containing two text sections. The section above is titled "Safe storage" and reads: "The video data is stored in the cloud, even if the device is offline or lost. Can also view previous recordings." The section below is titled "Privacy double encryption" and reads: "Using financial-grade encryption scheme, data is transmitted from data to Transfer data from transfer data from transfer." (Screenshot from the LookCam app)

Second interesting finding: the cloud server has no authentication whatsoever. The camera only needs to know its device ID when uploading to the cloud. And the LookCam app - well, any cloud-related requests here also require device ID only. If somebody happens to learn your device ID they will gain full access to your cloud storage.

Now you might think that you can simply skip paying for the cloud service, which, depending on the package you book, can cost as much as $40 per month. But this doesn't put you on the safe side, because you aren't the one controlling the cloud functionality on your device - the cloud server is. Every time the device boots up it sends a request to http://api.l040z.com/camera/signurl and the response tells it whether cloud functionality needs to be enabled.

So if LookCam developers decide that they want to see what your camera is doing (or if Chinese authorities become interested in that), they can always adjust that server response and the camera will start uploading video snapshots. You won't even notice anything because the LookCam app checks cloud configuration by requesting http://api.l040z.com/app/cloudConfig which can remain unchanged.

And they aren't the only ones who can enable the cloud functionality of your device. Anybody who happens to know your device ID can buy a cloud package for it. This way they can get access to your video recordings without ever accessing your device directly. And you will only notice the cloud functionality being active if you happen to go to the corresponding tab in the LookCam app.

How safe are device IDs?

Now that you are aware of device IDs being highly sensitive data, you certainly won't upload screenshots containing them to social media. Does that mean that your camera is safe because nobody other than you knows its ID?

The short answer is: you don't know that. First of all, you simply don't know who already has your device ID. Did the shop that sold you the camera write the ID down? Did they maybe record a sales pitch featuring your camera before they sold it to you? Did somebody notice your camera's device ID show up in the list of WiFi networks when it was running in access point mode? Did anybody coming to your home run a script to discover PPPP devices on the network? Yes, all of that might seem unlikely, yet it should be reason enough to wonder whether your camera's recordings are really as private as they should be.

Then there is the issue of unencrypted data transfers. Whenever you connect to your camera from outside your home network the LookCam app will send all data unencrypted - including the device ID. Do you do that when connected to public WiFi? At work? In a vacation home? You don't know who else is listening.

And finally there is the matter of verification codes, which are the only mechanism preventing somebody from enumerating all device IDs. How difficult would it be to guess a verification code? Verification codes seem to use 22 letters (all Latin uppercase letters but A, I, O, Q). With five letters this means 22^5, around 5.15 million, possible combinations. According to Paul Marrapese, PPPP servers don't implement rate limiting (page 33), making it perfectly realistic to try out all these combinations - maybe not for all possible device IDs but definitely for some.
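
The arithmetic is easy to check; the guess rate below is purely an illustrative assumption:

alphabet_size = 22          # uppercase Latin letters minus A, I, O, Q
code_length = 5
combinations = alphabet_size ** code_length
print(combinations)         # 5153632
# At an assumed 1,000 guesses per second with no rate limiting:
print(combinations / 1000 / 3600)  # roughly 1.4 hours per device ID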

But that resource-intensive approach is only necessary as long as the algorithm used to generate verification codes is a secret. Yet we have to assume that at least CS2 Network's 300 customers have access to that algorithm, given that their server software somehow validates device IDs. Are they all trustworthy? How much would it cost to become a "customer" simply in order to learn that algorithm?

And even if we are willing to assume that CS2 Network runs proper background checks to ensure that their algorithm remains a secret: how difficult would it be to guess that algorithm? I found a number of device IDs online, and my primitive analysis of their verification codes indicates that these aren't distributed equally. There is a noticeable affinity for certain prime numbers, so the algorithm behind them is likely a hack job similar to the other CS2 Network algorithms, throwing in mathematical operations and table lookups semi-randomly to make things look complicated. How long would this approach hold if somebody with actual cryptanalysis knowledge decided to figure it out?

Recommendations

So if you happen to own one of these cameras, what does all this mean to you? Even if you never disclosed the camera's device ID yourself, you cannot rely on it staying a secret. And this means that whatever your camera is recording is no longer private.

Are you using it as a security camera? Your security camera might now inform potential thieves of the stuff that you have standing around and the times when you leave home. It will also let them know where exactly you live.

Are you using it to keep an eye on your child? Just… don't. Even if you think that you yourself have a right to violate your child's privacy, you really don't want anybody else to watch.

And even if you "have nothing to hide": somebody could compromise the camera in order to hack other devices on your network or to simply make it part of a botnet. Such things happened before, many times actually.

So the best solution is to dispose of this camera ASAP. Please don't sell it, because that only moves the problem to the next person. The main question is: how do you know that the camera you get instead will do better? I can only think of one indicator: if you want to access the camera from outside your network, it should involve explicit setup steps, likely changing your router configuration. The camera shouldn't just expose itself to the internet automatically.

But if you actually paid hundreds of dollars for that camera and dumping it isn't an option: running it in a safe manner is complicated. As I mentioned already, simply blocking internet access for the camera won't work. This can be worked around, but it's complex enough to not be worth doing. You are probably better off installing a custom firmware. I haven't tried it, but at least this one looks like somebody actually thought about security.

Further reading

As far as I am aware, the first research on the PPPP protocol was published by Paul Marrapese in 2019. He found a number of vulnerabilities, including one brand of cameras shipping their algorithm to generate verification codes with their client application. Knowing this algorithm, device IDs could be enumerated easily. Paul used this flaw to display the locations of millions of affected devices. His DEF CON talk is linked from the website and well worth watching.

A paper from the University of Warwick (2023) researched the LookCam app specifically. In addition to some vulnerabilities I mentioned here, it contains a number of details on how these cameras operate.

This Elastic Labs article (2024) took a close look at some other PPPP-based cameras, finding a number of issues.

The CS2 Network sales presentation (2016) offers a fascinating look into the thinking of PPPP protocol designers and into how their system was meant to work.

08 Sep 2025 1:00pm GMT

05 Sep 2025

feedPlanet Mozilla

Mozilla Thunderbird: VIDEO: Thunderbird Accessibility Study

Welcome back to another edition of the Community Office Hours! This month, we're taking a closer look at accessibility in the Thunderbird desktop and mobile apps. We're chatting with Rebecca Taylor and Solange Valverde, members of our design team, about a recent accessibility (often shortened to a11y) study. We wanted to find out where Thunderbird was doing well, and where we could improve. Rebecca and Solange walk us through the study and answer our questions!

We'll be back next month with the latest Community Office Hours! If you have a suggestion for a topic or team you'd love us to cover, please let us know in the comments!

August Office Hours: Thunderbird Accessibility Study

The Thunderbird Team wants to make desktop and mobile apps that maximize everyone's productivity and freedom. This means making Thunderbird accessible for all of our users, and the first step is finding where we can do better. Thanks to our relationship with Mozilla, our design team commissioned a study with Fable, who connects companies building inclusive products to experienced testers with disabilities. We asked participants to evaluate the Thunderbird desktop app using assistive tech, including screen readers, alternative navigation, and magnification. And we also asked a user on the cognitive spectrum to evaluate how our language, layouts, and reminders helped or hindered their use of the app.

Members of the design team then conducted 60-minute moderated interviews with study participants. In these talks, participants pointed out where they struggled with accessibility roadblocks, and what strategies they used to try and work through them.

Screen Reader Users

Screen readers convert on-screen text to either speech or Braille, and help blind or low-vision users navigate and access digital content. Our study participants, many of whom switch between multiple screen readers, let us know where Thunderbird falls short.

Some issues were common to all screen readers. Keyboard shortcuts didn't follow common norms, and workflows in search and filter results made for a confusing experience. Thunderbird could benefit from a table view with ARIA, a W3C specification created to improve accessibility.

Other issues were specific to the individual screen reader programs. In Narrator, for example, expected confirmation for actions like moving messages was missing, and the screen reader didn't recognize menu state changes in submenus. In JAWS, meanwhile, message bodies were unreadable in email and compose windows with Braille display, and filter menus opened silently, not announcing the content or state to the user. Finally, with NVDA, users noted confusing structures and organization that lacked the structure and context they expected, as well as poor content prioritization.

Cognitive Usability

In a previous office hours, we talked about how we wanted to make Thunderbird more cognitively accessible with our work on the Message Context Menu. Cognition relates to how we think, learn, understand, remember, and pay attention, and clear language, regular layouts, and meaningful reminders all improve cognitive accessibility. Our cognitive accessibility tester expressed concerns about a lack of a quick, non-technical setup, imbalances in our whitespace, and unpredictable layout controls, among other issues.

Alternative Navigation and Magnification

Our alternative navigation users tested how well they could use Thunderbird with voice controls and eye tracking software. Our voice control testers found room for improvement with menu action labels, better autofocus shift when scrolling through emails, and a larger font size for more comfortable voice-driven use. Likewise, our eye tracking software tester found issues with font sizes. They also noted concerns with composition workflow and focus, too-small controls, and a drag-and-drop bug.

Our magnification tester found where we could improve visual contrast and pane layouts. They also found off-screen elements could steal focus from new messages, and that folder paths and hierarchies could use more clarification.

Conclusions and Next Steps

We're incredibly grateful for the insights we learned from this study on the many aspects of accessibility we want to improve in all of our apps. We want to thank Mozilla for helping us take the next step in accessibility research, and Fable for providing a fantastic platform for accessibility testing. We're also grateful to our study participants for all their time and for sharing their perspectives, concerns, and insights.

This is far from the end of our accessibility journey. We're looking forward to working what we learned in this study into deeper research and, ultimately, our desktop roadmap. We can't wait to start accessibility research on our mobile apps. And we hope this study can help other open source projects start their own accessibility research to improve their projects.

One way you can get involved is to report accessibility bugs on the desktop app. Go to the Thunderbird section on Bugzilla, and under 'Component' select 'Disability Access.' Additionally, click 'Show Advanced Fields' and enter 'access' into the 'Details > Keywords' section. Add screenshots when possible. Be sure to describe the bug so others can try to reproduce it for better troubleshooting.

If you want to learn more about our accessibility efforts, please join our User Experience mailing list! If you think you're ready to get involved, please join our dedicated Matrix channel. We hope you help us make Thunderbird available, and accessible, everywhere!

The post VIDEO: Thunderbird Accessibility Study appeared first on The Thunderbird Blog.

05 Sep 2025 3:14pm GMT

Mozilla Future Releases Blog: Firefox 32-bit Linux Support to End in 2026

For many years, Mozilla has continued to provide Firefox for 32-bit Linux systems long after most other browsers and operating systems ended support. We made this choice because we care deeply about keeping Firefox available to as many people as possible, helping our users extend the life of their hardware and reduce unnecessary obsolescence.

Today, however, 32-bit Linux (on x86) is no longer supported by the vast majority of Linux distributions, and maintaining Firefox on this platform has become increasingly difficult and unreliable. To focus our efforts on delivering the best and most modern Firefox, we are ending support for 32-bit x86 Linux with the release of Firefox 144 (or to rephrase, Firefox 145 will not have 32-bit Linux support).

If you are currently using Firefox on a 32-bit x86 Linux system, we strongly encourage you to move to a 64-bit operating system and install the 64-bit version of Firefox, which will continue to be supported and updated.

For users who cannot transition immediately, Firefox ESR 140 will remain available - including 32-bit builds - and will continue to receive security updates until at least September 2026.

[Updated on 2025-09-09 to clarify the affected builds are 32-bit x86]

The post Firefox 32-bit Linux Support to End in 2026 appeared first on Future Releases.

05 Sep 2025 9:18am GMT

Karl Dubost: Did you open a bug?

If you are a webdev…

and you had an issue on the website you were working on, because of a web browser…

Why didn't you file a bug on a browser bug tracker? What are the frictions?

(not asking those who did, because they already do the right thing ❤️)

Or Webcompat.com. A cross-browser bug tracker.

PS: do not hesitate to ask around you, your colleagues, mates, etc.

This was initially posted on mastodon, you can contact me there. Also on GitHub.

Otsukare!

05 Sep 2025 8:50am GMT

04 Sep 2025

feedPlanet Mozilla

Mozilla Future Releases Blog: Extended Firefox ESR 115 Support for Windows 7, 8, and 8.1 and macOS 10.12-10.14

Mozilla has continued to support Firefox on Windows 7, Windows 8, and Windows 8.1 long after these operating systems reached end of life, helping users extend the life of their devices and reduce unnecessary obsolescence. We originally announced that security updates for Firefox ESR 115 would end in September 2024, later extending that into 2025.

Today, we are extending support once again: Firefox ESR 115 will continue to receive security updates on Windows 7, 8, and 8.1 until March 2026. This extension gives users more time to transition while ensuring critical security protections remain available. We still strongly encourage upgrading to a supported operating system to access the latest Firefox features and maintain long-term stability.

Note that this extension is also applicable for macOS 10.12-10.14 users running Firefox ESR 115.

The post Extended Firefox ESR 115 Support for Windows 7, 8, and 8.1 and macOS 10.12-10.14 appeared first on Future Releases.

04 Sep 2025 2:50pm GMT

03 Sep 2025

feedPlanet Mozilla

This Week In Rust: This Week in Rust 615

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is aehobak, a transcoder for bsdiff binary patches.

Thanks to David Michael Barr for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

383 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

A relatively quiet week. #144841 added an optimization for incremental builds that provided a very nice win for the nalgebra crate. #143290 should help avoid instantiating async functions repeatedly in downstream crates.

Triage done by @kobzol. Revision range: ee361e8f..75ee9ffd

Summary:

| (instructions:u) | mean | range | count |
|:---|:---:|:---:|:---:|
| Regressions ❌ (primary) | 0.3% | [0.2%, 0.4%] | 7 |
| Regressions ❌ (secondary) | 2.0% | [0.1%, 13.6%] | 30 |
| Improvements ✅ (primary) | -1.9% | [-7.0%, -0.3%] | 17 |
| Improvements ✅ (secondary) | -0.7% | [-1.7%, -0.1%] | 23 |
| All ❌✅ (primary) | -1.2% | [-7.0%, 0.4%] | 24 |

1 Regression, 3 Improvements, 6 Mixed; 5 of them in rollups. 45 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-09-03 - 2025-10-01 🦀

Virtual
Africa
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Bugs like this are the worst! It's almost impossible to catch them in development, because there is never enough load on the system to force the scheduler to move the execution to another thread. So, you end up with one of these "impossible to reproduce, fails sometimes, but never for you" bugs.

It's mind-blowingly cool that the Rust compiler can detect something like this. And that seemingly unrelated parts of the language, like mutexes, lifetimes and async operations form such a coherent system.

- Bernard Kolobara on their blog

Thanks to llogiq for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

03 Sep 2025 4:00am GMT

The Rust Programming Language Blog: Welcoming the Rust Innovation Lab

TL;DR: Rustls is the inaugural project of the Rust Innovation Lab, which is a new home for Rust projects under the Rust Foundation.

At the Rust Foundation's August meeting, the Project Directors and the rest of the Rust Foundation board voted to approve Rustls as the first project housed under the newly formed Rust Innovation Lab. Prior to the vote, the Project Directors consulted with the Leadership Council who confirmed the Project's support for this initiative.

The Rust Innovation Lab (RIL) is designed to provide support for funded Rust-based open source projects from the Rust Foundation in the form of governance, legal, networking, marketing, and administration, while keeping the technical direction solely in the hands of the current maintainers. As with the other work of the Rust Foundation (e.g. its many existing initiatives), the purpose of the RIL is to strengthen the Rust ecosystem generally.

The Foundation has been working behind the scenes to establish the Rust Innovation Lab, which includes setting up infrastructure under the Foundation to ensure smooth transition for Rustls into RIL. More details are available in the Foundation's announcement and on the Rust Innovation Lab's page.

We are all excited by the formation of the Rust Innovation Lab. The support this initiative will provide to Rustls (and, eventually, other important projects that are using Rust) will improve software security for the entire industry. The Rust Project is grateful for the support of the Rust Foundation corporate members who are making this initiative possible for the benefit of everyone.

More information on the criteria for projects wishing to become part of the RIL and the process for applying will be coming soon. The Project Directors and Leadership Council have been and will continue working with the Foundation to communicate information, questions, and feedback with the Rust community about the RIL as the details are worked out.

03 Sep 2025 12:00am GMT

01 Sep 2025

feedPlanet Mozilla

The Rust Programming Language Blog: Faster linking times with 1.90.0 stable on Linux using the LLD linker

TL;DR: rustc will start using the LLD linker by default on the x86_64-unknown-linux-gnu target starting with the next stable release (1.90.0, scheduled for 2025-09-18), which should significantly reduce linking times. Test it out on beta now, and please report any encountered issues.

Some context

Linking time is often a big part of compilation time. When rustc needs to build a binary or a shared library, it will usually call the default linker installed on the system to do that (this can be changed on the command-line or by the target for which the code is compiled).

The linkers do an important job, with concerns about stability, backwards-compatibility and so on. For these and other reasons, on the most popular operating systems they usually are older programs, designed when computers only had a single core. So, they usually tend to be slow on a modern machine. For example, when building ripgrep 13 in debug mode on Linux, roughly half of the time is actually spent in the linker.

There are different linkers, however, and the usual advice to improve linking times is to use one of these newer and faster linkers, like LLVM's lld or Rui Ueyama's mold.

Some of Rust's wasm and aarch64 targets already use lld by default. When using rustup, rustc ships with a version of lld for this purpose. When CI builds LLVM to use in the compiler, it also builds the linker and packages it. It's referred to as rust-lld to avoid colliding with any lld already installed on the user's machine.

Since improvements to linking times are substantial, it would be a good default to use in the most popular targets. This has been discussed for a long time, for example in issues #39915 and #71515.

To expand our testing, we enabled rustc to use rust-lld by default on nightly in May 2024. No major issues have been reported since then.

We believe we've done all the internal testing that we could, on CI, crater, on our benchmarking infrastructure and on nightly, and plan to enable rust-lld to be the linker used by default on x86_64-unknown-linux-gnu for stable builds in 1.90.0.

Benefits

While this also enables the compiler to use more linker features in the future, the most immediate benefit is much improved linking times.

Here are more details from the ripgrep example mentioned above: for an incremental rebuild, linking is reduced 7x, resulting in a 40% reduction in end-to-end compilation times. For a from-scratch debug build, it is a 20% improvement.

Before/after comparison of a ripgrep incremental debug build

Most binaries should see some improvements here, but it's especially significant with e.g. bigger binaries, or for incremental rebuilds, or when involving debuginfo. These usually see bottlenecks in the linker.

Here's a link to the complete results from our benchmarks.

Possible drawbacks

From our prior testing, we don't really expect issues to happen in practice. It is a drop-in replacement for the vast majority of cases, but lld is not bug-for-bug compatible with GNU ld.

In any case, using rust-lld can be disabled if any problem occurs: use the -C linker-features=-lld flag to revert to using the system's default linker.

Some crates somehow relying on these differences could need additional link args, though we also expect this to be quite rare. Let us know if you encounter problems, by opening an issue on GitHub.

Some of the big gains in performance come from parallelism, which could be undesirable in resource-constrained environments, or for heavy projects that are already reaching hardware limits.

Summary, and call for testing

rustc will use rust-lld on x86_64-unknown-linux-gnu, starting with the 1.90.0 stable release, for much improved linking times. Rust 1.90.0 will be released next month, on the 18th of September 2025.

This linker change is already available on the current beta (1.90.0-beta.6). To help everyone prepare for this landing on stable, please test your projects on beta and let us know if you encounter problems, by opening an issue on GitHub.

If that happens, you can revert to the default linker with the -C linker-features=-lld flag. Either by adding it to the usual RUSTFLAGS environment variable, or to a project's .cargo/config.toml configuration file, like so:

[target.x86_64-unknown-linux-gnu]
rustflags = ["-Clinker-features=-lld"]

01 Sep 2025 12:00am GMT

28 Aug 2025

feedPlanet Mozilla

The Mozilla Blog: Speeding up Firefox Local AI Runtime

Last year we rolled out the Firefox AI Runtime, the engine that quietly powers features such as PDF.js generated alt text and, more recently, our smart tab grouping. The system worked, but not quite at the speed we wanted.

This post explains how we accelerated inference by replacing the default onnxruntime‑web that powers Transformers.js with its native C++ counterpart that now lives inside Firefox.

Where we started

Transformers.js is the JavaScript counterpart to Hugging Face's Python library. Under the hood it relies on onnxruntime‑web, a WebAssembly (WASM) build of ONNX Runtime.
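
For readers who haven't used either library: this is what the pipeline abstraction looks like in Hugging Face's Python library, which Transformers.js mirrors. The model name is just an example, and the model is downloaded on first use:

from transformers import pipeline

# One call wraps pre-processing, model execution and post-processing.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
print(captioner("photo.jpg"))  # e.g. [{'generated_text': 'a cat on a sofa'}]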

A typical inference cycle:

  1. Pre‑processing in JavaScript (tokenization, tensor shaping)
  2. Model execution in WASM
  3. Post‑processing back in JavaScript

Even with warm caches, that dance crosses multiple layers. The real hotspot is the matrix multiplications, implemented with generic SIMD when running on CPU.

Why plain WASM wasn't enough

WASM SIMD is great, but it can't beat hardware‑specific instructions such as NEON on Apple Silicon or AVX‑512 on modern Intel chips.

Firefox Translations (which uses Bergamot) already proves that dropping down to native code speeds things up: it uses WASM built-ins, small hooks that let WASM call into C++ compiled with those intrinsics. The project, nicknamed gemmology, works brilliantly.

We tried porting that trick to ONNX, but the huge number of operators made a one‑by‑one rewrite unmaintainable. And each cold start still paid the JS/WASM warm‑up tax.

Switching to ONNX C++

Transformers.js talks to ONNX Runtime through a tiny surface. It creates a session, pushes a Tensor, and pulls a result. This narrow interface makes it simple to swap the backend without touching feature code.

Our steps to achieve this were:

  1. Vendor ONNX Runtime C++ directly into the Firefox tree.
  2. Expose it to JavaScript via a thin WebIDL layer.
  3. Wire Transformers.js to the new backend.

From the perspective of a feature like PDF alt‑text, nothing changed, it still calls await pipeline(…). Underneath, tensors now go straight to native code.

Integration of ONNX Runtime to the build system

Upstream ONNX Runtime does not support all of our build configurations, and it's a large amount of code. As a consequence we chose not to add it in-tree. Instead, a configuration flag can be used to provide a compiled version of the ONNX runtime, which is automatically downloaded from Taskcluster (where we build it for a selection of supported configurations) or provided by downstream developers. This provides flexibility while not slowing down our usual build and keeping maintenance low.

Building ONNX on Taskcluster required some configuration changes and upstream patches. The goal was to find a balance between speed and binary size, while being compatible with native code requirements from the Firefox repo.

The payoff

Because the native backend is a drop-in replacement, we can enable it feature by feature and gather real-world numbers. Early benchmarks show 2 to 10× faster inference, with zero WASM warm-up overhead.

For example, the Smart Tab Grouping topic suggestion, which can be laggy on first run, is now quite snappy, and this is the first feature we gradually moved to this backend for Firefox 142.

Graph showing the difference between the WASM and C++ backends, with the C++ backend far faster

The image-to-text model used for the PDF.js alt-text feature also benefited from this change. On the same hardware, the latency went from 3.5s to 350ms.

What's next

We're gradually rolling out this new backend to additional features throughout the summer, so all capabilities built on Transformers.js can take advantage of it.

And with the C++ API at hand, we're planning to tackle a few long-standing pain points and enable GPU support.

Those changes will ship in our vendored ONNX Runtime and offer us the best possible performance for Transformers.js-based features in our runtime in the future.

1. DequantizeLinear goes multi‑threaded

The DequantizeLinear operation was single-threaded and often dominated inference time. While upstream work recently merged an improvement (PR #24818), we built a patch to spread the work across cores, letting the compiler auto-vectorize the inner loops. The result is an almost linear speedup, especially on machines with many cores.
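
For illustration, here is the idea behind such a patch sketched in Python rather than the actual C++. ONNX's DequantizeLinear computes y = (x - zero_point) * scale elementwise, so the work splits cleanly into chunks for a thread pool (Python's GIL limits the benefit here; the C++ chunks run truly in parallel):

from concurrent.futures import ThreadPoolExecutor

def dequantize_chunk(xs, scale, zero_point):
    return [(x - zero_point) * scale for x in xs]

def dequantize_parallel(xs, scale, zero_point, workers=4):
    size = max(1, len(xs) // workers)
    chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(dequantize_chunk, chunks,
                         [scale] * len(chunks), [zero_point] * len(chunks))
    return [y for part in parts for y in part]

print(dequantize_parallel([0, 128, 255], scale=0.5, zero_point=128))
# [-64.0, 0.0, 63.5]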

2. Matrix transposition goes multi-threaded

Similarly, it is typical to have to transpose very large (multiple dozen megabytes) matrices when performing an inference task. This operation was done naively with nested for loops. Switching to a multi-threaded, cache-aware tiled transposition scheme and leveraging SIMD let us take advantage of modern hardware and speed up this operation by a supra-linear factor - typically twice the number of threads allocated to the task, for example an 8x speedup using 4 threads.

This can be explained by the fact that the naive for loop was auto-vectorized but otherwise made poor use of CPU caches.
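
Here is the tiling idea sketched in Python - the real implementation is C++ with SIMD and threads. Working in small square tiles keeps both reads and writes within cached memory, unlike the naive row-by-column loop:

def transpose_tiled(src, rows, cols, tile=32):
    dst = [0.0] * (rows * cols)
    for i0 in range(0, rows, tile):
        for j0 in range(0, cols, tile):
            # Both src reads and dst writes stay within a small block here.
            for i in range(i0, min(i0 + tile, rows)):
                for j in range(j0, min(j0 + tile, cols)):
                    dst[j * rows + i] = src[i * cols + j]
    return dst

matrix = [1, 2, 3,
          4, 5, 6]                    # 2x3, row-major
print(transpose_tiled(matrix, 2, 3))  # [1, 4, 2, 5, 3, 6], i.e. 3x2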

3. Caching the compiled graph

Before an inference can run, ONNX Runtime compiles the model graph for the current platform. On large models such as Qwen 2.5 0.5B this can cost up to five seconds every launch.

We can cache the compiled graph separately from the weights on the fly, shaving anywhere from a few milliseconds to the full five seconds.

4. Using GPUs

Currently, we've integrated only CPU-based providers. The next step is to support GPU-accelerated ONNX backends, which will require more effort. This is because GPU support demands additional sandboxing to safely and securely interact with the underlying hardware.

Conclusion

What is interesting about this migration is the fact that we could improve performance that much, while migrating features gradually, and all that in complete isolation, without having to change any feature code.

While the speedups are already visible from a UX standpoint, we believe that a lot of improvement can and will happen in the future, further improving the efficiency of the ML-based features and making them more accessible to a wider audience.

Have ideas, questions or bug reports? Ping us on Discord in the firefox-ai channel (https://discord.gg/TBZXDKnz) or file an issue on Bugzilla, we're all ears.

The post Speeding up Firefox Local AI Runtime appeared first on The Mozilla Blog.

28 Aug 2025 5:51pm GMT