11 Nov 2025

feedPlanet Mozilla

Firefox Developer Experience: Firefox WebDriver Newsletter 145

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we've done as part of the Firefox 145 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues and bugs, and submitted patches.

In Firefox 145, a new contributor landed two patches in our codebase. Thanks to Khalid AlHaddad for the following fixes:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

WebDriver BiDi

11 Nov 2025 1:50pm GMT

10 Nov 2025

feedPlanet Mozilla

Niko Matsakis: Just call clone (or alias)

Continuing my series on ergonomic ref-counting, I want to explore another idea, one that I'm calling "just call clone (or alias)". This proposal specializes the clone and alias methods so that, in a new edition, the compiler will (1) remove redundant or unnecessary calls (with a lint); and (2) automatically capture clones or aliases in move closures where needed.

The goal of this proposal is to simplify the user's mental model: whenever you see an error like "use of moved value", the fix is always the same: just call clone (or alias, if applicable). This model is aiming for the balance of "low-level enough for a Kernel, usable enough for a GUI" that I described earlier. It's also making a statement, which is that the key property we want to preserve is that you can always find where new aliases might be created - but that it's ok if the fine-grained details around exactly when the alias is created are a bit subtle.

The proposal in a nutshell

Part 1: Closure desugaring that is aware of clones and aliases

Consider this move future:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(async move {
        //                   ---- move future
        manage_io(cx.io_system.alias(), cx.request_name.clone());
        //        --------------------  -----------------------
    });
    ...
}

Because this is a move future, it takes ownership of cx.io_system and cx.request_name. Because cx is a borrowed reference, this will be an error unless those values are Copy (which they presumably are not). Under this proposal, capturing aliases or clones in a move closure/future would result in capturing an alias or clone of the place. So this future would be desugared like so (using explicit capture clause strawman notation):

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            //     --------------------  -----------------------
            //     capture alias/clone respectively

            manage_io(cx.io_system.alias(), cx.request_name.clone());
        }
    );
    ...
}

Part 2: Last-use transformation

Now, this result is inefficient - there are now two aliases/clones. So the next part of the proposal is that the compiler would, in newer Rust editions, apply a new transformation called the last-use transformation. This transformation would identify calls to alias or clone that are not needed to satisfy the borrow checker and remove them. This code would therefore become:

fn spawn_services(cx: &Context) {
    tokio::task::spawn(
        async move(cx.io_system.alias(), cx.request_name.clone()) {
            manage_io(cx.io_system, cx.request_name);
            //        ------------  ---------------
            //        converted to moves
        }
    );
    ...
}

The last-use transformation would apply beyond closures. Given an example like this one, which clones id even though id is never used later:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id.clone());
    //                                       ----------
    //                                       unnecessary
    send_request(request)
}

the user would get a warning like so[1]:

warning: unnecessary `clone` call will be converted to a move
 --> src/main.rs:7:40
  |
8 |     let request = Request::ProcessIdentifier(id.clone());
  |                                              ^^^^^^^^^^ unnecessary call to `clone`
  |
  = help: the compiler automatically removes calls to `clone` and `alias` when not
    required to satisfy the borrow checker
help: change `id.clone()` to `id` for greater clarity
  |
8 -     let request = Request::ProcessIdentifier(id.clone());
8 +     let request = Request::ProcessIdentifier(id);
  |

and the code would be transformed so that it simply does a move:

fn send_process_identifier_request(id: String) {
    let request = Request::ProcessIdentifier(id);
    //                                       --
    //                                   transformed
    send_request(request)
}

Mental model: just call "clone" (or "alias")

The goal of this proposal is that, when you get an error about a use of moved value, or moving borrowed content, the fix is always the same: you just call clone (or alias). It doesn't matter whether that error occurs in the regular function body or in a closure or in a future, the compiler will insert the clones/aliases needed to ensure future users of that same place have access to it (and no more than that).

I believe this will be helpful for new users. Early in their Rust journey new users are often sprinkling calls to clone as well as sigils like & more or less at random as they try to develop a firm mental model - this is where the "keep calm and call clone" joke comes from. This approach breaks down around closures and futures today. Under this proposal, it will work, but users will also benefit from warnings indicating unnecessary clones, which I think will help them to understand where clone is really needed.

Experienced users can trust the compiler to get it right

But the real question is how this works for experienced users. I've been thinking about this a lot! I think this approach fits pretty squarely in the classic Bjarne Stroustrup definition of a zero-cost abstraction:

"What you don't use, you don't pay for. And further: What you do use, you couldn't hand code any better."

The first half is clearly satisfied. If you don't call clone or alias, this proposal has no impact on your life.

The key point is the second half: earlier versions of this proposal were more simplistic, and would sometimes result in redundant or unnecessary clones and aliases. Upon reflection, I decided that this was a non-starter. The only way this proposal works is if experienced users know there is no performance advantage to using the more explicit form. This is precisely what we have with, say, iterators, and I think it works out very well. I believe this proposal hits that mark, but I'd like to hear if there are things I'm overlooking.

The last-use transformation codifies a widespread intuition, that clone is never necessary

I think most users would expect that changing message.clone() to just message is fine, as long as the code keeps compiling. But in fact nothing requires that to be the case. Under this proposal, APIs that make clone significant in unusual ways would be more annoying to use in the new Rust edition and would, I expect, ultimately wind up getting changed so that "significant clones" have another name. I think this is a good thing.

Frequently asked questions

I think I've covered the key points. Let me dive into some of the details here with a FAQ.

Can you summarize all of these posts you've been writing? It's a lot to digest!

I get it, I've been throwing a lot of things out there. Let me begin by recapping the motivation as I see it:

I then proposed a set of three changes to address these issues, authored in individual blog posts:

What would it feel like if we did all those things?

Let's look at the impact of each set of changes by walking through the "Cloudflare example", which originated in this excellent blog post by the Dioxus folks:

let some_value = Arc::new(something);

// task 1
let _some_value = some_value.clone();
tokio::task::spawn(async move {
    do_something_with(_some_value);
});

// task 2:  listen for dns connections
let _some_a = self.some_a.clone();
let _some_b = self.some_b.clone();
let _some_c = self.some_c.clone();
tokio::task::spawn(async move {
       do_something_else_with(_some_a, _some_b, _some_c)
});

As the original blog post put it:

Working on this codebase was demoralizing. We could think of no better way to architect things - we needed listeners for basically everything that filtered their updates based on the state of the app. You could say "lol get gud," but the engineers on this team were the sharpest people I've ever worked with. Cloudflare is all-in on Rust. They're willing to throw money at codebases like this. Nuclear fusion won't be solved with Rust if this is how sharing state works.

Applying the Alias trait and explicit capture clauses makes for a modest improvement. You can now clearly see that the calls to clone are alias calls, and you don't have the awkward _some_value and _some_a variables. However, the code is still pretty verbose:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value);
});

// task 2:  listen for dns connections
tokio::task::spawn(async move(
    self.some_a.alias(),
    self.some_b.alias(),
    self.some_c.alias(),
) {
       do_something_else_with(self.some_a, self.some_b, self.some_c)
});

Applying the Just Call Clone proposal removes a lot of boilerplate and, I think, captures the intent of the code very well. It also retains quite a bit of explicitness, in that searching for calls to alias reveals all the places that aliases will be created. However, it does introduce a bit of subtlety, since (e.g.) the call to self.some_a.alias() will actually occur when the future is created and not when it is awaited:

let some_value = Arc::new(something);

// task 1
tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

// task 2:  listen for dns connections
tokio::task::spawn(async move {
       do_something_else_with(
        self.some_a.alias(),
        self.some_b.alias(),
        self.some_c.alias(),
    )
});

I'm worried that the execution order of calls to alias will be too subtle. How is this "explicit enough for low-level code"?

There is no question that Just Call Clone makes closure/future desugaring more subtle. Looking at task 1:

tokio::task::spawn(async move {
    do_something_with(some_value.alias());
});

this gets desugared to a call to alias when the future is created (not when it is awaited). Using the explicit form:

tokio::task::spawn(async move(some_value.alias()) {
    do_something_with(some_value)
});

I can definitely imagine people getting confused at first - "but that call to alias looks like it's inside the future (or closure), how come it's occurring earlier?"

Yet, the code really seems to preserve what is most important: when I search the codebase for calls to alias, I will find that an alias is created for this task. And for the vast majority of real-world examples, the distinction of whether an alias is created when the task is spawned versus when it executes doesn't matter. Look at this code: the important thing is that do_something_with is called with an alias of some_value, so some_value will stay alive as long as do_something_with is executing. It doesn't really matter how the "plumbing" worked.

What about futures that conditionally alias a value?

Yeah, good point, those kind of examples have more room for confusion. Like look at this:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value.alias());
    }
});

In this example, there is code that uses some_value with an alias, but only under if false. So what happens? I would assume that indeed the future will capture an alias of some_value, in just the same way that this future will move some_value, even though the relevant code is dead:

tokio::task::spawn(async move {
    if false {
        do_something_with(some_value);
    }
});

Can you give more details about the closure desugaring you imagine?

Yep! I am thinking of something like this:

Examples that show some edge cases:

if consume {
    x.foo().
}

Why not do something similar for non-move closures?

In the relevant cases, non-move closures will already just capture by shared reference. This means that later attempts to use that variable will generally succeed:

let f = async {
    //  ----- NOT async move
    self.some_a.alias()
};

do_something_else(self.some_a.alias());
//                ----------- later use succeeds

f.await;

This future does not need to take ownership of self.some_a to create an alias, so it will just capture a reference to self.some_a. That means that later uses of self.some_a can still compile, no problem. If this had been a move closure, however, that code above would currently not compile.

There is an edge case where you might get an error, which is when you are moving:

let f = async {
    self.some_a.alias()
};

do_something_else(self.some_a);
//                ----------- move!

f.await;

In that case, you can make this an async move closure and/or use an explicit capture clause:
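let f = async move(self.some_a.alias()) {
    //             ------------------ explicit capture clause (strawman notation,
    //             sketching what the fix might look like)
    self.some_a
};

do_something_else(self.some_a);
//                ----------- later move compiles: the future holds its own alias

f.await;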

Can you give more details about the last-use transformation you imagine?

Yep! During codegen, we would identify candidate calls to Clone::clone or Alias::alias. After borrow check has executed, we would examine each of the call sites and consult the borrow check information to decide whether the place being cloned (or aliased) is used again later, and whether any live borrow of it remains.

If the answer to both questions is no, then we will replace the call with a move of the original place.

Here are some examples:

fn borrow(message: Message) -> String {
    let method = message.method.to_string();

    send_message(message.clone());
    //           ---------------
    //           would be transformed to
    //           just `message`

    method
}

fn borrow(message: Message) -> String {
    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `message.method` is
    //           referenced later

    message.method.to_string()
}

fn borrow(message: Message) -> String {
    let r = &message;

    send_message(message.clone());
    //           ---------------
    //           cannot be transformed
    //           since `r` may reference
    //           `message` and is used later.

    r.method.to_string()
}

Why are you calling it the last-use transformation and not optimization?

In the past, I've talked about the last-use transformation as an optimization - but I'm changing terminology here. This is because, typically, an optimization is supposed to be unobservable to users except through measurements of execution time (or through UB), and that is clearly not the case here. The transformation would be a mechanical transformation performed by the compiler in a deterministic fashion.

Would the transformation "see through" references?

I think yes, but in a limited way. In other words I would expect

Clone::clone(&foo)

and

let p = &foo;
Clone::clone(p)

to be transformed in the same way (replaced with foo), and the same would apply to more levels of intermediate usage. This would kind of "fall out" from the MIR-based optimization technique I imagine. It doesn't have to be this way - we could be more particular about the syntax that people wrote - but I think that would be surprising.

On the other hand, you could still fool it e.g. like so

fn identity<T>(x: &T) -> &T { x }

identity(&foo).clone()

Would the transformation apply across function boundaries?

The way I imagine it, no. The transformation would be local to a function body. This means that one could write a force_clone function like the one below that "hides" the clone so that it will never be transformed away (this is an important capability for edition transformations!):

fn pipe<Msg: Clone>(message: Msg) -> Msg {
    log(message.clone()); // <-- keep this one
    force_clone(&message)
}

fn force_clone<Msg: Clone>(message: &Msg) -> Msg {
    // Here, the input is `&Msg`, so the clone is necessary
    // to produce a `Msg`.
    message.clone()
}

Won't the last-use transformation change behavior by making destructors run earlier?

Potentially, yes! Consider this example, written using the explicit capture clause notation and assuming we add an Alias trait:

async fn process_and_stuff(tx: mpsc::Sender<Message>) {
    tokio::spawn({
        async move(tx.alias()) {
            //     ---------- alias here
            process(tx).await
        }
    });

    do_something_unrelated().await;
}

The precise timing when Sender values are dropped can be important - when all senders have dropped, the Receiver will start returning None when you call recv. Before that, it will block waiting for more messages, since those tx handles could still be used.
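To see that behavior concretely, here is a minimal, self-contained sketch (assuming the tokio crate with its macros feature; this is illustration, not code from the example above):

#[tokio::main]
async fn main() {
    let (tx, mut rx) = tokio::sync::mpsc::channel::<u32>(8);
    let tx2 = tx.clone();

    tx.send(1).await.unwrap();
    drop(tx);                          // one sender remains (tx2), channel stays open
    assert_eq!(rx.recv().await, Some(1));

    drop(tx2);                         // last sender dropped
    assert_eq!(rx.recv().await, None); // recv now reports "no more senders"
}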

So, in process_and_stuff, when will the sender aliases be fully dropped? The answer depends on whether we do the last-use transformation or not:

Most of the time, running destructors earlier is a good thing. That means lower peak memory usage, faster responsiveness. But in extreme cases it could lead to bugs - a typical example is a Mutex<()> where the guard is being used to protect some external resource.

How can we change when code runs? Doesn't that break stability?

This is what editions are for! We have in fact done a very similar transformation before, in Rust 2021. RFC 2229 changed destructor timing around closures and it was, by and large, a non-event.

The desire for edition compatibility is in fact one of the reasons I want to make this a last-use transformation and not some kind of optimization. There is no UB in any of these examples; it's just that understanding what Rust code does around clones/aliases is a bit more complex than it used to be, because the compiler will do automatic transformation to those calls. The fact that this transformation is local to a function means we can decide on a call-by-call basis whether it should follow the older edition rules (where it will always occur) or the newer rules (where it may be transformed into a move).

Does that mean that the last-use transformation would change with Polonius or other borrow checker improvements?

In theory, yes, improvements to borrow-checker precision like Polonius could mean that we identify more opportunities to apply the last-use transformation. This is something we can phase in over an edition. It's a bit of a pain, but I think we can live with it - and I'm unconvinced it will be important in practice. For example, when thinking about the improvements I expect under Polonius, I was not able to come up with a realistic example that would be impacted.

Isn't it weird to do this after borrow check?

This last-use transformation is guaranteed not to produce code that would fail the borrow check. However, it can affect the correctness of unsafe code:

let p: *const T = &*some_place;

let q: T = some_place.clone();
//         ---------- assuming `some_place` is
//         not used later, becomes a move

unsafe {
    do_something(p);
    //           -
    // This now refers to a stack slot
    // whose value is uninitialized.
}

Note though that, in this case, there would be a lint identifying that the call to some_place.clone() will be transformed to just some_place. We could also detect simple examples like this one and report a stronger deny-by-default lint, as we often do when we see guaranteed UB.

Shouldn't we use a keyword for this?

When I originally had this idea, I called it "use-use-everywhere" and, instead of writing x.clone() or x.alias(), I imagined writing x.use. This made sense to me because a keyword seemed like a stronger signal that this was impacting closure desugaring. However, I've changed my mind for a few reasons.

First, Santiago Pastorino gave strong pushback that x.use was going to be a stumbling block for new learners. They now have to see this keyword and try to understand what it means - in contrast, if they see method calls, they will likely not even notice something strange is going on.

The second reason though was TC who argued, in the lang-team meeting, that all the arguments for why it should be ergonomic to alias a ref-counted value in a closure applied equally well to clone, depending on the needs of your application. I completely agree. As I mentioned earlier, this also addresses the concern I've heard with the Alias trait, which is that there are things you want to ergonomically clone but which don't correspond to "aliases". True.

In general I think that clone (and alias) are fundamental enough to how Rust is used that it's ok to special case them. Perhaps we'll identify other similar methods in the future, or generalize this mechanism, but for now I think we can focus on these two cases.

What about "deferred ref-counting"?

One point that I've raised from time to time is that I would like a solution that gives the compiler more room to optimize ref-counting to avoid incrementing ref-counts in cases where it is obvious that those ref-counts are not needed. An example might be a function like this:

fn use_data(rc: Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

This function requires ownership of an alias to a ref-counted value but it doesn't actually do anything but read from it. A caller like this one…

use_data(source.alias())

…doesn't really need to increment the reference count, since the caller will be holding a reference the entire time. I often write code like this using a &:

fn use_data(rc: &Rc<Data>) {
    for datum in rc.iter() {
        println!("{datum:?}");
    }
}

so that the caller can do use_data(&source) - this then allows the callee to write rc.alias() in the case that it wants to take ownership.

I've basically decided to punt on addressing this problem. I think folks that are very performance sensitive can use &Arc and the rest of us can sometimes have an extra ref-count increment, but either way, the semantics for users are clear enough and (frankly) good enough.


[1] Surprisingly to me, clippy::pedantic doesn't have a dedicated lint for unnecessary clones. This particular example does get a lint, but it's a lint about taking an argument by value and then not consuming it. If you rewrite the example to create id locally, clippy does not complain.

10 Nov 2025 6:55pm GMT

The Mozilla Blog: Firefox expands fingerprint protections: advancing towards a more private web

With Firefox 145, we're rolling out major privacy upgrades that take on browser fingerprinting - a pervasive and hidden tracking technique that lets websites identify you even when cookies are blocked or you're in private browsing. These protections build on Mozilla's long-term goal of building a healthier, transparent and privacy-preserving web ecosystem.

Fingerprinting builds a secret digital ID of you by collecting subtle details of your setup - ranging from your time zone to your operating system settings - that together create a "fingerprint" identifiable across websites and across browser sessions. Having a unique fingerprint means fingerprinters can continuously identify you invisibly, allowing bad actors to track you without your knowledge or consent. Online fingerprinting is able to track you for months, even when you use any browser's private browsing mode.

Protecting people's privacy has always been core to Firefox. Since 2020, Firefox's built-in Enhanced Tracking Protection (ETP) has blocked known trackers and other invasive practices, while features like Total Cookie Protection and now expanded fingerprinting defenses demonstrate a broader goal: prioritizing your online freedom through innovative privacy-by-design. Since 2021, Firefox has been incrementally enhancing anti-fingerprinting protections targeting the most common pieces of information collected for suspected fingerprinting uses.

Today, we are excited to announce the completion of the second phase of defenses against fingerprinters that linger across all your browsing but aren't in the known tracker lists. With these fingerprinting protections, the number of Firefox users trackable by fingerprinters is reduced by half.

How we built stronger defenses

Drawing from a global analysis of how real people's browsers can be fingerprinted, Mozilla has developed new, unique and powerful defenses against real-world fingerprinting techniques. Firefox is the first browser with this level of insight into fingerprinting and the most effective deployed defenses to reduce it. Like Total Cookie Protection, one of our most innovative privacy features, these new defenses are debuting in Private Browsing Mode and ETP Strict mode initially, while we work to enable them by default.

How Firefox protects you

These fingerprinting protections work on multiple layers, building on Firefox's already robust privacy features. For example, Firefox has long blocked known tracking and fingerprinting scripts as part of its Enhanced Tracking Protection.

Beyond blocking trackers, Firefox also limits the information it makes available to websites - a privacy-by-design approach - that preemptively shrinks your fingerprint. Browsers provide a way for websites to ask for information that enables legitimate website features, e.g. your graphics hardware information, which allows sites to optimize games for your computer. But trackers can also ask for that information, for no other reason than to help build a fingerprint of your browser and track you across the web.

Since 2021, Firefox has been incrementally advancing fingerprinting protections, covering the most pervasive fingerprinting techniques. These include things like how your graphics card draws images, which fonts your computer has, and even tiny differences in how it performs math. The first phase plugged the biggest and most-common leaks of fingerprinting information.

Recent Firefox releases have tackled the next-largest leaks of user information used by online fingerprinters. This ranges from strengthening the font protections to preventing websites from getting to know your hardware details like the number of cores your processor has, the number of simultaneous fingers your touchscreen supports, and the dimensions of your dock or taskbar. The full list of detailed protections is available in our documentation.

Our research shows these improvements cut the percentage of users seen as unique by almost half.

Firefox's new protections are a balance of disrupting fingerprinters while maintaining web usability. More aggressive fingerprinting blocking might sound better, but is guaranteed to break legitimate website features. For instance, calendar, scheduling, and conferencing tools legitimately need your real time zone. Firefox's approach is to target the most leaky fingerprinting vectors (the tricks and scripts used by trackers) while preserving functionality many sites need to work normally. The end result is a set of layered defenses that significantly reduce tracking without downgrading your browsing experience. More details are available about both the specific behaviors and how to recognize a problem on a site and disable protections for that site alone, so you always stay in control. The goal: strong privacy protections that don't get in your way.

What's next for your privacy

If you open a Private Browsing window or use ETP Strict mode, Firefox is already working behind the scenes to make you harder to track. The latest phase of Firefox's fingerprinting protections marks an important milestone in our mission to deliver smart privacy protections that work automatically - no further extensions or configurations needed. As we head into the future, Firefox remains committed to fighting for your privacy, so you get to enjoy the web on your terms. Upgrade to the latest Firefox and take back control of your privacy.

Take control of your internet

Download Firefox

The post Firefox expands fingerprint protections: advancing towards a more private web appeared first on The Mozilla Blog.

10 Nov 2025 2:00pm GMT

The Rust Programming Language Blog: Announcing Rust 1.91.1

The Rust team has published a new point release of Rust, 1.91.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.91.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.91.1

Rust 1.91.1 includes fixes for two regressions introduced in the 1.91.0 release.

Linker and runtime errors on Wasm

Most targets supported by Rust identify symbols by their name, but Wasm identifies them with a symbol name and a Wasm module name. The #[link(wasm_import_module)] attribute lets you customize the Wasm module name an extern block refers to:

#[link(wasm_import_module = "hello")]
extern "C" {
    pub fn world();
}

Rust 1.91.0 introduced a regression in the attribute, which could cause linker failures during compilation ("import module mismatch" errors) or the wrong function being used at runtime (leading to undefined behavior, including crashes and silent data corruption). This happened when the same symbol name was imported from two different Wasm modules across multiple Rust crates.
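As an illustrative sketch (not the actual affected code), the problematic pattern looks roughly like this - the same symbol name declared against two different Wasm module names; in the real-world reports this spanned multiple crates rather than modules:

#[link(wasm_import_module = "hello")]
extern "C" {
    pub fn world();
}

mod other {
    // Same symbol name, different Wasm module - in 1.91.0 the two imports
    // could be conflated, causing linker errors or the wrong import at runtime.
    #[link(wasm_import_module = "goodbye")]
    extern "C" {
        pub fn world();
    }
}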

Rust 1.91.1 fixes the regression. More details are available in issue #148347.

Cargo target directory locking broken on illumos

Cargo relies on locking the target/ directory during a build to prevent concurrent invocations of Cargo from interfering with each other. Not all filesystems support locking (most notably some networked ones): if the OS returns the Unsupported error when attempting to lock, Cargo assumes locking is not supported and proceeds without it.
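Conceptually, that fallback looks something like this sketch (hypothetical code, not Cargo's actual implementation), built on the File::lock API discussed below:

use std::fs::File;
use std::io::ErrorKind;

/// Try to take a lock on the build directory's lock file. Returns Ok(true)
/// if the lock was taken, Ok(false) if the filesystem reports locking as
/// unsupported (in which case the build proceeds without a lock).
fn lock_build_dir(lock_file: &File) -> std::io::Result<bool> {
    match lock_file.lock() {
        Ok(()) => Ok(true),
        Err(e) if e.kind() == ErrorKind::Unsupported => Ok(false),
        Err(e) => Err(e),
    }
}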

Cargo 1.91.0 switched from custom code interacting with the OS APIs to the File::lock standard library method (recently stabilized in Rust 1.89.0). Due to an oversight, that method always returned Unsupported on the illumos target, causing Cargo to never lock the build directory on illumos regardless of whether the filesystem supported it.

Rust 1.91.1 fixes the oversight in the standard library by enabling the File::lock family of functions on illumos, indirectly fixing the Cargo regression.

Contributors to 1.91.1

Many people came together to create Rust 1.91.1. We couldn't have done it without all of you. Thanks!

10 Nov 2025 12:00am GMT

07 Nov 2025

feedPlanet Mozilla

The Mozilla Blog: Introducing early access for Firefox Support for Organizations

Multiple Firefox logos forming a curved trail on a dark background.

Increasingly, businesses, schools, and government institutions deploy Firefox at scale for security, resilience, and data sovereignty. Organizations have fine-grained administrative and orchestration control of the browser's behavior using policies with Firefox and the Extended Support Release (ESR). Today, we're opening early access to Firefox Support for Organizations, a new program that begins operation in January 2026.

What Firefox Support for Organizations offers

Support for Organizations is a dedicated offering for teams who need private issue triage and escalation, defined response times, custom development options, and close collaboration with Mozilla's engineering and product teams.

Support for Organizations adds a new layer of help for teams and businesses that need confidential, reliable, and customized levels of support. All Firefox users will continue to have full access to existing public resources including documentation, the knowledge base, and community forums, and we'll keep improving those for everyone in the future. Support plans will help us better serve users who rely on Firefox for business-critical and sensitive operations.

Get in touch for early access

If these levels of support are interesting for your organization, get in touch using our inquiry form and we'll get back to you with more information.


Firefox Support for Organizations

Get early access

The post Introducing early access for Firefox Support for Organizations appeared first on The Mozilla Blog.

07 Nov 2025 12:00pm GMT

05 Nov 2025

feedPlanet Mozilla

The Mozilla Blog: Under the hood: How Firefox suggests tab groups with local AI

Browser popup showing the “Create tab group” menu with color options and AI tab suggestions button.

Background

Mozilla launched Tab Grouping in early 2025, allowing tabs to be arranged and grouped with persistent labels. It was the most requested feature in the history of Mozilla Connect. While tab grouping provides a great way to manage tabs and reduce tab overload, it can be a challenge to locate which tabs to group when you have many open.

We sought to improve the workflows by providing an AI tab grouping feature that enables two key capabilities: suggesting a title for a tab group, and suggesting which of your open tabs belong in a group together.

Of course, we wanted this to work without you needing to send any data of yours to Mozilla, so we used our local Firefox AI runtime and built an efficient model that delivers the features entirely on your own device. The feature is opt-in and downloads two small ML models when the user clicks to run it the first time.

Group title suggestion

Understanding the problem

Suggesting titles for grouped tabs is a challenge because it is hard to understand user intent when tabs are first grouped. Based on our interviews when we started the project, we found that while tab groups are sometimes generic terms like 'Shopping' or 'Travel', over half the time users' tabs were specific terms such as name of a video game, friend or town. We also found tab names to be extremely short - 1 or 2 words.

Diagram showing Firefox tab information processed by a generative AI model to label topics like Boston Travel

Generating a digest of the group

To address these challenges, we adopt a hybrid methodology that combines a modified TF-IDF-based textual analysis with keyword extraction. We identify terms that are statistically distinctive to the titles of pages within a tab group compared to those outside it. The three most prominent keywords, along with the full titles of three randomly selected pages, are then combined to produce a concise digest representing the group, which is used as input for the subsequent stage of processing using a language model.
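As a rough sketch of that idea (illustrative code only - the shipped feature uses a modified TF-IDF inside Firefox, not this exact scoring):

use std::collections::{HashMap, HashSet};

/// Pick the three words that are most distinctive for the titles inside
/// a tab group, compared to the titles of all other open tabs.
fn top_keywords(group_titles: &[&str], other_titles: &[&str]) -> Vec<String> {
    let tokenize = |s: &str| -> Vec<String> {
        s.to_lowercase()
            .split(|c: char| !c.is_alphanumeric())
            .filter(|w| !w.is_empty())
            .map(|w| w.to_string())
            .collect()
    };

    // Term frequency inside the group.
    let mut tf: HashMap<String, f64> = HashMap::new();
    for title in group_titles {
        for word in tokenize(title) {
            *tf.entry(word).or_insert(0.0) += 1.0;
        }
    }

    // Document frequency outside the group, used as an IDF-style penalty.
    let mut df: HashMap<String, f64> = HashMap::new();
    for title in other_titles {
        for word in tokenize(title).into_iter().collect::<HashSet<_>>() {
            *df.entry(word).or_insert(0.0) += 1.0;
        }
    }

    let n = other_titles.len().max(1) as f64;
    let mut scored: Vec<(String, f64)> = tf
        .into_iter()
        .map(|(word, count)| {
            let idf = (n / (1.0 + df.get(&word).copied().unwrap_or(0.0))).ln();
            (word, count * idf)
        })
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.into_iter().take(3).map(|(word, _)| word).collect()
}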

Generating the label

The digest string is used as an input to a generative model that returns the final label. We used a T5-based encoder-decoder model (flan-t5-base) that was fine-tuned on over 10,000 example situations and labels.

One of the key challenges in developing the model was generating the training data samples to tune the model without any user data. To do this, we defined a set of user archetypes and used an LLM API (OpenAI GPT-4) to create sample pages for a user performing various tasks. This was augmented by real page titles from the publicly available common crawl dataset. We then used the LLM to suggest short titles for those use cases. The process was first done at a small scale of several hundred group names. These were manually corrected and curated, adjusting for brevity and consistency. As the process scaled up, the initial 300 group names were used as examples passed to the LLM so that the additional examples created would meet those standards.

Shrinking things down

We need to get the model small enough to run on most computers. Once the initial model was trained, it was compressed into a smaller model using a process known as knowledge distillation. For distillation, we tuned a t5-efficient-tiny model on the token probability outputs of our teacher flan-t5-base model. Midway through the distillation process we also removed two encoder transformer layers and two decoder layers to further reduce the number of parameters.

Finally, the model parameters were quantized from floating point (4 bytes per parameter) to 8-bit integers. In the end this entire reduction process reduced the model from 1GB to 57 MB, with only a modest reduction in accuracy.

Suggesting tabs

Understanding the problem

For tab suggestions, we identified a couple of ways people prefer to group their tabs. Some people prefer grouping by domain, for instance to easily access all documents for work. Others might prefer grouping all their tabs together when they are planning a trip. Others still might prefer separating their "work" and "personal" tabs.

Our initial approach on suggesting tabs was based on semantic similarity. Tabs that are topically similar are suggested.

Browser pop-up suggesting related tabs for a Boston trip using AI-based grouping

Identifying topically similar tabs

We first convert tab titles to a feature vector locally using a MiniLM embedding model. Embedding models are trained so that similar content produces vectors that are close together in embedding space. Using a similarity measure such as cosine similarity, we're able to measure how similar one tab title or URL is to another.

The similarity score between an anchor tab chosen by the user and a candidate tab is a linear combination of the candidate's similarity to the anchor's group title (if present), to the anchor tab title, and to the anchor URL. From these values we generate a similarity probability, and tabs whose probability exceeds a threshold are suggested as part of the group.

Mathematical formula showing conditional probability using weighted similarity and sigmoid function

where,
w is the weight,
t_i is the candidate tab,
t_a is the anchor tab,
g_a is the anchor group title,
u_i is the candidate url
u_a is the anchor url, and,
σ is the sigmoid function
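A minimal sketch of that scoring step (hypothetical code and weights; the real weights come from the logistic regression described below):

/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

/// Probability that a candidate tab belongs to the anchor's group: a weighted
/// sum of its similarity to the group title (if any), the anchor tab title and
/// the anchor URL, passed through a sigmoid.
fn group_probability(
    candidate_title: &[f32],
    candidate_url: &[f32],
    anchor_title: &[f32],
    anchor_url: &[f32],
    anchor_group_title: Option<&[f32]>,
    weights: [f32; 4], // [w_group, w_title, w_url, bias] - hypothetical values
) -> f32 {
    let g = anchor_group_title.map_or(0.0, |g| cosine(candidate_title, g));
    let t = cosine(candidate_title, anchor_title);
    let u = cosine(candidate_url, anchor_url);
    sigmoid(weights[0] * g + weights[1] * t + weights[2] * u + weights[3])
}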

Optimizing the weights

In order to find the weights, we framed the problem as a classification task, where we calculate the precision and recall based on the tabs that were correctly classified given an anchor tab. We used synthetic data generated by OpenAI based on the user archetypes above.

We initially used a clustering approach to establish a baseline and switched to a logistic regression when we realized that treating the group, title and url features with varying importances improved our metrics.

Bar chart comparing DBScan and Logistic Regression by precision, recall, and F1 performance metrics

Using logistic regression, there was an 18% improvement against the baseline.

Performance

While the median number of tabs for people using the feature is relatively small (~25), there are some "power" users whose tab count reaches the thousands. This would cause the tab grouping feature to take uncomfortably long.

This was part of the reason why we switched from a clustering based approach to a linear model.

Using our performance framework, we found that the p99 latency of the logistic regression approach improved by 33% compared to a clustering-based method such as KMeans.

Bar chart comparing KMeans and Logistic Regression using percentile metrics p50, p95, and p99

Future work here would involve improving the F1 score. This could include adding a time-related component to the inference (we are more likely to group together tabs that we've opened at the same time) or using an embedding model fine-tuned for our use case.

Thanks for reading

All of our work is open source. If you are a developer feel free to peruse our source code on our model training, or view our topic model on Huggingface.

Feel free to try the feature and let us know what you think!

Take control of your internet

Download Firefox

The post Under the hood: How Firefox suggests tab groups with local AI appeared first on The Mozilla Blog.

05 Nov 2025 3:41pm GMT

Wladimir Palant: An overview of the PPPP protocol for IoT cameras

My previous article on IoT "P2P" cameras couldn't go into much detail on the PPPP protocol. However, there is already lots of security research on and around that protocol, and I have a feeling that there is way more to come. There are pieces of information on the protocol scattered throughout the web, yet each one approaches it from a very specific, narrow angle. This is my attempt at creating an overview so that other people don't need to start from scratch.

While the protocol can in principle be used by any kind of device, so far I've only seen network-connected cameras. It isn't really peer-to-peer as advertised but rather relies on central servers, yet the protocol allows the bulk of the data to be transferred via a direct connection between the client and the device. It's hard to tell how many users there are, but there are lots of apps; I'm sure that I haven't found all of them.

There are other protocols with similar approaches being used for the same goal. One is used by ThroughTek's Kalay Platform which has the interesting string "Charlie is the designer of P2P!!" in its codebase (32 bytes long, seems to be used as "encryption" key for some non-critical functionality). I recognize both the name and the "handwriting"; it looks like the PPPP protocol designer found a new home here. Yet PPPP seems to be still more popular than the competition, thanks to it being the protocol of choice for cheap low-end cameras.

Disclaimer: Most of the information below has been acquired by analyzing public information as well as reverse engineering applications and firmware, not by observing live systems. Consequently, there can be misinterpretations.

Update (2025-11-07): Added App2Cam Plus app to the table, representing a number of apps which all seem to be belong to ABUS Smartvest Wireless Alarm System.

Update (2025-11-07): This article originally grouped Xiaomi Home together with Yi apps. This was wrong, Xiaomi uses a completely different protocol to communicate with their PPPP devices. A brief description of this protocol has been added.

The general design

The protocol's goal is to serve as a drop-in replacement for TCP. Rather than establish a connection to a known IP address (or a name to be resolved via DNS), clients connect to a device identifier. The abstraction is supposed to hide away how the device is located (via a server that keeps track of its IP address), how a direct communication channel is established (via UDP hole punching) or when one of multiple possible fallback scenarios is being used because direct communication is not possible.

The protocol is meant to be resilient, so there are usually three redundant servers handling each network. When a device or client needs to contact a server, it sends the same message to all of them and doesn't care which one will reply. Note: In this article "network" generally means a PPPP network, i.e. a set of servers and the devices connecting to them. While client applications typically support multiple networks, devices are always associated with a specific one determined by their device prefix.

For what is meant to be a transport layer protocol, PPPP has some serious complexity issues. It encompasses device discovery on the LAN via UDP broadcasts, UDP communication between device/client and the server, and a number of (not exactly trivial) fallback solutions. It also features multiple "encryption" algorithms (more correctly described as obfuscators) as well as network management functionality.

Paul Marrapese's Wireshark Dissector provides an overview of the messages used by the protocol. While it isn't quite complete, a look into the pppp.fdesc file shows roughly 70 different message types. It's hard to tell how all these messages play together as the protocol has not been designed as a state machine. The protocol implementation uses its previous actions as context to interpret incoming messages, but it has little indication as to which messages are expected when. Observing a running system is essential to understanding this protocol.

The complicated message exchange required to establish a connection between a device and a client has been described by Elastic Security Labs. They also provide the code of their client which implements that secret handshake.

I haven't seen any descriptions of how the fallback approaches work when a direct connection cannot be established. Neither could I observe these fallbacks in action, presumably because the network I observed didn't enable them. There are at least three such fallbacks: UDP traffic can be relayed by a network-provided server, it can be relayed by a "supernode" which is a device that agreed to be used as a relay, and it can be wrapped in a TCP connection to the server. The two centralized solutions incur significant costs for the network owners, rendering them unpopular. And I can imagine the "supernode" approach to be less than reliable with low-end devices like these cameras (it's also a privacy hazard but this clearly isn't a consideration).

I recommend going through the CS2 sales presentation to get an idea of how the protocol is meant to work. Needless to say, it doesn't always work as intended.

The network ports

I could identify the following network ports being used:

Note that while port 443 is normally associated with HTTPS, here it was apparently only chosen to fool firewalls. The traffic is merely obfuscated, not really encrypted.

The direct communication between the client and the device uses a random UDP port. In my understanding the ports are also randomized when this communication is relayed by a server or supernode.

The device IDs

The canonical representation of a device ID looks like this: ABC-123456-VWXYZ. Here ABC is a device prefix. While a PPPP network will often handle more than one device prefix, mapping a device prefix to a set of servers is supposed to be unambiguous. This rule isn't enforced across different protocol variants however, e.g. the device prefix EEEE is assigned differently by CS2 and iLnk.

The six digit number following the device prefix allows distinguishing different devices within a prefix. It seems that vendors can choose these numbers freely - some will assign them to devices sequentially, others go by some more complicated rules. A comment on my previous article even claims that they will sometimes reassign existing device IDs to new devices.

The final part is the verification code, meant to prevent enumeration of devices. It is generated by some secret algorithm and allows distinguishing valid device IDs from invalid ones. At least one such algorithm got leaked in the past.

Depending on the application a device ID will not always be displayed in its canonical form. It's pretty typical for the dashes to be removed for example, in one case I saw the prefix being shortened to one letter. Finally, there are applications that will hide the device ID from the user altogether, displaying only some vendor-specific ID instead.

The protocol variants

So far I could identify at least four variants of this protocol - if you count HLP2P which is questionable. These protocol implementations differ significantly and aren't really compatible. A number of apps can work with different protocol implementations but they generally do it by embedding multiple client libraries.

Variant | Typical client library names | Typical functions
CS2 Network | libPPCS_API.so, libobject_jni.so, librtapi.so | PPPP_Initialize, PPPP_ConnectByServer
Yi Technology | PPPP_API.so, libmiio_PPPP_API.so | PPPP_Initialize, PPPP_ConnectByServer
iLnk | libvdp.so, libHiChipP2P.so | XQP2P_Initialize, XQP2P_ConnectByServer, HI_XQ_P2P_Init
HLP2P | libobject_jni.so, libOKSMARTPPCS.so | HLP2P_Initialize, HLP2P_ConnectByServer

CS2 Network

The Chinese company CS2 Network is the original developer of the protocol. Their implementation can sometimes be recognized without even looking at any code, just by their device IDs. The letters A, I, O and Q are never present in the verification code; there are only 22 valid letters here. The same seems to apply to the Yi Technology fork however, which is generally very similar.

The other giveaway is the "init string" which encodes network parameters. Typically these init strings are hardcoded in the application (sometimes hundreds of them) and chosen based on device prefix, though some applications retrieve them from their servers. These init strings are obfuscated, with the function PPPP_DecodeString doing the decoding. The approach is typical for CS2 Network: a lookup table filled with random values and some random algebraic operations to make things seem more complex. The init strings look like this:

DRFTEOBOJWHSFQHQEVGNDQEXFRLZGKLUGSDUAIBXBOIULLKRDNAJDNOZHNKMJO:SECRETKEY

The part before the colon decodes into:

127.0.0.1,192.168.1.1,10.0.0.1,

This is a typical list of three server IPs. No, the trailing comma isn't a typo but required for correct parsing. Host names are occasionally used in init strings but this is uncommon. With CS2 Network generally distrusting DNS from the looks of it, they probably recommend vendors to sidestep it. The "secret" key behind the colon is optional and activates encryption of transferred data which is better described as obfuscation. Unlike the server addresses, this part isn't obfuscated.

Yi Technology

The Xiaomi spinoff Yi Technology appears to have licensed the code of the CS2 Network implementation. They made some moderate changes to it but it is still very similar to the original. For example, they still use the same code to decode init strings, merely with a different lookup table. Consequently, the same init string as above would look slightly different here:

LZERHWKWHUEQKOFUOREPNWERHLDLDYFSGUFOJXIXJMASBXANOTHRAFMXNXBSAM:SECRETKEY

As can be seen from Paul Marrapese's Wireshark Dissector, the Yi Technology fork added a bunch of custom protocol messages and extended two messages presumably to provide forward compatibility. The latter is a rather unusual step for the PPPP ecosystem where the dominant approach seems to be "devices and clients connecting to the same network always use the same version of the client library which is frozen for all eternity."

There is another notable difference: this PPPP implementation doesn't contain any encryption functionality. There seems to be some AES encryption being performed at the application layer (which is the proper way to do it); I didn't look too closely however.

iLnk

The protocol fork developed by Shenzhen Yunni Technology iLnkP2P seems to have been developed from scratch. The device IDs for legacy iLnk networks are easy to recognize because their verification codes only consist of the letters A to F. The algorithm generating these verification codes is public knowledge (CVE-2019-11219) so we know that these are letters taken from an MD5 hex digest. New iLnk networks appear to have verification codes that can contain all Latin letters, some new algorithm replaced the compromised one here. Maybe they use Base64 digests now?

An iLnk init string can be recognized by the presence of a dash:

ATBBARASAXAOAQAOAQAOARBBARAZASAOARAWAYAOARAOARBBARAQAOAQAOAQAOAR-$$

The part before the dash decodes into:

3;127.0.0.1;192.168.1.1;10.0.0.1

Yes, the first list entry has to specify how many server IPs there are. The decoding approach (function HI_DecStr or XqStrDec depending on the implementation) is much simpler here - it's a kind of Base26 encoding. The part after the dash can encode additional parameters related to validation of device IDs but typically it will be $$ indicating that it is omitted and network-specific device ID validation can be skipped. As far as I can tell, iLnk networks will always send all data as plain text; there is no encryption functionality of any kind.
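Working backwards from the example above, the encoding appears to map every output character to a pair of letters; a decoder along these lines reproduces the server list (a sketch inferred from that example, not taken from the HI_DecStr code):

/// Decode an iLnk init string (the part before the dash): every two letters
/// A-Z appear to encode one character as (hi - 'A') * 26 + (lo - 'A') + 32.
fn ilnk_decode(encoded: &str) -> Option<String> {
    let bytes = encoded.as_bytes();
    if bytes.len() % 2 != 0 {
        return None;
    }
    bytes
        .chunks(2)
        .map(|pair| {
            let hi = pair[0].checked_sub(b'A')? as u32;
            let lo = pair[1].checked_sub(b'A')? as u32;
            char::from_u32(hi * 26 + lo + 32)
        })
        .collect()
}

fn main() {
    let decoded = ilnk_decode(
        "ATBBARASAXAOAQAOAQAOARBBARAZASAOARAWAYAOARAOARBBARAQAOAQAOAQAOAR",
    );
    assert_eq!(decoded.as_deref(), Some("3;127.0.0.1;192.168.1.1;10.0.0.1"));
}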

Going through the code, the network-level changes in the iLnk fork are extensive, with only the most basic messages shared with the original PPPP protocol. Some message types clash: for example, MSG_DEV_MAX uses the same type as MSG_DEV_LGN_CRC in the CS2 implementation. This fork also introduces new magic numbers: while PPPP messages normally start with 0xF1, some messages here start with 0xA1 and one for some reason with 0xF2.

Unfortunately, I haven't seen any comprehensive analysis of this protocol variant yet, so I'll just list the message types along with their payload sizes. For messages with 20 bytes payloads it can be assumed that the payload is a device ID. Don't ask me why two pairs of messages share the same message type.

Message | Message type | Payload size
MSG_HELLO | F1 00 | 0
MSG_RLY_PKT | F1 03 | 0
MSG_DEV_LGN | F1 10 | IPv4: 40, IPv6: 152
MSG_DEV_MAX | F1 12 | 20
MSG_P2P_REQ | F1 20 | IPv4: 36, IPv6: 152
MSG_LAN_SEARCH | F1 30 | 0
MSG_LAN_SEARCH_EXT | F1 32 | 0
MSG_LAN_SEARCH_EXT_ACK | F1 33 | 52
MSG_DEV_UNREACH | F1 35 | 20
MSG_PUNCH_PKT | F1 41 | 20
MSG_P2P_RDY | F1 42 | 20
MSG_RS_LGN | F1 60 | 28
MSG_RS_LGN_EX | F1 62 | 44
MSG_LST_REQ | F1 67 | 20
MSG_RLY_HELLO | F1 70 | 0
MSG_RLY_HELLO_ACK | F1 71 | 0
MSG_RLY_PORT | F1 72 | 0
MSG_RLY_PORT_ACK | F1 73 | 8
MSG_RLY_PORT_EX_ACK | F1 76 | 264
MSG_RLY_REQ_EX | F1 77 | 288
MSG_RLY_REQ | F1 80 | IPv4: 40, IPv6: 160
MSG_HELLO_TO_ACK | F1 83 | 28
MSG_RLY_RDY | F1 84 | 20
MSG_SDEV_LGN | F1 91 | 20
MSG_MGM_ADMIN | F1 A0 | 160
MSG_MGM_DEVLIST_CTRL | F1 A2 | 20
MSG_MGM_HELLO | F1 A4 | 4
MSG_MGM_MULTI_DEV_CTRL | F1 A6 | variable
MSG_MGM_DEV_DETAIL | F1 A8 | 24
MSG_MGM_DEV_VIEW | F1 AA | 4
MSG_MGM_RLY_LIST | F1 AC | 12
MSG_MGM_DEV_CTRL | F1 AE | 24
MSG_MGM_MEM_DB | F1 B0 | 264
MSG_MGM_RLY_DETAIL | F1 B2 | 24
MSG_MGM_ADMIN_LGOUT | F1 BA | 4
MSG_MGM_ADMIN_CHG | F1 BC | 164
MSG_VGW_LGN | F1 C0 | 24
MSG_VGW_LGN_EX | F1 C0 | 24
MSG_VGW_REQ | F1 C3 | 20
MSG_VGW_REQ_ACK | F1 C4 | 4
MSG_VGW_HELLO | F1 C5 | 0
MSG_VGW_LST_REQ | F1 C6 | 20
MSG_DRW | F1 D0 | variable
MSG_DRW_ACK | F1 D1 | variable
MSG_P2P_ALIVE | F1 E0 | 0
MSG_P2P_ALIVE_ACK | F1 E1 | 0
MSG_CLOSE | F1 F0 | 0
MSG_MGM_DEV_LGN_DETAIL_DUMP | F1 F4 | 12
MSG_MGM_DEV_LGN_DUMP | F1 F4 | 12
MSG_MGM_LOG_CTRL | F1 F7 | 12
MSG_SVR_REQ | F2 10 | 0
MSG_DEV_LV_HB | A1 00 | 20
MSG_DEV_SLP_HB | A1 01 | 20
MSG_DEV_QUERY | A1 02 | 20
MSG_DEV_WK_UP_REQ | A1 04 | 20
MSG_DEV_WK_UP | A1 06 | 20

HLP2P

While I've seen a few apps with HLP2P code and the corresponding init strings, I am not sure whether these are still used or merely leftovers from some past adventure. All these apps primarily use networks that rely on other protocol implementations.

HLP2P init strings contain a dash preceded by just three letters. These three letters are ignored, and I am unsure about their significance as I've only seen one variant:

DAS-0123456789ABCDEF

The decoding function is called from HLP2P_Initialize function and uses the most elaborate approach of all. The hex-encoded part after the dash is decrypted using AES-CBC where the key and initialization vector are derived from a zero-filled buffer via some bogus MD5 hashing. The decoded result is a list of comma-separated parameters like:

DCDC07FF,das,10000001,a+a+a,127.0.0.1-192.168.1.1-10.0.0.1,ABC-CBA

The fifth parameter is a list of server IP addresses and the sixth appears to be the list of supported device prefixes.

On the network level HLP2P is an oddity here. Despite trying hard to provide the same API as other PPPP implementations, including concepts like init strings and device IDs, it appears to be a TCP-based protocol (connecting to the server's port 65527) with little resemblance to PPPP. UDP appears to be used for local broadcasts only (on port 65531). I didn't spend too much time on the analysis however.

"Encryption"

The CS2 implementation of the protocol is the only one that bothers with encrypting data, though their approach is better described as obfuscation. When encryption is enabled, the function P2P_Proprietary_Encrypt is applied to all outgoing and the function P2P_Proprietary_Decrypt to all incoming messages. These functions take the encryption key (which is visible in the application code as an unobfuscated part of the init string) and mash it into four bytes. These four bytes are then used to select values from a static table that the bytes of the message should be XOR'ed with.

There is at least one public implementation of this "encryption", though this one chose to skip the "key mashing" part and simply took the resulting four bytes as its key. A number of articles mention having implemented this algorithm; it's not really complicated.

The same obfuscation is used unconditionally for TCP traffic (TCP communication on port 443 as fallback). Here each message header contains two random bytes. The hex representation of these bytes is used as key to obfuscate message contents.

All *_CRC messages like MSG_DEV_LGN_CRC have an additional layer of obfuscation, performed by the functions PPPP_CRCEnc and PPPP_CRCDec. Unlike P2P_Proprietary_Encrypt which is applied to the entire message including the header, PPPP_CRCEnc is only applied to the payload. As normally only messages exchanged between the device and the server are obfuscated in this way, the corresponding key tends to be contained only in the device firmware and not in the application. Here as well the key is mashed into four bytes which are then used to generate a byte sequence that the message (extended by four + signs) is XOR'ed with. This is effectively an XOR cipher with a static key which is easy to crack even without knowing the key.
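
Again only a structural sketch, with a placeholder keystream; the real key reduction and byte-sequence generation in PPPP_CRCEnc are different:

// Only the payload is obfuscated, after being extended by four '+' signs.
fn crc_enc(payload: &[u8], key: &[u8]) -> Vec<u8> {
    let mut buf = payload.to_vec();
    buf.extend_from_slice(b"++++");
    for (i, byte) in buf.iter_mut().enumerate() {
        // placeholder keystream: the real one is generated from the mashed key
        *byte ^= key[i % key.len()];
    }
    buf
}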

"Secret" messages

The CS2 implementation of the protocol contains a curiosity: two messages starting with 338DB900E559 being processed in a special way. No, this isn't a hexadecimal representation of the bytes - it's literally the message contents. No magic bytes, no encryption, the messages are expected to be 17 bytes long and are treated as zero-terminated strings.

I tried sending 338DB900E5592B32 (with a trailing zero byte) to a PPPP server and, surprisingly, received a response (non-ASCII bytes are represented as escape sequences):

\x0e\x0ay\x07\x08uT_ChArLiE@Cs2-NeTwOrK.CoM!

This response was consistent for this server, but another server of the same network responded slightly differently:

\x0e\x0ay\x07\x08vT_ChArLiE@Cs2-NeTwOrK.CoM!

A server from a different network which normally encrypts all communication also responded:

\x17\x06f\x12fDT_ChArLiE@Cs2-NeTwOrK.CoM!

It doesn't take a lot of cryptanalysis knowledge to realize that an XOR cipher with a constant key is being applied here. Thanks to my "razor sharp deduction" I could conclude that the servers are replying with their respective names and these names are being XOR'ed with the string CS2MWDT_ChArLiE@Cs2-NeTwOrK.CoM!. Yes, likely the very same Charlie already mentioned at the start of this article. Hi, Charlie!
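
For illustration, here is a rough sketch of how one could undo that XOR (not code from any of the apps); bytes that happen to match the key come out as zero, so only the leading server-name characters survive:

// XOR a "secret message" response against the constant key from above.
fn decode_server_name(response: &[u8]) -> Vec<u8> {
    let key = b"CS2MWDT_ChArLiE@Cs2-NeTwOrK.CoM!";
    response
        .iter()
        .zip(key.iter().cycle())
        .map(|(r, k)| r ^ k)
        .take_while(|&b| b != 0) // the matching tail XORs to zero padding
        .collect()
}

// If I did the arithmetic right, the first response above comes out as "MYKJ_1".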

I didn't risk sending the other message, not wanting to shut down a server accidentally. But maybe Shodan wants to extend their method of detecting PPPP servers: their current approach only works when no encryption is used, yet this message seems to get replies from all CS2 servers regardless of encryption.

Applications

Once a connection between the client and the device is established, MSG_DRW messages are exchanged in both directions. The messages will be delivered in order and retransmitted if lost, giving application developers something resembling a TCP stream if you don't look too closely. In addition, each message is tagged with a channel ID, a number between 0 and 7. It looks like channel IDs are universally ignored by devices and are only relevant in the other direction. The idea seems to be that a client receiving a video stream should still be able to send commands to the device and receive responses over the same connection.

The PPPP protocol doesn't make any recommendations about how applications should encode their data within that stream, and so application developers came up with a number of wildly different application-level protocols. As a rule of thumb, all devices and clients on a particular PPPP network will always speak the same application-level protocol, though there might be slight differences in the supported capabilities. Different networks can share the same protocol, allowing them to be supported within the same application. Usually, there will be multiple applications implementing the same application-level protocol and working with the same PPPP networks, but I haven't yet seen any application supporting more than one protocol.

This allows grouping the applications by their application-level protocol. Applications within the same group are largely interchangeable; the same devices can be accessed from any of them. This doesn't necessarily mean that everything will work correctly, as there might still be subtle differences. E.g. an application meant for visual doorbells probably accesses somewhat different functionality than one meant for security cameras, even if both share the same protocol. Also, devices might be tied to the cloud infrastructure of a specific application, rendering them inaccessible to other applications working with the same PPPP network.

Fun fact: it is often very hard to know up front which protocol your device will speak. There is a huge thread with many spin-offs where people are attempting to reverse engineer A9 Mini cameras so that these can be accessed without an app. This effort is being massively complicated by the fact that all these cameras look basically the same, yet depending on the camera one out of at least four extremely different protocols could be used: HDWifiCamPro variant of SHIX JSON, YsxLite variant of iLnk binary, JXLCAM variant of CGI calls, or some protocol I don't know because it isn't based on PPPP.

The following is a list of PPPP-based applications I've identified so far, at least the ones with noteworthy user numbers. Mind you, these numbers aren't necessarily indicative of the number of PPPP devices - some applications listed only use PPPP for some devices, likely using other protocols for most of their supported devices (particularly the ones that aren't cameras). I try to provide a brief overview of the application-level protocol in the footnotes. Disclaimer: These applications tend to support a huge number of device prefixes in theory, so I mostly chose the "typical" ones based on which ones appear in YouTube videos or GitHub discussions.

Application | Typical device prefixes | Application-level protocol
Xiaomi Home | XMSYSGB | JSON (MISS) [1]
Kami Home, Yi Home, Yi iot | TNPCHNA TNPCHNB TNPUSAC TNPUSAM TNPXGAC | binary [2]
Tuya - Smart Life,Smart Living | TUYASA | binary (Tuya SDK) [3]
365Cam, CY365, Goodcam, HDWifiCamPro, PIX-LINK CAM, VI365, X-IOT CAM | DBG DGB DGO DGOA DGOC DGOE NMSA PIXA PIZ | JSON (SHIX) [4]
Eye4, O-KAM Pro, Veesky | EEEE VSTA VSTB VSTC VSTD VSTF VSTJ | CGI calls [5]
CamHi, CamHipro | AAFF EEEE MMMM NNNN PPPP SSAA SSAH SSAK SSAT SSSS TTTT | binary [6]
CloudEdge, ieGeek Cam | ECIPCM | binary (Meari SDK) [7]
YsxLite | BATC BATE PTZ PTZA PTZB TBAT | binary (iLnk) [8]
FtyCamPro | FTY FTYA FTYC FTZ FTZW | binary (iLnk) [9]
JXLCAM | ACCQ BCCA BCCQ CAMA | CGI calls [10]
LookCam | BHCC FHBB GHBB | JSON [11]
HomeEye, LookCamPro, StarEye | AYS AYSA TUT | JSON (SHIX) [12]
minicam | CAM888 | CGI calls [13]
App2Cam Plus | CGAG CMAG CTAI WGAG | binary (Jsw SDK) [14]

  1. Each message starts with a 4 byte command ID. The initial authorization messages (command ID 0x100 and 0x101) contain plain JSON data. Other messages contain ChaCha20-encoded data: first 8 bytes nonce, then the ciphertext. The encryption key is negotiated in the authorization phase. The decrypted plaintext again starts with a 4 byte command ID, followed by JSON data. There is even some Chinese documentation of this interface though it is rather underwhelming. ↩︎

  2. The device-side implementation of the protocol is available on the web. This doesn't appear to be reverse engineered; rather, it's the source code of the real thing, complete with Chinese comments. No idea who published this or why; I found it linked by people who develop their own changes to the stock camera firmware. The extensive tnp_eventlist_msg_s structure being sent and received here supports a large number of commands. ↩︎

  3. Each message is preceded by a 16 byte header: 78 56 34 12 magic bytes, request ID, command ID, payload size. This is a very basic interface exposing merely 10 commands, most of which are requesting device information while the rest control video/audio playback. As Tuya SDK also communicates with devices by means other than PPPP, more advanced functionality is probably exposed elsewhere. ↩︎

  4. Messages are preceded by an 8-byte binary header: 06 0A A0 80 magic bytes, four bytes payload size (there is a JavaScript-based implementation); a minimal framing sketch follows after these footnotes. The SHIX JSON format is a translation of this web API interface: /check_user.cgi?user=admin&pwd=pass becomes {"pro": "check_user", "cmd": 100, "user": "admin", "pwd": "pass"}. The pro and cmd fields are redundant, representing a command both as a string and as a number. ↩︎

  5. The binary message headers are similar to the ones used by apps like 365Cam: 01 0A 00 00 magic bytes, four bytes payload size. The payload is however a web request loosely based on this web API interface: GET /check_user.cgi?loginuse=admin&loginpas=pass&user=admin&pwd=pass. Yes, user name and password are duplicated, probably because not all devices expect loginuse/loginpas parameters? You can see in this article what the requests look like. ↩︎

  6. Each message is preceded by a 24-byte header starting with the magic bytes 99 99 99 99, payload size and command ID. The other 12 bytes of the header are unused. Not trusting PPPP, CamHi encrypts the payload using AES. It looks like the encryption key is an MD5 hash of a string containing the user name and password among other things. Somebody published some initial insights into the application code. ↩︎

  7. Each message is preceded by a 52 byte header starting with the magic bytes 56 56 50 99. Bulk of this header is taken up by an authentication token: a SHA1 hex digest hashing the username (always admin), device password, sequence number, command ID and payload size. The implemented interface provides merely 14 very basic commands, essentially only exposing access to recordings and the live stream. So the payload even where present is something trivial like a date. As Meari SDK also communicates with devices by means other than PPPP, more advanced functionality is probably exposed elsewhere. ↩︎

  8. The commands and their binary representation are contained within libvdp.so which is the iLnk implementation of the PPPP protocol. Each message is preceded by a 12-byte header starting with the 11 0A magic bytes. The commands are two bytes long with the higher byte indicating the command type: 2 for SD card command, 3 for A/V command, 4 for file command, 5 for password command, 6 for network command, 7 for system command. ↩︎

  9. While FtyCamPro app handles different networks than YsxLite, it relies on the same libvdp.so library, meaning that the application-level protocol should be the same. It's possible that some commands are interpreted differently however. ↩︎

  10. The protocol is very similar to the one used by VStarcam apps like O-KAM Pro. The payload has only one set of credentials however, the parameters user and pwd. It's also a far more limited and sometimes different set of commands. ↩︎

  11. Each message is wrapped in binary data: a prefix starting with A0 AF AF AF before it, the bytes F4 F3 F2 F1 after. For some reason the prefix length seems to be different depending on whether the message is sent to the device (26 bytes) or received from it (25 bytes). I don't know what most of it is, yet everything but the payload length at the end of the prefix seems irrelevant. This Warwick University paper has some info on the JSON payload. It's particularly notable that the password sent along with each command isn't actually being checked. ↩︎

  12. LookCamPro & Co. share significant amounts of code with the SHIX apps like 365Cam, they implement basically the same application-level protocol. There are differences in the supported commands however. It's difficult to say how significant these differences are because all apps contain significant amounts of dead code, defining commands that are never used and probably not even supported. ↩︎

  13. The minicam app seems to use almost the same protocol as VStarcam apps like O-KAM Pro. It handles other networks however. Also, a few of the commands seem different from the ones used by O-KAM Pro, though it is hard to tell how significant these incompatibilities really are. ↩︎

  14. Each message is preceded by a 4-byte header: 3 bytes payload size, 1 byte I/O type (1 for AUTH, 2 for VIDEO, 3 for AUDIO, 4 for IOCTRL, 5 for FILE). The payload starts with a type-specific header. If I read the code correctly, the first 16 bytes of the payload are encrypted with AES-ECB (unpadded) while the rest is sent unchanged. There is an "xor byte" in the payload header which is changed with every request seemingly to avoid generating identical ciphertexts. Payloads smaller than 16 bytes are not encrypted. I cannot see any initialization of the encryption key beyond filling it with 32 zero bytes, which would mean that this entire mechanism is merely obfuscation. ↩︎
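
As referenced in footnote 4, here is a minimal sketch of that SHIX framing. This isn't code from any of the apps; the byte order of the size field is my assumption, and the JSON payload is the example given in the footnote.

// Hypothetical sketch of the SHIX message framing from footnote 4.
fn shix_frame(json_payload: &str) -> Vec<u8> {
    let mut msg = vec![0x06, 0x0A, 0xA0, 0x80];                         // magic bytes
    msg.extend_from_slice(&(json_payload.len() as u32).to_le_bytes());  // payload size (assumed little-endian)
    msg.extend_from_slice(json_payload.as_bytes());                     // JSON payload
    msg
}

// Example payload from footnote 4:
// shix_frame(r#"{"pro": "check_user", "cmd": 100, "user": "admin", "pwd": "pass"}"#);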

05 Nov 2025 3:11pm GMT

Niko Matsakis: But then again...maybe alias?

Hmm, as I re-read the post I literally just posted a few minutes ago, I got to thinking. Maybe the right name is indeed Alias, and not Share. The rationale is simple: alias can serve as both a noun and a verb. It hits that sweet spot of "common enough you know what it means, but weird enough that it can be Rust Jargon for something quite specific". In the same way that we talk about "passing a clone of foo" we can talk about "passing an alias to foo" or an "alias of foo". Food for thought! I'm going to try Alias on for size in future posts and see how it feels.

05 Nov 2025 1:57pm GMT

Niko Matsakis: Bikeshedding `Handle` and other follow-up thoughts

There have been two major sets of responses to my proposal for a Handle trait. The first is that the Handle trait seems useful but doesn't cover all the cases where one would like to be able to ergonomically clone things. The second is that the name doesn't seem to fit with our Rust conventions for trait names, which emphasize short verbs over nouns. The TL;DR of my response is that (1) I agree, this is why I think we should work to make Clone ergonomic as well as Handle; and (2) I agree with that too, which is why I think we should find another name. At the moment I prefer Share, with Alias coming in second.

Handle doesn't cover everything

The first concern with the Handle trait is that, while it gives a clear semantic basis for when to implement the trait, it does not cover all the cases where calling clone is annoying. In other words, if we opt to use Handle, and then we make creating new handles very ergonomic, but calling clone remains painful, there will be a temptation to use the Handle trait when it is not appropriate.

In one of our lang team design meetings, TC raised the point that, for many applications, even an "expensive" clone isn't really a big deal. For example, when writing CLI tools and things, I regularly clone strings and vectors of strings and hashmaps and whatever else; I could put them in an Rc or Arc but I know it just doesn't matter.

My solution here is simple: let's make solutions that apply to both Clone and Handle. Given that I think we need a proposal that allows for handles that are both ergonomic and explicit, it's not hard to say that we should extend that solution to include the option for clone.

The explicit capture clause post already fits this design. I explicitly chose a design that allowed for users to write move(a.b.c.clone()) or move(a.b.c.handle()), and hence works equally well (or equally not well…) with both traits.

The name Handle doesn't fit the Rust conventions

A number of people have pointed out Handle doesn't fit the Rust naming conventions for traits like this, which aim for short verbs. You can interpret handle as a verb, but it doesn't mean what we want. Fair enough. I like the name Handle because it gives a noun we can use to talk about, well, handles, but I agree that the trait name doesn't seem right. There was a lot of bikeshedding on possible options but I think I've come back to preferring Jack Huey's original proposal, Share (with a method share). I think Alias and alias is my second favorite. Both of them are short, relatively common verbs.

I originally felt that Share was a bit too generic and overly associated with sharing across threads - but then I at least always call &T a shared reference1, and an &T would implement Share, so it all seems to work well. Hat tip to Ariel Ben-Yehuda for pushing me on this particular name.

Coming up next

The flurry of posts in this series have been an attempt to survey all the discussions that have taken place in this area. I'm not yet aiming to write a final proposal - I think what will come out of this is a series of multiple RFCs.

My current feeling is that we should add the Hand^H^H^H^H, uh, Share trait. I also think we should add explicit capture clauses. However, while explicit capture clauses are clearly "low-level enough for a kernel", I don't really think they are "usable enough for a GUI". The next post will explore another idea that I think might bring us closer to that ultimate ergonomic and explicit goal.


  1. A lot of people say immutable reference but that is simply not accurate: an &Mutex is not immutable. I think that the term shared reference is better. ↩︎

05 Nov 2025 1:15pm GMT

This Week In Rust: This Week in Rust 624

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is dioxus, a framework for building cross-platform apps.

Thanks to llogiq for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

480 pull requests were merged in the last week

Compiler
Library
Cargo
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Mostly positive week. We saw a great performance win implemented by #148040 and #148182, which optimizes crates with a lot of trivial constants.

Triage done by @kobzol.

Revision range: 23fced0f..35ebdf9b

Summary:

(instructions:u) | mean | range | count
Regressions ❌ (primary) | 0.8% | [0.1%, 2.9%] | 22
Regressions ❌ (secondary) | 0.5% | [0.1%, 1.7%] | 48
Improvements ✅ (primary) | -2.8% | [-16.4%, -0.1%] | 102
Improvements ✅ (secondary) | -1.9% | [-8.0%, -0.1%] | 51
All ❌✅ (primary) | -2.1% | [-16.4%, 2.9%] | 124

4 Regressions, 6 Improvements, 7 Mixed; 7 of them in rollups. 36 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Compiler Team (MCPs only)

Language Reference

Leadership Council

No Items entered Final Comment Period this week for Cargo, Rust RFCs, Language Team or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-11-05 - 2025-12-03 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If someone opens a PR introducing C++ to your Rust project, that code is free as in "use after"

- Predrag Gruevski on Mastodon

Thanks to Brett Witty for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

05 Nov 2025 5:00am GMT

04 Nov 2025

feedPlanet Mozilla

Firefox Add-on Reviews: Supercharge your productivity with a Firefox extension

With more work and education happening online you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right Firefox extension can give you an edge in the art of efficiency.

I need help saving and organizing a lot of web content

Raindrop.io

Organize anything you find on the web with Raindrop.io - news articles, videos, PDFs, and more.

Raindrop.io makes it simple to gather clipped web content by subject matter and organize with ease by applying tags, filters, and in-app search. This extension is perfectly suited for projects that require gathering and organizing lots of mixed media.

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research.

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension's toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams.

On your Gyazo homepage you can easily browse and sort everything you've clipped; and organize it all into shareable topics or collections.

With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.

Evernote Web Clipper

Similar to Gyazo and Raindrop.io, Evernote Web Clipper offers a kindred feature set - clip, save, and share web content - albeit with some nice user interface distinctions.

Evernote makes it easy to annotate images and articles for collaborative projects. It also has a strong internal search feature, allowing you to look for specific words and phrases that might appear across scattered collections of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages.

Notefox

Wouldn't it be great if you could leave yourself little sticky notes anywhere you wanted around the web? Well now you can with Notefox.

Leave notes on specific web pages or entire domains. You can access all your notes from a central repository so everything is easy to find. The extension also includes a helpful auto-save feature so you'll never lose a note.

Print Edit WE

If you need to save or print an important web page - but it's mucked up with a bunch of unnecessary clutter like ads, sidebars, and other peripheral distractions - Print Edit WE lets you easily remove those unwanted elements.

Along with a host of great features like the option to save web pages as either HTML or PDF files, automatically delete graphics, and the ability to alter text or add notes, Print Edit WE also provides an array of productivity optimizations like keyboard shortcuts and mouse gestures. This is the ideal productivity extension for any type of work steeped in web research and cataloging.

Focus! Focus! Focus!

Anti-distraction and decluttering extensions can provide a major boon for online workers and students…

Block Site

Do you struggle avoiding certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits.

Just list the websites you want to avoid for specified periods of time (certain hours of the day or some days entirely) and Block Site won't let you access them until you're out of the focus zone. There's also a fun redirection feature where you're automatically redirected to a more productive website anytime you try to visit a time waster.

Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site.

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features.

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities - from blocking just portions of websites (e.g. you can't access the YouTube homepage but you can see video pages), to setting restrictions on predetermined days (e.g. no Twitter on weekends), to 60-second delayed access to certain websites to give you time to reconsider that potentially productivity-killing decision.

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals.

The premise is simple: it assumes everyone's productive attention span is limited, so break up your work into manageable "tomato" chunks. Let's say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it's break time (which is also time customizable). It's a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Time Tracker

See how much time you spend on every website you visit. Time Tracker provides a granular view of your web habits.

If you find you're spending too much time on certain websites, Time Tracker offers a block site feature to break the bad habit.

Tabby - Window & Tab Manager

Are you overwhelmed by lots of open tabs and windows? Need an easy way to overcome desktop chaos? Tabby - Window & Tab Manager to the rescue.

Regain control of your ever-sprawling open tabs and windows with an extension that lets you quickly reorganize everything. Tabby makes it easy to find what you need in a chaotic sea of open tabs - you can word/phrase search for what you're looking for, or use Tabby's visual preview feature to see little thumbnail images of your open tabs without actually navigating to them. And whenever you need a clean slate but want to save your work, you can save and close all of your open tabs with a single mouse click and return to them later.

Access all of Tabby's features in one convenient pop-up.

Tranquility Reader

Imagine a world wide web where everything but the words are stripped away - no more distracting images, ads, tempting links to related stories, nothing - just the words you're there to read. That's Tranquility Reader.

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later, customizable font size and colors, add annotations to saved pages, and more.

Checker Plus for Gmail

Stop wasting time bouncing between the web and your Gmail app. Checker Plus for Gmail puts your inbox and more right into Firefox's toolbar so it's with you wherever you go on the internet.

See email notifications, read, reply, delete, mark as 'read' and more - all within a convenient browser pop-up.

We hope some of these great extensions will give your productivity a serious boost! Fact is there are a vast number of extensions that can help with productivity - everything from ways to organize tons of open tabs to translation tools to bookmark managers and more.

04 Nov 2025 1:42am GMT

03 Nov 2025

feedPlanet Mozilla

Chris H-C: Ten-Year Moziversary

I'm a few days late publishing this, but this October marks the tenth anniversary of my first day working at Mozilla. I'm on my third hardware refresh (a Dell XPS which I can't recommend), still just my third CEO, and now 68 reorgs in.

For something as momentous as breaking into two-digit territory, there's not really much that's different from last year. I'm still trying to get Firefox Desktop to use Glean instead of Legacy Telemetry and I'm still not blogging nearly as much as I'd like. Though, I did get promoted earlier this year. I am now a Senior Staff Software Engineer, which means I'm continuing on the journey of doing fewer things myself and instead empowering other people to do things.

As for predictions, I was spot on about FOG Migration actually taking off a little - in fact, quite a lot. All data collection in Firefox Desktop now either passes through Glean to get to Legacy Telemetry, has Glean mirroring alongside it, or has been removed. This is in large part thanks to a big help from Florian Quèze and his willingness to stop asking when we could start and just migrate the codebase. Now we're working on moving the business data calculations onto Glean-sent data, and getting individual teams to change over too. If you're reading this and were looking for an excuse to remove Legacy Telemetry from your component, this is your excuse.

My prediction that there'd be an All Hands was wrong. Mozilla Leadership has decided that the US is neither a place they want to force people to travel to nor is it a place they want to force people to travel out of (and then need to attempt to return to) in the current political climate. This means that business gatherings of any size are… complicated. Some teams have had simultaneous summits in cities both within and without the US. Some teams have had one or the other side call in virtually from their usual places of work. And our team… well, we've not gathered at all. Which is a bummer, since we've had a few shuffles in the ranks and it'd be good to get us all in one place. (I will be in Toronto with some fellow senior Data Engineering folks before the end of the year, but that's the extent of work travel.) I'm broadly in favour of removing the requirement and expectation of travel over the US border - too many people have been disappeared in too many ways. We don't want to make anyone feel as though they have to risk it. But it seems as though we're also leaning away from allowing people to risk it if they want to, which is a level of paternalism that I didn't want to see.

I did have one piece of "work" travel in that I attended CSV Conf in Bologna, Italy. Finally spent my Professional Development budget, and wow what a great investment. I learned so much and had a great time, and that was despite the heat and humidity (goodness, Italy. I was in your North (ish). In September. Why you gotta 30degC me like this?). I'm on the lookout for other great conferences to attend in 2026, so if you know any, get in touch.

My prediction that I'd still be three CEOs in because the search for a new one wouldn't have completed by now: spot on. Ditto on executing my hardware refresh, though I'm still using a personal monitor at work. I should do something about that.

My prediction that we'd stop putting AI in everything has partially come true. There's been a noticeable shift away from "Put genAI in it and find a problem for it to (maybe) solve" towards "If you find a problem that genAI can help with, give it a try." You wouldn't notice it, necessarily, looking at feature announcements for Firefox, as quite a lot of the integration infrastructure all landed in the past couple of months, making headlines. My feelings on LLMs and genAI have gained layers and nuance since last year. They're still plagiarism machines that are illegally built by the absolute worst people in ways that worsen the climate catastrophe and entrench existing inequalities. But now they've apparently become actually useful in some ways. I've read reports from very senior developers about use cases that LLMs have been able to assist with. They are narrow use cases - you must only use it to work on components you understand well, you must only use it on tasks you would do yourself if you had the time and energy - but they're real. And that means my usual hard line of "And even if you ignore the moral, ethical, environmental, economic, and industry concerns about using LLMs: they don't even work" no longer applies. And in situations like a for-profit corporation lead by people from industry… ignoring the moral, ethical, environmental, economic, and industry concerns is de rigeur.

Add these to the sorta-kinda-okay things LLMs can do like natural language processing and aiding in training and refinement of machine translation models, and it looks as though we're figuring out the "reheat the leftovers" and "melt butter and chocolate" use cases for these microwave ovens.

It still remains to be seen if, after the bubble pops, these nuclear-powered lake-draining art-stealing microwaves will find a home in many kitchens. I expect the fully-burdened cost will be awfully prohibitive for individuals who just want it to poorly regurgitate Wikipedia articles in a chat interface. It might even be too spicy for enterprises who think (likely erroneously) that they confer some instantaneous and generous productivity multiplier. Who knows.

All I know is that I still don't like it. But I'll likely find myself using one before the end of the year. If so, I intend to write up the experience and hopefully address my blogging drought by publishing it here.

Another thing that happened this year that I alluded to in last year's post was the Google v DOJ ruling in the US. Well, the first two rulings anyway. Still years of appeal to come, but even the existing level of court seemed to agree that the business model that allows Mozilla to receive a bucketload of dollabux from Google for search engine placement in Firefox (aka, the thing that supplies most of my paycheque) should not be illegal at this time. Which is a bit of a relief. One existential threat to the business down… for now.

But mostly? This year has been feeling a little like 2016 again. Instead of The Internet of Things (IoT, where the S stands for Security), it's genAI. Instead of Mexico and Muslims it's Antifa and Trans people. The Jays are in the postseason again. Shit's fucked and getting worse. But in all that, someone still has to rake the leaves and wash the dishes. And if I don't do it, it won't get done.

With that bright spot highlighted, what are my predictions for the new year:

To another year of supporting the Mission!

:chutten

03 Nov 2025 9:47pm GMT

Mozilla Localization (L10N): L10n Report: November Edition 2025

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

What's new or coming up in Firefox desktop

Firefox Backup

Firefox backup is a new feature being introduced in Firefox 145, currently testable in Beta and Nightly behind a preference flag. See here for instructions on how to test this feature.

This feature allows users to save a backup of their Firefox data to their local device at regular intervals, and later use that backup to restore their browser data or migrate their browser to a new device. One of the use cases is for current Windows 10 users who may be migrating to a new Windows 11 device. The user can save their Firefox backup to OneDrive, and later after setting up their new device can then install Firefox and restore their browsing data from the backup saved in OneDrive.

This is an alternative to using the sync functionality in combination with a Mozilla account.

Settings Redesign

Coming up in future releases, the current settings menu is being re-organized and re-designed to be more user friendly and easier to understand. New strings will be rolling out with relative frequency, but they can't be viewed or tested in Beta or Nightly yet. If you encounter anything where you need additional context, please feel free to use the request context button in Pontoon or drop into our localization matrix channel where you can get the latest updates and engage with your fellow localizers from around the world.

What's new or coming up in mobile

Here's what's been going on in Firefox for Android land lately: you may have noticed strings landing for the Toolbar refresh, the tab tray layout, as well as for a homepage revamp. All of this work is ongoing, so expect to see more strings landing soon!

On the Firefox for iOS side, there have been improvements to Search along with a revamp of the menu and tab tray. Ongoing work continues on the Translations feature integration, the homepage revamp, and the toolbar refresh.

More updates coming soon - stay tuned!

What's new or coming up in web projects

AMO and AMO Frontend

The team has been working on identifying and removing obsolete strings to minimize unnecessary translation effort, especially for locales that are still catching up. Recently they removed an additional 160 or so strings.

To remain in production, a locale must have both projects at or above 80% completion. If only one project meets the threshold, neither will be enabled. This policy helps prevent users from unintentionally switching between their preferred language and English. Please review your locale to confirm both projects are localized and in good standing.

If a locale already in production falls below the threshold, the team will be notified. Each month, they will review the status of all locales and manually add or remove them from production as needed.

Mozilla accounts

The Mozilla accounts team has been working on the ability to customize surfaces for the various projects that rely on Mozilla accounts for account management such as sync, Mozilla VPN, and others. This customization applies only to a predetermined set of pages (such as sign-in, authentication, etc.) and emails (sign-up confirmation, sign-in verification code, etc.) and is managed through a content management system. This CMS process bypasses the typical build process and as a result changes are shown in production within a very short time-frame (within minutes). Each customization requires an instance of a string, even if that value hasn't changed, so this can result in a large number of identical strings being created.

This project will be managed in a new "Mozilla accounts CMS" project within Pontoon instead of the main "Mozilla accounts" project. We are doing this for a couple reasons:

Newly published localizer facing documentation

We've recently updated our testing instructions for Firefox for Android and for Firefox for iOS! If you spot anything that could be improved, please file an issue - we'd love your feedback.

Friends of the Lion

Image by Elio Qoshi

Want to learn more from your fellow contributors? Who would you like to be featured? You are invited to nominate the next candidate!

Know someone in your l10n community who's been doing a great job and should appear here? Contact us and we'll make sure they get a shout-out!

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

03 Nov 2025 8:07pm GMT

31 Oct 2025

feedPlanet Mozilla

Mozilla Privacy Blog: Pathways to a fairer digital world: Mozilla shares views on the EU Digital Fairness Act

The Digital Fairness Act (DFA) is a defining opportunity to modernise Europe's consumer protection framework for the digital age. Mozilla welcomes the European Commission's ambition to ensure that digital environments are fair, open, and respectful of user autonomy.

As online environments are increasingly shaped by manipulative design, pervasive personalization, and emerging AI systems, traditional transparency and consent mechanisms are no longer sufficient. The DFA must therefore address how digital systems are designed and operated - from interface choices to system-level defaults and AI-mediated decision-making.

Mozilla believes the DFA, if designed in a smart way, will complement existing legislation (such as GDPR, DSA, DMA, AI Act) by closing long-recognized legal and enforcement gaps. When properly scoped, the DFA can simplify the regulatory landscape, reduce fragmentation, and enhance legal certainty for innovators, while also enabling consumers to exercise their choices online and bolster overall consumer protection. Ensuring effective consumer choice is at the heart of contestable markets, encouraging innovation and new entry.

Policy recommendations

1. Recognize and outlaw harmful design practices at the interface and system levels.

2. Establish substantive fairness standards for personalization and online advertising.

3. Strengthen centralized enforcement and cooperation across regulators.

A strong, harmonized DFA would modernize Europe's consumer protection architecture, strengthen trust, and promote a fairer, more competitive digital economy. By closing long-recognized legal gaps, it would reinforce genuine user choice, simplify compliance, enhance legal certainty, and support responsible innovation.

You can read our position in more detail here.

The post Pathways to a fairer digital world: Mozilla shares views on the EU Digital Fairness Act appeared first on Open Policy & Advocacy.

31 Oct 2025 12:54pm GMT

30 Oct 2025

feedPlanet Mozilla

The Rust Programming Language Blog: Announcing Rust 1.91.0

The Rust team is happy to announce a new version of Rust, 1.91.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.91.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.91.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.91.0 stable

aarch64-pc-windows-msvc is now a Tier 1 platform

The Rust compiler supports a wide variety of targets, but the Rust Team can't provide the same level of support for all of them. To clearly mark how supported each target is, we use a tiering system:

Rust 1.91.0 promotes the aarch64-pc-windows-msvc target to Tier 1 support, bringing our highest guarantees to users of 64-bit ARM systems running Windows.

Add lint against dangling raw pointers from local variables

While Rust's borrow checking prevents dangling references from being returned, it doesn't track raw pointers. With this release, we are adding a warn-by-default lint on raw pointers to local variables being returned from functions. For example, code like this:

fn f() -> *const u8 {
    let x = 0;
    &x
}

will now produce a lint:

warning: a dangling pointer will be produced because the local variable `x` will be dropped
 --> src/lib.rs:3:5
  |
1 | fn f() -> *const u8 {
  |           --------- return type of the function is `*const u8`
2 |     let x = 0;
  |         - `x` is part the function and will be dropped at the end of the function
3 |     &x
  |     ^^
  |
  = note: pointers do not have a lifetime; after returning, the `u8` will be deallocated
    at the end of the function because nothing is referencing it as far as the type system is
    concerned
  = note: `#[warn(dangling_pointers_from_locals)]` on by default

Note that the code above is not unsafe, as it itself doesn't perform any dangerous operations. Only dereferencing the raw pointer after the function returns would be unsafe. We expect future releases of Rust to add more functionality helping authors to safely interact with raw pointers, and with unsafe code more generally.
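
As a quick illustration (a sketch, not something from the release notes): the warning goes away if the function returns an owned value, or if the pointer refers to storage that outlives the function.

fn f() -> u8 {
    let x = 0;
    x // returning the value itself involves no dangling pointer
}

fn g(x: &u8) -> *const u8 {
    x // the pointee lives in the caller's frame, so the pointer remains valid
}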

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Platform Support

Refer to Rust's platform support page for more information on Rust's tiered platform support.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.91.0

Many people came together to create Rust 1.91.0. We couldn't have done it without all of you. Thanks!

30 Oct 2025 12:00am GMT

29 Oct 2025

feedPlanet Mozilla

Mozilla Privacy Blog: California’s Opt Me Out Act is a Win for Privacy

It's no secret that privacy and user empowerment have always been core to Mozilla's mission.

Over the years, we've consistently engaged with policymakers to advance strong privacy protections. We were thrilled when the California Consumer Privacy Act (CCPA) was signed into law, giving people the ability to opt-out and send a clear signal to websites that they don't want their personal data tracked or sold. Despite this progress, many browsers and operating systems still failed to make these controls available or offer the tools to do so without third-party support. This gap is why we've pushed time and time again for additional legislation to ensure people can easily exercise their privacy rights online.

Last year, we shared our disappointment when California's AB 3048 was not signed into law. This bill was a meaningful step toward empowering consumers. When it failed to pass, we urged policymakers to continue efforts to advance similar legislation, to close gaps and strengthen enforcement.

We can't stress this enough: Legislation must prioritize people's privacy and meet the expectations that consumers rightly have about treatment of their sensitive personal information.

That's why we joined allies to support AB 566, the California Opt Me Out Act, mandating that browsers include an opt-out setting so Californians can easily communicate their privacy preferences. Earlier this month, we were happy to see it pass and Governor Newsom sign it into law.

Mozilla has long advocated for easily accessible universal opt-out mechanisms; it's a core feature built into Firefox through our Global Privacy Control (GPC) mechanism. By requiring browsers to provide tools like GPC, California is setting an important precedent that brings us closer to a web where privacy controls are consistent, effective, and easy to use.

We hope to see similar steps in other states and at the federal level, to advance meaningful privacy protections for everyone online - the issue is more urgent than ever. We remain committed to working alongside policymakers across the board to ensure it happens.

The post California's Opt Me Out Act is a Win for Privacy appeared first on Open Policy & Advocacy.

29 Oct 2025 11:53pm GMT