24 Dec 2025

Planet Mozilla

This Week In Rust: This Week in Rust 631

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is arcshift, an Arc replacement for read-heavy workloads that supports lock-free atomic replacement.
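The pattern arcshift targets can be sketched with std types alone: many readers share a value that is occasionally replaced wholesale, and readers holding an old handle stay valid after a swap. The sketch below is hypothetical and is not arcshift's API; it uses an `RwLock` for the swap, which is exactly the lock arcshift's lock-free replacement avoids.

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// A std-only sketch of the read-heavy replacement pattern arcshift
// optimizes. Here the swap takes a write lock; arcshift's selling
// point is doing it lock-free. (Illustrative only, not arcshift's API.)
struct Shared<T> {
    inner: RwLock<Arc<T>>,
}

impl<T> Shared<T> {
    fn new(value: T) -> Self {
        Shared { inner: RwLock::new(Arc::new(value)) }
    }

    // Readers clone the Arc under a brief read lock, then use it freely.
    fn get(&self) -> Arc<T> {
        self.inner.read().unwrap().clone()
    }

    // Writers swap in a whole new value; readers holding old Arcs keep them.
    fn replace(&self, value: T) {
        *self.inner.write().unwrap() = Arc::new(value);
    }
}

fn main() {
    let shared = Arc::new(Shared::new(String::from("v1")));
    let old = shared.get(); // a reader keeps a handle to the old value
    shared.replace(String::from("v2"));
    assert_eq!(*old, "v1");          // old handle still valid after the swap
    assert_eq!(*shared.get(), "v2"); // new readers see the new value

    // Reads work concurrently from other threads.
    let s2 = Arc::clone(&shared);
    let handle = thread::spawn(move || s2.get().len());
    assert_eq!(handle.join().unwrap(), 2);
    println!("ok");
}
```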

Thanks to rustkins for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

475 pull requests were merged in the last week

Compiler
Library
Rustdoc
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

Very quiet week, with essentially no change in performance.

Triage done by @simulacrum. Revision range: 21ff67df..e1212ea7

1 Regression, 1 Improvement, 3 Mixed; 2 of them in rollups. 36 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

* No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Cargo

Compiler Team (MCPs only)

Leadership Council

No items entered Final Comment Period this week for Rust RFCs, Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-12-24 and 2026-01-21 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

they should just rename unsafe to C so people can shut up

- /u/thisismyfavoritename on /r/rust

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

24 Dec 2025 5:00am GMT

William Durand: You can lead a horse to water but...

It's an unpleasant pattern, one I'm deeply aware of: the tendency to use my regular 1:1s with my manager as an outlet for pent-up frustration. While I strive for constructive dialogue, the reality is that the various challenges my team has faced over the past three years have created a reservoir of exasperation that sometimes spills over. It doesn't happen every time, but I wouldn't rule out that it happens more often than I am willing to concede…

One particularly vivid discussion centered on the acute challenge of driving a project forward when the contributors appeared distracted, pulled in too many directions, or simply disengaged. I don't remember the specifics but, as a tech lead, this is a recurring problem. I possess all the technical vision and planning responsibility, but none of the formal "authority" of a manager. I am tasked with orchestration, but lack the levers of performance reviews or task assignment to enforce focus.

My manager, having listened patiently to my description of the sheer effort required to maintain momentum, made the following comment:

You can lead a horse to water but you can't force it to drink.

At the time, my sole obsession was the completion of the project. This singular focus blinded me to a set of important leadership principles, which this saying kinda illuminated.

First, there is a stark division between what I can control and what I cannot. Since I am not a manager, I cannot mandate what tasks my teammates prioritize, and I cannot force them to allocate their time optimally either. These are management functions that reside elsewhere.

However, recognizing this limitation is liberating, as it directs my energy toward my spheres of influence. I can absolutely control the clarity of the project's priority. I can ensure that every relevant stakeholder understands why this work is a top-priority initiative. Furthermore, and critically, I can control the environment for execution. I can be the first to identify and remove blockers, to clarify ambiguities, and to ensure no one is "stuck" awaiting a decision or a piece of information. My role shifts from pushing people to clearing the path for them.

The second insight is about the nature of my intervention. I can generate far greater impact by adopting a posture of "enablement" and opportunity creation rather than rescue or takeover. It's tempting, in the face of (likely) perceived slowness, to simply get things done myself. This provides immediate, but shallow, relief. The deeper, more structural impact comes from always creating new opportunities.

This approach requires a profound acceptance of risk, though. It means accepting that an opportunity provided might be fumbled, that a delegated task might not be executed perfectly, or that a teammate might initially choose the wrong path. If I take over every difficult or risky task, I rob my colleagues of the opportunity to grow, to demonstrate ownership, and to ultimately drink the water on their own terms.

And that's why I find this saying pretty good and relevant. As a tech lead, I am responsible for the water. I must ensure the goal is clear, the path is accessible, the resources are available, and the environment is conducive to success. I must exhaust every possible avenue to supercharge my teammates through inspiration, clear communication, and strategic support.

But I am not, and cannot be, responsible for the drinking part. If, after all my efforts, a team member consistently chooses a path of disengagement, lack of focus, or resistance to the collective goal, that crosses the boundary of my direct control. At that point, the issue moves into a different domain. It becomes an issue for their own manager to address or for the individual themselves.

Accepting this boundary helps me preserve my energy for where it can truly effect change.

24 Dec 2025 12:00am GMT

19 Dec 2025

Mozilla Privacy Blog: Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025)

Welcome to the blog series "Behind the Manifesto," where we unpack core issues that are critical to Mozilla's mission. The Mozilla Manifesto represents our commitment to advancing an open, global internet that gives people meaningful choice in their online experiences, promotes transparency and innovation and protects the public interest over private walled gardens. This blog series digs deeper on our vision for the web and the people who use it and how these goals are advanced in policymaking and technology.

In 2025, global tech policy raced to keep up with technological change and opportunity. In the midst of this evolution, Mozilla sought to ensure that solutions remained centered on openness, competition and user agency.

From AI Agents and the future of the open web to watershed antitrust cases, competition debates surged. Efforts to drive leadership and innovation in AI led governments across the globe to evaluate priorities. Perennial privacy and security questions remained on the radar, with US states intensifying efforts to pass laws and the EU working to streamline rules on AI, cybersecurity and data. Debates amongst industry, civil society and policymakers reflected the intensity of these moments.

Just as we have for over 20 years, Mozilla showed up to build, convene, debate and advocate. It's clear that more than ever, there must be urgency to truly put people first. Below are a selection of some key moments we're reflecting on, as we head into 2026.

FEBRUARY 2025

Mozilla Participates in Paris AI Action Summit as Part of the Steering Committee

Mozilla participated in the Paris AI Action Summit as part of the Steering Committee, with an 'action-packed' schedule that included appearances on panels, a live recording of the podcast "Computer Says Maybe", and a reception to reflect on discussions and thank all the officials and researchers who had worked so hard to make the Summit a success.

Additionally, Mozilla and other partners, including Hugging Face, Microsoft and OpenAI, launched Robust Open Online Safety Tools (ROOST) at the Paris AI Action Summit. The entity is designed to create open source foundations for safer and more responsible AI development, ensuring that safety and transparency remain central to innovation.

The launch of ROOST happened at exactly the right time and in the right place. The Paris AI Action Summit provided a global backdrop for launching work that will ultimately help make AI safety a field that everyone can shape and improve.

Mozilla Event: AI & Competition featuring the President of the German Competition Authority
On February 12, we hosted a public event in Berlin on AI & competition, in partnership with German daily newspaper Tagesspiegel. Addressing the real risk of market concentration at various elements of the AI stack, the President of the German competition authority (Bundeskartellamt), Andreas Mundt, delivered a keynote address setting out his analysis of competition in AI and the role of his authority in ensuring contestable markets as technology rapidly evolves.

MARCH 2025

America's AI Action Plan

In March, Mozilla responded to the White House's request for information on AI policy, urging policymakers to ensure that AI remained open, competitive and accountable. The comments also warned that concentrated control by a few tech giants threatened innovation and public trust, and called for stronger support of open source AI, public AI infrastructure, transparent energy use and workforce development. Mozilla underscored these frameworks are essential to building an AI ecosystem that serves the public interest rather than purely corporate bottom lines.

Mozilla Mornings: Promoting a privacy-preserving online ads ecosystem

The same month, we also hosted a special edition of Mozilla Mornings focused on the future of online advertising and the role Privacy-Enhancing Technologies (PETs) can play in reshaping it. The conversation came at a critical moment in Europe, amidst discussions on updating privacy legislation while enforcing existing rules.

The session brought together policymakers, technologists, and civil-society experts to examine how Europe can move toward a fairer and more privacy-respecting advertising ecosystem. Speakers explored the limitations of today's surveillance-driven model and outlined how PETs and Privacy-Preserving Technologies (PPTs) could offer a viable alternative that protects users while sustaining the economic foundations of the open web. The event underscored Mozilla's commitment to advancing privacy-respecting technologies and ensuring that both policy and technical design converge toward a healthier online advertising ecosystem.

MAY 2025

CPDP: The Evolution of PETs in Digital Ads

At the Brussels 2025 International CPDP Conference, Mozilla organized and participated in a panel titled "The Evolution of PETs in Digital Ads: Genuine Privacy Innovation or Market Power Play?" The discussion explored how Privacy-Enhancing Technologies (PETs) - tools designed to minimize data collection and protect user privacy - are reshaping the digital advertising landscape. Panelists debated how to encourage genuine privacy innovation without reinforcing existing power structures, and how regulations like the GDPR and the Digital Markets Act (DMA) can help ensure PETs foster transparency and competition.

Competition in Focus: U.S. vs Google

The U.S. v. Google remedies trial was a defining moment - not just for 2025, but for the future of browser and search competition. While the remedies phase was about creating competition in the search market, some of the proposed remedies risked weakening independent browsers like Firefox, the very players that make real choice possible.

In early May, Mozilla's CFO, Eric Muhlheim, testified to this very point. Muhlheim's testimony, and Mozilla's amicus brief in the case, spoke to the vital role of small, independent browsers in driving competition and innovation across the web and warned about the risks of harming their ability to select the search default that best serves their users. Ensuring a competitive search ecosystem while avoiding harm to browser competition remains an important issue in 2026.

JUNE 2025

Open by Design: How Nations Can Compete in the Age of AI

The choices governments make today, about who gets to build, access and benefit from AI, will shape economic competitiveness, national security and digital rights for decades. In June, Mozilla supported a new report by the UK think tank Demos, exploring how and why embracing openness in key AI resources can spur innovation and adoption. Enabling safer, more transparent development and boosting digital sovereignty is a recipe, if there ever was one, for 'winning' at AI.

EU Digital Summit: Advocating for Open and Secure Digital Ecosystems

Digital competitiveness depends on open, secure, and interoperable ecosystems that foster innovation while respecting users' rights. We spoke at the 2025 European Digital Summit, a flagship forum bringing together policymakers, regulators, industry leaders, and civil society, and argued that openness and security reinforce each other, that smart regulation can lower entry barriers and curb gatekeeping power, and that innovation does not require sacrificing privacy when incentives are aligned toward rights-respecting designs. The takeaway was clear: enforcing interoperability, safeguarding pro-competition rules, and embedding privacy-by-design incentives are essential to a resilient, innovative, and trustworthy open web.

JULY 2025

Joint Letter to the UK Secretary of State on DMCCA

When choice disappears, innovation stalls. In July, Mozilla sent an open letter to UK Ministers and the Competition & Markets Authority to urge faster implementation of the UK Digital Markets, Competition & Consumers Act (DMCCA). As an organisation that exists to create an internet that is open and accessible to all, Mozilla has long supported competitive digital markets. Since the EU Digital Markets Act took effect in 2024, users have begun to benefit from genuine choice for the first time, through interventions like browser choice screens. The result? People are choosing independent alternatives to gatekeeper defaults: Firefox daily active users on iOS rose by 150% across the EU. The UK's DMCCA could be similarly revolutionary for UK consumers and the many challenger businesses taking on market dominance.

SEPTEMBER 2025

Digital Bootcamp: Bringing Internet Architecture to the Heart of EU Policymaking

In September, Mozilla officially launched its Digital Bootcamp initiative, developed in partnership with Cloudflare, Proton and CENTR, to strengthen policymakers' understanding of how the internet actually works and why this technical foundation is essential for effective regulation. We delivered interactive sessions across EU institutions, including a workshop for Members of the European Parliament, the European Commission, and representatives of the EU member states.

Across these workshops, we demystified the layered architecture of the internet, explained how a single website request moves through the stack, and clarified which regulatory obligations apply at each layer. By bridging the gap between engineering and policymaking, Digital Bootcamp is helping ensure EU digital laws remain grounded in technical reality, supporting evidence-based decisions that protect innovation, security and the long-term health of the open web.

OCTOBER 2025

Mozilla Meetup: The Future of Competition

On October 8, Mozilla hosted a Meetup on Competition in Washington, D.C., bringing together leading voices in tech policy - including Alissa Cooper (Knight-Georgetown Institute), Amba Kak (AI Now Institute), Luke Hogg (Foundation for American Innovation) and Kush Amlani (Mozilla) - to discuss the future of browser competition, antitrust enforcement and AI's growing influence on the digital landscape. Moderated by Bloomberg's Leah Nylen, the event reinforced our ongoing efforts to establish a more open and competitive internet, highlighting how policy decisions in these areas directly shape user choice, innovation, and the long-term health of the open web.

Global Encryption Day

On October 21, Mozilla marked Global Encryption Day by reaffirming our commitment to strong encryption as a cornerstone of online privacy, security, and trust. For years, Mozilla has played an active role in shaping the broader policy debate on encryption by consistently pushing back against efforts to weaken it and working with partners around the world to safeguard the technology that helps to keep people secure online - from joining the Global Encryption Coalition Steering Committee, to challenging U.S. legislation like the EARN IT Act and leading multi-year efforts in the EU to address encryption risks in the eIDAS Regulation.

California's Opt Me Out Act: A Continuation of the Fight For Privacy

The passage of California's Opt Me Out Act (AB 566) marked a major step forward in Mozilla's ongoing effort to strengthen digital privacy and give users control of their personal data. For years, Mozilla has spoken in support of Global Privacy Control (GPC) - a tool already integrated into Firefox - as a model for privacy-by-design solutions that can be both effective and user-friendly.

NOVEMBER 2025

Mozilla Submits Recommendations on the Digital Fairness Act

In November, Mozilla submitted its response to the European Commission's consultation on the Digital Fairness Act (DFA), framing it as a key opportunity to modernise consumer protection for AI-driven and highly personalised digital services. Mozilla argued that effective safeguards must tackle both interface design and underlying system choices, prohibit harmful design practices, and set clear fairness standards for personalization and advertising. A well-designed DFA can complement existing EU laws, strengthen user autonomy, provide legal certainty for innovators, and support a more competitive digital ecosystem built on genuine user choice.

Mozilla hosts AI breakfast in UK Parliament

Mozilla President, Mark Surman, hosted MPs and Peers for a breakfast in Parliament to discuss how policymakers can nurture AI that supports public good. As AI policy moves from principle to implementation, the breakfast offered insight into the models, trade-offs and opportunities that will define the next phase of the UK's AI strategy.

DECEMBER 2025

Mozilla Joins Tech Leaders at US House AI Caucus Briefing

Internet Works, an association of "Middle Tech" companies, organized a briefing with the Congressional AI Caucus. The goal was to give Members of Congress and their staff a better understanding of the Middle Tech ecosystem and how smaller companies are adopting and scaling AI technologies. Mozilla spoke on the panel, lending valued technical expertise and setting out how we're thinking about keeping the web open for innovation, competition and user choice with this new technology stack.

eIDAS2 Regulation: Defending Web Security and Trust

In December, the EU published the final implementing rules for eIDAS2, closing a multi-year fight over proposals that would have required browsers to automatically trust government-mandated website certificates, putting encryption, user trust, and the open web at risk. Through sustained advocacy and deep technical engagement, Mozilla helped secure clear legal safeguards preserving independent browser root programs and strong TLS security. We also ensured that the final standards respect existing security norms and reflect how the web actually works. With all rules now published, users can continue to rely on browsers to verify websites independently with strict security requirements, governments are prevented from weakening web encryption by default, and a dangerous global precedent for state-controlled trust on the internet has been avoided.

This blog is part of a larger series. Be sure to follow Jenn Taylor Hodges on LinkedIn for further insights into Mozilla's policy priorities.

The post Behind the Manifesto: Moments that Mattered in our Fight for the Open Web (2025) appeared first on Open Policy & Advocacy.

19 Dec 2025 3:23pm GMT

Firefox Nightly: Closing out 2025 Strong – These Weeks in Firefox: Issue 193

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
WebExtension APIs

DevTools

WebDriver

New Tab Page

Search and Navigation

Urlbar
Firefox Suggest
Places

Storybook/Reusable Components/Acorn Design System

Settings Redesign

19 Dec 2025 3:19pm GMT

Mozilla Privacy Blog: Australia’s Social Media Ban: Why Age Limits Won’t Fix What Is Wrong With Online Platforms

On December 10th, Australia's controversial law banning access for under 16-year-olds to certain social media platforms entered into force. Since its adoption in 2024, the law has sparked a global debate on age verification online and has inspired governments across the world to restrict minors' access to parts of the web.

At Mozilla, privacy and user empowerment have always formed a core part of our mission. Mozilla supports strong, proportionate safeguards for minors, but we caution against approaches that rely on invasive identity checks, surveillance-based enforcement, or exclusionary defaults. Such interventions rely on the collection of personal and sensitive data and, thus, introduce major privacy and security risks. By following an approach of abstinence and access control, they undermine the rights of young people to express themselves online but do little to address the child safety risks policymakers might seek to address, such as insufficient content moderation, irresponsible data practices, and addictive design.

Rather than simply restricting access to some online platforms, policymakers should focus on fixing the systemic issues at play and incentivize the creation of online spaces that benefit young people and their development.

We are therefore disappointed by the blunt and disproportionate approach taken by the Australian government. We are also concerned about the impact this law, and others like it, will have on online privacy and security, on people's ability to express themselves and access information, and therefore on the health of the web itself.

The Australian law designates certain services as "age-restricted social media platforms". This category includes social media platforms like Instagram and TikTok, and video-sharing platforms like YouTube, and excludes certain categories of services, such as messaging providers, email services, and online games. Designated services must ensure that people under 16 years of age do not have accounts on their platforms. To do so, the ages of all users must be determined.

The Australian law provides almost no guidance on how service providers should balance privacy, security, and the robustness of age assurance technologies when performing age checks. Providers are thus left to choose from bad options. In the UK, a similar approach has resulted in users having to entrust some of their most sensitive data to a plethora of newly emerged commercial age assurance providers in order to retain access to the various services they use. These actors often ask for a lot of information while providing little accountability or transparency about their data handling practices. Beyond serious data breaches, this has also led to users losing access to messaging features and the censorship of content deemed sensitive, such as posts about the situation in Gaza or the war in Ukraine. But UK users have also demonstrated how ineffective the age-gating mechanisms of even some of the largest platforms are, using VPNs and video game features to bypass age barriers easily.

While many technologies exist to verify, estimate, or infer users' ages, fundamental tensions around effectiveness, accessibility, privacy, and security have not been resolved. Rather, the most common forms of age assurance technologies all come with their own significant limitations:

The Australian approach sends a worrying signal: That mandatory age verification and blanket bans are magical solutions to complex societal challenges, regardless of their implications for fundamental rights online. We are convinced, however, that there are rights-respecting alternatives policymakers can pursue to empower young people online and improve their safety and well-being:

In Australia and elsewhere, we are committed to work alongside policymakers to advance meaningful protections for everyone online, while upholding fundamental rights, accessibility and user choice.

With special thanks to Martin Thomson, Distinguished Engineer at Mozilla, for his contributions to this blog.

The post Australia's Social Media Ban: Why Age Limits Won't Fix What Is Wrong With Online Platforms appeared first on Open Policy & Advocacy.

19 Dec 2025 8:59am GMT

The Rust Programming Language Blog: What do people love about Rust?

Rust has been named Stack Overflow's Most Loved (now called Most Admired) language every year since our 1.0 release in 2015. That means people who use Rust want to keep using Rust -- and not just for performance-heavy stuff or embedded development, but for shell scripts, web apps, and all kinds of things you wouldn't expect. One of our participants captured it well when they said, "At this point, I don't want to write code in any other language but Rust."

When we sat down to crunch the vision doc data, one of the things we really wanted to explain was: What is it that inspires that strong loyalty to Rust? Based on the interviews, the answer is at once simple and complicated. The short version is that Rust empowers them to write reliable and efficient software. If that sounds familiar, it should: it's the slogan that we have right there on our web page. The more interesting question is how that empowerment comes about, and what it implies for how we evolve Rust.

What do people appreciate about Rust?

The first thing we noticed is that, throughout every conversation, no matter whether someone is writing their first Rust program or has been using it for years, no matter whether they're building massive data clusters or embedded devices or just messing around, there are a consistent set of things that they say they like about Rust.

The first is reliability. People love that "if it compiles, it works" feeling:

"What I really love about Rust is that if it compiles it usually runs. That is fantastic, and that is something that I'm not used to in Java." -- Senior software engineer working in automotive embedded systems

"Rust is one of those languages that has just got your back. You will have a lot more sleep and you actually have to be less clever." -- Rust consultant and open source framework developer

Another, of course, is efficiency. This comes up in particular at the extremes, both very large scale (data centers) and very small scale (embedded):

"I want to keep the machine resources there for the [main] computation. Not stealing resources for a watchdog." -- Software engineer working on data science platforms

"You also get a speed benefit from using Rust. For example, [..] just the fact that we changed from this Python component to a Rust component gave us a 100fold speed increase." -- Rust developer at a medical device startup

Efficiency comes up particularly often when talking to customers running "at-scale" workloads, where even small performance wins can translate into big cost savings:

"We have a library -- effectively it's like an embedded database -- that we deploy on lots of machines. It was written in Java and we recently rewrote it from Java to Rust and we got close to I think 9x to 10x performance wins." -- Distinguished engineer working on cloud infrastructure services

"I'm seeing 4x efficiency in the same module between Java code that loads a VM and Rust. That's a lot of money you save in data center cost." -- Backend engineering company founder specializing in financial services

At the other end of the spectrum, people doing embedded development or working at low-levels of abstraction highlight Rust's ability to give low-level control and access to system details:

"Rust was that replacement for C I'd been looking for forever." -- Backend engineering company founder specializing in financial services

"If you're going to write something new and you do kind of low-level systemsy stuff, I think Rust is honestly the only real choice." -- Distinguished engineer

Many people cite the importance of Rust's supportive tooling, which helps them get up and going quickly, and in particular the compiler's error messages:

"I think a big part of why I was able to succeed at learning Rust is the tooling. For me, getting started with Rust, the language was challenging, but the tooling was incredibly easy." -- Executive at a developer tools company

"The tooling really works for me and works for us. The number one way that I think I engage with Rust is through its tooling ecosystem. I build my code through Cargo. I test it through Cargo. We rely on Clippy for everything." -- Embedded systems engineer working on safety-critical robotics

"I think the error messages and suggestions from the Rust compiler are super helpful also." -- Professor specializing in formal verification

Finally, one of Rust's most important virtues is its extensibility. Both in the language itself and through the crates.io ecosystem, Rust is designed to let end-users create libraries and abstractions that meet their needs:

"The crate ecosystem combined with the stability guarantees and the semantic versioning mean that it's the best grab and go ecosystem I've ever seen." -- Computer science professor and programming language designer

"I think proc macros are a really big superpower for Rust." -- Creator and maintainer of Rust networking libraries

"Rust is incredibly good at making it very very easy to get started, to reuse things, just to experiment quickly with new tools, new libraries, all the rest of it... so for me, as an experimentation platform, it's great." -- Rust expert and consultant focused on embedded and real-time systems

But what they love is the sense of empowerment and versatility

Reliability, efficiency, tooling, ecosystem: these are all things that people appreciate about Rust. But what they love isn't any one of those things. It's the way the combination makes Rust a trusted, versatile tool that you can bring to virtually any problem:

"When I got to know about it, I was like 'yeah this is the language I've been looking for'. This is the language that will just make me stop thinking about using C and Python. So I just have to use Rust because then I can go as low as possible as high as possible." -- Software engineer and community organizer in Africa

"I wanted a language that works well from top to bottom in a stacking all the way from embedded to very fancy applications" -- Computer science professor and programming language designer

"If [Rust] is going to try and sort of sell itself more in any particular way, I would probably be saying high performance, highly expressive, general purpose language, with the great aspect that you can write everything from the top to the bottom of your stack in it." -- Rust expert and consultant focused on embedded and real-time systems

Each piece is necessary for the whole to work

Take away the reliability, and you don't trust it: you're second-guessing every deployment, afraid to refactor, hesitant to let junior developers touch the critical paths.

"Rust just lowers that bar. It's a lot easier to write correct Rust code. As a leader on the team, I feel a lot safer when we have less experienced engineers contributing to these critical applications." -- Distinguished engineer working on cloud infrastructure services

"My experience with writing Rust software tends to be once you've got it working, it stays working. That's a combination of a lot of care taken in terms of backwards compatibility with the language and a lot of care taken around the general ecosystem." -- Rust expert and consultant focused on embedded and real-time systems

Reliability also provides guardrails that help people enter new domains, whether you're a beginner learning the ropes or an expert venturing into unfamiliar territory:

"Rust introduces you to all these things, like match and all these really nice functional programming methods." -- Software engineer with production Rust experience

"I think Rust ownership discipline is useful both for regular Rust programmers and also for verification. I think it allows you to within the scope of your function to know very clearly what you're modifying, what's not being modified, what's aliased and what's not aliased." -- Professor specializing in formal verification

"I discovered Rust... and was basically using it just to give myself a little bit more confidence being like a solo firmware developer" -- Software engineer working on automotive digital cockpit systems

Take away the efficiency and low-level control, and there are places you can't go: embedded systems, real-time applications, anywhere that cost-per-cycle matters.

"The performance in Rust is nutty. It is so much better and it's safe. When we rewrote C++ and C libraries or C applications into Rust, they would end up being faster because Rust was better at laying out memory." -- Senior Principal Engineer leading consumer shopping experiences

"9 times out of 10, I write microcontroller code and I only test it through unit testing. I put it on real hardware and it just works the first time." -- Embedded systems engineer working on safety-critical robotics

"I can confidently build systems that scale." -- Engineering manager with 20 years experience in media and streaming platforms

Take away the tooling and ecosystem, and you can't get started; or you can, but it's a slog, and you never feel productive.

"For me, getting started with Rust, the language was challenging, but the tooling was incredibly easy... I could just start writing code and it would build and run, and that to me made a huge difference." -- Founder and CEO of company creating developer tools

"Cargo is an amazing package manager. It is probably the best one I've ever worked with. I don't think I ever run into issues with Cargo. It just works." -- Software engineer with production Rust experience

"The Rust compiler is fantastic at kind of the errors it gives you. It's tremendously helpful in the type of errors it produces for it. But not just errors, but the fact it also catches the errors that other languages may not catch." -- Distinguished engineer working on cloud infrastructure services

The result: Rust as a gateway into new domains

When all these pieces come together, something interesting happens: Rust becomes a gateway into domains that would otherwise be inaccessible. We heard story after story of people whose careers changed because Rust gave them confidence to tackle things they couldn't before:

"I was civil engineering and I studied front-end development on my own, self taught. I had no computer background. I got interested in Rust and distributed systems and designs and systems around it. I changed my major, I studied CS and Rust at the same time." -- Software engineer transitioning to cryptography research

"I've been working with arbitrary subsidiaries of [a multinational engineering and technology company] for the last 25 years. Always doing software development mostly in the Java space... two years ago I started peeking into the automotive sector. In that context it was a natural consequence to either start working with C++ (which I did not want to do) or take the opportunity to dive into the newly established Rust ecosystem." -- Senior software engineer working in automotive embedded systems

"I started in blockchain. Currently I'm doing something else at my day job. Rust actually gave me the way to get into that domain." -- Rust developer and aerospace community leader

"Before that, I had 10 years of programming on some dynamic programming languages, especially Ruby, to develop web applications. I wanted to choose some language which focuses on system programming, so I chose Rust as my new choice. It is a change of my career." -- Rust consultant and author working in automotive systems and blockchain infrastructure

But the balance is crucial

Each of Rust's attributes is necessary for versatility across domains. But any of them, taken too far or left unbalanced by the others, can become an obstacle.

Example: Complex APIs and type complexity

One of the most powerful aspects of Rust is the way its type system allows modeling aspects of the application domain. This prevents bugs and also makes it easier for noobs to get started³:

"Instead of using just a raw bit field, somebody encoded it into the type system. So when you'd have a function like 'open door', you can't pass an 'open door' if the door's already open. The type system will just kick that out and reject it." -- Software engineer working on automotive digital cockpit systems

"You can create contracts. For example, when you are allowed to use locks in which order." -- Senior embedded systems engineer working on automotive middleware development

The problem though is that sometimes the work to encode those invariants in types can create something that feels more complex than the problem itself:

"When you got Rust that's both async and generic and has lifetimes, then those types become so complicated that you basically have to be some sort of Rust god in order to even understand this code or be able to do it." -- Software engineer with production Rust experience

"Instead of spaghetti code, you have spaghetti typing" -- Platform architect at automotive semiconductor company

"I find it more opaque, harder to get my head around it. The types describe not just the interface of the thing but also the lifetime and how you are accessing it, whether it's on the stack or the heap, there's a lot of stuff packed into them." -- Software engineer working on data science platforms

This leads some to advocate for not using some of Rust's more complex features unless they are truly needed:

"My argument is that the hard parts of Rust -- traits, lifetimes, etc -- are not actually fundamental for being productive. There's a way to set up the learning curve and libraries to onboard people a lot faster." -- Creator and maintainer of Rust networking libraries

Example: Async ecosystem is performant but doesn't meet the bar for supportiveness

Async Rust has fueled a huge jump in using Rust to build network systems. But many commenters talked about the sense that "async Rust" was something altogether more difficult than sync Rust:

"I feel like there's a ramp in learning and then there's a jump and then there's async over here. And so the goal is to get enough excitement about Rust to where you can jump the chasm of sadness and land on the async Rust side." -- Software engineer working on automotive digital cockpit systems

"My general impression is actually pretty negative. It feels unbaked... there is a lot of arcane knowledge that you need in order to use it effectively, like Pin---like I could not tell you how Pin works, right?" -- Research software engineer with Rust expertise

For Rust to provide that "trusted tool that will help you tackle new domains" experience, people need to be able to leverage their expectations and knowledge of Rust in that new domain. With async, not only are there missing language features (e.g., async fn in traits only became available last year and still has gaps), but the supportive tooling and ecosystem that users count on to "bridge the gap" elsewhere work less well:

"I was in favor of not using async, because the error messages were so hard to deal with." -- Desktop application developer

"The fact that there are still plenty of situations where you go that library looks useful, I want to use that library and then that immediately locks you into one of tokio-rs or one of the other runtimes, and you're like that's a bit disappointing because I was trying to write a library as well and now I'm locked into a runtime." -- Safety systems engineer working on functional safety for Linux

"We generally use Rust for services, and we use async a lot because a lot of libraries to interact with databases and other things are async. The times when we've had problems with this is like, um, unexplained high CPU usage, for example. The only really direct way to try to troubleshoot that or diagnose it is like, OK, I'm going to attach GDB and I'm gonna try to see what all of the threads are doing. GDB is -- I mean, this is not Rust's fault obviously -- but GDB is not a very easy to use tool, especially in a larger application. [..] And with async, it's, more difficult, because you don't see your code running, it's actually just sitting on the heap right now. Early on, I didn't actually realize that that was the case." -- Experienced Rust developer at a company using Rust and Python

Async is important enough that it merits a deep dive. Our research revealed a lot of frustration, but we didn't go deep enough to give more specific insights. This would be a good task for the future User Research team (as proposed in our first post).

Example: The wealth of crates on crates.io are a key enabler but can be an obstacle

We mentioned earlier how Rust's extensibility is part of how it achieves versatility. Mechanisms like overloadable operators, traits, and macros let libraries create rich experiences for developers; a minimal standard library combined with easy package management encourages the creation of a rich ecosystem of crates covering needs both common and niche. However, particularly when people are first getting started, that extensibility can come at the cost of supportiveness, when the "tyranny of choice" becomes overwhelming:

"The crates to use are sort of undiscoverable. There's a layer of tacit knowledge about what crates to use for specific things that you kind of gather through experience and through difficulty. Everyone's doing all of their research." -- Web developer and conference speaker working on developer frameworks

"Crates.io gives you some of the metadata that you need to make those decisions, but it's not like a one stop shop, right? It's not like you go to crates.io and ask 'what I want to accomplish X, what library do I use'---it doesn't just answer that." -- Research software engineer

The Rust org has historically been reluctant to "bless" particular crates in the ecosystem. But the reality is that some crates are omnipresent. This is particularly challenging for new users to navigate:

"The tutorial uses Result<Box<dyn Error>> -- but nobody else does. Everybody uses anyhow-result... I started off using the result thing but all the information I found has example code using anyhow. It was a bit of a mismatch and I didn't know what I should do." -- Software engineer working on data science platforms

"There is no clear recorded consensus on which 3P crates to use. [..] Sometimes it's really not clear---which CBOR crate do you use?[..] It's not easy to see which crates are still actively maintained. [..] The fact that there are so many crates on crates.io makes that a little bit of a risk." -- Rust team from a large technology company

Recommendations

Enumerate Rust's design goals and integrate them into our processes

We recommend creating an RFC that defines the goals we are shooting for as we work on Rust. The RFC should cover the experience of using Rust in total (language, tools, and libraries). This RFC could be authored by the proposed User Research team, though it's not clear who should accept it; perhaps the User Research team itself, or perhaps the leadership council.

This post identified how the real "empowering magic" of Rust arises from achieving a number of different attributes all at once -- reliability, efficiency, low-level control, supportiveness, and so forth. It would be valuable to have a canonical list of those values that we could collectively refer to as a community and that we could use when evaluating RFCs or other proposed designs.

There have been a number of prior approaches to this work that we could build on (e.g., this post from Tyler Mandry, the Rustacean Principles, or the Rust Design Axioms). One insight from our research is that we don't need to define which values are "most important". We've seen that for Rust to truly work, it must achieve all the factors at once. Instead of ranking them, it may help to describe how it feels when each attribute is underdone, overdone, or in balance.

This "goldilocks" framing helps people recognize where they are and course-correct, without creating false hierarchies.

Double down on extensibility

We recommend doubling down on extensibility as a core strategy. Rust's extensibility (traits, macros, operator overloading) has been key to its versatility. But that extensibility is currently concentrated in certain areas: the type system and early-stage proc macros. We should expand it to cover supportive interfaces (better diagnostics and guidance from crates) and the compilation workflow (letting crates integrate at more stages of the build process).

Rust's extensibility is a big part of how Rust achieves versatility, and that versatility is a big part of what people love about Rust. Leveraging mechanisms like proc macros, the trait system, and the borrow checker, Rust crates are able to expose high-level, elegant interfaces that compile down to efficient machine code. At its best, it can feel a bit like magic.

Unfortunately, while Rust gives crates good tools for building safe, efficient abstractions, we don't provide tools to enable supportive ones. For built-in Rust language concepts, we have worked hard to create effective error messages that help steer users to success; we ship the compiler with lints that catch common mistakes or enforce important conventions. But crates benefit from none of this. With RFCs like RFC #3368, which introduced the diagnostic namespace and #[diagnostic::on_unimplemented], Rust has already begun moving in this direction. We should continue and look for opportunities to go further, particularly for proc macros, which often create DSL-like interfaces.
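As a concrete taste of what RFC #3368 enables, `#[diagnostic::on_unimplemented]` (stable since Rust 1.78) lets a library replace the generic "trait bound not satisfied" error with its own guidance. The `ConfigSource` trait below is a hypothetical example, not from any real crate:

```rust
// When a type that doesn't implement `ConfigSource` is used where the
// bound is required, the compiler shows this message instead of the
// generic "the trait bound ... is not satisfied" text.
#[diagnostic::on_unimplemented(
    message = "`{Self}` cannot be used as a configuration source",
    note = "implement `ConfigSource` for `{Self}` or wrap it in a supported type"
)]
trait ConfigSource {
    fn get(&self, key: &str) -> Option<String>;
}

// A trivial implementation that always returns the same value.
struct Fixed(&'static str);

impl ConfigSource for Fixed {
    fn get(&self, _key: &str) -> Option<String> {
        Some(self.0.to_string())
    }
}

fn lookup(source: &impl ConfigSource, key: &str) -> String {
    source.get(key).unwrap_or_default()
}

fn main() {
    assert_eq!(lookup(&Fixed("debug"), "log_level"), "debug");
    // lookup(&42, "log_level");  // would trigger the custom message above
}
```

The attribute only changes the diagnostic text, not the semantics, which is exactly the "supportive interfaces" direction the recommendation argues for.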

The other major challenge for extensibility concerns the build system and backend. Rust's current extensibility mechanisms (e.g., build.rs, proc macros) are focused on the early stages of the compilation process. But many extensions to Rust, ranging from interop to theorem proving to GPU programming to distributed systems, would benefit from being able to integrate into other stages of the compilation process. The Stable MIR project and the build-std project goal are two examples of this sort of work.

Doubling down on extensibility will not only make current Rust easier to use, it will enable and support Rust's use in new domains. Safety-critical applications in particular require a host of custom lints and tooling to support the associated standards. Compiler extensibility allows Rust to support those niche needs in a more general way.

Help users get oriented in the Rust ecosystem

We recommend finding ways to help users navigate the crates.io ecosystem. Idiomatic Rust today relies on custom crates for everything from error handling to async runtimes. Leaning on the ecosystem helps Rust to scale to more domains and allows for innovative new approaches to be discovered. But finding which crates to use presents a real obstacle when people are getting started. The Rust org maintains a carefully neutral stance, which is good, but also means that people don't have anywhere to go for advice on a good "starter set" of crates.

The right solution here is not obvious. Expanding the standard library could cut off further experimentation; "blessing" crates carries risks of politics. But just because the right solution is difficult doesn't mean we should ignore the problem. Rust has a history of exploring creative solutions to old tradeoffs, and we should turn that energy to this problem as well.

Part of the solution is enabling better interop between libraries. This could come in the form of adding key interop traits (particularly for async) or by blessing standard building blocks (e.g., the http crate, which provides type definitions for HTTP libraries). Changes to coherence rules can also help, as the current rules do not permit a new interop trait to be introduced in the ecosystem and incrementally adopted.

Conclusion

To sum up the main points in this post:

  1. In 2025, 72% of Rust users said they wanted to keep using it. In the past, Rust had a way higher score than any other language, but this year, Gleam came awfully close, with 70%! Good for them! Gleam looks awesome--and hey, good choice on the fn keyword. ;)

  2. And, uh, how can we be sure not to mess it up?

  3. ...for experienced devs operating on less sleep, who do tend to act a lot like noobs.

19 Dec 2025 12:00am GMT

18 Dec 2025


The Mozilla Blog: Welcoming John Solomon as Mozilla’s new Chief Marketing Officer

Mozilla has always believed that technology should serve people, not the other way around. As we enter a moment of rapid change in how people experience the internet and AI, we're focused on building products that are private, transparent, and put people in control. Today, we're excited to take an important step forward in that work by welcoming John Solomon as Mozilla's new Chief Marketing Officer.

Solomon joins Mozilla this week and will lead our global marketing and communications teams. His arrival marks the next chapter in strengthening how we tell Mozilla's story and how we bring our values to life in the products millions of people rely on every day.

Bringing more than two decades of experience building category-defining brands and leading global marketing teams, Solomon is a veteran brand builder with leadership roles at Therabody, Apple, and Beats by Dre. He was also named one of Forbes' 50 Most Entrepreneurial CMOs for 2025. Solomon has a track record of turning products into cultural touchpoints and brands into household names. This experience is essential as Mozilla works to remind hundreds of millions of people around the world that they have real choice in the technology they use.

Solomon's career spans companies that have shaped culture as much as they have shaped markets. At Therabody, he helped redefine and scale the company into a category-leading wellness brand with a mission to help people live healthier, happier lives longer. At Beats, he played a pivotal role in the brand's global rise and its breakthrough cultural relevance, later joining Apple's worldwide Marcom organization to launch some of the company's most iconic hardware, software, and digital services. Earlier in his career, he founded and sold enoVate, a consumer insights and strategy firm based in Shanghai.

For Mozilla, John steps into the role at a moment when trust in technology is eroding and AI is reshaping how people navigate the internet. Our responsibility, and our opportunity, is to build products that are private, transparent, and put people in control. Marketing plays a central role in making that mission visible, accessible, and relevant to a global audience. John not only understands the importance of this moment but the impact it will have on future generations.

Solomon will lead Mozilla's global marketing and communications teams, working closely with leaders across the company to build on the strong progress made this year.

Mozilla's mission has always been to ensure the internet remains open, accessible, and driven by human agency. As we enter a new era shaped by AI and renewed debates over consumer agency, John's experience, and his commitment to purpose-driven work, will help us meet this moment with clarity and ambition.

Please join us in welcoming John Solomon to Mozilla.

The post Welcoming John Solomon as Mozilla's new Chief Marketing Officer appeared first on The Mozilla Blog.

18 Dec 2025 1:01pm GMT

Mozilla Thunderbird: Thunderbird 2025 Review: Building Stronger for the Future

2025 was an exciting year for Thunderbird. Many improvements were shipped throughout the year, from faster updates with a new release cadence, to a modernized codebase for the desktop app. We made big strides on our mobile apps and introduced the upcoming Thunderbird Pro to the world.

As we wrap up the year, a huge thank you to our community and volunteer contributors, and to our donors whose financial support keeps the lights on for the dedicated team working on Thunderbird. Here's what we accomplished in 2025 and what's to come in the new year.

A Stronger Core, Built to Last

This year marked the release of Thunderbird 140 "Eclipse", our latest Extended Support Release. Eclipse was more than a visual refresh. It was a deep clean of Thunderbird's core, removing long-standing technical debt and modernizing large parts of the codebase.

The result is a healthier foundation that allows us to ship improvements more reliably and more often. Features like the new Account Hub, accessibility improvements, and cleaner visual controls are all part of this effort. They may look simple on the surface, but they represent significant behind-the-scenes progress that sets Thunderbird up for the long term.

Faster Updates, Delivered Monthly

Speaking of faster updates, in 2025 monthly releases became the default for Thunderbird desktop. This was a major shift from our focus on the annual cadence centered around the Extended Support Release.

Moving to monthly releases means new features land sooner, bug fixes arrive faster, and updates feel smoother instead of disruptive. Users no longer have to wait an entire year to benefit from improvements. Thunderbird now evolves continuously while maintaining the stability people expect.

Thunderbird Meets Exchange

One of the most requested features is finally here. Native Microsoft Exchange email support landed in Thunderbird's monthly release channel with 145.0.

You can now connect Exchange accounts directly without relying on third-party add-ons for email. Setup is simpler, syncing is more reliable, and Thunderbird works more naturally in Exchange-based environments. Calendar and address book support are still in progress, but native email support marks an important milestone toward broader compatibility.

Mobile Moves Forward

Thunderbird's mobile story continued to grow in 2025.

On Android, the team refined release processes, improved core experiences, and began breaking larger features into smaller, more frequently delivered updates. At the same time, Thunderbird for iOS took a major step forward with a basic developer testing app available via Apple TestFlight. This marked the first public signal that Thunderbird is officially expanding onto iOS, with active development well underway and headed toward iPhones in 2026.

Introducing Thunderbird Pro

In 2025, we announced Thundermail and Thunderbird Pro, the first ever email service from Thunderbird alongside new cloud based productivity features designed to work seamlessly with the app.

Thunderbird Pro will include:

These services are built to respect user privacy, remain open source, and offer additional functionality by subscription for those who need it, without compromising the forever free and powerful Thunderbird desktop and mobile apps. Throughout the year, we made significant progress across all three services and launched the Thunderbird Pro website, marking a major step toward early access and testing. The Early Bird beta is set to kick off in the first part of 2026. Catch up on the full details in our latest update and, if you're not on the waitlist yet, join in.

Looking Ahead to 2026

The work in 2025 set the stage for an even more ambitious year ahead.

In 2026, our desktop plans include updating our decades-old database, expanding Exchange and protocol support, and refreshing the Calendar UI. For Thunderbird Pro, we aim to release the Early Bird beta in the first part of the year. Our plans for Android focus on rearchitecture of old code, quality of life improvements, and a new UI. For iOS, we're moving closer to an initial beta release with expanded protocol support. Be sure to follow this blog for updates on the desktop and mobile apps and Thunderbird Pro.

Thunderbird is moving faster, reaching more platforms, and building a more complete ecosystem while staying true to our values. Thanks for being part of the journey, and wishing all of you a fantastic 2026.
Thunderbird is moving faster, reaching more platforms, and building a more complete ecosystem while staying true to our values.

Thanks for being part of the journey, and wishing all of you a fantastic 2026!

All of our work is funded solely by individual donations from our users and community.
Help support the future of Thunderbird!
For other ways to get involved with the Thunderbird project, visit our participate page.

The post Thunderbird 2025 Review: Building Stronger for the Future appeared first on The Thunderbird Blog.

18 Dec 2025 1:00pm GMT

Mozilla Localization (L10N): Contributor Spotlight: Andika

About You

My name is Andika. I'm from Indonesia, and I speak Indonesian, Javanese, and English. I've been contributing to Mozilla localization for a long time, long enough that I don't clearly remember when I started. I mainly focus on Firefox and Thunderbird, but I also contribute to many other open source projects.

Exploring Padar Island where Komodo dragons can be spotted.

Contributing to Mozilla Localization

Q: Can you tell us a bit about your background and how you found localization?

A: I started my open source journey in the 1990s. Early on, I helped others through mailing lists by troubleshooting problems and answering questions. I also tried filing bugs and maintaining packages, but over time I felt those contributions didn't always have a lasting impact.

Around 2005, I started translating open source software. Translation felt different: it felt like a contribution that could last longer than the technology itself. When I saw poor translation quality online, I felt I could do better, and that motivated me to get involved. Localization became the most meaningful way for me to give back.

Q: What does your contribution to Mozilla localization look like today?

A: I primarily work on Firefox and Thunderbird. Over the years, I've translated tens of thousands of strings, although some of those strings no longer exist in the codebase and remain only in translation memory. I also contribute to many other open source organizations, but Mozilla remains one of my main areas of focus.

Even though I don't always use the products I localize (my professional work involves backend systems and a lot of remote troubleshooting and maintenance), I stay connected to the quality of the translations through community collaboration and shared practices.

Workflow, Habits, and Collaboration

Q: How do you approach your localization work and collaborate with others?

A: Most of my localization work happens incrementally. I often carry unfinished translation files on my laptop so I can continue working offline, especially when the internet connection isn't reliable. When I have multiple modules to choose from, I usually start with the ones that have the fewest untranslated strings. Seeing a module reach full translation gives me a lot of satisfaction.

To avoid burnout, I set small, realistic goals, sometimes something as simple as translating 50 strings before switching to another task. I tend to use small pockets of free time throughout the day, like waiting at a public transportation station or for an appointment, and those fragments add up.

Collaboration plays a big role in maintaining quality. Within the Indonesian localization community, we use Telegram to discuss difficult or new terms and work toward consensus. Terminology and style guides are maintained together; it's not a one-person responsibility.

I've also worked on localization in other projects like GNOME, where we translate module by module, review each other's work, and then commit changes as a group. Compared to Pontoon's string-by-string approach, this workflow offers more flexibility, especially when working offline.

Perspective Across Open Source and Beyond

Q: You contribute to many open source projects. How does Mozilla localization compare, and what would you like to see improved?

A: For Indonesian localization, Mozilla is the most organized team I've worked with and has the largest active team. Some projects may appear larger on paper, but active participation matters more than numbers, and that's where Mozilla really stands out.

One improvement I'd like to see is better support for offline translation in Pontoon. Another area is shortcut conflict detection - translators often can't easily see whether keyboard shortcuts conflict unless all menu items or dialog elements are rendered together. Automated checks or rendered views of translated dialogs would make that process much easier.

That said, one thing Pontoon does very well, and that other projects could learn from, is the improving quality of online and AI-assisted translation suggestions.

Speaking at Fosdem in February 2024 on "Long Term Effort to Keep Translations Up-To-Date"

Professional Life and a Personal Note

Q: What do you do professionally, and how does it connect with your localization work?

A: I work as an IT security consultant. I started using a PC in 1984, learning to program in BASIC, Pascal, FORTRAN, Assembly, and C. C is still my favorite language today. I also tried various OSes - CP/M, DOS, OS/2, VMS, NetWare, Windows, SCO, Solaris - and then fell in love with Linux; I have been using Debian since version 1.3. Later I shifted my focus from programming to IT security. My job requires staying up to date with security concepts and terminology, which helps when translating security-related strings. At the same time, localization sometimes introduces me to features I might later use professionally. The two areas complement each other in unexpected ways.

As for something more personal: I hate horror movies, I love cats, and I've had the chance to witness the rise and fall of many technologies over the years. I also maintain a personal wiki to keep track of my open source work, though I keep telling myself I need to migrate it to GitHub one day.

18 Dec 2025 5:16am GMT

17 Dec 2025


Mozilla Addons Blog: Presenting 2025 Firefox Extension Developer Award Recipients

Extensions have long been at the heart of Firefox, providing users with powerful options to personalize their browsing experience. Nearly half of all Firefox users have installed at least one extension. These incredible tools and features are built by a community of more than 10,000 developers. While all developers contribute to the depth and diversity of our ecosystem, some of the most popular extensions provide significant global impact.

Today we celebrate our first cohort of notable developers. Below are this year's recipients of the Firefox Extension Developer Award, presented to developers of some of the most popular Firefox extensions. The bespoke metal trophies were designed by Alper Böler, a California-based industrial designer and artist.

On behalf of Mozilla, and all Firefox users, thank you to all developers for your amazing contributions to the ecosystem!

Platinum

uBlock Origin - Ad blocker with 10M+ users. uBlock Origin has long been one of the most popular extensions for Firefox, providing a massive positive impact for users. This is a well-supported extension maintained by a passionate group of contributors, and we'd like to extend a special thank you to everyone who helps make this an exceptional extension.

(Reflecting astounding recent growth, uBlock Origin averaged 9.5M daily users when the awards were commissioned, which would have made it a Gold Award recipient; however it has since surpassed 10.5M daily users so we've elevated uBlock Origin to Platinum status.)

Silver

Adblock Plus - Debuted on Firefox all the way back in 2006.

Video DownloadHelper - Immensely capable media downloader.

Privacy Badger - "Privacy Badger is developed by the Electronic Frontier Foundation, a digital rights nonprofit with a 35-year history of defending online privacy. We created Privacy Badger over a decade ago to fight pervasive, nonconsensual tracking online. In the absence of strong privacy laws, surveillance has become the business model of the internet. Just browsing the web can expose sensitive data to advertisers, Big Tech companies, and data brokers. While we continue advocating for comprehensive privacy legislation, Privacy Badger gives people a quick, easy way to protect themselves. Privacy Badger is both a practical tool for individuals and part of EFF's broader effort to end online surveillance for everyone." - Lena Cohen, Staff Technologist at EFF

AdBlocker Ultimate - Also works beautifully on Firefox for Android.

AdGuard AdBlocker - Blocks ads and will also warn you about potentially malicious websites.

Dark Reader - "Working long hours in front of a bright computer screen made my eyes tired. LCD screens can feel like staring into a light bulb. Dark Reader started as a simple screen inverter to give my eyes a break. Over time, it evolved into a much more sophisticated tool, adapting to the growing needs of users." - Alexander Shutau

AdBlock for Firefox - Arrived to the Firefox ecosystem in 2014.

DuckDuckGo Search & Tracker Protection - "At DuckDuckGo, we want to help people take back control of their personal information - whether that be when they're making a search, using AI, emailing, or browsing. In 2017, we had a search engine, but we knew we wanted to extend privacy to the browsing experience. At that time we hadn't built our own browser, so we bundled private search, tracking and fingerprinting protections, and more, into an easy-to-add web extension." - Sam Macbeth

Bronze

Ghostery - "We wanted to create a truly user-focused ad blocker - one that doesn't compromise on effectiveness, doesn't maintain whitelists for advertisers, and gives people back control of their browsing experience. Many tools in the market were tied to ad industry interests, so our goal was to build a 100% independent, transparent solution. Ghostery was one of the first add-ons ever published on the Mozilla platform. Its original motivation was to bring transparency to the web." - Krzysztof Modras

Return YouTube Dislike - "(I made it) for my own convenience. I wanted to use this feature myself, first and foremost. I think YouTube misses a lot by making dislike counts invisible." - Dmitry Selivanov

Translate Web Pages - A simple, effective translation tool.

Bitwarden - "Back in 2015-'16, I was frustrated with the existing password management landscape. As a developer and engineer, I saw several problems that needed solving: complicated setup procedures, lack of cross-platform availability, and fragmented open source solutions that were hard to trust. I wanted to create a password manager that would meet the needs of someone like myself - a technologist who valued simplicity, transparency, and accessibility. The browser extension was one of the first components I built and it turned out to be crucial for Bitwarden since it made password management seamless for users across their daily web browsing." - Kyle Spearrin

To Google Translate - "When I was at university, I started learning English on my own. I used to read articles in English about security and programming, and whenever I didn't understand a word or was unsure about its pronunciation, I would copy and paste it into Google Translate to learn its meaning and how to say it. Over time, I realized this process was very manual and time-consuming, since I still had a lot of vocabulary to learn. That's when I thought: 'Is it possible to automate this to make it easier?' That insight led me to build an add-on. In short, it started as a personal need, and later I realized that many others shared the same challenge. I never imagined the extension would reach and help so many people." - Juan Escobar

IDM Integration Module - Companion extension to the popular desktop application.

Tampermonkey - "In 2008 I teamed up with a friend to develop a Greasemonkey userscript that automated parts of an online game. The script eventually grew into a full‑featured Firefox extension. When Chrome was released, I ported the extension to that browser and realized that the insights I gained about the WebExtension APIs could serve as the foundation for a new userscript manager. I later launched that manager, Tampermonkey, in May 2010. Firefox's switch to WebExtensions in 2015 gave me an opportunity to bring Tampermonkey to Firefox as well." - Jan Biniok

Grammarly: AI Writing and Grammar Checker - "When we first launched Grammarly, it was exclusively in our Grammarly editor, so users had to write directly into our web editor to get help with their writing. We realized there was so much more value in bringing Grammarly directly to where people write - in their browsers, on the sites they use every day for work and for school, and across 500,000 different apps and websites. Extensions became the natural way to meet people in their existing workflows rather than asking them to change how they already work, and it's part of what makes Grammarly one of the top AI tools." - Iryna Shamrai

Cisco Webex Extension - Companion extension for Cisco Webex Meetings or Webex App.

SponsorBlock - Skip Sponsorships on YouTube - "One of my favourite YouTube channels uploaded a video with a sponsor message that was deceptively placed into the video. It really made me frustrated. Then I had the idea that crowdsourcing sponsor timestamps could maybe just work." - Ajay

ClearURLs - "The idea for the extension actually came up quite spontaneously during a lunch break at university. While studying computer science, a friend and I started talking about how frustrating all those tracking elements in URLs can be. We wondered if there was already a browser add-on that could automatically clean them up, but after some research we realized there really wasn't anything like that out there." - Kevin Röbert

The post Presenting 2025 Firefox Extension Developer Award Recipients appeared first on Mozilla Add-ons Community Blog.

17 Dec 2025 5:53pm GMT

Mozilla Thunderbird: Thunderbird Monthly Development Digest: November/December 2025

Hello again from the Thunderbird development team as we start to wind down for the holidays! Over the past several weeks, our sprints have been focused on delivery and consolidation to clear our plates for a fresh start in the New Year.

Following our successful in-person work-week to discuss all things protocol, we've brought Exchange support (EWS) to our Monthly release channel, completed much of the final phases of the Account Hub experience, and laid the groundwork for what comes next. Alongside this feature work, the team has spent a significant amount of time adapting to upstream platform changes and supporting our Services colleagues as we prepared for wider rollout. It's been a period of steady progress, prioritization, and planning for the next major milestones.

Exchange Email Support

Since the last update, we're so happy to finally announce that Exchange support for email has shipped to the Monthly release channel, accompanied by supporting blog posts, documentation and some fanfare. In the weeks leading up to and following that release, the team focused on closing out priority items, addressing stability issues, and ensuring the experience scales well as more users add their EWS-based Exchange accounts.

Work completed during this period includes:

In parallel, the team has begun work on Graph API support for email, which is now moving rapidly through its early stages, thanks in large part to the solid foundation laid for EWS. It's so nice when a plan comes together!

This work represents the next major milestone for Exchange support and will inform broader architectural refactoring planned for future phases.

The Exchange team also met in person to plan out upcoming milestones. These sessions allowed us to break down future work and begin early research and prototyping for:

Keep track of our Graph API implementation here.

Account Hub

A major focus during this period was completing the Email Account Hub Phase 3 milestone, with the final bugs landing and remaining items either completed or moved into maintenance. This work was prioritized to improve the experience for users setting up new accounts, particularly Exchange accounts.

Notable improvements and fixes include:

With the primary Phase 3 goals now complete, the team has been able to shift attention back to other front-end initiatives while continuing to refine the Account Hub experience through targeted fixes and polish.

Follow progress in the meta bugs for phase 3 and telemetry

Calendar UI Rebuild

Calendar UI work progressed more slowly during this period due to competing priorities (hiring!), in-person meetups, and planned time off, but planning and groundwork continued, and development is now back underway. The team:

Stay tuned to our milestones here:

Maintenance, Upstream adaptations, Recent Features and Fixes

Throughout this period, the team also spent a considerable amount of time responding to upstream changes that affected build stability, tests, and CI. Sheriffing remained challenging, with frequent tree breakages requiring investigation to distinguish upstream regressions from local changes. In addition to these items, we've been blessed with help from the larger development community to deliver a variety of improvements over the past two months.

A very special shout out to a new contributor who worked with our senior team to solve a 19-year-old problem relating to unread folders. Interactions like this are fuel for our team and we're incredibly grateful for the help.

If you would like to see new features as they land, and help us find some early bugs, you can try running daily and check the pushlog to see what has recently landed. This assistance is immensely helpful for catching problems early.

-

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: November/December 2025 appeared first on The Thunderbird Blog.

17 Dec 2025 5:23pm GMT

This Week In Rust: This Week in Rust 630

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is logos, a modern lexer generator.

Thanks to Sam O'Brien for the (partial self-)suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.

If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Let us know if you would like your feature to be tracked as a part of this list.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!

Updates from the Rust Project

482 pull requests were merged in the last week

Compiler
Library
Cargo
Clippy
Rust-Analyzer
Rust Compiler Performance Triage

This week we saw several regressions, partly from the compiler doing more work. The remaining regressions are being investigated.

Triage done by @kobzol. Revision range: 55495234..21ff67df

Summary:

| (instructions:u)           | mean  | range          | count |
|:---------------------------|:-----:|:--------------:|:-----:|
| Regressions ❌ (primary)   | 0.5%  | [0.1%, 5.1%]   | 40    |
| Regressions ❌ (secondary) | 0.8%  | [0.0%, 3.0%]   | 63    |
| Improvements ✅ (primary)  | -0.7% | [-1.5%, -0.1%] | 35    |
| Improvements ✅ (secondary)| -1.0% | [-7.4%, -0.0%] | 73    |
| All ❌✅ (primary)         | -0.1% | [-1.5%, 5.1%]  | 75    |

3 Regressions, 2 Improvements, 5 Mixed; 2 of them in rollups. 36 artifact comparisons were made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

* Adding a crates.io Security tab

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

Rust

Rust RFCs

Cargo

Leadership Council

No Items entered Final Comment Period this week for Compiler Team (MCPs only), Language Team, Language Reference or Unsafe Code Guidelines.

Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.

New and Updated RFCs

Upcoming Events

Rusty Events between 2025-12-17 - 2026-01-14 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I allow my code to be used for training AI on GitHub. Not because I fear AI taking our jobs, but because I'm confident my code will slow it down enough to save us all.

- 王翼翔 on rust-users

Thanks to Moy2010 for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by:

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

17 Dec 2025 5:00am GMT

16 Dec 2025


Mozilla Attack & Defense: Attempting Cross Translation Unit Taint Analysis for Firefox

Preface

Browser security is a cutting-edge frontier for exploit mitigations, addressing bug classes holistically, and identifying vulnerabilities. Not everything we try works, and we think it's important to document our shortcomings in addition to our successes. A responsible project uses all available tools to find bugs and vulnerabilities before shipping. Besides many other tools and techniques, Firefox uses Clang Tidy and the Clang Static Analyzer, including many customized checks for enforcing the coding conventions of the project. To extend these tools, Mozilla contacted Balázs, one of the maintainers of the Clang Static Analyzer, to help address problems encountered when exploring Cross Translation Unit (CTU) static analysis. Ultimately, we weren't able to make as much headway with this project as we hoped, but we wanted to contribute our experience to the community and hopefully inspire future work. Be warned: this is a highly technical blog post.

The following sections describe some fundamental concepts, such as taint analysis, CTU, and the Clang Static Analyzer engine. This is followed by the problem statement and the solution, and finally some closing words.

Disclaimer: The work described here was sponsored by Mozilla.

Static Analysis Fundamentals

Taint analysis

Vulnerabilities often stem from using untrusted data in some way. Data from such sources is called "tainted" in static analysis, and "taint analysis" is the technique that tracks how such "tainted" values propagate, or "flow", throughout the program.

In short, "Taint sources" introduce a flow, such as reading from a socket. If a tainted value reaches a "taint sink" then we should report an error. These "sources" and "sinks" are often configurable.
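To make the source/sink flow concrete, here is a hedged C++ sketch (the helper names are illustrative, not Firefox code): an attacker-controlled value flows from a source toward a buffer index, with a range check acting as a sanitizer.

```cpp
#include <cstddef>

// Illustrative taint source: stands in for reading a length from a
// socket. (Hypothetical helper; in reality this value is attacker-controlled.)
std::size_t untrusted_length() { return 1000; }

constexpr std::size_t kBufSize = 16;

// Range check acting as a sanitizer: after it passes, the value is
// no longer considered tainted.
bool valid_index(std::size_t idx) { return idx < kBufSize; }

// Taint sink: indexing the buffer. If `idx` reached the subscript
// unchecked, a taint analysis would warn here.
int read_slot(const int (&buf)[kBufSize], std::size_t idx) {
    if (!valid_index(idx))
        return -1;  // reject out-of-range (tainted) input
    return buf[idx];
}
```

Without the `valid_index` check, the tainted value from `untrusted_length` would reach the array subscript, which is exactly the kind of flow a taint checker reports.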

A YAML configuration file can be used with the Clang Static Analyzer to configure the taint rules.
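As a hedged illustration only, such a file can take roughly the following shape; the function names (mySource, mySanitizer, mySink) are hypothetical, and the authoritative schema is defined by the taint checker's own documentation:

```yaml
# Hypothetical illustration of a taint-rules file; consult the Clang
# TaintPropagation checker documentation for the exact schema.
Propagations:
  - Name: mySource        # taint source: its return value (-1) becomes tainted
    DstArgs: [-1]
Filters:
  - Name: mySanitizer     # sanitizer: clears taint from argument 0
    Args: [0]
Sinks:
  - Name: mySink          # report if a tainted value reaches argument 0
    Args: [0]
```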

Cross Translation Unit (CTU) analysis

The steps leading to a bug or vulnerability might cross file boundaries. Conventional static analysis tools that operate on a translation-unit basis would miss such issues. Luckily, the Clang Static Analyzer offers a CTU mode that loads the relevant pieces of the required translation units to enhance the contextual view of the analysis target, thus increasing the covered execution paths. Running CTU needs a bit of setup, but luckily tools like scan-build or CodeChecker have built-in support.

Path-sensitive analysis

The Clang Static Analyzer implements path-sensitive symbolic execution. There is an excellent talk on the subject, but let us give a refresher here.

Basically, it interprets the abstract syntax tree (AST) of the analyzed C/C++ program and builds up program facts statement by statement as it simulates different execution paths of the program. If it sees an if statement, it splits into two execution paths: one where the condition is assumed to be false, and another one where it's assumed to be true. Loops are handled slightly differently, but that's not the point of this post today.
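A minimal sketch of what this path splitting buys (the function is illustrative): on each forked path the engine records the branch condition as a fact and can use it to judge later operations.

```cpp
// When the engine reaches the `if`, it forks: on one path it records
// the fact `d == 0`, on the other `d != 0`. A division under the first
// fact would be reported as a guaranteed divide-by-zero.
int safe_div(int n, int d) {
    if (d == 0) {
        // Path 1: the analyzer knows d == 0 here; `n / d` would be flagged.
        return 0;
    }
    // Path 2: the analyzer knows d != 0 here, so the division is safe.
    return n / d;
}
```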

When the engine sees a function call, it will jump to the definition of the callee (if available) and continue the analysis there with the arguments we had at the call site. We call this "inlining" in the Clang Static Analyzer. This makes the engine inter-procedural; in other words, it can reason across functions. Of course, this only works if it knows the callee. This means that without knowing the pointee of a function pointer or the dynamic type of a polymorphic object (one that has virtual functions), it cannot "inline" the callee, which in turn means that the engine must conservatively relax the program facts it gathered so far, because they might be changed by the callee.

For example, if we have some allocated memory, and we pass that pointer to such a function, then the engine must assume that the pointer was potentially released, and not raise leak warnings after this point.
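A hedged sketch of that conservative invalidation. `opaque_callee` is hypothetical and defined as a no-op here only so the sketch runs; the scenario assumes the analyzer cannot see its body (say, it lives in another translation unit or behind a function pointer).

```cpp
#include <cstdlib>

// Hypothetical callee; imagine its definition is invisible to the analyzer.
void opaque_callee(int *p) { (void)p; }

int caller() {
    int *buf = static_cast<int *>(std::malloc(sizeof(int)));
    if (!buf)
        return -1;
    *buf = 41;
    opaque_callee(buf);  // facts about `buf` are invalidated: the callee
                         // might have freed it or changed `*buf`
    int v = *buf;        // so the analyzer no longer assumes `*buf == 41`
    std::free(buf);      // and raises no leak or double-free warning here
    return v + 1;
}
```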

The conclusion here is that following the control-flow is critical, and virtual functions limit our ability to reason about this if we don't know the dynamic type of objects.

So, taint analysis for Firefox?

Firefox has a lot of virtual functions!

We discussed that control-flow is critical for taint analysis, and virtual functions ruin the control-flow. A browser has almost every code pattern you can imagine, and it so happens that many of the motivating use cases for this analysis involve virtual functions that also happen to cross file boundaries.

Once upon a time…

It all started with Tom creating a couple of GitHub issues, like #114270 (which prompted a couple of smaller fixes that are not the subject of this post), and #62663.

This latter one was blocked by not being able to follow the callees of virtual functions, kicking off this whole subject and the prototype.

Plotting against virtual functions

The idea

Let's just look at the AST and build the inheritance graph. After that, if we see a virtual call to data(), we could check who overrides this method.

Let's say only classes A and B override this method in the translation unit. This means we could split the path in two and assume that on one path we call A::data() and on the other B::data().

// class A... class B deriving from Base
void func(Base *p) {
  p->data(); // 'p' might point to an object A or B here.
}
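To make the snippet above self-contained, here is a hedged sketch of the hierarchy it assumes. With only A and B visible, the prototype forks the path per overrider (plus, as discussed below, one conservative "unknown callee" path).

```cpp
struct Base {
    virtual int data() { return 0; }
    virtual ~Base() = default;
};
struct A : Base { int data() override { return 1; } };
struct B : Base { int data() override { return 2; } };

int func(Base *p) {
    // With only A and B visible, the prototype forks execution here:
    // one path assumes the dynamic type is A (calls A::data()), one
    // assumes B, and a final path leaves the callee unknown and
    // invalidates conservatively.
    return p->data();
}
```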

This looks nice and simple, and the core of the idea is solid. However, there are a couple of problems:

  1. One translation unit (TU) might define a class Derived, overriding data(), and then pass a Base pointer to the other translation unit. When that TU is analyzed, it shouldn't conclude that only classes A and B override data() just because it didn't see Derived from the other TU. This is the problem with inheritance, which is an "open-set" relation: one can never be sure of seeing the whole inheritance graph all at once.

  2. It's not only that Derived might be in a different TU, but it might be in a 3rd party library, and dynamically loaded at runtime. In this case, assuming a finite set of callees for a virtual function would be wrong.

Refining the idea

Fixing problem (2) is easy: we just assume that the list of potential callees always has an extra unknown callee, so that there is an execution path where the call is conservatively evaluated and the invalidations happen - just like before.

Fixing problem (1) is more challenging because we need whole-program analysis. We need to create the inheritance graphs of each TU and then merge them into a unified graph. Once we've built that, we can run the Clang Static Analyzer and start reasoning about the overriders of virtual functions in the whole project. Consequently, in the example we discussed before, we would know that classes A, B, and (crucially) Derived override data(). So after the call we would have four execution paths: A, B, Derived, and a last path for the unknown case (like a dynamically loaded library that overrides this method).

It sounds great, but does it work?

It does! The analysis gives a list of the potential overriders of a virtual function. The Clang Static Analyzer was modified to do the path splits we discussed and remember the dynamic type constraints we learn on the way. There is one catch though.

Some taint flows cross file boundaries, and the Clang Static Analyzer has CTU to counter this, right?

CTU uses the "ASTImporter", which is known to suffer from infinite recursion, crashes, and incomplete support for some of the constructs it needs to import. There are plenty of examples; one we encountered was #123093.

Usually, fixing one of these is time-consuming and needs a deep understanding of the ASTImporter. And even if you fix one, plenty of others will follow.

This patch for "devirtualizing" virtual function calls didn't really help with the reliability of the ASTImporter. As the interesting taint flows cross file boundaries, the benefits of this new feature are unfortunately limited by the ASTImporter for Firefox.

Is it available in the Clang Static Analyzer already?

Unfortunately no; as the contract ended, it is unlikely that these patches will merge upstream unless others split them up and do the labor of proposing them. Note that this whole-program analysis is a brand new feature, and this was just a quick prototype to check its viability.

Upstreaming would likely also need some wider consensus about the design.

Apparently, whole-project analyses could be important for other domains besides bug-finding, such as code rewriting tools, which was the motivation for a recently posted RFC. The proposed framework in that RFC could potentially also work for the use-case described in this blog post, but it's important to highlight that this prototype was built before that RFC and framework, consequently it's not using that.

Balázs shared that working on the prototype was really motivating at first, but as he started to hit the bugs in the ASTImporter - effectively blocking the prototype - development slowed down. All in all, the prototype proved that using project-level information, such as "overriders", could enable better control-flow modeling, but CTU analysis as we have it in Clang today will show its weaknesses when trying to resolve those calls. Without resolving these virtual calls, we can't track taint flows across file boundaries in the Clang Static Analyzer.

What does this mean for Firefox?

Not much, unfortunately. If the ASTImporter worked as expected, then finalizing the prototype would meaningfully improve taint analysis on code using virtual functions.

You can find the source code at Balázs' GitHub repo as steakhal/llvm-project/devirtualize-for-each-overrider, which served well for exploring and rapid prototyping but it is far from production quality.

Bonus: We need to talk about the ASTImporter

From the cases Balázs looked at, it seems that qualified names - such as std in std::unique_ptr, for example - trigger the import of the std DeclContext, which in turn triggers the import of all the declarations within that lexical declaration context. In other words, we start importing a lot more than is strictly necessary to make the std:: qualification work. This in turn increases the chances of hitting something that causes a crash or simply fails to import, poisoning the original AST we wanted to import into. This is likely not how it should work, and might be a good subject to discuss in the future.

Note that the ASTImporter can be configured to do so-called "minimal imports", which is probably what we should use for the Clang Static Analyzer; however, this is not set, and setting it would lead to even more crashes. Balázs didn't investigate this further, but it might be something to explore in the future.

16 Dec 2025 9:06am GMT

The Rust Programming Language Blog: Project goals update — November 2025

The Rust project is currently working towards a slate of 41 project goals, with 13 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

"Beyond the `&`"

Continue Experimentation with Pin Ergonomics (rust-lang/rust-project-goals#389)
Progress
Point of contact

Frank King

Champions

compiler (Oliver Scherer), lang (TC)

Task owners

Frank King

1 detailed update available.

Comment by @frank-king posted on 2025-11-21:

Status update:

Design a language feature to solve Field Projections (rust-lang/rust-project-goals#390)
Progress
Point of contact

Benno Lossin

Champions

lang (Tyler Mandry)

Task owners

Benno Lossin

TL;DR:
  • We have made lots of progress with the novel place-based proposal made by @Nadrieril. Since the last update, he has released his idea as a blog post and there has been an immense amount of discussion on Zulip. There are still many open questions and problems left to solve. If you have any ideas, feel free to share them on Zulip.

  • At the beginning of this month, we explored moving projections and &own. We also looked into reducing the number of projection traits.

  • The PR https://github.com/rust-lang/rust/pull/146307 has been stale this month, but will be picked up again in December.

3 detailed updates available.

Comment by @BennoLossin posted on 2025-11-01:

Moving Projections and &own

Moving projections are a third kind of projection that already exists in Rust today for Box as well as any local variable holding a struct. While we won't be including it in an MVP, we still want to make sure that we can extend the language with moving projections. Here is an example with Box:

fn destructure_box(mut b: Box<Struct>) -> Box<Struct> {
    let f1 = b.f1;
    b.f1 = F1::new();
    b
}

This projection moves the field out of the box, invalidating it in the process. To make it valid again, a new value has to be moved in for that field. Alternatively, the partially valid box can be dropped; this will drop all other fields of Struct and then deallocate the Box. Note that this last property is implemented by compiler magic today, and moving projections would allow this special behavior for Box to be a library implementation instead.

To make this kind of projection available for all types, we can make it a proper operation by adding this trait:

pub unsafe trait ProjectMove: Projectable {
    type OutputMove<'a, F: Field<Base = Self::Target>>;
    
    unsafe fn project_move<'a, F: Field<Base = Self::Target>>(
        this: *mut Self,
    ) -> Self::OutputMove<'a, F>;
    
    unsafe fn drop_husk(husk: *mut Self);
}

Importantly, we also need a drop_husk function which is responsible for cleaning up the "husk" that remains when all fields have been move-projected. In the case of Box, it deallocates the memory. So for Box we could implement this trait like this:

unsafe impl<T> ProjectMove for Box<T> {
    type OutputMove<'a, F: Field<Base = T>> = F::Type;

    unsafe fn project_move<'a, F: Field<Base = T>>(
        this: *mut Self,
    ) -> F::Type {
        let ptr = unsafe { (*this).0.pointer.as_ptr() };
        unsafe {
            ptr::read(<*const T as Project>::project::<'a, F>(&raw const ptr))
        }
    }

    unsafe fn drop_husk(husk: *mut Self) {
        // this is exactly the code run by `Box::drop` today, as the compiler
        // drops the `T` before `Box::drop` is run.
        unsafe {
            let ptr = (*husk).0;
            let layout = Layout::for_value_raw(ptr.as_ptr());
            if layout.size() != 0 {
                (*husk).1.deallocate(From::from(ptr.cast()), layout);
            }
        }
    }
}
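The ptr::read pattern at the heart of project_move can be exercised with today's stable raw-pointer APIs. A sketch with an illustrative type S (this is essentially what std::mem::replace does for you):

```rust
use std::ptr;

struct S {
    name: String,
    id: u32,
}

// Moves `name` out through a raw pointer, then immediately restores the
// field, so `s` is fully valid again when this returns.
fn take_name(s: &mut S, replacement: String) -> String {
    unsafe {
        // the field is now logically moved out: a "husk" remains
        let moved = ptr::read(&raw const s.name);
        // overwrite WITHOUT dropping the duplicated bytes in the husk
        ptr::write(&raw mut s.name, replacement);
        moved
    }
}

fn main() {
    let mut s = S { name: String::from("old"), id: 7 };
    let name = take_name(&mut s, String::from("new"));
    assert_eq!(name, "old");
    assert_eq!(s.name, "new");
    assert_eq!(s.id, 7);
}
```

Between the read and the write, dropping s would be a double free; moving projections would let the borrow checker enforce that invariant instead of leaving it to unsafe code.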

To support moving back into a value we have two options:

  1. Add a ProjectMoveBack trait that declares an operation which accepts a value that is moved back into the projected one, or
  2. Add &own references.

Until now, we have explored the second option, because there are lots of other applications for &own.

&own References

A small interlude on &own references.

An &'a own T is a special kind of exclusive reference that owns the value it points to. This means that if you drop an &own T, you also drop the pointee. You can obtain an &own T by creating it directly from a local variable (&own my_local) or by deriving it from an existing &own via field projections. Smart pointers generally also allow creating an &own T from an &own SmartPtr<T>.

One important difference to &mut T is that &own is not only temporally unique (i.e. there are no other references to that value not derived from it) but also unique for that value. In other words, one can create at most one &own T to a local variable.

let mut val = Struct { ... };
let x = &own val; //~ HELP: ownership transferred here
drop(x);
let y = &own val; //~ ERROR: cannot own `val` twice

Since the drop(x) statement drops val, the borrow checker must disallow any future access. However, we are allowed to move a value back into the memory of val:

let mut val = Struct { ... };
let x = &own val;
drop(x);
val = Struct { ... };
let y = &own val;
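For contrast, &mut in today's Rust is only temporally unique: the same local may be exclusively reborrowed any number of times, as long as the borrows don't overlap. A runnable illustration of the distinction the &own rules tighten:

```rust
fn main() {
    let mut val = String::from("x");
    let a = &mut val;
    a.push('y');
    // `a` is no longer used, so a second exclusive borrow is allowed today;
    // under the &own rules above, a second `&own val` would be rejected
    // unless a new value was first moved back into `val`.
    let b = &mut val;
    b.push('z');
    assert_eq!(val, "xyz");
}
```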

The lifetime 'a in &'a own T is that of the backing memory. It means that when 'a expires, the memory is also no longer valid (or rather, it cannot be proven valid after 'a). For this reason, an &'a own T has to be dropped (or forgotten) before 'a expires (since after that it cannot be dropped any more).

&own T itself supports moving projections (another indicator that having them is a good idea), though only for types that don't implement Drop (similar to normal struct destructuring -- there are also talks about lifting this requirement, but no new issues arise from projecting &own).

&own and pinning

To make &pin own T with !(T: Unpin) sound in the face of panics, we have to add drop flags or have unforgettable types. We explored a design using drop flags below; there are separate ongoing efforts experimenting with a Leak/Forget trait, and I think that might be a better solution than drop flags, at least for &own.

We need drop flags to ensure the drop guarantee of pinned values. The drop flag is stored when the original &own is created, and it lives on the stack of the function that created it. It is needed for the following scenario:

fn foo() {
    let x = Struct { ... };
    bar(&pin own x);
}

fn bar(x: &pin own Struct) {
    if random() {
        std::mem::forget(x);
    }
    if random() {
        panic!()
    }
}

Since x is pinned on the stack, it needs to be dropped before foo returns (even if it unwinds). When bar forgets the owned reference, the destructor is not run; if bar then panics, the destructor needs to be run in foo. But since foo gave away ownership of x to bar, it is possible that bar already dropped x (this is the case when the first random() call returns false). To keep track of this, we need a drop flag in the stack frame of foo that gets set to true when x is dropped.
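The reason foo cannot know statically whether the destructor ran is that std::mem::forget suppresses it at runtime. A minimal stable-Rust demonstration (Tracked and the flag are illustrative):

```rust
use std::mem;
use std::sync::atomic::{AtomicBool, Ordering};

// records whether the destructor has run
static DROPPED: AtomicBool = AtomicBool::new(false);

struct Tracked;

impl Drop for Tracked {
    fn drop(&mut self) {
        DROPPED.store(true, Ordering::Relaxed);
    }
}

// the caller cannot tell which branch was taken -- hence the drop flag
fn maybe_drop(t: Tracked, forget_it: bool) {
    if forget_it {
        mem::forget(t); // destructor suppressed
    }
    // otherwise `t` is dropped here
}

fn main() {
    maybe_drop(Tracked, true);
    assert!(!DROPPED.load(Ordering::Relaxed)); // forgotten: never ran
    maybe_drop(Tracked, false);
    assert!(DROPPED.load(Ordering::Relaxed)); // normal path: it ran
}
```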

There are several issues with drop flags:

  • we can't have &'static own T pointing to non-static values (for example coming from a Box::leak_owned function).
  • field projections complicate things: if we project to a field, then we could possibly forget one field, but drop another
    • solution: just store drop flags not only for the whole struct, but also all transitive fields that implement Drop
  • there is different behavior between &own T and &pin own T: the former can be forgotten and the destructor will not run; the latter can also be forgotten, but the destructor runs regardless.

This last point convinces me that we actually want &pin own T: !Leak when T: !Leak; but IIUC, that wouldn't prevent the following code from working:

fn main() { 
    let x = Struct { ... };
    let x = &pin own x;
    Box::leak(Box::new(x));
}

DerefMove

The DerefMove operation & trait is something that has been discussed in the past (I haven't dug up any discussions on it though). It is to &own what Deref is to &. We still need to figure out the hierarchy with respect to Deref and DerefMut, but ignoring that issue for the moment, here is what DerefMove could look like:

trait DerefMove: DropHusk {
    type Target: ?Sized;

    fn deref_move(&own self) -> &own Self::Target;
}

Note the super trait requirement DropHusk -- it provides a special drop operation for Self when the &own Self::Target reference has been dropped. Box<T> for example would deallocate the backing memory via DropHusk. Its definition looks like this:

pub unsafe trait DropHusk {
    unsafe fn drop_husk(husk: *mut Self);
}

We would of course also use this trait for ProjectMove. Implementing DropHusk on its own does nothing; implementing DerefMove or ProjectMove will make the compiler call drop_husk instead of Drop::drop when the value goes out of scope after it has been projected or DerefMove::deref_move has been called.

We observed that DerefMove is a lot more restrictive in its usability than Deref, and we need projections to make it actually useful in the common case. The reason for this is that &own can only be created once, but one would like to be able to create it once per field (which is exactly what moving projections allow). Consider this example:

let b = Box::new(Struct { ... });
let field1 = &own b.field1; // desugars to `DerefMove::deref_move`
let field2 = &own b.field2; //~ ERROR: cannot own `b` twice

The "cannot own `b` twice error comes from the way the deref desugaring works:

let b = Box::new(Struct { ... });
let field1 = &own DerefMove::deref_move(&own b).field1;
let field2 = &own DerefMove::deref_move(&own b).field2;
//                                       ^^^ ERROR: cannot own `b` twice

Now it's clear that we're trying to create two &own to the same value, and that can't work (the issue also arises for &mut, but that is already covered by ProjectExclusive).

We can write this instead:

let b = Box::new(Struct { ... });
let b = &own b;
let field1 = &own b.field1;
let field2 = &own b.field2;

But that's cumbersome.

We also note that ProjectMove is the correct projection for ArcRef, as it avoids any additional refcount updates. We can rely on the ergonomic refcounting proposal to provide ergonomic ways to clone the value & perform more projections.

Comment by @BennoLossin posted on 2025-11-02:

Having a single Project trait

The definitions of the (by now three) Project* traits are 100% verbatim the same (modulo renaming, of course), so we spent some time trying to unify them into a single trait. While we cannot get rid of the three distinct operations, we can merge the traits into a single one by adding a generic parameter:

#[sealed]
pub trait ProjectKind {
    type Ptr<T: ?Sized>;
}

pub enum Shared {}
pub enum Exclusive {}

impl ProjectKind for Shared {
    type Ptr<T: ?Sized> = *const T;
}

impl ProjectKind for Exclusive {
    type Ptr<T: ?Sized> = *mut T;
}

pub trait Projectable {
    type Target;
}

pub unsafe trait Project<Kind: ProjectKind>: Projectable {
    type Output<'a, F: Field<Base = Self::Target>>;

    unsafe fn project<'a, F: Field<Base = Self::Target>>(
        this: Kind::Ptr<Self>,
    ) -> Self::Output<'a, F>;
}

We would need some more compiler magic to ensure that nobody implements this trait generically (i.e., impl<K> Project<K> for MyType), in order to keep our approach extensible (this could be an attribute, if it is also useful in other cases: #[rustc_deny_generic_impls]).

The benefit of merging the definitions is that we only have one single trait that we need to document, and we could also add documentation on the ProjectKind types. There are also ergonomic downsides; for example, all output types are now called Output and thus need to be fully qualified if multiple projection impls exist (<MyType as Project<Exclusive>>::Output<'_, F> vs MyType::OutputExclusive<'_, F>).
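The ProjectKind shape itself already compiles on stable Rust with GATs. A toy sketch, with Pair and the hand-written project_a standing in for the proposal's Field machinery (both are my own illustrative names):

```rust
pub trait ProjectKind {
    type Ptr<T: ?Sized>;
}

pub enum Shared {}

impl ProjectKind for Shared {
    type Ptr<T: ?Sized> = *const T;
}

struct Pair {
    a: u32,
    b: u64,
}

// A hand-written "projection" from *const Pair to its `a` field, dispatched
// through the kind's pointer type, as the generic Project trait would be.
unsafe fn project_a(this: <Shared as ProjectKind>::Ptr<Pair>) -> *const u32 {
    unsafe { &raw const (*this).a }
}

fn main() {
    let p = Pair { a: 7, b: 8 };
    let pa = unsafe { project_a(&raw const p) };
    assert_eq!(unsafe { *pa }, 7);
    assert_eq!(p.b, 8);
}
```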

To make this proposal compatible with moving projections, we either need more compiler magic to ensure that Kind = Move requires Self: DropHusk, or we could use associated traits and add one to ProjectKind that's then used in Project (Kind = Shared would then set this to Pointee).

This approach also makes me think a bit more about the syntax: if we discover more projections in the future, it might make sense to go for an extensible approach, like @keyword expr{->,.@,.,~}ident (so for example @move x->y or @mut x.y).

Comment by @BennoLossin posted on 2025-11-06:

A new Perspective: Projections via Places

@Nadrieril opened this zulip thread with the idea that "The normal rust way to reborrow a field uses places". He then proceeded to brainstorm a similar design for field projections with a crucial difference: making places the fundamental building block. We had a very long discussion in that thread (exchanging the existing ideas about field projection and the novel place-involving ones) that culminated in this awesome writeup by @Nadrieril: https://hackmd.io/@Nadrieril/HJ0tuCO1-e. It is a very thorough document, so I can only partially summarize it here:

  • instead of the Project* traits, we have the Place* traits which govern what kind of place operations are possible on *x given x: MySmartPtr, those are reading, writing and borrowing.
  • we can allow custom smart pointer reborrowing possibly using the syntax @MySmartPtr <place-expr>
  • we need multi-projections to allow simultaneous existence of &mut x.field.a and &mut x.field.b
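The multi-projection behavior in the last bullet is exactly what the borrow checker already grants built-in references, because it reasons about places. A runnable baseline of what the proposal would extend to smart pointers (the type names are illustrative):

```rust
struct Inner {
    a: u32,
    b: u32,
}

struct Outer {
    field: Inner,
}

fn bump_both(x: &mut Outer) -> (u32, u32) {
    let ra = &mut x.field.a; // the borrow checker tracks *places*, so these
    let rb = &mut x.field.b; // two disjoint exclusive borrows may coexist
    *ra += 10;
    *rb += 10;
    (x.field.a, x.field.b)
}

fn main() {
    let mut x = Outer { field: Inner { a: 1, b: 2 } };
    assert_eq!(bump_both(&mut x), (11, 12));
}
```

With a user-defined smart pointer today, the equivalent two projections would each have to go through a method call returning &mut, and the second call would conflict with the first.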

We still have many things to flesh out in this proposal (some of these pointed out by @Nadrieril):

  • how do FRTs still fit into the equation? And what are the types implementing the Projection trait?
  • What do we do about non-indirected place containers like MaybeUninit<T>, UnsafeCell<T> and ManuallyDrop<T>?
  • does BorrowKind work as a model for the borrow checker?
  • how do we make match ergonomics work nicely?
  • how do we get around the orphan rule limitations?
  • several smaller issues/questions...

This is a very interesting viewpoint and I'm inclined to make this the main proposal idea. The traits are not too different from the current field projection design and the special borrow checker behavior was also intended at least for the first level of fields. So this is a natural evolution of the field projection proposal. Thanks a lot to @Nadrieril for the stellar writeup!

Reborrow traits (rust-lang/rust-project-goals#399)
Progress
Point of contact

Aapo Alasuutari

Champions

compiler (Oliver Scherer), lang (Tyler Mandry)

Task owners

Aapo Alasuutari

1 detailed update available.

Comment by @aapoalas posted on 2025-11-11:

We've worked towards coherence checking of the CoerceShared trait, and have concluded that (at least as a first step) only one lifetime, the first one, shall participate in reborrowing. Problems abound with how to store the field mappings for CoerceShared.

"Flexible, fast(er) compilation"

build-std (rust-lang/rust-project-goals#274)
Progress
Point of contact

David Wood

Champions

cargo (Eric Huss), compiler (David Wood), libs (Amanieu d'Antras)

Task owners

Adam Gemmell, David Wood

1 detailed update available.

Comment by @davidtwco posted on 2025-11-22:

Our first RFC - rust-lang/rfcs#3873 - is in the FCP process, waiting on boxes being checked. rust-lang/rfcs#3874 and rust-lang/rfcs#3875 are receiving feedback which is being addressed.

Production-ready cranelift backend (rust-lang/rust-project-goals#397)
Progress Will not complete
Point of contact

Folkert de Vries

Champions

compiler (bjorn3)

Task owners

bjorn3, Folkert de Vries, Trifecta Tech Foundation

No detailed updates available.
Promoting Parallel Front End (rust-lang/rust-project-goals#121)
Progress
Point of contact

Sparrow Li

Task owners

Sparrow Li

No detailed updates available.
Relink don't Rebuild (rust-lang/rust-project-goals#400)
Progress Will not complete
Point of contact

Jane Lusby

Champions

cargo (Weihang Lo), compiler (Oliver Scherer)

Task owners

@dropbear32, @osiewicz

1 detailed update available.

Comment by @yaahc posted on 2025-11-21:

Linking this here so people know why there hasn't been any progress on this project goal.

#t-compiler > 2025H2 Goal Review @ 💬

"Higher-level Rust"

Ergonomic ref-counting: RFC decision and preview (rust-lang/rust-project-goals#107)
Progress
Point of contact

Niko Matsakis

Champions

compiler (Santiago Pastorino), lang (Niko Matsakis)

Task owners

Niko Matsakis, Santiago Pastorino

2 detailed updates available.

Comment by @nikomatsakis posted on 2025-11-05:

Three new blog posts:

The most important conclusions from those posts are

  • Explicit capture clauses would be useful; I proposed one specific syntax, but bikeshedding will be required. To be "ergonomic", we need the ability to refer to full places, e.g., move(cx.foo.clone()) || use(cx.foo).
  • We should consider Alias or Share as the name for Handle trait; I am currently leaning towards Alias because it can be used as both a noun and a verb and is a bit more comparable to clone -- i.e., you can say "an alias of foo" just like you'd say "a clone of foo".
  • We should look for solutions that apply well to clone and alias so that higher-level Rust gets the ergonomic benefits even when cloning "heavier-weight" types to which Alias does not apply.
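For context on the first bullet, this is the boilerplate that explicit capture clauses aim to remove: cloning a place by hand before a move closure. The type Cx and helper below are illustrative, not from the proposal:

```rust
use std::sync::Arc;

struct Cx {
    foo: Arc<String>,
}

// Today: manually pre-clone `cx.foo` into a local so the closure can own it.
// The proposed capture clause `move(cx.foo.clone())` would express this inline.
fn spawn_task(cx: &Cx) -> impl Fn() -> usize {
    let foo = Arc::clone(&cx.foo);
    move || foo.len()
}

fn main() {
    let cx = Cx { foo: Arc::new(String::from("hi")) };
    let task = spawn_task(&cx);
    assert_eq!(task(), 2);
    assert_eq!(Arc::strong_count(&cx.foo), 2); // the closure holds one alias
}
```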
Comment by @nikomatsakis posted on 2025-11-12:

New blog post:

  • https://smallcultfollowing.com/babysteps/blog/2025/11/10/just-call-clone/

Exploring one way to make things more ergonomic while remaining explicit, which is to make .clone() and .alias() (1) understood by move closure desugaring and (2) optimized away when redundant.

Stabilize cargo-script (rust-lang/rust-project-goals#119)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page), lang (Josh Triplett), lang-docs (Josh Triplett)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-11-21:

Key developments

  • rust-lang/rust#148051

Blockers:

  • rustdoc deciding on and implementing how they want frontmatter handled in doctests

"Unblocking dormant traits"

Evolving trait hierarchies (rust-lang/rust-project-goals#393)
Progress
Point of contact

Taylor Cramer

Champions

lang (Taylor Cramer), types (Oliver Scherer)

Task owners

Taylor Cramer, Taylor Cramer & others

No detailed updates available.
In-place initialization (rust-lang/rust-project-goals#395)
Progress
Point of contact

Alice Ryhl

Champions

lang (Taylor Cramer)

Task owners

Benno Lossin, Alice Ryhl, Michael Goulet, Taylor Cramer, Josh Triplett, Gary Guo, Yoshua Wuyts

1 detailed update available.

Comment by @Darksonn posted on 2025-11-14:

On Nov 12th, there was a mini-design meeting organized by Xiangfei Ding on in-place initialization. The attendees were Xiangfei Ding, Alice Ryhl, Benno Lossin, Tyler Mandry, and Taylor Cramer.

We discussed this document: https://hackmd.io/@rust-for-linux-/H11r2RXpgl

Next-generation trait solver (rust-lang/rust-project-goals#113)
Progress
Point of contact

lcnr

Champions

types (lcnr)

Task owners

Boxy, Michael Goulet, lcnr

1 detailed update available.

Comment by @lcnr posted on 2025-11-13:

The new solver is now officially used by Rust Analyzer: https://rust-analyzer.github.io/thisweek/2025/10/27/changelog-299.html. A huge shoutout to Jack Huey, Chayim Refael Friedman, Shoyu Vanilla, and Laurențiu Nicola for that work.

On the rustc end Rémy Rakic spent a lot of time triaging the most recent crater run. This uncovered a bunch of new edge cases, resulting in 6 new tracked issues.

We've also merged fixes for 4 minor issues over the last 3 weeks: https://github.com/rust-lang/rust/pull/148292 https://github.com/rust-lang/rust/pull/148173 https://github.com/rust-lang/rust/pull/147840. Thanks to Jana Dönszelmann, tiif and @adwinwhite for implementing these. @adwinwhite was also instrumental in diagnosing the underlying issue of https://github.com/rust-lang/trait-system-refactor-initiative/issues/245.

Going forward, we intend to continue the crater triage while fixing the remaining issues until we're ready for stabilization :> The remaining issues are tracked in https://github.com/orgs/rust-lang/projects/61/views/1.

Stabilizable Polonius support on nightly (rust-lang/rust-project-goals#118)
Progress
Point of contact

Rémy Rakic

Champions

types (Jack Huey)

Task owners

Amanda Stjerna, Rémy Rakic, Niko Matsakis

1 detailed update available.

Comment by @lqd posted on 2025-11-25:

Key developments:

  • I prototyped building blocks to fix the liveness soundness issue, but this was deemed too brittle.
  • so we prepared a meeting for the types team to discuss the problem, and possible solutions.
  • it turns out the issue is related to another soundness issue for opaque types in the new trait solver, https://github.com/rust-lang/trait-system-refactor-initiative/issues/159, which tiif is already working on. The same solution is needed for both issues: with the full implied bounds available for opaque types in liveness, we'll be able to require all the regions outliving the opaque lower bound to be live, while ignoring the unrelated regions (which the hidden type cannot use anyway). There will be no relevant dead region through which loans flow, and code relying on unused lifetimes being dead (like a lot of ed2024 code with the default capture changes) will still compile
  • we prepared another types-team meeting to discuss polonius in general, and the alpha algorithm in particular, to share knowledge among the team. This will also be helpful to then apply member constraints in a location-sensitive manner: right now they're applied at the SCC level, and we need to make sure these constraints with the choice regions are present in the localized subset graph.
  • niko and tiif have made a lot of progress on adding support for borrow checking in a-mir-formality, so I've also joined these meetings, since we'll also want to model the alpha.
  • I've looked into Prusti's Place Capability Graphs, and plan to see how to integrate the alpha there, and if possible with the fuzzing capabilities mentioned in the paper, with the usual goal to expand testing as we've mentioned many times
  • we also had some discussion for a possible masters' student project, and thought about different practical and theoretical topics

Goals looking for help


Other goal updates

Add a team charter for rustdoc team (rust-lang/rust-project-goals#387)
Progress Completed
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

1 detailed update available.

Comment by @GuillaumeGomez posted on 2025-11-21:

Done in https://github.com/rust-lang/rust-forge/pull/852.

Borrow checking in a-mir-formality (rust-lang/rust-project-goals#122)
Progress
Point of contact

Niko Matsakis

Champions

types (Niko Matsakis)

Task owners

Niko Matsakis, tiif

3 detailed updates available.

Comment by @nikomatsakis posted on 2025-11-05:

tiif and I have been meeting weekly here and pushing changes to the living-large branch of a-mir-formality/nikomatsakis.

We are making progress: we have a minirust type checker and the start of a borrow checker. We've decided to try a "judgment-like" approach rather than modeling this as dataflow, as I believe it will give greater insight into the "structure" of the borrow checker.

Comment by @nikomatsakis posted on 2025-11-12:

tiif, Jack Huey, and I met today and did more work on the "living-large" branch. The borrow checker judgments are taking shape. My expectation is that we will walk the CFG, tracking the sets of borrows that have occurred so far. At each statement, we will have a judgment that looks at (a) the subtyping relations generated by the type check (flow-insensitive, like NLL); (b) the loans issued so far and not killed; and (c) the live places that may be accessed later. We'll require then that if you are accessing a place P, then there are no loans accessible from a live place that have borrowed P in an incompatible way.

Comment by @nikomatsakis posted on 2025-11-19:

Continued work this week:

Elaborated some on the definition of when an access or a statement is valid. We are working our way towards what we believe will be a "largely accurate" model of today's NLL -- obviously we'll then want to test it and compare behavior around various edge cases.

C++/Rust Interop Problem Space Mapping (rust-lang/rust-project-goals#388)
Progress
Point of contact

Jon Bauman

Champions

compiler (Oliver Scherer), lang (Tyler Mandry), libs (David Tolnay)

Task owners

Jon Bauman

1 detailed update available.

Comment by @baumanj posted on 2025-11-26:

Key developments:

Nothing! This is the first update and I have yet to focus attention on the project goal. For context, I am employed by the Rust Foundation leading the C++ Interoperability initiative and so far have been executing against the strategy detailed in the problem statement. Owing to greater than anticipated success and deadlines related to WG21 meetings, I've been focusing on the Social Interoperability strategy recently. I have just reached a point where I can turn more attention to the other strategies and so expect to make progress on this goal soon.

Blockers:

None; I'm getting excellent support from the Project in everything I'm doing. My successes thus far would not have been possible without them, and there are too many to enumerate in this space. There will be a blog post coming soon detailing the past year of work in the initiative where I intend to go into detail. Watch this space for updates.

Help wanted:

I am always interested in contribution and feedback. If you're interested, please reach out via interop@rustfoundation.org or t-lang/interop.

Comprehensive niche checks for Rust (rust-lang/rust-project-goals#262)
Progress
Point of contact

Bastian Kersting

Champions

compiler (Ben Kimock), opsem (Ben Kimock)

Task owners

Bastian Kersting, Jakob Koschel

No detailed updates available.
Const Generics (rust-lang/rust-project-goals#100)
Progress
Point of contact

Boxy

Champions

lang (Niko Matsakis)

Task owners

Boxy, Noah Lev

2 detailed updates available.

Comment by @BoxyUwU posted on 2025-11-05:

Since the lang meeting most progress on this project goal has been unrelated to adt_const_params.

There's been a large amount of work on min_generic_const_args, specifically Noah Lev's PR (rust-lang/rust#139558); once it lands, the core of the impl work for the feature will be done. I've reviewed it together with Oliver Scherer and it's pretty much ready to go, other than some small review comments.

Once this PR lands I'm hoping that there should be a fair amount of "smallish" PRs that can be made which could be a good set of PRs to mentor new-ish contributors on.

Comment by @BoxyUwU posted on 2025-11-29:

Once again most progress here has been on min_generic_const_args.

Noah Lev's PR (rust-lang/rust#139558) has now landed, as well as an additional PR of his: rust-lang/rust#148716. Between the two of these, the core impl should be "mostly done" now, at least with no additional feature gates enabled :).

The next big step is to make the min_generic_const_args prototype work well with adt_const_params, which I've implemented in rust-lang/rust#149136 and rust-lang/rust#149114. These PRs still need to be reviewed, but the bulk of the impl work there is now done. They allow constructing ADTs whose field values may themselves be const parameters or non-concrete uses of type_consts (i.e. the values are const argument positions).

Once my PRs have landed, I would consider the mgca prototype to be truly "done", though not done as an actual feature. Huge thanks to camelid for sticking through a bunch of fairly painful PRs to get us to this point.

Continue resolving `cargo-semver-checks` blockers for merging into cargo (rust-lang/rust-project-goals#104)
Progress
Point of contact

Predrag Gruevski

Champions

cargo (Ed Page), rustdoc (Alona Enraght-Moony)

Task owners

Predrag Gruevski

2 detailed updates available.

Comment by @obi1kenobi posted on 2025-11-02:

Status update as of November 1

Key developments:

  • Draft PR for exposing implied bounds in rustdoc JSON: https://github.com/rust-lang/rust/pull/148379
  • A concrete plan for how that new info turns into dozens of new lints covering many kinds of bounds

Linting ?Sized and 'static bounds turned out to be quite a bit more complex than I anticipated. The key issue is that seeing T: Foo + ?Sized does not guarantee that T can be unsized, since we might have Foo: Sized, which renders the ?Sized relaxation ineffective. Similarly, seeing T: Foo might also non-obviously imply T: 'static via a similar implied bound.
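The ?Sized pitfall can be demonstrated in a few lines of stable Rust (Foo and size_of_t are illustrative names, not from cargo-semver-checks):

```rust
// `Foo: Sized` quietly re-imposes Sized on every `T: Foo`.
trait Foo: Sized {}

impl Foo for u32 {}

// size_of::<T>() requires T: Sized. This compiles even though T is declared
// `?Sized`, because the supertrait bound is elaborated to `T: Sized`,
// making the relaxation ineffective -- exactly what the lint must detect.
fn size_of_t<T: Foo + ?Sized>() -> usize {
    std::mem::size_of::<T>()
}

fn main() {
    assert_eq!(size_of_t::<u32>(), 4);
}
```

A lint that only looks at the written `?Sized` would wrongly conclude that removing it is a breaking change here, when it is in fact a no-op.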

Failure to correctly account for implied bounds would lead to catastrophic false-positives and false-negatives. For example, changing T: Foo to T: Foo + 'static could be a major breaking change or a no-op, depending on whether we have Foo: 'static (either directly or implicitly via other trait bounds).

We cannot determine implied bounds using information present in rustdoc JSON today, so the rustdoc team and I have been iterating on the best way to compute and include that information in rustdoc JSON. Assuming something similar to the aforementioned PR becomes part of rustdoc JSON, cargo-semver-checks stands to gain several dozen new lints covering these tricky cases over trait associated types, generic type parameters, and APIT/RPIT/RPITIT.

Comment by @obi1kenobi posted on 2025-11-23:

Google Summer of Code 2025 is complete + finally some movement on cross-crate linting! 🚀

Key developments

  • Two students successfully concluded their Google Summer of Code projects on cargo-semver-checks - find more details here!
  • rustdoc JSON now includes rlib information, following the design for cross-crate rustdoc JSON info created at RustWeek 2025: https://github.com/rust-lang/rust/pull/149043
  • A cargo issue was discovered that prevents this rlib info from being used; it's currently being triaged: https://github.com/rust-lang/cargo/issues/16291
  • Once that's resolved, we'll have enough here for a basic prototype. Getting features right in dependencies will likely require more work due to having many more cargo-related edge cases.
Develop the capabilities to keep the FLS up to date (rust-lang/rust-project-goals#391)
Progress
Point of contact

Pete LeVasseur

Champions

bootstrap (Jakub Beránek), lang (Niko Matsakis), spec (Pete LeVasseur)

Task owners

Pete LeVasseur, Contributors from Ferrous Systems and others TBD, t-spec and contributors from Ferrous Systems

2 detailed updates available.

Comment by @PLeVasseur posted on 2025-11-05:

Meeting minutes from meeting held on 2025-10-31 (thank you to Tomas Sedovic 🥰)

Top-level:

  • Keep high quality bar, merge small, well-vetted changes when possible
  • Need concentrated effort to get the 1.90 FLS updates merged
  • Once 1.90 merged, we attempt first go as a team at 1.91

Discussion:

  • Suggest that everyone read the Glossary as a starting point
  • How to best triage / handle incoming issues?
Comment by @PLeVasseur posted on 2025-11-21:

Meeting notes here: 2025-11-14 - t-fls Meeting

Key developments: PR merged for the 1.90 update of the FLS. We're now preparing to work on the 1.91 update of the FLS.

Blockers: None currently.

Help wanted: Anyone who's familiar with the Rust Reference is more than encouraged to read through the FLS to get a sense of it and where further alignment may be possible. Feel free to open issues on the FLS repo as you find things.

Emit Retags in Codegen (rust-lang/rust-project-goals#392)
Progress
Point of contact

Ian McCormack

Champions

compiler (Ralf Jung), opsem (Ralf Jung)

Task owners

Ian McCormack

1 detailed update available.

Comment by @icmccorm posted on 2025-11-11:

We've posted a pre-RFC for feedback, and we'll continue updating and expanding the draft here. This reflects most of the current state of the implementation, aside from precise tracking of interior mutability, which is still TBD but is described in the RFC.

Expand the Rust Reference to specify more aspects of the Rust language (rust-lang/rust-project-goals#394)
Progress
Point of contact

Josh Triplett

Champions

lang-docs (Josh Triplett), spec (Josh Triplett)

Task owners

Amanieu d'Antras, Guillaume Gomez, Jack Huey, Josh Triplett, lcnr, Mara Bos, Vadim Petrochenkov, Jane Lusby

1 detailed update available.

Comment by @joshtriplett posted on 2025-11-12:

We're putting together a prototype/demo of our reference changes at https://rust-lang.github.io/project-goal-reference-expansion/ . This includes a demonstration of tooling changes to provide stability markers (both "documenting unstable Rust" and "unstable documentation of stable Rust").

Finish the libtest json output experiment (rust-lang/rust-project-goals#255)
Progress
Point of contact

Ed Page

Champions

cargo (Ed Page)

Task owners

Ed Page

1 detailed update available.

Comment by @epage posted on 2025-11-21:

Key developments:

  • libtest2:
    • #[test] macro added
    • Support for should_panic
    • Support for ignore
    • Support for custom error types
    • compile-fail tests for macros

Blockers

  • None

Help wanted:

Finish the std::offload module (rust-lang/rust-project-goals#109)
Progress
Point of contact

Manuel Drehwald

Champions

compiler (Manuel Drehwald), lang (TC)

Task owners

Manuel Drehwald, LLVM offload/GPU contributors

1 detailed update available.

Comment by @ZuseZ4 posted on 2025-11-19:

Automatic Differentiation

Time for the next update. By now, we've had std::autodiff in upstream rustc for around a year, but not in nightly builds. In order to get more test users, I asked the infra team to re-evaluate just shipping autodiff as-is. This means that, for the moment, we will increase the binary size of rustc by ~5%, even for nightly users who don't use this feature. We still have an open issue to avoid this overhead by using dlopen; please reach out if you have time to help. Thankfully, my request was accepted, so I spent most of my time lately preparing that release.

  1. As part of my cleanup I went through old issues, and realized we now partly support rlibs! That's a huge improvement, because it means you can use autodiff not only in your main.rs file, but also in dependencies (either lib.rs, or even relying on crates that use autodiff). With the help of Ben Kimock I figured out how to get the remaining cases covered; hopefully the PR will land soon.
  2. I started documentation improvements in https://github.com/rust-lang/rust/pull/149082 and https://github.com/rust-lang/rust/pull/148201, which should be visible on the website from tomorrow onwards. They are likely still not perfect, so please keep opening issues if you have questions.
  3. We now provide a helpful error message if a user forgets enabling lto=fat: https://github.com/rust-lang/rust/pull/148855
  4. After two months of work, @sgasho managed to add Rust CI to Enzyme! Unfortunately, Enzyme devs broke and then disabled it shortly after, so we'll need to talk about maintaining it as part of shipping Enzyme in nightly.

I have the following elements on my TODO list as part of shipping AD on nightly:

  1. Re-enable macOS build (probably easy)
  2. Talk with Enzyme Devs about maintenance
  3. Merge rlib support (under review)
  4. Upstream AD benchmarks from r-l/enzyme to r-l/r as codegen tests (easy)
  5. Write a blog post/article for https://blog.rust-lang.org/inside-rust/

GPU offload

  1. The llvm dev talk about GPU programming went great, I got to talk to a lot of other developers in the area of llvm offload. I hope to use some of the gained knowledge soon. Concrete steps planned are the integration of libc-gpu for IO from kernels, as well as moving over my code from the OpenMP API to the slightly lower level liboffload API.
  2. We confirmed that our gpu offload prototype works on more hardware. By now we have the latest AMD APU generation covered, as well as an MI 250X and an RTX 4050. My own Laptop with a slightly older AMD Ryzen 7 PRO 7840U unfortunately turned out to be not supported by AMD drivers.
  3. The offload intrinsic PR by Marcelo Domínguez is now marked as ready, and I left my second round of review. Hopefully, we can land it soon!
  4. I spent some time trying to build and potentially ship the needed offload changes in nightly; unfortunately I still fail to build it in CI: https://github.com/rust-lang/rust/pull/148671.

All in all, I think we made great progress over the last month, and it's motivating that we finally have no blockers left for flipping the llvm.enzyme config on our nightly builds.

Getting Rust for Linux into stable Rust: compiler features (rust-lang/rust-project-goals#407)
Progress
Point of contact

Tomas Sedovic

Champions

compiler (Wesley Wiser)

Task owners

(depending on the flag)

2 detailed updates available.

Comment by @tomassedovic posted on 2025-11-19:

Update from the 2025-11-05 meeting.

-Zharden-sls / rust#136597

Wesley Wiser left a comment on the PR; Andrew is addressing it.

-Zno-jump-tables / rust#145974

Merged, expected to ship in Rust 1.93. The Linux kernel added support for the new name for the option (-Cjump-tables=n).

Comment by @tomassedovic posted on 2025-11-28:

Update from the 2025-11-19 meeting:

-Zharden-sls / rust#136597

Andrew addressed the comment and rebased the PR. It's waiting for a review again.

#![register_tool] / rust#66079

Tyler Mandry had an alternative proposal where lints would be defined in an external crate and could be brought in via use or something similar: https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/namespaced.20tool.20attrs.

A concern people had was the overhead of having to define a new crate and the potential difficulty with experimenting on new lints.

Tyler suggested adding this as a future possibility to RFC#3808 and FCPing it.

Getting Rust for Linux into stable Rust: language features (rust-lang/rust-project-goals#116)
Progress
Point of contact

Tomas Sedovic

Champions

lang (Josh Triplett), lang-docs (TC)

Task owners

Ding Xiang Fei

2 detailed updates available.

Comment by @tomassedovic posted on 2025-11-19:

Update from the 2025-11-05 meeting.

Deref/Receiver

Ding Xiang Fei posted his reasoning for the trait split in the Zulip thread and suggested adding a second RFC to explain.

TC recommended writing a Reference PR. The style forces one to explain the model clearly, which should then make writing the RFC easier.

The lang experiment PR for arbitrary self types has feature gates for the two options we're exploring.

Arbitrary Self Types and derive(CoercePointee) / tracking issue #44874

theemathas opened an issue derive(CoercePointee) accepts ?Sized + Sized #148399. This isn't a critical issue, just an error that arguably should be a lint.

Boxy opened a fix for a derive(CoercePointee) blocker: Forbid freely casting lifetime bounds of dyn-types.

RFC #3851: Supertrait Auto-impl

Ding Xiang Fei is working on the implementation (the parser and HIR interface for it). Ding's also working on a more complete section dedicated to questions raised by obi1kenobi.

Field projections

Benno Lossin has been posting super detailed updates on the tracking issue.

We've discussed the idea of virtual places (see Zulip thread where they were proposed).

Inlining C code into Rust code

Matt Mauer had an idea to compile C code into LLVM bitcode (instead of an object file) and then use the llvm-link tool to merge them together, treating everything in the second bitcode file as a static inlined function. Matt suggested we could integrate this into the rustc passes.

This would make it easy to inline certain functions into Rust code without full LTO.

Relevant Zulip thread.

This sounds like a good candidate for the next Project Goals period.

Comment by @tomassedovic posted on 2025-11-28:

Update from the 2025-11-19 meeting.

rustdoc checking for private and hidden items (rust#149105 & rust#149106)

Miguel proposed that rustdoc check for invalid links to items that are hidden or private, even if no docs are built for them. This can help catch typos or links that went dead because the docs became out of date.

Guillaume was much more open to this being a toggle, lolbinarycat opened a PR here: https://github.com/rust-lang/rust/pull/141299

unsafe_op_in_unsafe_fn not respected in imported declarative macros rust#112504

This lint doesn't trigger when importing a declarative macro that's calling unsafe code without having an unsafe block and without a SAFETY comment.

The lint is only triggered when the macro was actually used.

Fix for imports_granularity is not respected for #[cfg]'d items / rustfmt#6666

Ding opened a PR to fix this: https://github.com/rust-lang/rustfmt/issues/6666

rustfmt trailing comma hack

Ding and Manish were talking about writing up a proper fix for the vertical layout that's currently being solved by the ", //" hack.

TypeId layout

This has been discussed in https://github.com/rust-lang/rust/pull/148265 and https://rust-lang.zulipchat.com/#narrow/channel/213817-t-lang/topic/TypeID.20design/near/560189854.

Apiraino proposed a compiler design meeting here: https://github.com/rust-lang/compiler-team/issues/941. That meeting has not been scheduled yet, though.

Deref / Receiver

Following TC's recommendation, Ding is drafting the Reference PR.

Arbitrary Self Types and derive(CoercePointee)

Ding opened a PR to fix unsoundness in the DispatchFromDyn trait: https://github.com/rust-lang/rust/pull/149068

Theemathas opened a question on whether Receiver should be dyn-compatible: https://github.com/rust-lang/rust/issues/149094

RFC #3848: Pass pointers to const in assembly

Merged!

In-place initialization

Benno noted that Effects and In-place Init are not compatible with each other: https://rust-lang.zulipchat.com/#narrow/channel/528918-t-lang.2Fin-place-init/topic/Fundamental.20Issue.20of.20Effects.20and.20In-place-init/with/558268061

This is going to affect any in-place init proposal.

Benno proposes fixing this with keyword generics. This is a topic that will receive a lot of discussion going forward.

Alice has been nominated and accepted as language-advisor. Fantastic news and congratulations!

Implement Open API Namespace Support (rust-lang/rust-project-goals#256)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page), compiler (b-naber), crates-io (Carol Nichols)

Task owners

b-naber, Ed Page

No detailed updates available.
MIR move elimination (rust-lang/rust-project-goals#396)
Progress
Point of contact

Amanieu d'Antras

Champions

lang (Amanieu d'Antras)

Task owners

Amanieu d'Antras

1 detailed update available.

Comment by @Amanieu posted on 2025-11-15:

An RFC draft covering the MIR changes necessary to support this optimization has been written and is currently being reviewed by T-opsem. It has already received one round of review and the feedback has been incorporated in the draft.

Prototype a new set of Cargo "plumbing" commands (rust-lang/rust-project-goals#264)
Progress
Point of contact

Help Wanted

Task owners

Help wanted, Ed Page

No detailed updates available.
Prototype Cargo build analysis (rust-lang/rust-project-goals#398)
Progress
Point of contact

Weihang Lo

Champions

cargo (Weihang Lo)

Task owners

Help wanted, Weihang Lo

2 detailed updates available.

Comment by @weihanglo posted on 2025-11-04:

Instead of using a full-fledged database like SQLite, we switched to a basic JSONL-based logging system to collect build metrics. A simple design doc can be found here: https://hackmd.io/K5-sGEJeR5mLGsJLXqsHrw.

Here are the recent pull requests:

  • https://github.com/rust-lang/cargo/pull/16150
  • https://github.com/rust-lang/cargo/pull/16179

To enable it, set CARGO_BUILD_ANALYSIS_ENABLED=true or set the Cargo config file like this:

[build.analysis]
enabled = true

As of today (nightly-2025-11-03), it currently emits two log events, build-started and timing-info, to $CARGO_HOME/log/ (~/.cargo/log/ by default). The shape of the timing-info JSON is basically the shape of the unstable --timing=json. I anticipate that once this is stabilized we won't need --timing=json.

build.analysis.enabled is a non-blocking unstable feature. Barring bugs, it should be safe to set unconditionally, even on a stable toolchain; when the config isn't supported, Cargo merely warns about an unknown config key.

Comment by @weihanglo posted on 2025-11-24:

Key developments: Started emitting basic fingerprint information, and kicked off the refactor of rendering HTML timing report for future report replay through cargo report timings command.

  • https://github.com/rust-lang/cargo/pull/16203
  • https://github.com/rust-lang/cargo/pull/16282

Blockers: none except my own availability

Help wanted: Mendy on Zulip brought up log compression (#t-cargo > build analysis log format @ 💬) but I personally don't have time to look at it during this period. Would love to see people create an issue in rust-lang/cargo and help explore the idea.

reflection and comptime (rust-lang/rust-project-goals#406)
Progress
Point of contact

Oliver Scherer

Champions

compiler (Oliver Scherer), lang (Scott McMurray), libs (Josh Triplett)

Task owners

oli-obk

1 detailed update available.

Comment by @nikomatsakis posted on 2025-11-12:

Another related PR:

https://github.com/rust-lang/rust/pull/148820

Rework Cargo Build Dir Layout (rust-lang/rust-project-goals#401)
Progress
Point of contact

Ross Sullivan

Champions

cargo (Weihang Lo)

Task owners

Ross Sullivan

1 detailed update available.

Comment by @ranger-ross posted on 2025-11-21:

Status update November 21, 2025

October was largely spent working out design details of the build cache and locking design.

https://github.com/rust-lang/cargo/pull/16155 was opened with an initial implementation of fine-grained locking for Cargo's build-dir; however, it needs to be reworked after the design clarifications mentioned above.

In November I had a change of employer, so my focus was largely on that. However, we did make some progress towards locking in https://github.com/rust-lang/cargo/pull/16230, which no longer locks the artifact-dir for cargo check. This is expected to land in 1.93.0.

I'm hoping to push fine-grained locking forward later this month and in December.

Run more tests for GCC backend in the Rust's CI (rust-lang/rust-project-goals#402)
Progress Completed
Point of contact

Guillaume Gomez

Champions

compiler (Wesley Wiser), infra (Marco Ieni)

Task owners

Guillaume Gomez

1 detailed update available.

Comment by @GuillaumeGomez posted on 2025-11-19:

This project goal has been completed. I updated the first issue to reflect it. Closing the issue then.

Rust Stabilization of MemorySanitizer and ThreadSanitizer Support (rust-lang/rust-project-goals#403)
Progress
Point of contact

Jakob Koschel

Task owners

[Bastian Kersting](https://github.com/1c3t3a), [Jakob Koschel](https://github.com/jakos-sec)

1 detailed update available.

Comment by @jakos-sec posted on 2025-11-21:

We've had a bunch of discussions and I opened an MCP (link, zulip).

I think the final sentiment was creating new targets for the few sanitizers and platforms that are critical. I'm in the process of prototyping something to get new feedback on it.

Rust Vision Document (rust-lang/rust-project-goals#269)
Progress
Point of contact

Niko Matsakis

Task owners

vision team

1 detailed update available.

Comment by @nikomatsakis posted on 2025-11-05:

Update:

Jack Huey has been doing great work building out a system for analyzing interviews. We are currently looking at slicing the data along a few dimensions:

  • What you know (e.g., experience in other languages, how much experience with Rust)
  • What you are trying to do (e.g., application area)
  • Where you are trying to do it (e.g., country)

and asking essentially the same set of questions for each, e.g., what about Rust worked well, what did not work as well, what got you into Rust, etc.

Our plan is to prepare a draft of an RFC with some major conclusions and next steps, along with a repository with more detailed analysis (e.g., a deep dive into the Security Critical space).

rustc-perf improvements (rust-lang/rust-project-goals#275)
Progress
Point of contact

James

Champions

compiler (David Wood), infra (Jakub Beránek)

Task owners

James, Jakub Beránek, David Wood

1 detailed update available.

Comment by @Kobzol posted on 2025-11-19:

The new system has been running in production without any major issues for a few weeks now. In a few weeks, I plan to start using the second collector, and then announce the new system to Project members to tell them how they can use its new features.

Stabilize public/private dependencies (rust-lang/rust-project-goals#272)
Progress
Point of contact

Help Wanted

Champions

cargo (Ed Page)

Task owners

Help wanted, Ed Page

No detailed updates available.
Stabilize rustdoc `doc_cfg` feature (rust-lang/rust-project-goals#404)
Progress
Point of contact

Guillaume Gomez

Champions

rustdoc (Guillaume Gomez)

Task owners

Guillaume Gomez

No detailed updates available.
SVE and SME on AArch64 (rust-lang/rust-project-goals#270)
Progress
Point of contact

David Wood

Champions

compiler (David Wood), lang (Niko Matsakis), libs (Amanieu d'Antras)

Task owners

David Wood

2 detailed updates available.

Comment by @nikomatsakis posted on 2025-11-05:

Notes from our meeting today:

Syntax proposal: only keyword

We are exploring the use of a new only keyword to identify "special" bounds that will affect the default bounds applied to the type parameter. Under this proposal, T: SizeOfVal is a regular bound, but T: only SizeOfVal indicates that the T: const Sized default is suppressed.

For the initial proposal, only can only be applied to a known set of traits; one possible extension would be to permit traits with only supertraits to also have only applied to them:

trait MyDeref: only SizeOfVal { }
fn foo<T: only MyDeref>() { }

// equivalent to

trait MyDeref: only SizeOfVal { }
fn foo<T: MyDeref + only SizeOfVal>() { }

We discussed a few other syntactic options:

  • A ^SizeOfVal sigil was appealing due to the semver analogy but rejected on the basis of it being cryptic and hard to google.
  • The idea of applying the keyword to the type parameter only T: SizeOfVal sort of made sense, but it would not compose well if we add additional families of "opt-out" traits like Destruct and Forget, and it's not clear how it applies to supertraits.

Transitioning target

After testing, we confirmed that relaxing the Target bound will result in significant breakage without some kind of transitional measures.

We discussed the options for addressing this. One option would be to leverage the "Implementable trait aliases" RFC, but that would require a new trait (Deref20XX) that has a weaker bound and an alias trait Deref = Deref20XX<Target: only SizeOfVal>. That seems very disruptive.

Instead, we are considering an edition-based approach where (in Rust 2024) a T: Deref bound is defaulted to T: Deref<Target: only SizeOfVal> and (in Rust 20XX) T: Deref is defaulted to T: Deref<Target: only Pointee>. The edition transition would therefore convert bounds to one of those two forms to be fully explicit.

One caveat here is that this edition transition, if implemented naively, would result in stronger bounds than are needed much of the time. Therefore, we will explore the option of using bottom-up analysis to determine when transitioning whether the 20XX bound can be used instead of the more conservative 2024 bound.

Supertrait bounds

We explored the implications of weakening supertrait bounds a bit, looking at this example

trait FooTr<T: ?Sized> {}

struct Foo<T: ?Sized>(std::marker::PhantomData<T>);

fn bar<T: ?Sized>() {}

trait Bar: FooTr<Self> /*: no longer MetaSized */ {
  //       ^^^^^^^^^^^ error!
    // real examples are `Pin` and `TypeOf::of`:
    fn foo(&self, x: Foo<Self>) {
        //        ^^^^^^^^^^^^ error!
        bar::<Self>();
        // ^^^^^^^^^^ error!
          
      
        // real examples are in core::fmt and core::iter:
        trait DoThing {
            fn do_thing() {}
        }
        
        impl<T: ?Sized> DoThing for T {
            default fn do_thing() {}
        }
        
        impl<T: Sized> DoThing for T {
            fn do_thing() {}
        }
        
        self.do_thing();
        // ^^^^^^^^^^^^^ error!
        // specialisation case is not an issue because that feature isn't stable, we can adjust core, but is a hazard with expanding trait hierarchies in future if specialisation is ever stabilised
    }
}

The experimental_default_bounds work originally added Self: Trait bounds to default methods but moved away from that because it could cause region errors (source 1 / source 2). We expect the same would apply to us but we are not sure.

We decided not to do much on this; the focus remains on the Deref::Target transition as it has more uncertainty.

Comment by @davidtwco posted on 2025-11-22:

No progress since [Niko Matsakis's last comment](https://github.com/rust-lang/rust-project-goals/issues/270#issuecomment-3492255970) - intending to experiment with resolving challenges with Deref::Target and land the SVE infrastructure with unfinished parts for experimentation.

Type System Documentation (rust-lang/rust-project-goals#405)
Progress
Point of contact

Boxy

Champions

types (Boxy)

Task owners

Boxy, lcnr

2 detailed updates available.

Comment by @BoxyUwU posted on 2025-11-05:

A bit late on this update, but I sat down with lcnr a little while back and we tried to come up with a list of topics that we felt fell under type system documentation. This is an entirely unordered list and some topics may already be adequately covered in the dev guide.

Regardless this effectively serves as a "shiny future" for everything I'd like to have documentation about somewhere (be it dev guide or in-tree module level documentation):

  • opaque types
    • non defining vs defining uses
    • member constraints (borrowck overlap)
    • checking item bounds
    • high level normalization/opaque type storage approach (new solver)
    • normalization incompleteness
    • method/function incompleteness
    • how does use<...> work
    • 'erased regions causes problems with outlives item bounds in liveness
    • consistency across defining scopes
    • RPITIT inference? does this have special stuff
    • capturing of bound vars in opaques under binders, Fn bounds are somewhat special in relation to this
    • opaques inheriting late bound function parameters
  • non opaque type, impl Trait
    • RPITIT in traits desugaring
    • impl Trait in bindings
    • APIT desugaring impl details
  • const generics
    • anonymous constants
    • ConstArgHasType
    • TSVs vs RVs and generally upstream doc from lang meeting to dev guide
    • deterministic CTFE requirement
  • HIR typeck
    • expectations (and how used incorrectly :3)
    • method lookup + assorted code cleanups
    • coercions
    • auto-deref/reborrows (in coercions/method selection)
    • closure signature inference
    • fudge_inference_if_ok :>
    • diverging block handling :3
    • fallback :3
  • MIR borrowck
    • MIR typeck
      • why do we want two typecks
      • region dependent goals in new solver (interaction with lack-of region uniquification)
    • overlaps with opaque types
    • compute region graph
    • closure requirements
    • borrowck proper
  • compare predicate entailment :>
    • param env jank
    • implied bounds handling
  • trait objects: recent FCPs :3
    • dyn compatibility soundness interactions (see coerce pointee/arbitrary self types stuff)
    • dyn compatibility for impl reasons (monomorphization)
    • projection bounds handling
    • args not required for wf
  • ty::Infer in ty overview
  • generalization
  • coroutines
    • deferred coroutine obligations
    • witness types?
    • why -Zhigher-ranked-assumptions exists
  • binders and universes (exists<A> forall<B> A == B)
    • build more of an intuition than current docs :thinking_face:
  • talk about hr implied bounds there/be more explicit/clear in https://rustc-dev-guide.rust-lang.org/traits/implied-bounds.html?highlight=implied#proving-implicit-implied-bounds
  • incompleteness
    • what is it
    • what kinds are OK (not entirely sure yet. small explanation and add a note)
  • trait solving
    • cycles
    • general overview of how trait solving works as a concept (probably with example and handwritten proof trees)
      • important: first go "prove stuff by recursively proving nested requirements", then later introduce candidates
      • clauses/predicates
    • running pending goals in a loop
    • what kinds of incompleteness (overlap with opaques)
    • builtin impls and how to add them
  • hir to ty lowering :>
    • itemctxt vs fnctxt behaviours
    • normalization in lowering
    • lowering should be lossy
    • idempotency(?)
    • cycles from param env construction
    • const generics jank about Self and no generic parameters allowed
  • well formedness checking + wf disambiguation page
  • normalization & aliases
    • be more clear about normalizing ambig aliases to infer vars :thinking_face:
    • normalize when equating infer vars with aliases (overlap with generalization?)
    • item bounds checking
    • interactions with implied bounds (overlap with implied bounds and hir ty lowering)
  • variance

Since making this list I've started working on writing documentation about coercions/adjustments. So far this has mostly resulted in spending a lot of time reading the relevant code in rustc. I've discovered a few bugs and inconsistencies in behaviour and made some nice code cleanups, which should already be valuable for people learning how coercions are implemented. This can be seen in #147565

I intend to start actually writing stuff in the dev guide for coercions/adjustments now as that PR is almost done.

I also intend to use a zulip thread (#t-compiler/rustc-dev-guide > Type System Docs Rewrite) for more "lightweight" and informal updates on this project goal, as well as just miscellaneous discussion about work relating to this project goal.

Comment by @BoxyUwU posted on 2025-11-29:

I've made a tracking issue on the dev guide repo for this project goal: rust-lang/rustc-dev-guide#2663. I've also written documentation for coercions: rust-lang/rustc-dev-guide#2662. There have been a few extra additions to the list in the previous update.

Unsafe Fields (rust-lang/rust-project-goals#273)
Progress
Point of contact

Jack Wrenn

Champions

compiler (Jack Wrenn), lang (Scott McMurray)

Task owners

Jacob Pratt, Jack Wrenn, Luca Versari

No detailed updates available.

16 Dec 2025 12:00am GMT

15 Dec 2025


Wladimir Palant: Unpacking VStarcam firmware for fun and profit

One important player in the PPPP protocol business is VStarcam. At the very least they've already accumulated an impressive portfolio of security issues. Like exposing system configuration including the access password unprotected in the Web UI (discovered by multiple people independently from the look of it). Or the open telnet port accepting hardcoded credentials (definitely discovered by lots of people independently). In fact, these cameras have been seen used as part of a botnet, likely thanks to some documented vulnerabilities in their user interface.

Is that a thing of the past? Are there updates fixing these issues? Which devices can be updated? These questions are surprisingly hard to answer. I found zero information on VStarcam firmware versions, available updates or security fixes. In fact, it doesn't look like they ever even acknowledged learning about the existence of these vulnerabilities.

No way around downloading these firmware updates and having a look for myself. With surprising results. First of all: there are lots of firmware updates. It seems that VStarcam accumulated a huge number of firmware branches. And even though not all of them even have an active or downloadable update, the number of currently available updates goes into the hundreds.

And the other aspect: the variety of update formats is staggering, and often enough standard tools like binwalk aren't too useful. It took some time figuring out how to unpack some of the more obscure variants, so I'm documenting it all here.

Warning: Lots of quick-and-dirty Python code ahead. Minimal error checking, use at your own risk!

Contents

ZIP-packed incremental updates

These incremental updates don't contain an image of the entire system, only the files that need updating. They always contain the main application however, which is what matters.

Recognizing this format is easy: the files start with the 32 bytes www.object-camera.com.by.hongzx. or www.veepai.com/design.rock-peng. (the old and the new variant, respectively). The files end with the same string in reverse order. Everything in between is a sequence of ZIP files, with each file packed in its own ZIP file.

Each ZIP file is preceded by a 140 byte header: 64 byte directory name, 64 byte file name, 4 byte ZIP file size, 4 byte timestamp of some kind and 4 zero bytes. While binwalk can handle this format, having each file extracted into a separate directory structure isn't optimal. A simple Python script can do better:

#!/usr/bin/env python3
import datetime
import io
import struct
import os
import sys
import zipfile


def unpack_zip_stream(input: io.BytesIO, targetdir: str) -> None:
    targetdir = os.path.normpath(targetdir)
    while True:
        header = input.read(0x8c)
        if len(header) < 0x8c:
            break

        _, _, size, _, _ = struct.unpack('<64s64sLLL', header)
        data = input.read(size)

        with zipfile.ZipFile(io.BytesIO(data)) as archive:
            for member in archive.infolist():
                path = os.path.normpath(
                    os.path.join(targetdir, member.filename)
                )
                if os.path.commonprefix((path, targetdir)) != targetdir:
                    raise Exception('Invalid target path', path)

                try:
                    os.makedirs(os.path.dirname(path))
                except FileExistsError:
                    pass

                with archive.open(member) as member_input:
                    data = member_input.read()
                with open(path, 'wb') as output:
                    output.write(data)

                time = datetime.datetime(*member.date_time).timestamp()
                os.utime(path, (time, time))


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print(f'Usage: {sys.argv[0]} in-file target-dir', file=sys.stderr)
        sys.exit(1)

    if os.path.exists(sys.argv[2]):
        raise Exception('Target directory exists')

    with open(sys.argv[1], 'rb') as input:
        header = input.read(32)
        if (header != b'www.object-camera.com.by.hongzx.' and
                header != b'www.veepai.com/design.rock-peng.'):
            raise Exception('Wrong file format')
        unpack_zip_stream(input, sys.argv[2])

VStarcam pack system

This format is pretty simple. There is an identical section starting with VSTARCAM_PACK_SYSTEM_HEAD and ending with VSTARCAM_PACK_SYSTEM_TAIL at the start and at the end of the file. This section seems to contain a payload size and its MD5 hash.

There are two types of payload here. One is a raw SquashFS image starting with hsqs. These seem to be updates to the base system: they contain an entire Linux root filesystem and the Web UI root but not the actual application. The matching application lives on a different partition and is likely delivered via incremental updates.

The other variant seems to be used for hardware running LiteOS rather than Linux. The payload here starts with a 16 byte header: compressed size, uncompressed size and an 8 byte identification of the compression algorithm. The latter is usually gziphead, meaning standard gzip compression. After uncompressing you get a single executable binary containing the entire operating system, drivers, and the actual application.
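As a quick sketch of that header layout, unpacking a gzip-compressed LiteOS payload could look like the function below. Note that the little-endian field order and the exact field widths are assumptions based on the description above, not something the firmware documents:

```python
import gzip
import struct


def unpack_liteos_payload(data: bytes) -> bytes:
    """Unpack a LiteOS-style payload: 16 byte header, then gzip data.

    Header layout (assumed little-endian): 4 byte compressed size,
    4 byte uncompressed size, 8 byte compression id ('gziphead').
    """
    comp_size, uncomp_size, algo = struct.unpack('<LL8s', data[:16])
    if algo != b'gziphead':
        raise ValueError(f'Unsupported compression id: {algo!r}')
    result = gzip.decompress(data[16:16 + comp_size])
    if len(result) != uncomp_size:
        raise ValueError('Uncompressed size mismatch')
    return result
```

If the size fields turn out to be big-endian on some devices, only the struct format string needs adjusting.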

So far binwalk can handle all these files just fine. I found exactly one exception, firmware version 48.60.30.22. It seems to be another LiteOS-based update but the compression algorithm field is all zeroes. The actual compressed stream has some distinct features that make it look like none of the common compression algorithms.

Screenshot of a hexdump showing the first 160 and the last 128 bytes of a large file. The file starts with the bytes 30 c0 fb 54 and looks random except for two sequences of 14 identical bytes: ef at offset 0x24 and fb at offset 0x43. The file ending also looks random except for the closing sequence: ff ff 0f 00 00.

Well, I had to move on here, so that's the one update file I haven't managed to unpack.

VeePai updates

This is a format that seems to be used by newer VStarcam hardware. At offset 8 these files contain a firmware version like www.veepai.com-10.201.120.54. Offsets of the payload vary but it is a SquashFS image, so binwalk can be used to find and unpack it.

Normally these are updates for the partition where the VStarcam application resides. In a few cases, however, these update the Linux base system, with no application-specific files from what I could tell.
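If you'd rather skip binwalk, a minimal carving helper along those lines might look like this. It assumes the version string at offset 8 is printable ASCII and that the first occurrence of the little-endian SquashFS magic hsqs really is the superblock, which held for the files I describe but isn't guaranteed:

```python
def carve_veepai(data: bytes) -> tuple[str, bytes]:
    """Return the firmware version string and the SquashFS payload.

    Offsets follow the description above; treating the first 'hsqs'
    occurrence as the superblock start is an assumption.
    """
    # Version string at offset 8, e.g. www.veepai.com-10.201.120.54;
    # read printable ASCII until the first non-printable byte.
    end = 8
    while end < len(data) and 0x20 <= data[end] < 0x7f:
        end += 1
    version = data[8:end].decode('ascii')

    offset = data.find(b'hsqs')
    if offset < 0:
        raise ValueError('No SquashFS magic found')
    return version, data[offset:]
```

The carved payload can then be handed to unsquashfs or similar tooling.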

Ingenic updates

This format seems to be specific to the Ingenic hardware platform, and I've seen other hardware vendors use it as well. One noticeable feature here is the presence of a tag partition containing various data sections, e.g. the CMDL section encoding Linux kernel parameters.

In fact, looking for that tag partition within the update might be helpful to recognize the format. While the update files usually start with the 11 22 33 44 magic bytes, they sometimes start with a different byte combination. There is always the firmware version at offset 8 in the file however.

The total size of the file header is 40 bytes. It is followed by a sequence of partitions, each preceded by a 16 byte header where bytes 1 to 4 encode the partition index and bytes 9 to 12 the partition size.

Binwalk can recognize and extract some partitions but not all of them. If you prefer having all partitions extracted you can use a simple Python script:

#!/usr/bin/env python3
import io
import struct
import os
import sys


def unpack_ingenic_update(input: io.BytesIO, targetdir: str) -> None:
    os.makedirs(targetdir)

    input.read(40)
    while True:
        header = input.read(16)
        if len(header) < 16:
            break

        index, _, size, _ = struct.unpack('<LLLL', header)
        data = input.read(size)
        if len(data) < size:
            raise Exception('Unexpected end of data')

        path = os.path.join(targetdir, f'mtdblock{index}')
        with open(path, 'wb') as output:
            output.write(data)


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print(f'Usage: {sys.argv[0]} in-file target-dir', file=sys.stderr)
        sys.exit(1)

    with open(sys.argv[1], 'rb') as input:
        unpack_ingenic_update(input, sys.argv[2])

You will find some partitions rather tricky to unpack however.

LZO-compressed partitions

Some partitions contain a file name at offset 34, typically rootfs_camera.cpio. These are LZO-compressed but lack the usual magic bytes. Instead, the first four bytes contain the size of the compressed data in this partition. Once you replace these four bytes with 89 4c 5a 4f (removing trailing junk is optional), the partition can be decompressed with the regular lzop tool and the result fed into cpio to get the individual files.
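The fix-up above is easy to script. A minimal sketch; the fix_lzo_partition name is mine:

```python
#!/usr/bin/env python3

def fix_lzo_partition(data: bytes) -> bytes:
    """Restore the lzop magic so the regular lzop tool accepts the data."""
    # The first four bytes encode the compressed data size and replaced
    # the usual 89 4c 5a 4f magic; putting the magic back is enough.
    # (Removing trailing junk past the payload is optional.)
    return bytes.fromhex('894c5a4f') + data[4:]
```

The result can then be decompressed with `lzop -d` and the output fed into cpio as described above.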

Ingenic's jzlzma compression

Other Ingenic root partitions are more tricky. These also start with the data size, but it is followed by the bytes 56 19 05 27 (that's a uImage signature in reversed byte order). After that comes a compressed stream that sort of looks like LZMA but isn't LZMA. What's more, while binwalk will report that the Linux kernel is compressed via LZ4, it's actually the same strange compression mechanism: the bootloader of these systems pre-dates the introduction of LZ4, and the algorithm identifier used for this compression mechanism was later assigned to LZ4 by the upstream version of the bootloader.

What kind of compression is this? I've spent some time analyzing the bootloader but it turned out to be a red herring: apparently, the decompression is performed by hardware here, and the bootloader merely pushes the data into designated memory areas. Ugh!

At least the bootloader told me what it is called: jzlzma, apparently Ingenic's proprietary LZMA variant. A regular LZMA header starts with a byte encoding some compression properties (typically 5D), followed by a 4-byte dictionary size and an 8-byte uncompressed size. Ingenic's header lacks the compression properties byte, and the uncompressed size is merely 4 bytes. But even accounting for these differences, the stream cannot be decompressed with a regular LZMA decoder.
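To make the layout difference concrete, here is how the two headers would be parsed. This is a sketch; the function and field names are mine, and I'm assuming little-endian fields as in regular LZMA:

```python
import struct


def parse_lzma_header(data: bytes):
    # Regular LZMA: properties byte, 4-byte dictionary size,
    # 8-byte uncompressed size.
    props = data[0]
    dict_size, uncompressed_size = struct.unpack_from('<LQ', data, 1)
    return props, dict_size, uncompressed_size


def parse_jzlzma_header(data: bytes):
    # Ingenic's variant: no properties byte, and the uncompressed size
    # is only 4 bytes; the compressed stream follows at offset 8.
    return struct.unpack_from('<LL', data, 0)
```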

Luckily, with the algorithm name I found tools on GitHub that are meant to create firmware images for the Ingenic platform. These included an lzma binary which turned out to be an actual LZMA tool from 2005, hacked up to produce a second compressed stream in Ingenic's proprietary format.

As I found, Ingenic's format has essentially two differences from regular LZMA:

  1. Bit order: Ingenic encodes bits within bytes in reverse order. Also, some of the numbers (not all of them) are written to the bit stream in reversed bit order.
  2. Range coding: Ingenic doesn't do any range coding, instead encoding all numbers verbatim.

That second difference essentially turns LZMA into LZ77. Clearly, the issue here was the complexity of implementing probabilistic range coding in hardware. Of course, that change makes the resulting algorithm produce considerably worse compression ratios than LZMA and even worse than much simpler LZ77-derived algorithms like deflate. And there is plenty of hardware to do deflate decompression. But at least they managed to obfuscate the data…

My original thought was "fixing" their stream and turning it into proper LZMA. But range coding is not only complex but also context-dependent: it cannot be done without decompressing first. So I ended up just writing the decompression logic in Python, which luckily was much simpler than doing the same thing for LZMA proper.

Note: The following script is minimalistic and wasn't built for performance. Also, it expects a file that starts with a dictionary size (typically the bytes 00 00 01 00), so if you have some header preceding it you need to remove it first. It will also happily "uncompress" any trailing junk you might have there.

#!/usr/bin/env python3
import sys

kStartPosModelIndex, kEndPosModelIndex, kNumAlignBits = 4, 14, 4


def reverse_bits(n, bits):
    result = 0
    for i in range(bits):
        result <<= 1
        if n & (1 << i):
            result |= 1
    return result


def bit_stream(data):
    for byte in data:
        for bit in range(8):
            yield 1 if byte & (1 << bit) else 0


def read_num(stream, bits):
    num = 0
    for _ in range(bits):
        num = (num << 1) | next(stream)
    return num


def decode_length(stream):
    if next(stream) == 0:
        return read_num(stream, 3) + 2
    elif next(stream) == 0:
        return read_num(stream, 3) + 10
    else:
        return read_num(stream, 8) + 18


def decode_dist(stream):
    posSlot = read_num(stream, 6)
    if posSlot < kStartPosModelIndex:
        pos = posSlot
    else:
        numDirectBits = (posSlot >> 1) - 1
        pos = (2 | (posSlot & 1)) << numDirectBits
        if posSlot < kEndPosModelIndex:
            pos += reverse_bits(read_num(stream, numDirectBits), numDirectBits)
        else:
            pos += read_num(stream, numDirectBits -
                            kNumAlignBits) << kNumAlignBits
            pos += reverse_bits(read_num(stream, kNumAlignBits), kNumAlignBits)
    return pos


def jzlzma_decompress(data):
    stream = bit_stream(data)
    reps = [0, 0, 0, 0]
    decompressed = []
    try:
        while True:
            if next(stream) == 0:           # LIT
                byte = read_num(stream, 8)
                decompressed.append(byte)
            else:
                size = 0
                if next(stream) == 0:       # MATCH
                    size = decode_length(stream)
                    reps.insert(0, decode_dist(stream))
                    reps.pop()
                elif next(stream) == 0:
                    if next(stream) == 0:   # SHORTREP
                        size = 1
                    else:                   # LONGREP[0]
                        pass
                elif next(stream) == 0:     # LONGREP[1]
                    reps.insert(0, reps.pop(1))
                elif next(stream) == 0:     # LONGREP[2]
                    reps.insert(0, reps.pop(2))
                else:                       # LONGREP[3]
                    reps.insert(0, reps.pop(3))

                if size == 0:
                    size = decode_length(stream)

                curLen = len(decompressed)
                start = curLen - reps[0] - 1
                while size > 0:
                    end = min(start + size, curLen)
                    decompressed.extend(decompressed[start:end])
                    size -= end - start
    except StopIteration:
        return bytes(decompressed)


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print(f'Usage: {sys.argv[0]} in-file.jzlzma out-file', file=sys.stderr)
        sys.exit(1)

    with open(sys.argv[1], 'rb') as input:
        data = input.read()
    data = jzlzma_decompress(data[8:])
    with open(sys.argv[2], 'wb') as output:
        output.write(data)

The uncompressed root partition can be fed into the regular cpio tool to get the individual files.

Exotic Ingenic update

There was one update using a completely different format despite also being meant for the Ingenic hardware. This one started with the bytes a5 ef fe 5a and had a SquashFS image at offset 0x3000. The unpacked contents (binwalk will do) don't look like any of the other updates either: this definitely isn't a camera, and it doesn't have a PPPP implementation. Given the HDMI code I can only guess that this is a Network Video Recorder (NVR).

But what about these security issues?

As for those security issues, I am glad to report that VStarcam solved the telnet issue:

export PATH=/system/system/bin:$PATH
#telnetd
export LD_LIBRARY_PATH=/system/system/lib:/mnt/lib:$LD_LIBRARY_PATH
mount -t tmpfs none /tmp -o size=3m

/system/system/bin/brushFlash
/system/system/bin/updata
/system/system/bin/wifidaemon &
/system/system/bin/upgrade &

Yes, their startup script really has the telnetd call commented out. At least that's usually the case: there are updates from 2018 that no longer open the telnet port, and others from 2025 that still do. Don't ask me why. From what I can tell, the hardcoded administrator credentials are still universally present, but these are only problematic with the latter group.

It's a similar story with the system.ini file that was accessible without authentication. Some firmware versions moved this file to a different directory; others still have it in the web root. There is no real system behind it, and I even doubt that this was a security-motivated change rather than an adjustment to a different hardware platform.

15 Dec 2025 2:19pm GMT

The Servo Blog: November in Servo: monthly releases, context menus, parallel CSS parsing, and more!

Landing in Servo 0.0.3 and our November nightly builds, we now have context menus for links, images, and other web content (@atbrakhi, @mrobinson, #40434, #40501), vsync on Android (@mrobinson, #40306), light mode for the new tab page (@arihant2math, #40272), plus several web platform features:

Servo 0.0.3 showing new support for <use> in SVG, <details name>, and context menus

Font variations are now applied in 'font-weight' and 'font-stretch' (@simonwuelker, #40867), fixing a rendering issue in the Web Engines Hackfest website.

@kkoyung has landed some huge improvements in the SubtleCrypto API, including some of the more modern algorithms in this WICG draft, and a fix for constant-time base64 (@kkoyung, #40334). We now have full support for SHA3-256, SHA3-384, SHA3-512 (@kkoyung, #40765), cSHAKE128, cSHAKE256 (@kkoyung, #40832), Argon2d, Argon2i, Argon2id, ChaCha20-Poly1305, ECDH, ECDSA, and X25519:

Algorithm           deriveBits  exportKey  generateKey  importKey  sign    verify
Argon2d             #40936      n/a        n/a          #40932     n/a     n/a
Argon2i             #40936      n/a        n/a          #40932     n/a     n/a
Argon2id            #40936      n/a        n/a          #40932     n/a     n/a
ChaCha20-Poly1305   n/a         #40948     n/a          #40948     n/a     n/a
ECDH                #40333      #40298     #40305       #40253     n/a     n/a
ECDSA               n/a         #40536     #40553       #40523     #40591  #40557
X25519              #40497      #40421     #40480       #40398     n/a     n/a

<details> now fires 'toggle' events (@lukewarlow, #40271), and <details name> is now exclusive, like radio buttons (@simonwuelker, #40314). InputEvent, which represents 'input' and 'beforeinput' events, now has composed, data, isComposing, and inputType properties (@excitablesnowball, #39989).

Embedding API

Each webview can now have its own rendering context (@mrobinson, @mukilan, #40794, #40738, #40721, #40594, #40923). This effectively enables full support for multiple windows, and we've started incorporating that into servoshell (@mrobinson, @mukilan, #40883).

Our previously unused context menu API has been replaced with a new, more effective API that includes actions for links, images, and other web content (@mrobinson, @atbrakhi, #40402, #40501, #40607). For more details, see the docs for ContextMenu, EmbedderControl::ContextMenu, and WebViewDelegate::show_embedder_control().

WebView now has can_go_back() and can_go_forward() methods, and servoshell now uses those to disable the back and forward buttons (@mrobinson, #40598).

Having introduced our new RefreshDriver API in October, we've now removed Servo::animating() (@mrobinson, #40799) and ServoDelegate::notify_animating_changed() (@mrobinson, #40886), and similarly cleaned up the obsolete and inefficient "animating" state in servoshell (@mrobinson, #40715).

We've moved virtually all of the useful items in the Servo API to the root of the servo library crate (@mrobinson, #40951). This is a breaking change, but we expect that it will greatly simplify embedding Servo, and it means you can even write use servo::*; in a pinch.

When running Servo without a custom ClipboardDelegate, we use the system clipboard by default. But if there's no system clipboard, we now have a built-in fallback clipboard (@mrobinson, #40408), rather than having no clipboard at all. Note that the fallback clipboard is very limited, as it can only store text and does not work across processes.

Performance and stability

Servo now parses CSS in parallel with script and layout (@mrobinson, @vimpunk, #40639, #40556), and can now measure Largest Contentful Paint in PerformanceObserver as well as in our internal profiling tools (@shubhamg13, @boluochoufeng, #39714, #39384).

Just-in-time compilation (JIT) is now optional (@jschwe, #37972), which can be useful in situations where generating native code is forbidden by policy or unwanted for other reasons.

We've improved the performance of incremental layout (@Loirooriol, @mrobinson, #40795, #40797), touch input (@kongbai1996, #40637), animated GIF rendering (@mrobinson, #40158), the prefs subsystem (@webbeef, #40775), and parseFromString() on DOMParser (@webbeef, #40742). We also use fewer IPC resources when internal profiling features are disabled (@lumiscosity, #40823).

We've fixed a bug causing nytimes.com to hang (@jdm, #40811), as well as fixing crashes in Speedometer 3.0 and 3.1 (@Narfinger, #40459), grid layout (@nicoburns, #40821), the fonts subsystem (@simonwuelker, #40913), XPath (@simonwuelker, #40411), ReadableStream (@Taym95, #40911), AudioContext (@Taym95, #40729), and when exiting Servo (@mrobinson, #40933).

Donations

Thanks again for your generous support! We are now receiving 6433 USD/month (+11.8% over October) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers and one of our latest Outreachy interns, and funds maintainer work that helps more people contribute to Servo.

Servo is also on thanks.dev, and already 28 GitHub users (same as October) who depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. A big thanks from Servo to our newest Bronze Sponsors: Jenny & Phil Porada, Josh Aas, LambdaTest, and Sandwich! If you're interested in this kind of sponsorship, please contact us at join@servo.org.


Use of donations is decided transparently via the Technical Steering Committee's public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.

15 Dec 2025 12:00am GMT