23 Feb 2026

Planet Mozilla

Ludovic Hirlimann: Are Mozilla's forks any good?

To answer that question, we first need to understand how complex writing or maintaining a web browser is.

A "modern" web browser is :

Of course, all the above points interact with one another in different ways. In order for "the web" to work, standards are developed and then implemented in the different browsers' rendering engines.

In order to "make" the browser, you need engineers to write and maintain the code, which is probably around 30 Million lines of code[5] for Firefox. Once the code is written, it needs to be compiled [6] and tested [6]. This requires machines that run the operating system the browser ships to (As of this day, mozilla officially ships on Linux, Microslop Windows and MacOS X - community builds for *BSD do exists and are maintained). You need engineers to maintain the compile (build) infrastructure.

Once the engineers responsible for the releases [7] have decided which code and features are mature enough, they start assembling the bits of code and, like the engineers, build, test and ship the results to the people using said web browser.

When I was employed at Mozilla (the company that makes Firefox), around 900+ engineers were tasked with the above, and a few more were working on research and development. These engineers work 5 days a week, 8 hours per day: that's 1,872,000 hours of engineering brain power spent every year (it's actually less, because I have not taken vacations into account) on making Firefox versions. On top of that, you need to add the cost of building and running the tests before a new version reaches the end user.
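As a back-of-the-envelope check, the hours figure above follows from the stated assumptions (900 engineers, 8-hour days, 5-day weeks, and - as the author notes - no vacations, i.e. 52 working weeks):

```python
# Sanity-check the engineering-hours figure quoted in the post.
engineers = 900          # approximate headcount cited above
hours_per_day = 8
days_per_week = 5
weeks_per_year = 52      # assumption: no vacations, as the author notes

total_hours = engineers * hours_per_day * days_per_week * weeks_per_year
print(total_hours)       # 1872000 hours of engineering time per year
```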

The current browsing landscape looks dark: there are currently three choices of rendering engine - WebKit-based browsers (WebKit descends from KHTML), Blink-based ones and Gecko-based ones - and 90+% of the market is dominated by WebKit/Blink-based browsers (Blink is itself a fork of WebKit). This leads to less standards work: the major engine implements a feature and the others need to play catch-up to stay relevant. This happened in the 2000s when IE dominated the browser landscape [8], making it difficult to use Mac OS 9 or X (I'm not even mentioning Linux here :)). It also leads to most web developers using Chrome and only once in a while testing with Firefox or even Safari - and if there's a little glitch, they can still ship, because of market share.

Work on the Mozilla codebase that became Firefox started back in 1998, when embedding software was not really a thing, given all the platforms that were to be supported. Firefox is very hard to embed (i.e. to use as a software library and add stuff on top). I know that for a fact, because both Camino and Thunderbird embed Gecko.

In the last few years, Mozilla has been irritating the people I connect with, who are very privacy-focused and do not look kindly on what Mozilla does with Firefox. I believe that Mozilla does this in order to stay relevant to normal users. It needs to stay relevant for at least two reasons:

  1. Keep the web standards open, so anyone can implement a web browser / web services.
  2. To have enough traffic to be able to pay all the engineers working on Gecko.

Now that I've explained a few important things, let's answer the question "Are Mozilla's forks any good?"

I am biased, as I've worked for the company before. But how can a few people, even if they are good and have plenty of free time, cope with everything that maintaining a fork requires?

If you are comfortable with that, then using a fork because Mozilla is pushing stuff you don't want is probably doable. If not, you can always kill the features you don't like with some `about:config` magic.
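As an illustration of that `about:config` magic, the same preferences can be pinned with a `user.js` file in your Firefox profile directory. The sketch below shows commonly toggled telemetry and sponsored-content switches; pref names drift between releases, so verify each one against your own about:config before relying on it:

```javascript
// user.js - goes in the Firefox profile directory.
// Each user_pref() line forces a preference at startup.
// These example values disable telemetry upload and sponsored
// new-tab content; check the names in about:config first, as
// they can change between Firefox releases.
user_pref("toolkit.telemetry.unified", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("browser.newtabpage.activity-stream.showSponsored", false);
user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false);
```

Unlike changes made live in about:config, a `user.js` file reapplies these values on every startup, so an update or sync can't silently flip them back.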

Now, I've set a tone above that foresees a dark future for open web technologies. What can you do to keep the web open and with some privacy focus?

  1. Keep using Mozilla Nightly
  2. Give servo a try

[1] HTML is interpreted code; that's why it needs to be parsed and then rendered.

[2] In order to draw an image or a photo on a screen, you need to be able to decode it (and, to save one, encode it). Many file formats are available.

[3] JavaScript is a programming language that turns HTML into something that can interact with the person using the web browser. See https://developer.mozilla.org/en-US/docs/Glossary/JavaScript

[4] Operating systems need, at the very least, to know which program to open files with. The OS landscape has changed a lot over the last 25 years. These days you need to support 3 major OSes, while in the 2000s there were more systems, IRIX for example. Some portions of the Mozilla code base still support these long-dead systems.

[5] https://math.answers.com/math-and-arithmetic/How_many_lines_of_code_in_mozillafirefox

[6] Testing implies testing the code, and also having engineers or users use the unfinished product to check that it doesn't regress. Testing Mozilla is explained at https://ehsanakhgari.org/wp-content/uploads/talks/test-mozilla/

[7] Read "release" as "version": version 1.5 is a release, as is version 3.0.1.

[8] https://en.wikipedia.org/wiki/Browser_wars

23 Feb 2026 11:39am GMT

20 Feb 2026


Mozilla Privacy Blog: Behind the Velvet Rope: The AI Divide on Display at the India AI Impact Summit 2026

TLDR: No one could agree on what 'sovereignty' means, but (almost) everyone agreed that AI cannot be controlled by a few dominant companies.

This past week, droves of AI experts and enthusiasts descended on New Delhi, bringing their own agendas, priorities, and roles in the debate to the table.

I scored high for my ability to move between cars, rickshaws and foot for transport (mostly thanks to colleagues), but low for being prepared with snacks. So, based on my tightly packed agenda combined with high hunger levels, here's my readout:

The same script, different reactions

As with any global summit, the host government made the most of having the eyes of the world and the deep pockets of AI investors in town. While some press were critical of India seeking deals and investments, it wasn't notable - or outside the norm.

What was notable, and indeed reflected in the voluntary agreements, were the key themes that drove conversations: the democratisation of AI, access to resources, and the vital role of open source in driving the benefits of AI. These topics were prominent in the Summit sessions and side events throughout the week.

In the name of innovation, regulation has become a faux pas

The EU has become a magnet for criticism, given its recent attempts to regulate AI. I'm not going to debate this here, but it's clear that the EU AI Act (AIA) is being deployed and PRed quite expertly as a cautionary tale. While healthy debate around regulation is absolutely critical, much of the public commentary surrounding the AIA (and not just at this Summit) has been factually incorrect. Interrogate this reality by all means - we live in complex times - but it's hard not to see invalid criticisms as a strategic PR effort by those who philosophically (and financially) oppose governance. There is certainly plenty to question in the AIA, but the reality is much more nuanced than critics suggest.

What's more likely to kill a start-up: the cost of compliance, or the concentration of market power in the hands of a few dominant players? It's true that regulation can absolutely create challenges. However, it is also worth asking whether the greater obstacle is the control a small number of tech companies hold. A buy-out as an exit is great for many start-ups, but if that is now the most hopeful option, it raises important questions about the long-term health and competitiveness of the larger tech ecosystem.

A note of optimism: developing global frameworks on AI may still seem like a pipe dream in today's macro political climate, but ideas around like-minded powers working together and building trust make me think that alignment beyond purely voluntary principles may be something we see grow. Frequent references to the Hiroshima Process as a middle ground were notable.

AI eats the world

There were pervasive assumptions that bigger - and then bigger still - is the inevitable direction of AI deployment, with hyperscale seen as the only viable path forward in terms of the inputs needed. However, the magnitude of what's required to fuel the current LLM-focused market structure met a global-majority-focused reality: hyperscaling isn't sustainable. There were two primary camps at the Summit - the haves and the rest of us - and while the Summit brought them together, the gulf between them continues to grow.

Open source has to win

At the first AI Safety Summit in the UK, the concept of open source AI was vilified as a security risk. At the France AI Action Summit, the consensus began to shift meaningfully. At the India AI Impact Summit, we saw undeniable recognition of the vital role that open source plays in our collective AI future.

With proprietary systems, winning means owning. With open source approaches, winning means we're not just renting AI from a few companies and countries: we're working collectively to build, share, secure and inspect AI systems in the name of economic growth and the public interest. Before the Paris Summit, this was a difficult vision to push for, but after New Delhi, it's clear that open source is on the right side of history. Now, it's time for governments to build out their own strategies to promote and procure this approach.

Consolidation ≠ Competition

Global South discussions made one message clear: dependency-oriented partnerships are not true partnerships, and they're not a long-term bet. Many countries are becoming more vocal that they want autonomy over their data and choice in their suppliers, to lessen harmful impacts on citizens and increase their capacity to govern responsibly.

That is not today's reality.

I was, however, encouraged to find that attendees were far less starry-eyed over big tech than at previous Summits. The consensus was that a select few companies owning and controlling AI meets no one's definition of sovereignty.

Despite agreement amongst the majority, addressing market concentration remained an elephant in the room. The narrative deployed against regulation became a blanket mantra, applied to anything from AI governance to competition action. However, it fails to address the fact that the AI market is already skewed toward a small number of powerful companies, and that traditional competition rules that act only after problems arise (and often through long legal processes) are not enough to keep up with fast-paced digital industries.

Some participants were downbeat and questioned if it was too late. The challenge is in demonstrating that it isn't. There is no single approach, but we know that concentration can be countered with a mix of technical and legal interventions. Options can be sweeping, or lighter-touch and surgical in their focus. What we are currently seeing is a wave of countries passing, drafting, debating and considering new ex ante rules, providing learnings, data and inspiration.

It's important that we watch this space.

Whose safety are we talking about exactly?

The India AI Impact Summit has been criticised for letting safety discussions fall off the radar. That's not necessarily true. Instead of focusing on the view that AI is a cause of human annihilation, discussions focused on impacts that we can evidence now: on language, culture, bias, online safety, inclusion and jobs.

Less headline-grabbing, fewer killer robots, far more human.

The path forward

It's difficult to know if these Summits will continue in the long term. There is a lot of fuss, expense, marketing, diplomacy, traffic and word salads involved. However, the opportunity to convene world leaders, businesses, builders, engineers, civil society and academics in one place, for what we are constantly reminded is a transformational technology, feels needed. Tracking progress on voluntary commitments over time might be illustrative. And while many of the top sessions are reserved for the few, witnessing the diverse debates this past week gives me hope that there is an opportunity for the greater voice to shape AI to be open, competitive and built for more than just the bottom line.

The post Behind the Velvet Rope: The AI Divide on Display at the India AI Impact Summit 2026 appeared first on Open Policy & Advocacy.

20 Feb 2026 8:47pm GMT

19 Feb 2026


The Rust Programming Language Blog: Rust participates in Google Summer of Code 2026

We are happy to announce that the Rust Project will again be participating in Google Summer of Code (GSoC) 2026, as in the previous two years. If you're not eligible for or interested in participating in GSoC, then most of this post likely isn't relevant to you; if you are, it should contain some useful information and links.

Google Summer of Code (GSoC) is an annual global program organized by Google that aims to bring new contributors to the world of open-source. The program pairs organizations (such as the Rust Project) with contributors (usually students), with the goal of helping the participants make meaningful open-source contributions under the guidance of experienced mentors.

The organizations that have been accepted into the program have been announced by Google. The GSoC applicants now have several weeks to discuss project ideas with mentors. Later, they will send project proposals for the projects that they found the most interesting. If their project proposal is accepted, they will embark on a several months long journey during which they will try to complete their proposed project under the guidance of an assigned mentor.

We have prepared a list of project ideas that can serve as inspiration for potential GSoC contributors that would like to send a project proposal to the Rust organization. However, applicants can also come up with their own project ideas. You can discuss project ideas or try to find mentors in the #gsoc Zulip stream. We have also prepared a proposal guide that should help you with preparing your project proposals. We would also like to bring your attention to our GSoC AI policy.

You can start discussing the project ideas with Rust Project mentors and maintainers immediately, but you might want to keep the following important dates in mind:

If you are interested in contributing to the Rust Project, we encourage you to check out our project idea list and send us a GSoC project proposal! Of course, you are also free to discuss these projects and/or try to move them forward even if you do not intend to (or cannot) participate in GSoC. We welcome all contributors to Rust, as there is always enough work to do.

Our GSoC contributors were quite successful in the past two years (2024, 2025), so we are excited to see what this year's GSoC will bring! We hope that participants in the program can improve their skills, but we would also love for it to bring new contributors to the Project and increase awareness of Rust in general. Like last year, we expect to publish future blog posts with updates about our participation in the program.

19 Feb 2026 12:00am GMT