27 Feb 2026
Planet Mozilla
Niko Matsakis: How Dada enables internal references
In my previous Dada blog post, I talked about how Dada enables composable sharing. Today I'm going to start diving into Dada's permission system; permissions are Dada's equivalent to Rust's borrow checker.
Goal: richer, place-based permissions
Dada aims to exceed Rust's capabilities by using place-based permissions. Dada lets you write functions and types that capture both a value and things borrowed from that value.
As a fun example, imagine you are writing some Rust code to process a comma-separated list, just looking for entries of length 5 or more:
let list: String = format!("...something big, with commas...");
let items: Vec<&str> = list
.split(",")
.map(|s| s.trim()) // strip whitespace
.filter(|s| s.len() > 5)
.collect();
One of the cool things about Rust is how this code looks a lot like a high-level language such as Python or JavaScript. But in those languages the split call is going to be doing a lot of work, since it has to allocate tons of small strings, copying out the data. In Rust the &str values are just pointers into the original string, so split is very cheap. I love this.
On the other hand, suppose you want to package up some of those values, along with the backing string, and send them to another thread to be processed. You might think you can just make a struct like so…
struct Message {
list: String,
items: Vec<&str>,
// ----
// goal is to hold a reference
// to strings from list
}
…and then create the list and items and store them into it:
let list: String = format!("...something big, with commas...");
let items: Vec<&str> = /* as before */;
let message = Message { list, items };
// ----
// |
// This *moves* `list` into the struct.
// That in turn invalidates `items`, which
// is borrowed from `list`, so there is no
// way to construct `Message`.
But as experienced Rustaceans know, this will not work. When you have borrowed data like an &str, the data it borrows from cannot be moved. If you want to handle a case like this, you need to convert the &str values into indices, owned strings, or some other owned representation. Argh!
Dada's permissions use places, not lifetimes
Dada does things a bit differently. The first thing is that, when you create a reference, the resulting type names the place that the data was borrowed from, not the lifetime of the reference. So the type annotation for items would say ref[list] String¹ (at least, if you wanted to write out the full details rather than leaving it to the type inferencer):
let list: given String = "...something big, with commas..."
let items: given Vec[ref[list] String] = list
.split(",")
.map(_.trim()) // strip whitespace
.filter(_.len() > 5)
// ------- I *think* this is the syntax I want for closures?
// I forget what I had in mind, it's not implemented.
.collect()
I've blogged before about how I would like to redefine lifetimes in Rust to be places, as I feel that a type like ref[list] String is much easier to teach and explain: instead of having to explain that a lifetime references some part of the code, or what have you, you can say that "this is a String that references the variable list".
But what's also cool is that named places open the door to more flexible borrows. In Dada, if you wanted to package up the list and the items, you could build a Message type like so:
class Message(
list: String
items: Vec[ref[self.list] String]
// ---------
// Borrowed from another field!
)
// As before:
let list: String = "...something big, with commas..."
let items: Vec[ref[list] String] = list
.split(",")
.map(_.trim()) // strip whitespace
.filter(_.len() > 5)
.collect()
// Create the message, this is the fun part!
let message = Message(list.give, items.give)
Note that last line - Message(list.give, items.give). We can create a new class and move list into it along with items, which borrows from list. Neat, right?
OK, so let's back up and talk about how this all works.
References in Dada are the default
Let's start with syntax. Before we tackle the Message example, I want to go back to the Character example from previous posts, because it's a bit easier for explanatory purposes. Here is some Rust code that declares a struct Character, creates an owned copy of it, and then gets a few references into it.
struct Character {
name: String,
class: String,
hp: u32,
}
let ch: Character = Character {
name: format!("Ferris"),
class: format!("Rustacean"),
hp: 22
};
let p: &Character = &ch;
let q: &String = &p.name;
The Dada equivalent to this code is as follows:
class Character(
name: String,
klass: String,
hp: u32,
)
let ch: Character = Character("Tzara", "Dadaist", 22)
let p: ref[ch] Character = ch
let q: ref[p] String = p.name
The first thing to note is that, in Dada, the default when you name a variable or a place is to create a reference. So let p = ch doesn't move ch, as it would in Rust, it creates a reference to the Character stored in ch. You could also explicitly write let p = ch.ref, but that is not preferred. Similarly, let q = p.name creates a reference to the value in the field name. (If you wanted to move the character, you would write let ch2 = ch.give, not let ch2 = ch as in Rust.)
Notice that I said let p = ch "creates a reference to the Character stored in ch". In particular, I did not say "creates a reference to ch". That's a subtle choice of wording, but it has big implications.
References in Dada are not pointers
The reason I wrote that let p = ch "creates a reference to the Character stored in ch" and not "creates a reference to ch" is because, in Dada, references are not pointers. Rather, they are shallow copies of the value, very much like how we saw in the previous post that a shared Character acts like an Arc<Character> but is represented as a shallow copy.
So where in Rust the following code…
let ch = Character { ... };
let p = &ch;
let q = &ch.name;
…looks like this in memory…
# Rust memory representation
Stack Heap
───── ────
┌───► ch: Character {
│ ┌───► name: String {
│ │ buffer: ───────────► "Ferris"
│ │ length: 6
│ │ capacity: 12
│ │ },
│ │ ...
│ │ }
│ │
└──── p
│
└── q
in Dada, code like this
let ch = Character(...)
let p = ch
let q = ch.name
would look like so
# Dada memory representation
Stack Heap
───── ────
ch: Character {
name: String {
buffer: ───────┬───► "Ferris"
length: 6 │
capacity: 12 │
}, │
.. │
} │
│
p: Character { │
name: String { │
buffer: ───────┤
length: 6 │
capacity: 12 │
... │
} │
} │
│
q: String { │
buffer: ───────────────┘
length: 6
capacity: 12
}
Clearly, the Dada representation takes up more memory on the stack. But note that it doesn't duplicate the memory in the heap, which tends to be where the vast majority of the data is found.
Dada talks about values not references
This gets at something important. Rust, like C, makes pointers first-class. So given x: &String, x refers to the pointer and *x refers to its referent, the String.
Dada, like Java, goes another way. x: ref String is a String value - including in memory representation! The difference between a given String, a shared String, and a ref String is not in their memory layout (all of them are the same) but in whether they own their contents.²
So in Dada, there is no *x operation to go from "pointer" to "referent". That doesn't make sense. Your variable always contains a string, but the permissions you have to use that string will change.
In fact, the goal is that people don't have to learn the memory representation as they learn Dada: you are supposed to be able to think of Dada variables as if they were all objects on the heap, just like in Java or Python, even though in fact they are stored on the stack.³
Rust does not permit moves of borrowed data
In Rust, you cannot move values while they are borrowed. So if you have code like this that moves ch into ch1…
let ch = Character { ... };
let name = &ch.name; // create reference
let ch1 = ch; // moves `ch`
…then this code only compiles if name is not used again:
let ch = Character { ... };
let name = &ch.name; // create reference
let ch1 = ch; // ERROR: cannot move while borrowed
let name1 = name; // use reference again
…but Dada can
There are two reasons that Rust forbids moves of borrowed data:
- References are pointers, so those pointers may become invalidated. In the example above, name points to the stack slot for ch, so if ch were to be moved into ch1, that would make the reference invalid.
- The type system would lose track of things. Internally, the Rust borrow checker has a kind of "indirection". It knows that ch is borrowed for some span of the code (a "lifetime"), and it knows that the lifetime in the type of name is related to that lifetime, but it doesn't really know that name is borrowed from ch in particular.⁴
Neither of these apply to Dada:
- Because references are not pointers into the stack, but rather shallow copies, moving the borrowed value doesn't invalidate their contents. They remain valid.
- Because Dada's types reference actual variable names, we can modify them to reflect moves.
Dada tracks moves in its types
OK, let's revisit that Rust example that was giving us an error. When we convert it to Dada, we find that it type checks just fine:
class Character(...) // as before
let ch: given Character = Character(...)
let name: ref[ch.name] String = ch.name
// -- originally it was borrowed from `ch`
let ch1 = ch.give
// ------- but `ch` was moved to `ch1`
let name1: ref[ch1.name] String = name
// --- now it is borrowed from `ch1`
Woah, neat! We can see that when we move from ch into ch1, the compiler updates the types of the variables around it. So actually the type of name changes to ref[ch1.name] String. And then when we move from name to name1, that's totally valid.
In PL land, updating the type of a variable from one thing to another is called a "strong update". Obviously things can get a bit complicated when control-flow is involved, e.g., in a situation like this:
let ch = Character(...)
let ch1 = Character(...)
let name = ch.name
if some_condition_is_true() {
// On this path, the type of `name` changes
// to `ref[ch1.name] String`, and so `ch`
// is no longer considered borrowed.
ch1 = ch.give
ch = Character(...) // not borrowed, we can mutate
} else {
// On this path, the type of `name`
// remains unchanged, and `ch` is borrowed.
}
// Here, the types are merged, so the
// type of `name` is `ref[ch.name, ch1.name] String`.
// Therefore, `ch` is considered borrowed here.
Renaming lets us call functions with borrowed values
OK, let's take the next step. Let's define a Dada function that takes an owned value and another value borrowed from it, like the name, and then call it:
fn character_and_name(
ch1: given Character,
name1: ref[ch1.name] String,
) {
// ... does something ...
}
We could call this function like so, as you might expect:
let ch = Character(...)
let name = ch.name
character_and_name(ch.give, name)
So…how does this work? Internally, the type checker type-checks a function call by creating a simpler snippet of code, essentially, and then type-checking that. It's like desugaring but only at type-check time. In this simpler snippet, there are a series of let statements to create temporary variables for each argument. These temporaries always have an explicit type taken from the method signature, and they are initialized with the values of each argument:
// type checker "desugars" `character_and_name(ch.give, name)`
// into more primitive operations:
let tmp1: given Character = ch.give
// --------------- -------
// | taken from the call
// taken from fn sig
let tmp2: ref[tmp1.name] String = name
// --------------------- ----
// | taken from the call
// taken from fn sig,
// but rewritten to use the new
// temporaries
If this type checks, then the type checker knows you have supplied values of the required types, and so this is a valid call. Of course there are a few more steps, but that's the basic idea.
Notice what happens if you supply data borrowed from the wrong place:
let ch = Character(...)
let ch1 = Character(...)
character_and_name(ch.give, ch1.name)
//                          -------- wrong place!
This will fail to type check because you get:
let tmp1: given Character = ch.give
let tmp2: ref[tmp1.name] String = ch1.name
// --------
// has type `ref[ch1.name] String`,
// not `ref[tmp1.name] String`
Class constructors are "just" special functions
So now, if we go all the way back to our original example, we can see how the Message example worked:
class Message(
list: String
items: Vec[ref[self.list] String]
)
Basically, when you construct a Message(list, items), that's "just another function call" from the type system's perspective, except that self in the signature is handled carefully.
This is modeled, not implemented
I should be clear: this system is modeled in the dada-model repository, which implements a kind of "mini Dada" that captures what I believe to be the most interesting bits. I'm working on fleshing out that model a bit more, but it's got most of what I showed you here.⁵ For example, here is a test that you get an error when you give a reference to the wrong value.
The "real implementation" is lagging quite a bit, and doesn't really handle the interesting bits yet. Scaling it up from model to real implementation involves solving type inference and some other thorny challenges, and I haven't gotten there yet - though I have some pretty interesting experiments going on there too, in terms of the compiler architecture.⁶
This could apply to Rust
I believe we could apply most of this system to Rust. Obviously we'd have to rework the borrow checker to be based on places, but that's the straightforward part. The harder bit is the fact that &T is a pointer in Rust, and that we cannot readily change. However, for many use cases of self-references, this isn't as important as it sounds. Often, the data you wish to reference lives in the heap, and so the pointer isn't actually invalidated when the original value is moved.
Consider our opening example. You might imagine Rust allowing something like this:
struct Message {
list: String,
items: Vec<&{self.list} str>,
}
In this case, the str data is heap-allocated, so moving the string doesn't actually invalidate the &str value (it would invalidate an &String value, interestingly).
In Rust today, the compiler doesn't know all the details of what's going on. String has a Deref impl and so it's quite opaque whether str is heap-allocated or not. But we are working on various changes to this system in the Beyond the & goal, most notably the Field Projections work. There is likely some opportunity to address this in that context, though to be honest I'm behind in catching up on the details.
1. I'll note in passing that Dada unifies str and String into one type as well. I'll talk in detail about how that works in a future blog post. ↩︎
2. This is kind of like C++ references (e.g., String&), which also act "as if" they were a value (i.e., you write s.foo(), not s->foo()), but a C++ reference is truly a pointer, unlike a Dada ref. ↩︎
3. This goal was in part inspired by a conversation I had early on within Amazon, where a (quite experienced) developer told me, "It took me months to understand what variables are in Rust". ↩︎
4. I explained this some years back in a talk on Polonius at Rust Belt Rust, if you'd like more detail. ↩︎
5. No closures or iterator chains! ↩︎
6. As a teaser, I'm building it in async Rust, where each inference variable is a "future" and I use "await" to find out when other parts of the code might have added constraints. ↩︎
27 Feb 2026 10:20am GMT
26 Feb 2026
Hacks.Mozilla.Org: Making WebAssembly a first-class language on the Web
This post is an expanded version of a presentation I gave at the 2025 WebAssembly CG meeting in Munich.
WebAssembly has come a long way since its first release in 2017. The first version of WebAssembly was already a great fit for low-level languages like C and C++, and immediately enabled many new kinds of applications to efficiently target the web.
Since then, the WebAssembly CG has dramatically expanded the core capabilities of the language, adding shared memories, SIMD, exception handling, tail calls, 64-bit memories, and GC support, alongside many smaller improvements such as bulk memory instructions, multiple returns, and reference values.
These additions have allowed many more languages to efficiently target WebAssembly. There's still more important work to do, like stack switching and improved threading, but WebAssembly has narrowed the gap with native in many ways.
Yet, it still feels like something is missing that's holding WebAssembly back from wider adoption on the Web.
There are multiple reasons for this, but the core issue is that WebAssembly is a second-class language on the web. For all of the new language features, WebAssembly is still not integrated with the web platform as tightly as it should be.
This leads to a poor developer experience, which pushes developers to only use WebAssembly when they absolutely need it. Oftentimes JavaScript is simpler and "good enough". This means its users tend to be large companies with enough resources to justify the investment, which then limits the benefits of WebAssembly to only a small subset of the larger Web community.
Solving this issue is hard, and the CG has been focused on extending the WebAssembly language. Now that the language has matured significantly, it's time to take a closer look at this. We'll go deep into the problem, before talking about how WebAssembly Components could improve things.
What makes WebAssembly second-class?
At a very high level, the scripting part of the web platform is layered like this:
[Diagram: the layering of the scripting platform - WebAssembly sits on top of JavaScript, which sits on top of the web platform APIs]
WebAssembly can directly interact with JavaScript, which can directly interact with the web platform. WebAssembly can access the web platform, but only by using the special capabilities of JavaScript. JavaScript is a first-class language on the web, and WebAssembly is not.
This wasn't an intentional or malicious design decision; JavaScript is the original scripting language of the Web and co-evolved with the platform. Nonetheless, this design significantly impacts users of WebAssembly.
What are these special capabilities of JavaScript? For today's discussion, there are two major ones:
- Loading of code
- Using Web APIs
Loading of code
WebAssembly code is unnecessarily cumbersome to load. Loading JavaScript code is as simple as just putting it in a script tag:
<script src="script.js"></script>
WebAssembly is not supported in script tags today, so developers need to use the WebAssembly JS API to manually load and instantiate code.
let bytecode = fetch(import.meta.resolve('./module.wasm'));
let imports = { ... };
let { exports } =
await WebAssembly.instantiateStreaming(bytecode, imports);
The exact sequence of API calls to use is arcane, and there are multiple ways to perform this process, each of which has different tradeoffs that are not clear to most developers. This process generally just needs to be memorized or generated by a tool for you.
Thankfully, there is the esm-integration proposal, which is already implemented in bundlers today and which we are actively implementing in Firefox. This proposal lets developers import WebAssembly modules from JS code using the familiar JS module system.
import { run } from "/module.wasm";
run();
In addition, it allows a WebAssembly module to be loaded directly from a script tag using type="module":
<script type="module" src="/module.wasm"></script>
This streamlines the most common patterns for loading and instantiating WebAssembly modules. However, while this mitigates the initial difficulty, we quickly run into the real problem.
Using Web APIs
Using a Web API from JavaScript is as simple as this:
console.log("hello, world");
For WebAssembly, the situation is much more complicated. WebAssembly has no direct access to Web APIs and must use JavaScript to access them.
The same single-line console.log program requires the following JavaScript file:
// We need access to the raw memory of the Wasm code, so
// create it here and provide it as an import.
let memory = new WebAssembly.Memory(...);
function consoleLog(messageStartIndex, messageLength) {
// The string is stored in Wasm memory, but we need to
// decode it into a JS string, which is what DOM APIs
// require.
let messageMemoryView = new Uint8Array(
memory.buffer, messageStartIndex, messageLength);
let messageString =
new TextDecoder().decode(messageMemoryView);
// Wasm can't get the `console` global, or do
// property lookup, so we do that here.
return console.log(messageString);
}
// Pass the wrapped Web API to the Wasm code through an
// import.
let imports = {
"env": {
"memory": memory,
"consoleLog": consoleLog,
},
};
let { instance } =
await WebAssembly.instantiateStreaming(bytecode, imports);
instance.exports.run();
And the following WebAssembly file:
(module
;; import the memory from JS code
(import "env" "memory" (memory 0))
;; import the JS consoleLog wrapper function
(import "env" "consoleLog"
(func $consoleLog (param i32 i32))
)
;; export a run function
(func (export "run")
(local $messageStartIndex i32)
(local $messageLength i32)
;; create a string in Wasm memory, store in locals
...
;; call the consoleLog method
local.get $messageStartIndex
local.get $messageLength
call $consoleLog
)
)
Code like this is called "bindings" or "glue code" and acts as the bridge between your source language (C++, Rust, etc.) and Web APIs.
This glue code is responsible for re-encoding WebAssembly data into JavaScript data and vice versa. For example, when returning a string from JavaScript to WebAssembly, the glue code may need to call a malloc function in the WebAssembly module and re-encode the string at the resulting address, after which the module is responsible for eventually calling free.
This is all very tedious, formulaic, and difficult to write, so it is typical to generate this glue automatically using tools like embind or wasm-bindgen. This streamlines the authoring process, but adds complexity to the build process that native platforms typically do not require. Furthermore, this build complexity is language-specific; Rust code will require different bindings from C++ code, and so on.
Of course, the glue code also has runtime costs. JavaScript objects must be allocated and garbage collected, strings must be re-encoded, structs must be deserialized. Some of this cost is inherent to any bindings system, but much of it is not. This is a pervasive cost that you pay at the boundary between JavaScript and WebAssembly, even when the calls themselves are fast.
This is what most people mean when they ask "When is Wasm going to get DOM support?" It's already possible to access any Web API with WebAssembly, but it requires JavaScript glue code.
Why does this matter?
From a technical perspective, the status quo works. WebAssembly runs on the web and many people have successfully shipped software with it.
From the average web developer's perspective, though, the status quo is subpar. WebAssembly is too complicated to use on the web, and you can never escape the feeling that you're getting a second class experience. In our experience, WebAssembly is a power user feature that average developers don't use, even if it would be a better technical choice for their project.
The average developer experience for someone getting started with JavaScript is something like this:
[Diagram: the JavaScript learning curve - a gradual slope of increasing complexity]
There's a nice gradual curve where you use progressively more complicated features as the scope of your project increases.
By comparison, the average developer experience for someone getting started with WebAssembly is something like this:
[Diagram: the WebAssembly learning curve - a steep "wall" right at the start]
You must immediately scale "the wall" of wrangling the many different pieces to work together. The end result is often only worth it for large projects.
Why is this the case? There are several reasons, and they all directly stem from WebAssembly being a second class language on the web.
1. It's difficult for compilers to provide first-class support for the web
Any language targeting the web can't just generate a Wasm file, but also must generate a companion JS file to load the Wasm code, implement Web API access, and handle a long tail of other issues. This work must be redone for every language that wants to support the web, and it can't be reused for non-web platforms.
Upstream compilers like Clang/LLVM don't want to know anything about JS or the web platform, and not just for lack of effort. Generating and maintaining JS and web glue code is a specialty skill that is difficult for already stretched-thin maintainers to justify. They just want to generate a single binary, ideally in a standardized format that can also be used on platforms besides the web.
2. Standard compilers don't produce WebAssembly that works on the web
The result is that support for WebAssembly on the web is often handled by third-party unofficial toolchain distributions that users need to find and learn. A true first-class experience would start with the tool that users already know and have installed.
This is, unfortunately, many developers' first roadblock when getting started with WebAssembly. They assume that if they just have rustc installed and pass a --target=wasm flag, they'll get something they can load in a browser. You may be able to get a WebAssembly file that way, but it will not have any of the required platform integration. If you figure out how to load the file using the JS API, it will fail for mysterious and hard-to-debug reasons. What you really need is the unofficial toolchain distribution which implements the platform integration for you.
3. Web documentation is written for JavaScript developers
The web platform has incredible documentation compared to most tech platforms. However, most of it is written for JavaScript. If you don't know JavaScript, you'll have a much harder time understanding how to use most Web APIs.
A developer wanting to use a new Web API must first understand it from a JavaScript perspective, then translate it into the types and APIs that are available in their source language. Toolchain developers can try to manually translate the existing web documentation for their language, but that is a tedious and error prone process that doesn't scale.
4. Calling Web APIs can still be slow
If you look at all of the JS glue code for the single call to console.log above, you'll see that there is a lot of overhead. Engines have spent a lot of time optimizing this, and more work is underway. Yet this problem still exists. It doesn't affect every workload, but it's something every WebAssembly user needs to be careful about.
Benchmarking this is tricky, but we ran an experiment in 2020 to precisely measure the overhead that JS glue code has in a real world DOM application. We built the classic TodoMVC benchmark in the experimental Dodrio Rust framework and measured different ways of calling DOM APIs.
Dodrio was perfect for this because it computed all the required DOM modifications separately from actually applying them. This allowed us to precisely measure the impact of JS glue code by swapping out the "apply DOM change list" function while keeping the rest of the benchmark exactly the same.
We tested two different implementations:
- "Wasm + JS glue": A WebAssembly function which reads the change list in a loop, and then asks JS glue code to apply each change individually. This is the performance of WebAssembly today.
- "Wasm only": A WebAssembly function which reads the change list in a loop, and then uses an experimental direct binding to the DOM which skips JS glue code. This is the performance of WebAssembly if we could skip JS glue code.
[Chart: TodoMVC benchmark results - time to apply DOM changes for "Wasm + JS glue" vs "Wasm only"]
The duration to apply the DOM changes dropped by 45% when we were able to remove JS glue code. DOM operations can already be expensive; WebAssembly users can't afford to pay a 2x performance tax on top of that. And as this experiment shows, it is possible to remove the overhead.
5. You always need to understand the JavaScript layer
There's a saying that "abstractions are always leaky".
The state of the art for WebAssembly on the web is that every language builds their own abstraction of the web platform using JavaScript. But these abstractions are leaky. If you use WebAssembly on the web in any serious capacity, you'll eventually hit a point where you need to read or write your own JavaScript to make something work.
This adds a conceptual layer which is a burden for developers. It feels like it should just be enough to know your source language, and the web platform. Yet for WebAssembly, we require users to also know JavaScript in order to be a proficient developer.
How can we fix this?
This is a complicated technical and social problem, with no single solution. We also have competing priorities for what is the most important problem with WebAssembly to fix first.
Let's ask ourselves: In an ideal world, what could help us here?
What if we had something that was:
- A standardized self-contained executable artifact
- Supported by multiple languages and toolchains
- Which handles loading and linking of WebAssembly code
- Which supports Web API usage
If such a thing existed, languages could generate these artifacts and browsers could run them, without any JavaScript involved. This format would be easier for languages to support and could potentially exist in standard upstream compilers, runtimes, toolchains, and popular packages without the need for third-party distributions. In effect, we could go from a world where every language re-implements the web platform integration using JavaScript, to sharing a common one that is built directly into the browser.
It would obviously be a lot of work to design and validate a solution! Thankfully, we already have a proposal with these goals that has been in development for years: the WebAssembly Component Model.
What is a WebAssembly Component?
For our purposes, a WebAssembly Component defines a high-level API that is implemented with a bundle of low-level WebAssembly code. It's a standards-track proposal in the WebAssembly CG that's been in development since 2021.
Already today, WebAssembly Components…
- Can be created from many different programming languages.
- Can be executed in many different runtimes (including in browsers today, with a polyfill).
- Can be linked together to allow code re-use between different languages.
- Allow WebAssembly code to directly call Web APIs.
If you're interested in more details, check out the Component Book or watch "What is a Component?".
We feel that WebAssembly Components have the potential to give WebAssembly a first-class experience on the web platform, and to be the missing link described above.
How could they work?
Let's try to re-create the earlier console.log example using only WebAssembly Components and no JavaScript.
NOTE: The interactions between WebAssembly Components and the web platform have not been fully designed, and the tooling is under active development.
Take this as an aspiration for how things could be, not a tutorial or promise.
The first step is to specify which APIs our application needs. This is done using an IDL called WIT. For our example, we need the Console API. We can import it by specifying the name of the interface.
component {
import std:web/console;
}
The std:web/console interface does not exist today, but would hypothetically come from the official WebIDL that browsers use for describing Web APIs. This particular interface might look like this:
package std:web;
interface console {
log: func(msg: string);
...
}
Now that we have the above interfaces, we can use them when writing a Rust program that compiles to a WebAssembly Component:
use std::web::console;
fn main() {
console::log("hello, world");
}
Once we have a component, we can load it into the browser using a script tag.
<script type="module" src="component.wasm"></script>
And that's it! The browser would automatically load the component, bind the native web APIs directly (without any JS glue code), and run the component.
This is great if your whole application is written in WebAssembly. However, most WebAssembly usage is part of a "hybrid application" which also contains JavaScript. We also want to simplify this use case. The web platform shouldn't be split into "silos" that can't interact with each other. Thankfully, WebAssembly Components also address this by supporting cross-language interoperability.
Let's create a component that exports an image decoder for use from JavaScript code. First we need to write the interface that describes the image decoder:
interface image-lib {
  record pixel {
    r: u8,
    g: u8,
    b: u8,
    a: u8,
  }
  resource image {
    from-stream: static async func(bytes: stream<u8>) -> result<image>;
    get: func(x: u32, y: u32) -> pixel;
  }
}
component {
export image-lib;
}
Once we have that, we can write the component in any language that supports components. The right language will depend on what you're building or what libraries you need to use. For this example, I'll leave the implementation of the image decoder as an exercise for the reader.
The component can then be loaded in JavaScript as a module. The image decoder interface we defined is accessible to JavaScript and can be used as if you had imported a JavaScript library for the task.
import { Image } from "image-lib.wasm";
let byteStream = (await fetch("/image.file")).body;
let image = await Image.fromStream(byteStream);
let pixel = image.get(0, 0);
console.log(pixel); // { r: 255, g: 255, b: 0, a: 255 }
Next Steps
As it stands today, we think that WebAssembly Components would be a step in the right direction for the web. Mozilla is working with the WebAssembly CG to design the WebAssembly Component Model. Google is also evaluating it at this time.
If you'd like to try this out, learn to build your first component and run it in the browser using Jco or from the command line using Wasmtime. The tooling is under heavy development, and contributions and feedback are welcome. If you're interested in the in-development specification itself, check out the component-model proposal repository.
WebAssembly has come very far from when it was first released in 2017. I think the best is yet to come if we can turn it from a "power user" feature into something that average developers can benefit from.
The post Making WebAssembly a first-class language on the Web appeared first on Mozilla Hacks - the Web developer blog.
26 Feb 2026 4:02pm GMT
25 Feb 2026
Planet Mozilla
This Week In Rust: This Week in Rust 640
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
Foundation
Project/Tooling Updates
- Zed: Split Diffs are Here
- CHERIoT Rust: Status update #0
- SeaORM now supports Arrow & Parquet
- Releasing bincode-next v3.0.0-rc.1
- Introducing Almonds
- SafePilot v0.1: self-hosted AI assistant
- Hitbox 0.2.0: declarative cache orchestration
Observations/Thoughts
- What it means that Ubuntu is using Rust
- Read Locks Are Not Your Friends
- Achieving Zero Bugs: Rust, Specs, and AI Coding
- [video] device-envoy: Making Embedded Fun with Rust-by Carl Kadie
Rust Walkthroughs
- About memory pressure, lock contention, and Data-oriented Design
- Breaking SHA-2: length extension attacks in practice with Rust
- device-envoy: Making Embedded Fun with Rust, Embassy, and Composable Device Abstractions
Research
Miscellaneous
Crate of the Week
This week's crate is docstr, a crate providing a macro that creates multiline strings out of doc comments.
Thanks to Nik Revenco for the self-suggestion!
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
No calls for testing were issued this week by Rust, Cargo, Rustup or Rust language RFCs.
Let us know if you would like your feature to be tracked as a part of this list.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
No Calls for participation were submitted this week.
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on Bluesky or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
- Rust India Conference 2026 | CFP open until 2026-03-14 | Bangalore, IN | 2026-04-18
- Oxidize Conference | CFP open until 2026-03-23 | Berlin, Germany | 2026-09-14 - 2026-09-16
- EuroRust | CFP open until 2026-04-27 | Barcelona, Spain | 2026-10-14 - 2026-10-17
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on Bluesky or Mastodon!
Updates from the Rust Project
450 pull requests were merged in the last week
Compiler
- bring back `enum DepKind`
- simplify the canonical `enum` clone branches to a copy statement
- stabilize `if let` guards (`feature(if_let_guard)`)
Library
- add `try_shrink_to` and `try_shrink_to_fit` to `Vec`
- fixed `ByteStr` not padding within its `Display` trait when no specific alignment is mentioned
- reflection `TypeId::trait_info_of`
- reflection `TypeKind::FnPtr`
- just pass `Layout` directly to `box_new_uninit`
- stabilize `cfg_select!`
Cargo
- cli: Remove `--lockfile-path`
- job_queue: Handle Clippy CLI arguments in `fix` message
- fix parallel locking when `-Zfine-grain-locking` is enabled
Clippy
- add `unnecessary_trailing_comma` lint
- add new `disallowed_fields` lint
- `clone_on_ref_ptr`: don't add a `&` to the receiver if it's a reference
- `needless_maybe_sized`: don't lint in proc-macro-generated code
- `str_to_string`: false positive non-str types
- `useless_conversion`: also fire inside compiler desugarings
- add `allow-unwrap-types` configuration for `unwrap_used` and `expect_used`
- add brackets around unsafe or labeled block used in `else`
- allow `deprecated(since = "CURRENT_RUSTC_VERSION")`
- do not suggest removing reborrow of a captured upvar
- enhance `collapsible_match` to cover if-elses
- enhance `manual_is_variant_and` to cover `filter` chaining `is_some`
- fix `explicit_counter_loop` false negative when loop counter starts at non-zero
- fix `join_absolute_paths` to work correctly depending on the platform
- fix `redundant_iter_cloned` false positive with move closures and coroutines
- fix `unnecessary_min_or_max` for `usize`
- fix panic/assert message detection in edition 2015/2018
- handle `Result<T, !>` and `ControlFlow<!, T>` as `T` wrt `#[must_use]`
- make `unchecked_time_subtraction` better handle `Duration` literals
- make `unnecessary_fold` commutative
- the path from a type to itself is `Self`
Rust-Analyzer
- add partial selection for `generate_getter_or_setter`
- offer block let fallback postfix complete
- offer on `is_some_and` for `replace_is_method_with_if_let_method`
- fix some `TryEnum` reference assists
- add handling for cycles in `sizedness_constraint_for_ty()`
- better import placement + merging
- complete `.let` on block tail prefix expression
- complete derive helpers on empty nameref
- correctly parenthesize inverted condition in `convert_if_to_bool_…`
- exclude macro refs in tests when excludeTests is enabled
- fix another case where we forgot to put the type param for `PartialOrd` and `PartialEq` in builtin derives
- fix predicates of builtin derive traits with two parameters defaulting to `Self`
- generate method assist uses enclosing impl block instead of first found
- no complete suggest param in complex pattern
- offer `toggle_macro_delimiter` in nested macro
- prevent qualifying parameter names in `add_missing_impl_members`
- implement `Span::SpanSouce` for proc-macro-srv
Rust Compiler Performance Triage
Overall, a bit more noise than usual this week, but mostly slight improvements, with several low-level optimizations to MIR and LLVM IR building landing. Fewer commits also landed than usual, mostly due to GitHub CI issues during the week.
Triage done by @simulacrum. Revision range: 3c9faa0d..eeb94be7
3 Regressions, 4 Improvements, 4 Mixed; 3 of them in rollups. 24 artifact comparisons made in total.
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
- Gate #![reexport_test_harness_main] properly
- Observe `close(2)` errors for `std::fs::{copy, write}`
- warn on empty precision
- refactor 'valid for read/write' definition: exclude null
- Remove -Csoft-float
- Place-less cg_ssa intrinsics
- Optimize `repr(Rust)` enums by omitting tags in more cases involving uninhabited variants
- Proposal for a dedicated test suite for the parallel frontend
- Promote tier 3 riscv32 ESP-IDF targets to tier 2
- Proposal for Adapt Stack Protector for Rust
No Items entered Final Comment Period this week for Rust RFCs, Language Reference, Language Team, Leadership Council or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
- Cargo: hints.min-opt-level
- Cargo RFC for min publish age
- Place traits
- RFC: Extend manifest dependencies with used
Upcoming Events
Rusty Events between 2026-02-25 - 2026-03-25 🦀
Virtual
- 2026-02-25 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
- 2026-02-25 | Virtual (Girona, ES) | Rust Girona
- 2026-02-26 | Virtual (Berlin, DE) | Rust Berlin
- 2026-03-04 | Virtual (Indianapolis, IN, US) | Indy Rust
- 2026-03-05 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
- 2026-03-05 | Virtual (Nürnberg, DE) | Rust Nuremberg
- 2026-03-07 | Virtual (Kampala, UG) | Rust Circle Meetup
- 2026-03-10 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-03-10 | Virtual (London, UK) | Women in Rust
- 2026-03-11 | Virtual (Girona, ES) | Rust Girona
- 2026-03-12 | Virtual (Berlin, DE) | Rust Berlin
- 2026-03-17 | Virtual (Washington, DC, US) | Rust DC
- 2026-03-18 | Virtual (Girona, ES) | Rust Girona
- 2026-03-18 | Virtual (Vancouver, BC, CA) | Vancouver Rust
- 2026-03-19 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2026-03-20 | Virtual | Packt Publishing Limited
- 2026-03-24 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
- 2026-03-24 | Virtual (London, UK) | Women in Rust
- 2026-03-25 | Virtual (Girona, ES) | Rust Girona
Asia
- 2026-03-22 | Tel Aviv-Yafo, IL | Rust 🦀 TLV
Europe
- 2026-02-25 | Copenhagen, DK | Copenhagen Rust Community
- 2026-02-26 | Prague, CZ | Rust Czech Republic
- 2026-02-28 | Stockholm, SE | Stockholm Rust
- 2026-03-04 | Barcelona, ES | BcnRust
- 2026-03-04 | Hamburg, DE | Rust Meetup Hamburg
- 2026-03-04 | Oxford, UK | Oxford ACCU/Rust Meetup.
- 2026-03-05 | Oslo, NO | Rust Oslo
- 2026-03-11 | Amsterdam, NL | Rust Developers Amsterdam Group
- 2026-03-12 | Geneva, CH | Post Tenebras Lab
- 2026-03-18 | Dortmund, DE | Rust Dortmund
- 2026-03-19 - 2026-03-20 | | Rustikon
- 2026-03-24 | Aarhus, DK | Rust Aarhus
North America
- 2026-02-25 | Austin, TX, US | Rust ATX
- 2026-02-25 | Los Angeles, CA, US | Rust Los Angeles
- 2026-02-26 | Atlanta, GA, US | Rust Atlanta
- 2026-02-26 | New York, NY, US | Rust NYC
- 2026-02-28 | Boston, MA, US | Boston Rust Meetup
- 2026-03-05 | Saint Louis, MO, US | STL Rust
- 2026-03-07 | Boston, MA, US | Boston Rust Meetup
- 2026-03-14 | Boston, MA, US | Boston Rust Meetup
- 2026-03-17 | San Francisco, CA, US | San Francisco Rust Study Group
- 2026-03-19 | Hybrid (Seattle, WA, US) | Seattle Rust User Group
- 2026-03-21 | Boston, MA, US | Boston Rust Meetup
- 2026-03-25 | Austin, TX, US | Rust ATX
Oceania
- 2026-03-26 | Melbourne, VIC, AU | Rust Melbourne
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
This is actually just Rust adding support for C++-style duck-typed templates, and the long and mostly-irrelevant information contained in the ICE message is part of the experience.
Thanks to Kyllingene for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by:
- nellshamrell
- llogiq
- ericseppanen
- extrawurst
- U007D
- mariannegoldin
- bdillo
- opeolluwa
- bnchi
- KannanPalani57
- tzilist
Email list hosting is sponsored by The Rust Foundation
25 Feb 2026 5:00am GMT