WebAssembly: How Did a Browser Tool Challenge Docker?

Mircea Talu (Junior DevOps Engineer) talks about how the WebAssembly bytecode format enables software developers to unite applications written in different languages into a single, standardised environment.

This article was previously featured in the July 2024 issue of Today Software Magazine.


A few months ago, Accesa approached me with an intriguing opportunity. As the company's 20th anniversary was closing in, excitement for the celebrations was already building. Before the big party, a different event was scheduled for June 6th — the Tech Conference. We were given the opportunity to present a technical session on DevOps for this event, and that's where our story begins.  

I recently learned about Nigel Poulton’s latest Kubernetes book from a colleague. Chapter 9 caught my eye as it was about "WebAssembly on Kubernetes." At that time, I had no idea what WebAssembly was, so I was curious to find out more about it. 

WebAssembly in a nutshell 

The browser is an amazing tool. It allows us to do our work, find books online, watch our favourite shows; you name it. And what languages does the browser speak? HTML, CSS, and JavaScript. I want you to focus on that last one. 

JavaScript is amazing at what it’s supposed to do. However, it’s not great when it comes to high-performance computational workloads. That’s one of the reasons why it’s so difficult to play your favourite complex video games in the browser.

Now imagine a world where you could run C/C++ or Rust video game applications in the browser. Hard to picture, right? Those languages compile to machine code and are, therefore, not portable.

Some smart people imagined that same world about seven years ago. They weren’t satisfied with the status quo, however, and they found a solution. What if we compiled our C/C++ and Rust code to a low-level bytecode rather than directly to machine code?

Furthermore, what if that bytecode could be interpreted by the browser? That would mean that you could take your C/C++ or Rust implementation, compile it to that bytecode, and then run it in the browser at near-native performance.
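This is no longer hypothetical; toolchains already do exactly this. A sketch, assuming the Emscripten toolchain is installed (emcc is its compiler driver):

# Compile a C program to browser-runnable bytecode, plus the HTML/JS glue to load it
emcc hello.c -o hello.html
# Serve the directory locally and open hello.html in your browser
python3 -m http.server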

That bytecode format is now known as WebAssembly (WASM). Since its beginnings, a lot of progress has been made, and there are now compilers that can target WASM from many more languages:

[Figure 1: the matrix of languages that can be compiled to WASM (source: Fermyon Developer)]

Docker and containers 

We’re now going to take a radical turn and abandon WASM for the moment. 

If you’re a software engineer, chances are you have used Docker to containerise your application at least once. How did you do that? Probably like this: you wrote a Dockerfile, built the image with docker build -t my-cool-image:latest . and then ran it with docker run my-cool-image:latest. Amazing, your first containerised application! 
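Spelled out, that whole workflow is just two commands (assuming a Dockerfile sits in the current directory):

# Build an image from the local Dockerfile and tag it
docker build -t my-cool-image:latest .
# Start a container from that image
docker run my-cool-image:latest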

As the Tech Conference was fast approaching and I started digging deeper to prepare, I soon realised that I didn’t actually know anything about containers…

One of the biggest myths people believe is that Docker is what gives you containers. It doesn’t, and it never did. It’s actually Linux that does. For the next few minutes, I want you to forget everything you know about Docker and containers.

The first thing we need to remember is that everything is a process! Your video game, your browser, and your applications running inside containers are all processes. But then, what makes containers differ from normal processes running on your host? Isolation. 

Processes running inside containers are isolated! So, how does the Linux Kernel achieve this isolation when creating a container? There are just two concepts you need to know about: control groups and namespaces.

Namespaces 

When you run your container, you specify an ENTRYPOINT or a CMD, which is what your new process will execute. 

FROM openjdk:23-slim
COPY my-application-1.0-SNAPSHOT.jar /home/my-application-1.0-SNAPSHOT.jar
CMD ["java","-jar","/home/my-application-1.0-SNAPSHOT.jar"]

But how is this new process created? The answer is: just like any other process, using fork().

[Figure 2: after fork(), the parent process and the containerised child, which is PID 8 on the host but PID 1 inside the container]

So now, after fork(), we have a parent process and a child process (the containerised process). What happens if the child process calls getppid() (get parent process ID)? Normally, the system call would return the parent’s process identifier. This is bad! We don’t want our containerised process to know about anything outside the container. Therefore, the child process is placed in its own PID namespace; since its parent lies outside that namespace, getppid() simply returns 0, and the child believes it has no parent. Also, in the picture, you can see that the child process has two IDs (8 and 1). That’s thanks to the context: inside the container, it is process 1, but from the host’s view, it is process 8.

This is exactly the kind of problem namespaces take care of. Namespaces tell you what the process can see!
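You can watch a PID namespace do this on any Linux machine using the unshare tool from util-linux; a minimal demo (requires root):

# Start a shell in new PID and mount namespaces; --fork forks the shell
# as a child of unshare, --mount-proc gives it a private /proc
sudo unshare --pid --fork --mount-proc bash
echo $$   # prints 1: the shell believes it is the first process on the system
ps -e     # lists only the processes inside this namespace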

Control groups (cgroups) 

Control groups tell you how much the process can use. In the example below, part of the YAML manifest of a Kubernetes Deployment, you can see the resource limits of my container.

containers:
  - name: myapp
    image: campionulclujului/resource-app:latest
    command: ["/"]
    resources:
      limits:
        cpu: "0.5"
        memory: "256Mi"

There are quite a few control groups, and if you want to dive deeper into each one, watch this great YouTube video.
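You don’t need Kubernetes to see cgroups at work; Docker can impose the same limits, and you can read the limit files the kernel creates for them (the path below assumes a cgroup v2 host using the systemd driver, so adjust it for your distro):

# Run a container with cgroup-enforced limits
docker run -d --cpus="0.5" --memory="256m" campionulclujului/resource-app:latest
# Inspect the resulting cgroup v2 limit files for that container
cat /sys/fs/cgroup/system.slice/docker-<container_id>.scope/memory.max
cat /sys/fs/cgroup/system.slice/docker-<container_id>.scope/cpu.max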

Docker 

If the Linux Kernel gives you everything you need to create containers (namespaces and control groups), then what does Docker do? While it is true that you can create containers using only Linux system calls, it is extremely difficult and time-consuming to do so.

Docker provides a very easy way to create Linux containers. Not only that, but Docker also comes with a CLI, an authentication service (to Docker Hub) and many other utilities. 

That’s why it became so popular and overtook all its competitors! It was easy for developers to use. All you have to do is run a little command, and you’ll have a containerised application running on your machine.

However, as the Cloud Native Computing Foundation (CNCF) evolved and Kubernetes was widely adopted, concerns started to arise regarding Docker. While it was a great tool for developers to run locally, it had a lot of unnecessary functionality in production. When you deploy your containerised application in Kubernetes, all you care about is the container runtime, which used to be just a small part of the Docker bundle. 

Container runtimes 

As the race for containerisation intensified, people started wanting different container runtimes. Up until then, Kubernetes was tightly coupled with Docker. The solution was simple: create a Container Runtime Interface (CRI) and let anyone come up with their own implementation. 

[Figure 3: the Container Runtime Interface (CRI) letting Kubernetes talk to different container runtimes]

Due to these developments, Docker had to react. They split up their monolithic architecture to match the Open Container Initiative (OCI) specifications. The resulting container runtime came to be known as containerd. 

[Figure 4: the containerd architecture, with the containerd daemon, containerd-shims, and runc]

We can see a few components in the image that we haven’t talked about yet. Let’s start with runc. The purpose of runc is to set up your container environment (cgroups, namespaces), start your container process, and then exit.
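You can drive runc by hand to see this lifecycle; a rough sketch, assuming you already have an OCI bundle directory with a root filesystem unpacked into rootfs/ (for example, from a docker export):

cd mycontainer            # the bundle directory containing rootfs/
runc spec                 # generate a default config.json for the bundle
# in config.json, set "terminal": false so the container can run detached
sudo runc create mycid    # set up the cgroups and namespaces
sudo runc start mycid     # start the container process; runc itself then exits
sudo runc list            # the container is still running, with no runc around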

What happens after runc exits? Who will manage the container now? Its responsibilities are passed to the containerd-shim. As you can see, the lifecycle of this component matches the lifecycle of the container. When the container is killed, the shim dies with it. 

We have a shim because it provides an additional layer of decoupling. You can use different types of shims. In the image, two containers use the default containerd-shim, while the third one uses something different. 

Last but not least, at the highest level, we have the containerd daemon, which takes on the role of the high-level container runtime on that node.
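On a Linux host with containers running, this whole chain is visible from the command line (a quick check, not a full tour):

# The low-level runtime Docker delegates to (typically runc)
docker info --format '{{.DefaultRuntime}}'
# One long-lived shim process per running container
ps -ef | grep containerd-shim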

WASM containers  

Finally, we’ve reached the endgame. I promised you we would come back to WASM, and here we are.

We know that containers are cool. They allow us to isolate our application with its dependencies. So, let’s take a Java application for instance. 

FROM openjdk:23-slim
COPY my-application-1.0-SNAPSHOT.jar /home/my-application-1.0-SNAPSHOT.jar
CMD ["java","-jar","/home/my-application-1.0-SNAPSHOT.jar"]

This Dockerfile might look familiar to you. That’s because it’s the same one we looked at previously. But this time, I want you to focus on the first line. What does that FROM instruction do? 

It specifies a base image. Why do we need a base image? Because our Java bytecode must be interpreted, and for that it needs a runtime environment. This is what openjdk:23-slim gives us.

[Figure 5: a container whose base image supplies the JDK runtime environment for the application]

Ok, that’s great. However, what happens when we need to scale our containerised application? We will have 100 instances of our container, each one of them with its own JDK. This is overhead! We basically need to create the same runtime environment 100 times. 

Imagine a world where you don’t need a runtime environment inside the container, a world where you can set up the runtime environment outside the container, where it can be shared by all your containers.

“But Mircea… that’s impossible! That would require all your applications to be written in the same language, the same format.” 

Well, what if you could indeed do that? Now we understand why WebAssembly was so important in this whole story. If I can compile all my applications, no matter the language they’re written in, into WASM bytecode, then I can standardise the runtime environment. That’s exactly what another group of smart people thought when they came up with the WebAssembly System Interface (WASI), an interface for WebAssembly runtimes. 
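To make this concrete, here is roughly what compiling a Rust program for WASI and handing it to a standalone runtime looks like (a sketch assuming the Rust toolchain and the Wasmtime runtime are installed, with myapp as a made-up crate name):

# Add the WASI compilation target to the Rust toolchain
rustup target add wasm32-wasi
# Compile the current crate into a WASM module
cargo build --release --target wasm32-wasi
# Execute the module on a shared, language-agnostic runtime
wasmtime target/wasm32-wasi/release/myapp.wasm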

[Figure 6: WASM modules sharing a single runtime environment outside the containers]

In this image, you can clearly see how containers no longer need to create their own runtime environment (libraries, dependencies, bins). All that is left in the WASM container (module) is the bytecode in its purest form. 

This not only gets rid of the overhead, but it is also way more secure, because it reduces the attack surface. If there is no environment inside the container (no mini Linux distribution), then there is nothing to attack. The size of the WASM container is also much smaller, which makes it lighter and easier to start up, ideal for serverless architectures. 

Where do we stand now? 

Microsoft Azure has announced support for WASI node pools in their Azure Kubernetes Service (AKS) clusters. They are still in preview, but the future looks promising. 

Support for additional languages is on the rise, as can be seen in the language matrix we previously analysed. 

The WebAssembly System Interface has already seen a number of implementations, including WasmEdge, Wasmtime, and Wasmer.

Serverless services, such as Azure Functions, do not have a runtime for WASM yet, but I predict that it won’t be long until that becomes a reality. 

Demo 

How to run WASM containers on your Windows host 

On a Windows host, I recommend installing Docker Desktop, as it provides beta support for WASM containers.

First, go to Settings -> General and check the Use containerd for pulling and storing images option.

Then, go to Settings -> Features in development and check the Enable Wasm feature. This will install the following runtimes: 

[Figure 7: the list of WASM runtimes installed by Docker Desktop]

Now, you’re ready to run your first WASM container: 

docker run --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm secondstate/rust-example-hello 

How to run WASM workloads in Azure Kubernetes Service 

I recommend following this tutorial from Microsoft, which goes through all the prerequisites of having a WASI node pool in AKS and even shows you how to deploy a simple app. 

How to run your own application as a WASM container in Kubernetes 

First, you need to install the right compiler, which will take your code written in C/C++, Rust, Java, or any other supported language and compile it into WASM bytecode. Then, the only other file you need is a TOML (Tom’s Obvious, Minimal Language) file: a configuration file that tells the WASM runtime how to run your WASM module. That’s it!
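For the Spin runtime used in the rest of this demo, that TOML looks roughly like this (a sketch following Spin’s v1 manifest format; the name and route values are made up):

spin_manifest_version = "1"
name = "mywasmapp"
version = "0.1.0"
# Expose the module over HTTP
trigger = { type = "http", base = "/" }

[[component]]
id = "mywasmapp"
# Path to the compiled WASM bytecode
source = "target/wasm32-wasi/release/myapp_wasm.wasm"
[component.trigger]
route = "/..."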

After you have obtained the WASM bytecode, create a Dockerfile that copies in the two files (.wasm and .toml).

FROM scratch
COPY /target/wasm32-wasi/release/myapp_wasm.wasm .
COPY spin.toml .

Notice the FROM instruction does not use a typical Linux base image (scratch is an empty base image)! 

Now you can build the image.

docker build --platform wasi/wasm --provenance=false -t <your_docker_hub_account>/mywasmapp:0.1 . 

Then push it to Docker Hub.

docker push <your_docker_hub_account>/mywasmapp:0.1 

Before you deploy your application to Kubernetes, you have to create a runtime class.

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: rc-spin
scheduling:
  nodeSelector:
    wasm: "yes"
handler: spin

Now you can use your image to create a Kubernetes Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywasmapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mywasmapp
  template:
    metadata:
      labels:
        app: mywasmapp
    spec:
      runtimeClassName: rc-spin
      containers:
        - name: mywasmapp
          image: <your_docker_hub_account>/mywasmapp:0.1
          command: ["/"]
          resources:
            limits:
              cpu: "0.1"
              memory: 128Mi
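Finally, apply both manifests and check that the pod gets scheduled on a WASM-enabled node (the file names below are simply whatever you saved the two manifests as):

kubectl apply -f runtimeclass.yaml
kubectl apply -f deployment.yaml
# The pod should land on a node labelled wasm=yes
kubectl get pods -o wide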

Thank you for sticking with me for so long, and I hope this article helped you learn how to better use WebAssembly!