daniel jeong

Posted on • Originally published at manoit.co.kr

WebAssembly WASI 0.3 and SpinKube — The Next Step After Containers

Docker co-founder Solomon Hykes's 2019 prophecy became reality in 2026. With WASI 0.3 standardization and SpinKube's maturation, WebAssembly has become a native workload running alongside containers in Kubernetes. We provide detailed analysis of Wasm's technical characteristics, performance, and real-world implementation cases.

1. Solomon Hykes's Prophecy Becomes Reality

In 2019, Docker co-founder Solomon Hykes posted on Twitter:

"If WASM+WASI existed in 2008, we wouldn't have needed to create Docker. That's how important it is."

That prophecy moved one step closer to reality in 2026 with WASI 0.3.0 standardization and the SpinKube project. WebAssembly (Wasm) was originally created to run C/C++ code in browsers, but with the emergence of WASI (WebAssembly System Interface), it has begun full-scale server-side adoption.

2. WASI 0.3.0 — Completing the Component Model

Released in early 2026, WASI 0.3.0 is a decisive turning point in the WebAssembly ecosystem. Where WASI 0.2 defined the component model's foundational structure, 0.3 added async I/O and streaming support, completing the final puzzle piece needed for real server workloads.

2.1 Capability-Based Security Model

The core of WASI is the capability-based security model. Unlike traditional Unix permission models, each Wasm module can only access resources through explicitly granted capability references.

# Container security (traditional model)
user@host$ docker run -it ubuntu /bin/bash
root@container:~# ls /etc/shadow          # ✓ accessible (root inside the container)
root@container:~# cat /root/.ssh/id_rsa   # ✓ accessible

# Wasm security (capability-based model) — illustrative pseudocode
// A Wasm module is sandboxed by default
fn main() {
    // File system access: denied unless a capability was granted
    // file::open("/etc/shadow")  // ❌ fails: no capability covers this path

    // Only explicitly granted capabilities can be used
    let config_dir = get_filesystem_cap("/config");  // hypothetical API
    let file = config_dir.open("app.conf");          // ✓ only possible within /config
}

# Explicitly grant directory capabilities from the host
wasmtime --dir /config --dir /data app.wasm

All system interfaces—file system access, network sockets, reading environment variables—are declaratively controlled. This is fundamentally different from container security models.

2.2 Container vs Wasm Security Comparison

| Aspect | Container | Wasm |
| --- | --- | --- |
| Default security policy | Allow-by-default | Deny-by-default |
| Execution context | Shared host kernel, isolated by namespaces | Complete sandbox isolation per instance |
| Additional security configuration | seccomp, AppArmor, SELinux required | Safe with zero configuration |
| Escape risk | Host access possible via kernel vulnerabilities | Nearly impossible (sandbox-level isolation) |
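To make the deny-by-default idea concrete, here is a toy sketch in plain Rust (not the real WASI API) modeling a directory capability: the module can only touch paths under a root the host explicitly granted. The `DirCapability` type and its methods are hypothetical illustrations; real WASI preopens also normalize `..` traversal, which this toy skips.

```rust
use std::path::{Path, PathBuf};

/// Toy capability: a handle that only permits access under one directory.
/// (Illustrative only — not the actual WASI interface.)
struct DirCapability {
    root: PathBuf,
}

impl DirCapability {
    /// The host grants a capability for a specific directory.
    fn grant(root: &str) -> Self {
        Self { root: PathBuf::from(root) }
    }

    /// Opening a path succeeds only if it stays inside the granted root.
    /// Note: real implementations also canonicalize paths to block `..` escapes.
    fn open(&self, path: &str) -> Result<PathBuf, String> {
        let requested = Path::new(path);
        if requested.starts_with(&self.root) {
            Ok(requested.to_path_buf())
        } else {
            Err(format!("denied: no capability for {}", path))
        }
    }
}

fn main() {
    // Deny-by-default: the module only holds what the host granted.
    let config = DirCapability::grant("/config");

    assert!(config.open("/config/app.conf").is_ok()); // ✓ inside the grant
    assert!(config.open("/etc/shadow").is_err());     // ❌ outside: denied
    println!("capability checks passed");
}
```

The key design point: there is no ambient "open any file" call for the module to reach; access flows only through capability handles the host chose to pass in.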

3. Performance Comparison — The Impact of 50μs Cold Start

The most impressive characteristic of Wasm is cold start performance.

| Metric | Wasm | Container | Difference |
| --- | --- | --- | --- |
| Cold start | 5~50 μs | 100~500 ms | 1,000~10,000x |
| Memory footprint | 2 MB | 5~50 MB | 2.5~25x |
| Binary size | 100 KB~5 MB | 10~500 MB | 100~5,000x |
| CPU overhead | ~5% (AOT compile) | ~10% (cgroups) | 2x more efficient |
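The memory row translates directly into instance density. A back-of-envelope sketch using the table's own footprint figures (the 8 GB node size is an assumption, and per-instance runtime overhead is ignored):

```rust
/// Back-of-envelope: how many instances fit in a node's memory.
fn instances_per_node(node_mb: u64, instance_mb: u64) -> u64 {
    node_mb / instance_mb
}

fn main() {
    let node_memory_mb = 8 * 1024; // assumed 8 GB worker node

    // Footprints from the table above: 2 MB Wasm vs. 50 MB (upper end) container.
    let wasm_density = instances_per_node(node_memory_mb, 2);       // 4096
    let container_density = instances_per_node(node_memory_mb, 50); // 163

    println!("Wasm instances per node:      {}", wasm_density);
    println!("Container instances per node: {}", container_density);
}
```

Even with generous overhead margins, the roughly 25x density gap is what makes "thousands of instances per node" plausible for Wasm workloads.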

4. SpinKube — Kubernetes-Native Wasm

SpinKube is an open-source project for natively deploying and operating Wasm workloads on Kubernetes. It acts as a bridge connecting Fermyon's Spin framework with Kubernetes.

4.1 SpinKube Architecture

# SpinKube deployment (same syntax as traditional Kubernetes)
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: ghcr.io/fermyon/spin-helloworld:20240119  # Wasm image
  replicas: 3
  executor: containerd-shim-spin  # name of the SpinAppExecutor that runs the app
  resources:
    requests:
      cpu: 50m      # ✓ Very low CPU request
      memory: 128Mi  # ✓ Very low memory request
  env:
    - name: PORT
      value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-spin
spec:
  selector:
    app: hello-spin
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer

SpinKube consists of three core components:

  • spin-operator: Manages the custom SpinApp resource. Developers define Wasm apps with YAML manifests, and spin-operator handles deployment and scaling
  • containerd-shim-spin: A containerd shim that executes Spin apps directly, so existing containers and Wasm workloads can coexist on the same node
  • runtime-class-manager: Installs and manages the required Wasm shims and runtimes on cluster nodes (e.g. Wasmtime, Wasmer, WasmEdge)
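On the node side, the shim is typically exposed to the Kubernetes scheduler through a RuntimeClass. The sketch below is illustrative: the `name` and `handler` values vary by installation, and the handler must match the shim registered in containerd's configuration.

```yaml
# Hypothetical RuntimeClass wiring pods to nodes that have the Spin shim;
# the handler name must match the runtime entry in containerd's config.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin
```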

4.2 SpinKube Advantages

  • Reuse Existing Kubernetes Knowledge: Same YAML syntax as traditional Kubernetes, SpinApp definition almost identical to Deployment
  • Extremely Lightweight: ~50 μs cold starts enable serverless-level responsiveness, and a ~2 MB footprint lets thousands of instances run simultaneously
  • Security by Default: Capability-based security provides default safety without additional configuration
  • Multi-Runtime Support: Freely choose your preferred Wasm runtime

5. Comparison of Three Wasm Runtimes

The three major server-side Wasm runtimes currently are:

| Runtime | Developer | Characteristics | Recommended Use |
| --- | --- | --- | --- |
| Wasmtime | Bytecode Alliance | JIT compiler, Cranelift code generator, high performance | CPU-intensive tasks, serverless |
| WasmEdge | CNCF | 2 MB memory footprint, AI inference support, edge optimization | Edge computing, IoT, resource-constrained environments |
| Wasmer | Wasmer | Embedded/standalone deployment, multiple backends (LLVM/Cranelift/Singlepass) | Embedded Wasm, polyglot deployment |

6. Workloads Suitable for Wasm vs. Those Requiring Containers

6.1 When Wasm Is Optimal

  • Stateless HTTP handlers: API gateways, GraphQL resolvers, Lambda functions
  • Fast cold start essential: Serverless environments (AWS Lambda, Google Cloud Run, Azure Functions)
  • CDN edge processing: Cloudflare Workers, Fastly Compute@Edge
  • Lightweight data transformation: ETL pipelines, log processing
  • Resource-constrained environments: IoT devices, edge computing
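A lightweight data transformation of the kind listed above can be written in dependency-free Rust: stateless, fast to start, and small enough to compile to a WASI target (e.g. `cargo build --target wasm32-wasip1`). This is an illustrative sketch, not code from any particular pipeline:

```rust
/// Redact IPv4 addresses in a log line — a typical stateless, per-record
/// transformation well suited to a Wasm-based ETL or log-processing stage.
fn redact_ips(line: &str) -> String {
    line.split_whitespace()
        .map(|tok| {
            // Crude IPv4 check: four non-empty, dot-separated numeric parts.
            let parts: Vec<&str> = tok.split('.').collect();
            let is_ipv4 = parts.len() == 4
                && parts
                    .iter()
                    .all(|p| !p.is_empty() && p.chars().all(|c| c.is_ascii_digit()));
            if is_ipv4 { "x.x.x.x".to_string() } else { tok.to_string() }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let line = "GET /index.html from 203.0.113.7 status 200";
    assert_eq!(redact_ips(line), "GET /index.html from x.x.x.x status 200");
    println!("{}", redact_ips(line));
}
```

Because the function holds no state and touches no ambient resources, it fits the Wasm sweet spot: one instance per request is affordable when cold start is measured in microseconds.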

6.2 When Containers/VMs Are Needed

  • Long-running batch jobs: Data processing, image transformation
  • Large relational databases: Transaction processing, complex queries
  • GPU-based AI training: Wasm doesn't support GPU access
  • Complex state management: Multi-process, multi-thread requirements
  • Native C library dependencies: Direct system calls, POSIX compatibility

7. Practical Strategy in 2026

Wasm doesn't replace containers; it fills gaps containers don't cover. The most realistic strategy in 2026 is coexistence:

# Example mixed deployment (in Kubernetes cluster)

# Container workloads (existing)
- Databases (PostgreSQL)
- Message queues (RabbitMQ)
- Complex backend logic
- → Use traditional Kubernetes Deployment

# Wasm workloads (new)
- API gateway
- Request routing and transformation
- Metrics collection
- Simple business logic
- → Deploy with SpinKube

# Results
- Both workloads coexist in the same cluster
- Maximize resource efficiency (Wasm: 2MB vs Container: 100MB+)
- Enhanced security (Wasm's default safety)
- Faster response time (Wasm: 50μs vs Container: 500ms)

💡 Pro Tip: If you're operating edge and serverless workloads, now is the time to evaluate SpinKube. With WASI 0.3.0 standardization, the ecosystem has stabilized, and real production use cases are growing. Start with a pilot project implementing a simple API gateway or request transformation logic in Wasm.

8. Conclusion: After Containers Comes Wasm

Solomon Hykes's prophecy has become reality. With WASI 0.3.0 standardization and SpinKube's maturation, WebAssembly has transcended being merely a browser technology to become a native workload running alongside containers in Kubernetes.

A 50 μs cold start, a 2 MB memory footprint, and a security model that is safe by default: these characteristics will define the future of edge computing and serverless workloads. If your organization is already succeeding with containers, Wasm is worth considering not as a hasty migration target but as a strategic additional tool.

This article was created with AI technology support. For more cloud-native engineering insights, visit the ManoIT Tech Blog.
