
Nithin Bharadwaj


**WebAssembly Is Turning Your Browser Into a Full Desktop Application Runtime**


I remember when WebAssembly first appeared. Most people talked about it as a way to make JavaScript run faster. They saw it as a performance booster for heavy math or graphics in a web page. That was interesting, but it felt like a small step. What I see now is something much bigger. WebAssembly is quietly creating a whole new kind of software that lives on the web.

Think about the applications on your computer. The photo editor, the music studio software, the 3D design tool. These are powerful, complex programs. They usually need to be downloaded and installed. The web offered convenience but often lacked this raw power. WebAssembly is changing that equation. It is not just speeding up the web. It is turning the browser into a universal computer that can run almost any kind of software.

The magic is in how it works. You can take code written in languages like Rust, C, or C++. These are languages used to build operating systems and game engines. You compile that code into a compact, efficient binary format called a Wasm module. Your browser can download and run this module at a speed very close to native machine code. It runs safely, isolated in the browser's security sandbox, but with performance that was once impossible for web apps.
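Before any toolchain enters the picture, the browser's JavaScript API can load and run a raw Wasm binary directly. Here is a tiny hand-assembled module exporting a single add function; the bytes encode the standard Wasm sections (magic number, type, function, export, and code):

```javascript
// A minimal Wasm binary, equivalent to:
// (module (func (export "add") (param i32 i32) (result i32)
//   local.get 0  local.get 1  i32.add))
const wasmBytes = new Uint8Array([
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,        // magic + version
    0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,  // type: (i32, i32) -> i32
    0x03, 0x02, 0x01, 0x00,                                 // function section
    0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,  // export "add"
    0x0a, 0x09, 0x01, 0x07, 0x00,                           // code section header
    0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                      // local.get 0/1, i32.add, end
]);

// Synchronous compilation is fine for a module this small; real applications
// use WebAssembly.instantiateStreaming(fetch(url)) instead
const instance = new WebAssembly.Instance(new WebAssembly.Module(wasmBytes));
console.log(instance.exports.add(2, 40)); // 42
```

Everything that follows in this article is this same mechanism, scaled up: a compiler emits the bytes, and the browser instantiates them.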

Let me show you what this means with a real example. Say you want to build a professional image editor that works directly in a browser tab. Processing millions of pixels for filters, adjustments, or effects is heavy work. Doing this with standard JavaScript can be slow. With WebAssembly, you can bring the power of native image libraries to the web.

Here is a simplified look at how you might structure a Rust module for this.

// Rust library compiled to WebAssembly for image processing
use wasm_bindgen::prelude::*;
use web_sys::console;

#[wasm_bindgen]
pub struct ImageProcessor {
    width: u32,
    height: u32,
    pixels: Vec<u8>,
}

#[wasm_bindgen]
impl ImageProcessor {
    #[wasm_bindgen(constructor)]
    pub fn new(width: u32, height: u32) -> Self {
        let pixel_count = (width * height) as usize * 4; // RGBA
        let pixels = vec![0; pixel_count];

        ImageProcessor {
            width,
            height,
            pixels,
        }
    }

    pub fn load_from_bytes(&mut self, data: &[u8]) -> Result<(), JsValue> {
        // std::time::Instant panics on wasm targets; use the JS clock instead
        let start = js_sys::Date::now();

        // Decode image using Rust's image crate
        let img = image::load_from_memory(data)
            .map_err(|e| JsValue::from_str(&format!("Failed to load image: {}", e)))?;

        // Convert to RGBA8
        let rgba8 = img.to_rgba8();

        // Copy pixels to our buffer
        self.width = rgba8.width();
        self.height = rgba8.height();
        self.pixels = rgba8.into_raw();

        let elapsed_ms = js_sys::Date::now() - start;
        console::log_1(&format!("Image loaded in {:.1}ms", elapsed_ms).into());

        Ok(())
    }

    pub fn apply_filter(&mut self, filter_type: &str, strength: f32) {
        let start = js_sys::Date::now();

        match filter_type {
            "grayscale" => self.apply_grayscale(),
            "blur" => self.apply_blur(strength),
            "sharpen" => self.apply_sharpen(strength),
            "edge_detect" => self.apply_edge_detection(),
            _ => console::warn_1(&format!("Unknown filter: {}", filter_type).into()),
        }

        let elapsed_ms = js_sys::Date::now() - start;
        console::log_1(&format!("Filter applied in {:.1}ms", elapsed_ms).into());
    }

    fn apply_grayscale(&mut self) {
        for i in (0..self.pixels.len()).step_by(4) {
            let r = self.pixels[i] as f32;
            let g = self.pixels[i + 1] as f32;
            let b = self.pixels[i + 2] as f32;

            // Luminosity method
            let gray = (0.299 * r + 0.587 * g + 0.114 * b) as u8;

            self.pixels[i] = gray;
            self.pixels[i + 1] = gray;
            self.pixels[i + 2] = gray;
            // Alpha channel remains unchanged
        }
    }

    fn apply_blur(&mut self, radius: f32) {
        let radius_int = radius.clamp(1.0, 10.0) as i32;
        let mut temp = self.pixels.clone();

        // Simple box blur for demonstration
        // In production, use Gaussian blur or more optimized algorithm
        for y in 0..self.height as i32 {
            for x in 0..self.width as i32 {
                let mut r_sum = 0;
                let mut g_sum = 0;
                let mut b_sum = 0;
                let mut a_sum = 0;
                let mut count = 0;

                for ky in -radius_int..=radius_int {
                    for kx in -radius_int..=radius_int {
                        let px = x + kx;
                        let py = y + ky;

                        if px >= 0 && px < self.width as i32 && py >= 0 && py < self.height as i32 {
                            let idx = ((py * self.width as i32 + px) * 4) as usize;
                            r_sum += self.pixels[idx] as u32;
                            g_sum += self.pixels[idx + 1] as u32;
                            b_sum += self.pixels[idx + 2] as u32;
                            a_sum += self.pixels[idx + 3] as u32;
                            count += 1;
                        }
                    }
                }

                let idx = ((y * self.width as i32 + x) * 4) as usize;
                temp[idx] = (r_sum / count) as u8;
                temp[idx + 1] = (g_sum / count) as u8;
                temp[idx + 2] = (b_sum / count) as u8;
                temp[idx + 3] = (a_sum / count) as u8;
            }
        }

        self.pixels = temp;
    }

    fn apply_sharpen(&mut self, _strength: f32) {
        // Unsharp-mask sharpening omitted for brevity
    }

    fn apply_edge_detection(&mut self) {
        // Sobel edge detection omitted for brevity
    }

    pub fn get_pixels(&self) -> Vec<u8> {
        self.pixels.clone()
    }

    pub fn width(&self) -> u32 {
        self.width
    }

    pub fn height(&self) -> u32 {
        self.height
    }
}

You would then use this compiled module from JavaScript. The interface is clean. Your web page handles the user interaction—uploading a file, clicking buttons—while the heavy pixel manipulation happens in the WebAssembly module at native speed.

// JavaScript interface for the Wasm module (wasm-bindgen "web" target output)
import init, { ImageProcessor } from './image_processor.js';

export async function initImageProcessor() {
    // Fetch and instantiate the .wasm binary behind the generated bindings
    await init();

    return {
        ImageProcessor,

        async processImage(imageFile, filterType, strength) {
            // File.arrayBuffer() replaces the older FileReader callback dance
            const bytes = new Uint8Array(await imageFile.arrayBuffer());

            // Placeholder dimensions; load_from_bytes sets the real ones
            const processor = new ImageProcessor(1, 1);
            processor.load_from_bytes(bytes);
            processor.apply_filter(filterType, strength);

            return {
                width: processor.width(),
                height: processor.height(),
                pixels: processor.get_pixels(),
                processor // keep reference for further processing
            };
        }
    };
}

This pattern unlocks a new category. We are no longer talking about web pages with enhanced features. We are talking about full desktop-class applications that happen to run inside a browser tab. Audio workstations, video editors, computer-aided design software, and even integrated development environments are now feasible.

Consider an audio processing application. Real-time effects like compression, reverb, and equalization require sample-by-sample math on audio data, often at 44,100 samples per second. Doing this in real-time with JavaScript is a challenge. With WebAssembly, you can port established, battle-tested C++ audio libraries directly to the web.

Here is a conceptual look at an audio engine.

// C++ audio processing application compiled to WebAssembly
#include <emscripten.h>
#include <emscripten/bind.h>
#include <vector>
#include <cmath>
#include <algorithm>

class AudioEngine {
private:
    std::vector<float> audioBuffer;
    std::vector<float> processedBuffer;
    double sampleRate;
    bool isPlaying;
    size_t playPosition;

public:
    AudioEngine() : sampleRate(44100), isPlaying(false), playPosition(0) {}

    void loadAudioData(const std::vector<float>& data) {
        audioBuffer = data;
        processedBuffer = data;
    }

    void applyCompressor(float threshold, float ratio, float attack, float release) {
        if (audioBuffer.empty()) return;

        processedBuffer.resize(audioBuffer.size());

        float envelope = 0.0f;
        float gain = 1.0f;

        for (size_t i = 0; i < audioBuffer.size(); i++) {
            float sample = std::abs(audioBuffer[i]);

            // Envelope follower
            if (sample > envelope) {
                envelope = attack * envelope + (1.0f - attack) * sample;
            } else {
                envelope = release * envelope + (1.0f - release) * sample;
            }

            // Compression curve
            if (envelope > threshold) {
                float over = envelope - threshold;
                float compression = over / ratio;
                gain = (threshold + compression) / envelope;
            } else {
                gain = 1.0f;
            }

            processedBuffer[i] = audioBuffer[i] * gain;
        }
    }

    void applyReverb(float decay, float damping, float wetMix) {
        if (audioBuffer.empty()) return;

        // Simple Schroeder reverberator (damping is unused in this sketch; a
        // full implementation would low-pass the comb feedback with it)
        const size_t combCount = 4;
        const size_t allpassCount = 2;

        std::vector<std::vector<float>> combBuffers(combCount);
        std::vector<size_t> combDelays = {1557, 1617, 1491, 1422};
        std::vector<float> combFeedback(combCount, decay);

        std::vector<std::vector<float>> allpassBuffers(allpassCount);
        std::vector<size_t> allpassDelays = {225, 556};
        std::vector<float> allpassFeedback = {0.5f, 0.5f};

        // Initialize buffers
        for (size_t i = 0; i < combCount; i++) {
            combBuffers[i].resize(combDelays[i], 0.0f);
        }
        for (size_t i = 0; i < allpassCount; i++) {
            allpassBuffers[i].resize(allpassDelays[i], 0.0f);
        }

        processedBuffer.resize(audioBuffer.size());

        for (size_t n = 0; n < audioBuffer.size(); n++) {
            float input = audioBuffer[n];
            float output = 0.0f;

            // Comb filters
            for (size_t i = 0; i < combCount; i++) {
                size_t delay = combDelays[i];
                float feedback = combFeedback[i];

                float bufferOut = combBuffers[i][n % delay];
                combBuffers[i][n % delay] = input + feedback * bufferOut;
                output += bufferOut;
            }

            // All-pass filters
            for (size_t i = 0; i < allpassCount; i++) {
                size_t delay = allpassDelays[i];
                float feedback = allpassFeedback[i];

                float bufferOut = allpassBuffers[i][n % delay];
                float allpassOut = -output + bufferOut;
                allpassBuffers[i][n % delay] = output + feedback * bufferOut;
                output = allpassOut;
            }

            // Mix wet/dry
            processedBuffer[n] = audioBuffer[n] * (1.0f - wetMix) + output * wetMix;
        }
    }

    const std::vector<float>& getProcessedAudio() const {
        return processedBuffer;
    }

    void startPlayback() {
        isPlaying = true;
        playPosition = 0;
    }

    void stopPlayback() {
        isPlaying = false;
    }

    // Audio processing callback for Web Audio API
    void processAudio(float* outputBuffer, int channelCount, int bufferSize) {
        if (!isPlaying || processedBuffer.empty()) {
            std::fill(outputBuffer, outputBuffer + bufferSize * channelCount, 0.0f);
            return;
        }

        for (int i = 0; i < bufferSize; i++) {
            if (playPosition < processedBuffer.size()) {
                float sample = processedBuffer[playPosition];

                // Copy to all channels
                for (int channel = 0; channel < channelCount; channel++) {
                    outputBuffer[i * channelCount + channel] = sample;
                }

                playPosition++;
            } else {
                // Past the end of the buffer: output silence
                for (int channel = 0; channel < channelCount; channel++) {
                    outputBuffer[i * channelCount + channel] = 0.0f;
                }

                // Keep advancing so playback stops after ~1 second of silence
                playPosition++;
                if (playPosition >= processedBuffer.size() + (size_t)sampleRate) {
                    isPlaying = false;
                }
            }
        }
    }
};

The JavaScript wrapper connects this C++ engine to the browser's Web Audio API, creating a professional tool that feels instant and responsive.
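That wrapper can be sketched briefly. The example below assumes the engine is bound with Embind and that `getProcessedAudio()` returns a registered `std::vector<float>` proxy (which exposes `size()` and `get(i)`); the `Module` object and its method names are illustrative, not a fixed API:

```javascript
// Copy an Embind std::vector<float> proxy into a typed array
function vectorToFloat32Array(vec) {
    const out = new Float32Array(vec.size());
    for (let i = 0; i < out.length; i++) out[i] = vec.get(i);
    return out;
}

// Hypothetical browser usage, feeding the Wasm output into Web Audio:
// const engine = new Module.AudioEngine();
// engine.applyCompressor(0.5, 4.0, 0.99, 0.999);
// const samples = vectorToFloat32Array(engine.getProcessedAudio());
// const ctx = new AudioContext();
// const buffer = ctx.createBuffer(1, samples.length, 44100);
// buffer.copyToChannel(samples, 0);
// const src = ctx.createBufferSource();
// src.buffer = buffer;
// src.connect(ctx.destination);
// src.start();
```

The boundary crossing is a single buffer copy; all the per-sample work stays on the Wasm side.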

This expansion goes beyond media editing. One of the most exciting areas is scientific computing and data analysis. Imagine a researcher or a student who needs to run complex statistical models or manipulate large datasets. Traditionally, this required software like Python with NumPy and SciPy, installed on a local machine or a remote server.

Now, projects like Pyodide compile the entire Python scientific stack—the interpreter and libraries—to WebAssembly. This means you can have a fully interactive Python notebook running directly in your browser, with no backend server needed for the computation.

// Pyodide: Python scientific stack running in WebAssembly
import { loadPyodide } from 'pyodide';

class PythonWasmRuntime {
    constructor() {
        this.pyodide = null;
        this.loadedPackages = new Set();
        this.isReady = false;
    }

    async initialize() {
        console.log('Loading Pyodide runtime...');

        this.pyodide = await loadPyodide({
            indexURL: "https://cdn.jsdelivr.net/pyodide/v0.23.4/full/",
            stdout: (text) => console.log('Python:', text),
            stderr: (text) => console.error('Python Error:', text)
        });

        // Load core scientific packages
        await this.loadPackage(['numpy', 'pandas', 'matplotlib', 'scipy']);

        this.isReady = true;
        console.log('Pyodide runtime ready');

        return this;
    }

    async loadPackage(packages) {
        if (!this.pyodide) return;

        const packagesToLoad = Array.isArray(packages) ? packages : [packages];
        const newPackages = packagesToLoad.filter(p => !this.loadedPackages.has(p));

        if (newPackages.length > 0) {
            await this.pyodide.loadPackage(newPackages);
            newPackages.forEach(p => this.loadedPackages.add(p));
        }
    }

    async runCode(code, returnResult = false) {
        if (!this.isReady) {
            throw new Error('Runtime not initialized');
        }

        try {
            const result = this.pyodide.runPython(code);

            if (returnResult) {
                // Convert Python objects to JavaScript
                return this.convertToJS(result);
            }

            return { success: true };
        } catch (error) {
            console.error('Python execution error:', error);
            return {
                success: false,
                error: error.message
            };
        }
    }

    convertToJS(pythonObj) {
        // Convert common Python types to JavaScript
        if (pythonObj === null || pythonObj === undefined) {
            return null;
        }

        // Check if it's a Pyodide proxy (pyodide.ffi.PyProxy)
        if (pythonObj instanceof this.pyodide.ffi.PyProxy) {
            try {
                // PyProxy.type holds the Python type name
                switch (pythonObj.type) {
                    case 'list':
                    case 'dict':
                        // toJs converts containers; plain objects instead of Maps
                        return pythonObj.toJs({ dict_converter: Object.fromEntries });
                    case 'ndarray':
                        return pythonObj.toJs(); // numpy buffers become typed arrays
                    case 'DataFrame':
                        return this.convertDataFrame(pythonObj);
                }
            } catch (e) {
                console.warn('Could not convert Python object:', e);
            }
        }

        // Fallback to string representation
        return pythonObj.toString();
    }

    convertDataFrame(df) {
        // Hand the proxy to Python via globals, then serialize it
        this.pyodide.globals.set('js_df', df);
        const jsonStr = this.pyodide.runPython("js_df.to_json(orient='split')");
        return JSON.parse(jsonStr);
    }

    // Data analysis examples
    async analyzeDataset(data, analysisType) {
        await this.loadPackage(['numpy', 'pandas', 'scipy']);

        // Load data into Python
        const loadCode = `
import pandas as pd
import numpy as np
from io import StringIO

data = """${this.escapeString(JSON.stringify(data))}"""
df = pd.read_json(StringIO(data))
`;

        await this.runCode(loadCode);

        switch (analysisType) {
            case 'descriptive':
                return await this.runCode(`
desc = df.describe().to_dict()
correlation = df.corr().to_dict()
{
    'description': desc,
    'correlation': correlation,
    'shapes': df.shape,
    'dtypes': {col: str(dtype) for col, dtype in df.dtypes.items()}
}
`, true);

            case 'regression':
                return await this.runCode(`
from scipy import stats
import numpy as np

results = {}
for col1 in df.columns:
    for col2 in df.columns:
        if col1 != col2 and df[col1].dtype in [np.int64, np.float64] and df[col2].dtype in [np.int64, np.float64]:
            slope, intercept, r_value, p_value, std_err = stats.linregress(df[col1], df[col2])
            results[f"{col1}_vs_{col2}"] = {
                'slope': slope,
                'intercept': intercept,
                'r_squared': r_value**2,
                'p_value': p_value,
                'std_err': std_err
            }
results
`, true);
        }
    }

    escapeString(str) {
        // Escape backslashes first so existing JSON escapes survive
        return str.replace(/\\/g, '\\\\').replace(/"/g, '\\"').replace(/\n/g, '\\n');
    }

    // Create visualization
    async createPlot(data, plotType) {
        await this.loadPackage(['matplotlib']);

        // Generate HTML with embedded plot
        const plotCode = `
import matplotlib.pyplot as plt
import numpy as np
import base64
from io import BytesIO

# Create plot
plt.figure(figsize=(10, 6))

${this.generatePlotCode(data, plotType)}

# Save to buffer
buf = BytesIO()
plt.savefig(buf, format='png', bbox_inches='tight', dpi=100)
plt.close()

# Convert to base64
img_str = base64.b64encode(buf.getvalue()).decode('utf-8')
f"data:image/png;base64,{img_str}"
`;

        const result = await this.runCode(plotCode, true);
        return result;
    }

    generatePlotCode(data, plotType) {
        switch (plotType) {
            case 'scatter':
                return `
x = [d['x'] for d in ${JSON.stringify(data)}]
y = [d['y'] for d in ${JSON.stringify(data)}]
plt.scatter(x, y, alpha=0.6)
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Scatter Plot')
`;
            default:
                throw new Error(`Unsupported plot type: ${plotType}`);
        }
    }
}

A user can upload a CSV file, and the entire analysis—loading, cleaning, statistical testing, and generating a plot—happens locally in their browser. This is powerful for privacy, offline use, and reducing server costs. It turns the browser into a universal data analysis workstation.
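The browser side of that workflow needs only a little glue. A minimal CSV-to-records parser (naive on purpose: no quoted fields, no escaping) that could feed the analyzeDataset method above might look like this:

```javascript
// Parse CSV text into an array of {header: value} records,
// coercing numeric-looking cells to numbers
function parseCsv(text) {
    const [headerLine, ...rows] = text.trim().split('\n');
    const headers = headerLine.split(',').map(h => h.trim());
    return rows.map(row => {
        const cells = row.split(',');
        return Object.fromEntries(headers.map((h, i) => {
            const v = (cells[i] ?? '').trim();
            const n = Number(v);
            return [h, v !== '' && !Number.isNaN(n) ? n : v];
        }));
    });
}

// e.g. parseCsv("x,y\n1,2\n3,hello") -> [{x: 1, y: 2}, {x: 3, y: 'hello'}]
```

In a real application the file text would come from `await file.text()` on the uploaded File object before being handed to Python.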

Perhaps the most visible shift is in gaming. Unity ships WebAssembly output through its WebGL build target, and Unreal Engine has had HTML5/Wasm export (now community-maintained). This is not about simple canvas games. We are talking about rich 3D worlds with complex physics, advanced lighting, and detailed assets that you can play instantly by visiting a URL.

The engine code handling the 3D math, mesh rendering, and texture sampling is compiled from C++ to WebAssembly. It interacts with WebGL for graphics and the Gamepad API for controller support.

// Simplified game engine component compiled to WebAssembly
#include <emscripten.h>
#include <emscripten/html5.h>
#include <webgl/webgl2.h>
#include <cmath>
#include <vector>

class WebGLGameEngine {
private:
    GLuint program;
    GLuint vertexBuffer;
    GLuint indexBuffer;
    GLuint texture;

    int canvasWidth;
    int canvasHeight;

    float rotationAngle;

    // Helper routines (implementations omitted for brevity)
    GLuint compileShader(GLenum type, const char* source);
    void createTexture();
    static void identityMatrix(float* m);
    static void perspectiveMatrix(float* m, float fovY, float aspect, float nearZ, float farZ);
    static void translateMatrix(float* m, float x, float y, float z);
    static void rotateMatrix(float* m, float angle, float x, float y, float z);

public:
    WebGLGameEngine() : program(0), rotationAngle(0.0f) {}

    bool initialize() {
        // Initialize WebGL2 context
        EmscriptenWebGLContextAttributes attrs;
        emscripten_webgl_init_context_attributes(&attrs);
        attrs.alpha = false;
        attrs.depth = true;
        attrs.stencil = true;
        attrs.antialias = true;
        attrs.premultipliedAlpha = false;
        attrs.preserveDrawingBuffer = false;
        attrs.powerPreference = EM_WEBGL_POWER_PREFERENCE_HIGH_PERFORMANCE;
        attrs.failIfMajorPerformanceCaveat = false;
        attrs.majorVersion = 2;
        attrs.minorVersion = 0;

        EMSCRIPTEN_WEBGL_CONTEXT_HANDLE context = emscripten_webgl_create_context("#canvas", &attrs);
        if (context <= 0) {
            return false;
        }
        emscripten_webgl_make_context_current(context);

        // Set viewport
        emscripten_get_canvas_element_size("#canvas", &canvasWidth, &canvasHeight);
        glViewport(0, 0, canvasWidth, canvasHeight);

        // Create shader program
        program = createShaderProgram();
        if (!program) {
            return false;
        }

        // Create geometry
        createCubeGeometry();

        // Create texture
        createTexture();

        // Enable depth testing
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);

        // Enable backface culling
        glEnable(GL_CULL_FACE);
        glCullFace(GL_BACK);

        return true;
    }

    GLuint createShaderProgram() {
        const char* vertexShaderSource = R"(
            #version 300 es
            precision highp float;

            in vec3 aPosition;
            in vec2 aTexCoord;
            in vec3 aNormal;

            uniform mat4 uModelViewMatrix;
            uniform mat4 uProjectionMatrix;

            out vec2 vTexCoord;
            out vec3 vNormal;
            out vec3 vPosition;

            void main() {
                vTexCoord = aTexCoord;
                vNormal = mat3(uModelViewMatrix) * aNormal;
                vPosition = vec3(uModelViewMatrix * vec4(aPosition, 1.0));
                gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
            }
        )";

        const char* fragmentShaderSource = R"(
            #version 300 es
            precision highp float;

            in vec2 vTexCoord;
            in vec3 vNormal;
            in vec3 vPosition;

            uniform sampler2D uTexture;
            uniform vec3 uLightPosition;
            uniform vec3 uLightColor;

            out vec4 fragColor;

            void main() {
                // Sample texture
                vec4 texColor = texture(uTexture, vTexCoord);

                // Calculate lighting
                vec3 normal = normalize(vNormal);
                vec3 lightDir = normalize(uLightPosition - vPosition);
                float diff = max(dot(normal, lightDir), 0.0);
                vec3 diffuse = diff * uLightColor;

                // Ambient lighting
                vec3 ambient = vec3(0.1, 0.1, 0.1);

                // Combine
                vec3 finalColor = (ambient + diffuse) * texColor.rgb;
                fragColor = vec4(finalColor, texColor.a);
            }
        )";

        // Compile shaders
        GLuint vertexShader = compileShader(GL_VERTEX_SHADER, vertexShaderSource);
        GLuint fragmentShader = compileShader(GL_FRAGMENT_SHADER, fragmentShaderSource);

        if (!vertexShader || !fragmentShader) {
            return 0;
        }

        // Create program
        GLuint program = glCreateProgram();
        glAttachShader(program, vertexShader);
        glAttachShader(program, fragmentShader);
        glLinkProgram(program);

        // Check linking status
        GLint linked;
        glGetProgramiv(program, GL_LINK_STATUS, &linked);
        if (!linked) {
            GLchar infoLog[1024];
            glGetProgramInfoLog(program, 1024, NULL, infoLog);
            emscripten_log(EM_LOG_ERROR, "Shader linking failed: %s", infoLog);
            return 0;
        }

        glDeleteShader(vertexShader);
        glDeleteShader(fragmentShader);

        return program;
    }

    void createCubeGeometry() {
        // Cube vertices (position, texcoord, normal)
        float vertices[] = {
            // Front face
            -0.5f, -0.5f,  0.5f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f,
             0.5f, -0.5f,  0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
             0.5f,  0.5f,  0.5f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f,
            -0.5f,  0.5f,  0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,

            // Back face
            -0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f,
            -0.5f,  0.5f, -0.5f, 1.0f, 1.0f, 0.0f, 0.0f, -1.0f,
             0.5f,  0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, -1.0f,
             0.5f, -0.5f, -0.5f, 0.0f, 0.0f, 0.0f, 0.0f, -1.0f,
        };

        unsigned int indices[] = {
            // Front
            0, 1, 2, 2, 3, 0,
            // Back
            4, 5, 6, 6, 7, 4,
        };

        // Create vertex buffer
        glGenBuffers(1, &vertexBuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

        // Create index buffer
        glGenBuffers(1, &indexBuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    }

    void update(float deltaTime) {
        rotationAngle += deltaTime * 0.5f; // Rotate 30 degrees per second

        // Update canvas size if changed
        int newWidth, newHeight;
        emscripten_get_canvas_element_size("#canvas", &newWidth, &newHeight);
        if (newWidth != canvasWidth || newHeight != canvasHeight) {
            canvasWidth = newWidth;
            canvasHeight = newHeight;
            glViewport(0, 0, canvasWidth, canvasHeight);
        }
    }

    void render() {
        // Clear screen
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Use shader program
        glUseProgram(program);

        // Set up matrices
        float aspect = (float)canvasWidth / (float)canvasHeight;

        // Projection matrix
        float projection[16];
        perspectiveMatrix(projection, 45.0f * M_PI / 180.0f, aspect, 0.1f, 100.0f);

        // Model-view matrix
        float modelView[16];
        identityMatrix(modelView);

        // Translate and rotate
        translateMatrix(modelView, 0.0f, 0.0f, -3.0f);
        rotateMatrix(modelView, rotationAngle, 0.0f, 1.0f, 0.0f);
        rotateMatrix(modelView, rotationAngle * 0.7f, 1.0f, 0.0f, 0.0f);

        // Set uniforms
        GLint modelViewLoc = glGetUniformLocation(program, "uModelViewMatrix");
        GLint projectionLoc = glGetUniformLocation(program, "uProjectionMatrix");
        GLint lightPosLoc = glGetUniformLocation(program, "uLightPosition");
        GLint lightColorLoc = glGetUniformLocation(program, "uLightColor");

        glUniformMatrix4fv(modelViewLoc, 1, GL_FALSE, modelView);
        glUniformMatrix4fv(projectionLoc, 1, GL_FALSE, projection);
        glUniform3f(lightPosLoc, 2.0f, 2.0f, 2.0f);
        glUniform3f(lightColorLoc, 1.0f, 1.0f, 1.0f);

        // Set up vertex attributes
        glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);

        // Position attribute (texcoord/normal setup omitted for brevity)
        GLint posAttrib = glGetAttribLocation(program, "aPosition");
        glEnableVertexAttribArray(posAttrib);
        glVertexAttribPointer(posAttrib, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);

        // Draw
        glDrawElements(GL_TRIANGLES, 12, GL_UNSIGNED_INT, 0);

        // Disable vertex attributes
        glDisableVertexAttribArray(posAttrib);
    }
};

The JavaScript side simply loads the module and starts the game loop. The entire 3D world is rendered by the WebAssembly code.
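That loop is only a few lines. In the sketch below the export names (`Module._engineUpdate`, `Module._engineRender`) are hypothetical; a real Emscripten build would expose whatever functions you mark with `EMSCRIPTEN_KEEPALIVE` or bind with Embind. The clock and scheduler are injectable so the loop can be exercised outside a browser:

```javascript
// Drive an Emscripten-style engine: compute delta time, update, render, repeat
function startGameLoop(Module,
                       now = () => performance.now(),
                       raf = cb => requestAnimationFrame(cb)) {
    let last = now();
    const frame = () => {
        const t = now();
        Module._engineUpdate((t - last) / 1000); // delta time in seconds
        Module._engineRender();
        last = t;
        raf(frame); // schedule the next frame
    };
    raf(frame);
}
```

In the browser you would call `startGameLoop(Module)` after the Wasm module finishes loading; the defaults wire it to `performance.now` and `requestAnimationFrame`.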

What we are witnessing is the emergence of a new software category. These are not web pages, nor are they traditional native apps. They exist in a hybrid space. They have the instant access, linkability, and cross-platform nature of the web. They also have the performance, capability, and richness of desktop software.

This changes how we think about software distribution. There is no installation wizard. No worrying about Windows, Mac, or Linux versions. The latest version is always the one you run. Your work and data can live in the cloud or on your device. The security model of the browser sandbox remains, even while the application does work that once required full system access.

For developers, it means you can use the right tool for the job. You are not forced to write everything in JavaScript. You can build the performance-critical core of your application in Rust for safety and speed, or in C++ to reuse vast existing libraries. Then you wrap it in a thin JavaScript layer to handle the DOM, user input, and web APIs.

The expansion of WebAssembly is not just about doing old things faster. It is about doing entirely new things that were previously off-limits for the web. It is turning the browser from a document viewer into a universal application runtime. The line between what is a "web app" and a "desktop app" is blurring, and a new, more capable category of software is taking its place.
