Seven
My journey with code. Today threads: Threaded vs Evented

This is a record of my journey with Zig, threads, and the tests I ran to understand the new std.Io. It's personal, not a tutorial. It's my chalkboard.


Why Zig?

I first heard about Zig because of Bun. People said it was the fastest JavaScript runtime on the market. At first I didn't pay much attention; it felt above my level. But it kept showing up everywhere at the top of the benchmarks. Why so fast? Because it's written in Zig.

That name caught my curiosity. I had to test Zig.

Now I'm here, fully into Zig. I'm building my own game engine and my own web framework. But Zig changes a lot, and that's confusing. I use the master (development) version to get the newest features, but I don't recommend that!

The new std.Io is very different. That's why I started my tests.


The Problem: Threaded vs Evented

In the new Zig std.Io, we have two main ways to handle concurrency:

  • std.Io.Threaded: Uses a thread pool. Good for CPU-bound tasks (calculations, physics, particles).
  • std.Io.Evented: Uses io_uring on Linux. Good for I/O-bound tasks (network, files, databases).

But which one should I use? I needed to test it myself.


My Tests

I created several benchmarks to understand the difference. All code is available on my GitHub.

Test 1: CPU-Bound - 1 Million Particles

I simulated 1 million particles with 10 iterations:

// particle_benchmark.zig
// Integrate each particle's position and apply light damping.
fn computeAllParticles(particles: []Particle, dt: f64) void {
    for (particles) |*p| {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.vx *= 0.999;
        p.vy *= 0.999;
    }
}
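For the 4-thread run, the idea is to split the particle slice into chunks and hand each chunk to a group as its own task. This is a rough sketch, not the exact benchmark code: `runThreaded` and `computeChunk` are names I made up for illustration, and it reuses only the `std.Io.Group` calls from my quick reference further down, which may change between master versions.

```zig
const std = @import("std");

// Hypothetical helper: run the per-particle update on one chunk.
fn computeChunk(chunk: []Particle, dt: f64) void {
    computeAllParticles(chunk, dt);
}

// Hypothetical driver: one async task per chunk, then wait for all of them.
fn runThreaded(io: std.Io, particles: []Particle, dt: f64) void {
    const n_tasks = 4; // matches the 4-thread benchmark
    const chunk_len = particles.len / n_tasks;

    var group: std.Io.Group = .init;
    var i: usize = 0;
    while (i < n_tasks) : (i += 1) {
        const start = i * chunk_len;
        // The last chunk also takes the leftover particles.
        const end = if (i == n_tasks - 1) particles.len else start + chunk_len;
        group.async(io, computeChunk, .{ particles[start..end], dt });
    }
    group.await(io) catch {};
}
```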

Results:

Type                   Time
Single-threaded        43 ms
Threaded (4 threads)   30 ms (1.4x faster)
Evented                104 ms (slower!)

For CPU-bound work, Threaded is the choice.

Test 2: Network I/O - 100 HTTP Requests

I simulated 100 HTTP requests with a 5 ms delay each:

// network_benchmark.zig
// Block for ~5 ms to simulate one network round-trip.
fn simulateNetworkRequest(id: usize) TaskResult {
    var ts: std.posix.timespec = .{ .sec = 0, .nsec = 5_000_000 };
    _ = std.posix.system.nanosleep(&ts, &ts);
    return .{ .id = id, .status = "ok" };
}
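The concurrent run fans all 100 requests out through a single group and then waits once. A minimal sketch, again reusing only the Group calls from my quick reference below; `requestTask` and `runConcurrent` are illustrative names, not the real benchmark code.

```zig
const std = @import("std");

// Hypothetical wrapper so the task returns void for the group
// (the benchmark collects TaskResult values separately).
fn requestTask(id: usize) void {
    _ = simulateNetworkRequest(id);
}

// Hypothetical driver: queue all 100 requests, then wait for the whole group.
fn runConcurrent(io: std.Io) void {
    var group: std.Io.Group = .init;
    var id: usize = 0;
    while (id < 100) : (id += 1) {
        group.async(io, requestTask, .{id});
    }
    group.await(io) catch {};
}
```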

Results:

Type                          Time
Sequential                    503 ms
Concurrent (Evented + Group)  261 ms

For I/O-bound work, Evented with Group is better.

Test 3: Memory Usage

Both use about the same memory (~38 MB for 1 million particles); the only difference is the thread-stack overhead.


What I Learned

  1. CPU-bound tasks → Use Threaded
  2. I/O-bound tasks → Use Evented
  3. The same code works with both - just change the Io implementation

The key is simple:

"Does my task wait for I/O? Use Evented. Does my task do CPU work? Use Threaded."
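The nice part of point 3 is that the driver code never mentions the implementation. Here is a sketch of what that looks like, built only from the calls in my quick reference below; the allocator setup is the usual boilerplate and may differ on your master build, and the whole std.Io API is still a moving target.

```zig
const std = @import("std");

fn work(id: usize) void {
    std.debug.print("task {d}\n", .{id});
}

pub fn main() !void {
    var gpa: std.heap.GeneralPurposeAllocator(.{}) = .{};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Swap this block for std.Io.Evented and nothing below it changes.
    var threaded = std.Io.Threaded.init(allocator, .{
        .async_limit = std.Io.Limit.limited(4),
    });
    defer threaded.deinit();
    const io = threaded.io();

    // Everything from here on only sees the generic std.Io interface.
    var group: std.Io.Group = .init;
    group.async(io, work, .{@as(usize, 1)});
    group.async(io, work, .{@as(usize, 2)});
    group.await(io) catch {};
}
```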


API Quick Reference

Threaded (for CPU-bound)

var threaded = std.Io.Threaded.init(allocator, .{
    .async_limit = std.Io.Limit.limited(4),
});
defer threaded.deinit();
const io = threaded.io();

var group: std.Io.Group = .init;
group.async(io, myFunction, .{args});
group.await(io) catch {};

Evented (for I/O-bound)

var evented: std.Io.Evented = undefined;
try evented.init(allocator, .{ .thread_limit = 4 });
defer evented.deinit();
const io = evented.io();

var group: std.Io.Group = .init;
group.async(io, myFunction, .{args});
group.await(io) catch {};

Code Repository

All benchmark code available on GitHub.
github.com/llllOllOOll


This article is a personal record of my learning journey with Zig and low-level programming. It's not a tutorial - it's my chalkboard.
