Hello, I’m Jairo 👋
The dev.to writer you love the most.
(Ok… maybe not yet 😅 but let me believe that for a second.)
First of all, thank you for taking the time to read this.
I hope you learn something new — or at least enjoy the journey.
🚀 Why I built a Thread Pool
This week I was looking for projects to improve my software engineering skills — not just frameworks or tools, but the fundamentals.
Lately, I’ve been learning Rust as my “low-level playground” language, building things closer to the system level. My goal is to understand what happens under the hood.
So I started exploring ideas like:
- building a Redis-like database from scratch
- understanding memory management
- working with threads and concurrency
And that’s when I decided:
“Why not build my own thread pool?”
🧠 Why a Thread Pool?
When learning a new language, we usually focus on things like:
- strings
- collections
- classes / structs
But one thing is often overlooked:
👉 how concurrency actually works
A thread pool solves a very real problem:
❌ One thread per task → high memory usage
✅ Fixed workers → controlled concurrency
Instead of creating a new thread for every job, we:
- create a limited number of workers
- reuse them
- keep them alive
- let them consume tasks from a queue
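To make that idea concrete before diving into the custom queue below, here's a minimal sketch of the same pattern using the standard library's `mpsc` channel (the `run_pool` name and the 4-worker count are just for illustration, not from my repo):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Doubles each input on a fixed pool of 4 workers, returns sorted results.
fn run_pool(inputs: Vec<i32>) -> Vec<i32> {
    let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() -> i32 + Send>>();
    let rx = Arc::new(Mutex::new(rx)); // the receiver is shared by all workers
    let (result_tx, result_rx) = mpsc::channel();

    // A fixed number of workers, created once and reused for every job.
    for _ in 0..4 {
        let rx = Arc::clone(&rx);
        let result_tx = result_tx.clone();
        thread::spawn(move || loop {
            // Lock only long enough to take one job off the channel.
            let job = rx.lock().unwrap().recv();
            match job {
                Ok(job) => result_tx.send(job()).unwrap(),
                Err(_) => break, // sender dropped: no more jobs, worker exits
            }
        });
    }
    drop(result_tx);

    for i in inputs {
        tx.send(Box::new(move || i * 2)).unwrap();
    }
    drop(tx); // closing the channel lets idle workers stop

    // Completion order varies between runs, so sort for a stable result.
    let mut results: Vec<i32> = result_rx.iter().collect();
    results.sort();
    results
}

fn main() {
    println!("{:?}", run_pool(vec![1, 2, 3, 4]));
}
```

Same shape as the diagram below: main pushes jobs, a small fixed set of workers drains them, results come back on a second channel.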
⚙️ How I built it
The idea is simple:
main → pushes jobs → shared queue → workers → results queue
Workers:
- stay alive forever
- sleep when there’s no work
- wake up when a new job arrives
- execute the job
- store the result
🧩 Core Architecture
```
Queue (Arc + Mutex + Condvar)
        ↓
   Workers (threads)
        ↓
   Result Queue
```
🧪 Real code example
Here’s the core of the worker loop:
```rust
loop {
    let mut process = shared_queue.get_process();
    let response = process.run();
    result_queue.push(Result::new(response, process.get_id()));
}
```
Workers:
- continuously fetch jobs
- execute them
- store results
Shared Queue (the heart of everything)
```rust
pub struct Queue {
    processes: Arc<(Mutex<VecDeque<Process>>, Condvar)>,
}
```
Why this?
- `Arc` → shared across threads
- `Mutex` → safe mutation
- `VecDeque` → efficient FIFO queue
- `Condvar` → sleep/wake mechanism
Adding a job
```rust
pub fn add(&self, process: Process) {
    let (processes, condvar) = &*self.processes;
    processes.lock().unwrap().push_back(process);
    condvar.notify_one();
}
```
Getting a job (blocking wait)
```rust
pub fn get_process(&self) -> Process {
    let (lock, condvar) = &*self.processes;
    let mut processes = lock.lock().unwrap();
    while processes.is_empty() {
        processes = condvar.wait(processes).unwrap();
    }
    processes.pop_front().unwrap()
}
```
👉 This is where the magic happens:
- if there are no jobs → the thread sleeps
- when a job arrives → the thread wakes up
- the `while` loop re-checks the condition after every wakeup, which also protects against spurious wakeups
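Putting `add` and `get_process` together, here's a compilable, simplified version of the same pattern (plain `i32` jobs stand in for my `Process` type, and `demo` is just an illustrative name):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Simplified stand-in: i32 jobs instead of the post's Process type.
#[derive(Clone)]
struct Queue {
    inner: Arc<(Mutex<VecDeque<i32>>, Condvar)>,
}

impl Queue {
    fn new() -> Self {
        Queue {
            inner: Arc::new((Mutex::new(VecDeque::new()), Condvar::new())),
        }
    }

    fn add(&self, job: i32) {
        let (lock, condvar) = &*self.inner;
        lock.lock().unwrap().push_back(job);
        condvar.notify_one(); // wake one sleeping worker
    }

    fn get(&self) -> i32 {
        let (lock, condvar) = &*self.inner;
        let mut jobs = lock.lock().unwrap();
        // Re-check on every wakeup; a spurious wakeup with an
        // empty queue simply goes back to sleep.
        while jobs.is_empty() {
            jobs = condvar.wait(jobs).unwrap();
        }
        jobs.pop_front().unwrap()
    }
}

fn demo() -> i32 {
    let queue = Queue::new();
    let worker_queue = queue.clone(); // new handle, same underlying queue
    // The worker blocks inside get() until a job is added.
    let worker = thread::spawn(move || worker_queue.get() * 2);
    queue.add(21);
    worker.join().unwrap()
}

fn main() {
    println!("{}", demo()); // 42
}
```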
🧠 Arc + Mutex (the big lesson)
This was the hardest part for me.
- `Arc` → allows multiple threads to own the same data
- `Mutex` → ensures only one thread mutates it at a time
Together:
```rust
Arc<Mutex<VecDeque<Process>>>
```
This gives you:
- shared ownership
- safe mutation
- no race conditions
Important detail
```rust
let queue_clone = queue.clone();
```
This does NOT clone the data.
It only bumps the reference count and creates a new pointer to the same memory.
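A quick way to see this (a standalone sketch, not code from the repo):

```rust
use std::sync::Arc;

// Cloning an Arc bumps the refcount; it does not copy the Vec.
fn count_after_clone() -> usize {
    let data = Arc::new(vec![1, 2, 3]);
    let pointer_copy = Arc::clone(&data);
    // Both handles point at the exact same heap allocation.
    assert!(std::ptr::eq(Arc::as_ptr(&data), Arc::as_ptr(&pointer_copy)));
    Arc::strong_count(&data)
}

fn main() {
    println!("strong_count = {}", count_after_clone()); // 2
}
```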
Mutex lock
```rust
let guard = mutex.lock().unwrap();
```
Means:
“I want exclusive access to this data”
If another thread is using it:
👉 you wait.
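A classic demonstration of why this matters (my own sketch): four threads hammer one counter, and because every increment happens under the lock, none of the updates are lost:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Four threads each add 1000; the Mutex makes every += exclusive.
fn locked_increments() -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    // lock() blocks while another thread holds the guard
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", locked_increments()); // 4000, never less
}
```

Without the `Mutex`, Rust wouldn't even let this compile: sharing mutable state across threads requires it. That's the "no race conditions" guarantee in action.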
😴 Condvar (sleeping workers)
Without it, your workers would busy-loop forever, burning CPU even when there's no work.
With Condvar:
- no job → sleep
- new job → wake up
This makes your system:
- efficient
- responsive
- production-like
💡 What I learned
At some point I realized:
I wasn’t just learning Rust…
I was rebuilding what a dispatcher does internally.
Things became much clearer:
- how thread pools work
- how coroutines are implemented under the hood
- how shared memory works
- why synchronization matters
🤖 Using AI as a teacher (not a code generator)
One important thing about this project: I didn’t use ChatGPT to generate the solution for me.
Instead, I used it as a teacher.
Whenever I got stuck, I asked things like:
- “Why does this break?”
- “What is the responsibility of this component?”
- “Am I modeling this correctly?”
And then I implemented everything myself.
That made a huge difference.
Because instead of just copying code, I was forced to:
- understand the concepts
- reason about concurrency
- fix my own mistakes
And honestly, that’s where the real learning happens.
📦 The project
You can check the full implementation here:
👉 https://github.com/jairo-dev-junior/thread-pool
🎯 Final thoughts
This project was one of those “click moments”.
Rust forces you to think about:
- ownership
- memory
- concurrency
- synchronization
No magic. Just control.
And yeah… I still need to implement a clean shutdown 😅
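For the record, one common approach to that (a hypothetical sketch, not code from my repo — this is the sentinel-message pattern from The Rust Programming Language book's thread-pool chapter): queue one `Terminate` message per worker after the real jobs, so each worker exits its loop cleanly:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Hypothetical sketch: a sentinel message tells each worker to exit.
enum Message {
    Job(i32),
    Terminate,
}

fn run() -> i32 {
    let queue: Arc<(Mutex<VecDeque<Message>>, Condvar)> =
        Arc::new((Mutex::new(VecDeque::new()), Condvar::new()));

    // Two workers; each returns how many jobs it processed.
    let workers: Vec<_> = (0..2)
        .map(|_| {
            let queue = Arc::clone(&queue);
            thread::spawn(move || {
                let mut done = 0;
                loop {
                    let (lock, condvar) = &*queue;
                    let mut q = lock.lock().unwrap();
                    while q.is_empty() {
                        q = condvar.wait(q).unwrap();
                    }
                    match q.pop_front().unwrap() {
                        Message::Job(_) => done += 1,
                        Message::Terminate => return done, // clean exit
                    }
                }
            })
        })
        .collect();

    {
        let (lock, condvar) = &*queue;
        let mut q = lock.lock().unwrap();
        for i in 0..6 {
            q.push_back(Message::Job(i));
        }
        // One Terminate per worker, queued after every real job.
        for _ in 0..2 {
            q.push_back(Message::Terminate);
        }
        condvar.notify_all();
    }

    // Joining proves every worker actually stopped.
    workers.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    println!("{} jobs processed before shutdown", run());
}
```

Since a worker returns as soon as it pops a `Terminate`, each sentinel stops exactly one worker, and joining the handles guarantees all threads are gone before the pool is dropped.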
Thanks for reading until here 🙏
See you in the next article 🚀