Lessons in Multiprocessor Programming: An Introduction
For a long time, if you wanted your software to run faster, you just waited for a faster computer. Hardware designers would crank up the clock speed with each new generation, and your code would magically perform better without you changing a single line.
But that free lunch is over. Why? Because raising a processor's clock speed also raises its power draw and heat output. Past a point, the chip becomes unstable or its lifespan shortens unless voltage and cooling are managed very carefully, and manufacturers have hit that wall.
While we are still fitting more transistors onto chips (keeping up with Moore's Law), we've hit physical limits on how fast we can drive them without melting the board. Instead of making a single processor faster, manufacturers are now putting multiple processors (cores) on the same chip.
To take advantage of this hardware, we can't just write code the old way. We need to write concurrent programs, and—when we have multiple cores available—run them in parallel so they truly execute at the same time.
Concurrency vs. Parallelism (quick primer)
- Concurrency is about correctly managing overlapping work (threads, coroutines, async tasks, distributed nodes). Tasks may or may not run at the same instant, but they do interact and interleave.
- Parallelism is about using multiple hardware units at once (multiple cores, SIMD units, GPUs) to make overlapping work truly simultaneous.
Safety and Liveness are concurrency properties: they make sure interacting tasks stay correct and keep making progress. They matter even on a single core (where tasks interleave) and still matter when you scale out to many cores (parallelism).
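To make the distinction concrete, here is a minimal Java sketch (the class and task names are mine, purely for illustration). We structure the work as four independent tasks (concurrency) and hand them to a thread pool sized to the machine's cores, which may run them simultaneously (parallelism):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrencyVsParallelism {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Hardware threads available: " + cores);

        // Concurrency: the work is structured as independent, overlapping tasks.
        // Parallelism: with more than one core, the pool can run them at the same instant.
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < 4; i++) {
            int taskId = i;
            pool.submit(() -> System.out.println(
                "Task " + taskId + " running on " + Thread.currentThread().getName()));
        }
        pool.shutdown(); // stop accepting new tasks; queued ones still complete
    }
}
```

On a single-core machine the same program is still concurrent (the tasks interleave); it just isn't parallel.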
In this series, I will share the techniques I have learned, and am still learning, on the way to mastering this science and art. We will explore protocols, algorithms, and the mindset required to solve problems in a parallel world.
The Two Guiding Lights
Designing concurrent and parallel algorithms is tricky. When multiple threads try to talk to each other or access the same data, chaos can ensue. To keep us on track, we rely on two fundamental principles:
- The Safety Principle
- The Liveness Principle
1. Safety: "Nothing bad happens"
The Safety principle guarantees that the program doesn't break the rules or enter an invalid state.
- Real-world analogy: Two cars will never crash into each other at an intersection.
- In code: Two threads will never modify the same variable at the exact same time and corrupt the data.
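Here is a minimal Java sketch of that safety violation (the names are mine): two threads bump a shared counter with no coordination, and because `counter++` is really a read-modify-write, updates get lost.

```java
public class LostUpdateDemo {
    static int counter = 0; // shared mutable state, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read, add one, write back: another thread can sneak in between
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();

        // Safety violated: this usually prints less than the expected 200000.
        System.out.println("counter = " + counter);
    }
}
```

Wrapping the increment in a `synchronized` block (or using `AtomicInteger`) restores safety.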
2. Liveness: "Something good eventually happens"
The Liveness principle guarantees that the program keeps making progress.
- Real-world analogy: The traffic light will eventually turn green, and cars will move.
- In code: A thread waiting for a database connection will eventually get one and finish its job.
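As a toy sketch of that scenario (a `BlockingQueue` standing in for the connection pool; the names are mine): three workers share one connection, each blocks until it is available, and every one of them eventually finishes.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ConnectionPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A toy "pool" holding a single connection token.
        BlockingQueue<String> pool = new ArrayBlockingQueue<>(1);
        pool.put("connection-1");

        Runnable worker = () -> {
            try {
                String conn = pool.take(); // blocks while another worker holds it...
                System.out.println(Thread.currentThread().getName() + " got " + conn);
                pool.put(conn);            // ...and returning it lets the next waiter proceed
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 0; i < 3; i++) {
            new Thread(worker, "worker-" + i).start();
        }
        // Liveness: every worker eventually acquires the connection and finishes.
    }
}
```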
Defining Correctness
How do we know if our concurrent/parallel code is actually "correct"? We measure it against three specific qualities:
1. Mutual Exclusion
This is the most basic safety property. It ensures that only one thread can execute a critical piece of code (like writing to a file or updating a bank balance) at a time. If one thread is busy, others must wait.
Illustration: The Restroom Key
Imagine a coffee shop with a single restroom and one key. Only the person holding the key can enter. Everyone else must wait until the key is returned.
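In Java, a lock plays the role of that key. A minimal sketch (class and method names are mine):

```java
import java.util.concurrent.locks.ReentrantLock;

public class RestroomKeyDemo {
    private static final ReentrantLock key = new ReentrantLock(); // the single key

    static void useRestroom(String who) {
        key.lock();       // take the key, waiting if someone else holds it
        try {
            // Critical section: at most one thread is ever in here.
            System.out.println(who + " is inside");
        } finally {
            key.unlock(); // always hang the key back up, even if something goes wrong
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            String who = "customer-" + i;
            new Thread(() -> useRestroom(who)).start();
        }
    }
}
```

The `try/finally` matters: if the thread holding the key never returned it, everyone else would wait forever, which brings us to the next property.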
2. Freedom from Deadlock
Deadlock is a situation where everyone is waiting for someone else, and no one can move. It is a violation of the Liveness principle.
Illustration: The Standoff
Imagine Person A has a Hammer but needs a Nail. Person B has a Nail but needs a Hammer. If neither person is willing to give up what they have until they get what they want, they will wait forever.
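Translated into Java (a deliberately broken sketch; the names are mine): two threads grab two locks in opposite orders, and the pause just widens the window so the standoff occurs reliably.

```java
public class StandoffDemo {
    static final Object hammer = new Object();
    static final Object nail = new Object();

    public static void main(String[] args) {
        // Person A: takes the hammer, then wants the nail.
        new Thread(() -> {
            synchronized (hammer) {
                pause(100); // give B time to grab the nail
                synchronized (nail) { System.out.println("A has both"); }
            }
        }).start();

        // Person B: takes the nail, then wants the hammer -- the opposite order.
        new Thread(() -> {
            synchronized (nail) {
                pause(100);
                synchronized (hammer) { System.out.println("B has both"); }
            }
        }).start();
        // Neither line ever prints: each thread waits forever for the other's lock.
    }

    static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

The classic cure is a global lock ordering: if everyone agrees to always pick up the hammer before the nail, the circular wait can never form.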
3. Freedom from Starvation
Starvation happens when a system is running and technically "making progress," but some specific threads never get a turn.
Illustration: The Busy Intersection
Imagine a busy highway intersecting with a small side street. If the traffic light logic always prioritizes the highway because it has more cars, the single car on the side street might wait for hours. The intersection is working (cars are moving), but the side street car is starving.
Freedom from starvation ensures fairness: eventually, every thread gets a slice of the resources.
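One standard countermeasure in Java is a fair lock. In this sketch (the thread names are mine), passing `true` to `ReentrantLock` makes waiting threads acquire it in roughly FIFO order, so the side-street car cannot be passed over indefinitely:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairIntersectionDemo {
    public static void main(String[] args) {
        // true = fair mode: the longest-waiting thread gets the lock next.
        ReentrantLock intersection = new ReentrantLock(true);

        Runnable car = () -> {
            for (int i = 0; i < 3; i++) {
                intersection.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " crosses");
                } finally {
                    intersection.unlock();
                }
            }
        };
        new Thread(car, "highway-car").start();
        new Thread(car, "side-street-car").start();
    }
}
```

Fairness isn't free: the default (unfair) mode usually has higher throughput, because a running thread can reacquire the lock without a handoff. That trade-off is exactly the highway-versus-side-street tension from the analogy.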
What's Next?
Understanding these definitions is the first step. In the next post, we will look at actual code examples, starting with the classic Dining Philosophers Problem, to see how we can implement these principles in Java.
References & Further Reading
- The Free Lunch Is Over by Herb Sutter. A classic article explaining why the era of free performance gains for single-threaded applications has ended and why concurrency is the future.
- The Art of Multiprocessor Programming by Maurice Herlihy and Nir Shavit. An excellent textbook covering the principles and algorithms of multiprocessor programming in depth.
- Proving the Correctness of Multiprocess Programs by Leslie Lamport. The seminal paper that introduced the concepts of Safety and Liveness in the context of concurrent systems.
About the Author
Deji Adeoti is a Technical Lead and Engineering Manager with a passion for building scalable software systems. He enjoys breaking down complex concepts like concurrency and distributed systems for fellow engineers. This series documents his deep dive into the art of multiprocessor programming.

