
Gabriel Amarantes

C# Parallelism - A Quick Overview

Have you ever wondered how your PC can run so many apps at the same time? Or why your application freezes while processing something intensive?

In this article, we'll explore the concepts of Parallelism and Concurrency, specifically in C#. What are they? How are they different? And most importantly, how can we write parallel code to make our applications faster?

Let's get started!

What is Concurrency?

Concurrency means managing multiple tasks over the same period, but not necessarily executing them at the exact same instant.

Imagine a restaurant with just one chef. To be efficient, the chef starts boiling water for pasta. While the water is heating up (a waiting period), they start chopping vegetables for a salad. The chef is making progress on both tasks by switching between them. This is concurrency.

In computing, this is what happens on a single-core CPU. The CPU performs context switching: it runs a small piece of one task, then quickly switches to another. It does this so fast that it creates the illusion of tasks running at the same time.

What is Parallelism?

Parallelism is when multiple tasks are executed at the exact same time, each on its own CPU core.

Let's return to our restaurant. Now, it has two cooks. Cook 1 can make the pasta while Cook 2 makes the salad, simultaneously. They are working in parallel. This approach truly saves time because more work gets done in the same period.

Parallel programming uses this exact approach. Instead of running tasks one after another, we execute them simultaneously to maximize the use of our hardware. For example, an application processing video from multiple cameras can handle each feed at the same time.

True parallelism is only possible when you have more than one CPU core (or in our analogy, more than one cook!).

Threads

A thread is the smallest unit of execution within a process (a process is your running program, like an .exe). Every C# application starts with a main thread, but we can create additional threads (called worker threads) to run tasks in parallel or concurrently.

To create a thread manually, you instantiate the System.Threading.Thread class and pass it the method you want to execute. Then, you call the Start() method to begin execution.

How to create and start a Thread
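
A minimal sketch of what that can look like, where PrintNumbers is just a placeholder for whatever work you want to run on the worker thread:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Create the thread and pass it the method it should execute.
        var worker = new Thread(PrintNumbers);

        // Start() begins execution on the new worker thread,
        // while the main thread keeps running.
        worker.Start();

        Console.WriteLine("Main thread is free to do other work...");

        // Wait for the worker thread to finish before exiting.
        worker.Join();
    }

    static void PrintNumbers()
    {
        for (int i = 1; i <= 5; i++)
        {
            Console.WriteLine($"Worker thread: {i}");
            Thread.Sleep(100); // simulate a bit of work
        }
    }
}
```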

You can request that a thread stop by passing it a CancellationToken, and you can pause the current thread with Thread.Sleep. However, manually creating and destroying threads is expensive: each thread consumes significant memory and CPU time. Because of this cost and complexity, be careful when using the Thread class directly.
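
Cancellation is cooperative: the worker periodically checks the token and exits once cancellation has been requested. A rough sketch, with DoWork standing in for real work:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        // Pass the token to the worker so it can observe cancellation.
        var worker = new Thread(() => DoWork(cts.Token));
        worker.Start();

        Thread.Sleep(500); // let the worker run for a while
        cts.Cancel();      // request the worker to stop
        worker.Join();
    }

    static void DoWork(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            Console.WriteLine("Working...");
            Thread.Sleep(100); // Thread.Sleep pauses the *current* thread
        }
        Console.WriteLine("Cancellation requested; stopping cleanly.");
    }
}
```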

Task Parallel Library (TPL)

Fortunately, .NET provides a much better way to write parallel code without managing threads manually: the Task Parallel Library (TPL). It handles the complex work of dividing tasks among CPU cores for you, dynamically scaling the degree of concurrency to use all available processors as efficiently as possible.

Under the hood, the TPL uses the ThreadPool, which is a pool of pre-created worker threads ready to execute work. This avoids the high cost of creating and destroying threads for every small task.
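
For example, here is a minimal sketch of handing work to the TPL with Task.Run, which queues the delegate to a ThreadPool thread instead of creating a new Thread yourself:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Task.Run queues the delegate to a ThreadPool worker thread;
        // no manual thread creation or teardown is involved.
        Task<long> computation = Task.Run(() =>
        {
            long sum = 0;
            for (int i = 1; i <= 1_000_000; i++) sum += i;
            return sum;
        });

        Console.WriteLine("Main thread stays responsive...");
        Console.WriteLine($"Result: {await computation}");
    }
}
```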

TPL Examples

Scenario 1: Processing a Collection

Let’s look at a common scenario: processing a large collection of items. Imagine we have a method that performs a CPU-intensive operation, like converting thousands of videos.

Sequential code

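A rough sketch of the sequential version, where the videos list and the body of ConvertToMp4 are just placeholders for your own CPU-intensive work:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

var videos = new List<string> { "intro.avi", "lesson1.avi", "lesson2.avi" };

// Sequential: each video is converted one after another,
// so only one CPU core is doing useful work at a time.
foreach (var video in videos)
{
    ConvertToMp4(video);
}

// Placeholder for a CPU-intensive conversion.
static void ConvertToMp4(string video)
{
    Console.WriteLine($"Converting {video}...");
    Thread.Sleep(1000); // simulate heavy work
}
```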

Parallel code

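And a rough sketch of the parallel version, reusing the same assumed videos list and ConvertToMp4 method; only the loop changes:

```csharp
// Parallel: the TPL partitions the list and runs ConvertToMp4
// on multiple ThreadPool threads / CPU cores at the same time.
// (Parallel.ForEach lives in System.Threading.Tasks.)
Parallel.ForEach(videos, video =>
{
    ConvertToMp4(video);
});
```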

By simply switching from foreach to Parallel.ForEach, we tell the TPL to run the ConvertToMp4 operation on multiple CPU cores at the same time. The TPL handles the partitioning and scheduling automatically, and for CPU-bound work like this the result is usually a significant performance boost.

Scenario 2: Running Independent Operations

Now, let's look at another scenario: running two independent operations concurrently.

Sequential Code

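A rough sketch of the sequential version, where GetMkvVideos and GetMp4Videos are assumed placeholders for I/O-bound calls such as database queries or API requests:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Sequential: GetMp4Videos() does not even start
// until GetMkvVideos() has completely finished.
var mkvVideos = await GetMkvVideos();
var mp4Videos = await GetMp4Videos();

Console.WriteLine($"Found {mkvVideos.Count} MKV and {mp4Videos.Count} MP4 videos.");

// Placeholders simulating I/O-bound work (database, API, network share...).
static async Task<List<string>> GetMkvVideos()
{
    await Task.Delay(1000); // simulate network latency
    return new List<string> { "a.mkv", "b.mkv" };
}

static async Task<List<string>> GetMp4Videos()
{
    await Task.Delay(1000);
    return new List<string> { "c.mp4" };
}
```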

Parallel Code

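And a rough sketch of the concurrent version, reusing the same assumed methods; both calls start before we await either of them:

```csharp
// Concurrent: both calls start immediately, so their waiting overlaps.
var mkvTask = GetMkvVideos();
var mp4Task = GetMp4Videos();

// Wait until both tasks have completed.
await Task.WhenAll(mkvTask, mp4Task);

var mkvVideos = await mkvTask; // already complete, returns instantly
var mp4Videos = await mp4Task;
```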

In the sequential code, we wait for GetMkvVideos() to finish completely before we even start GetMp4Videos().

By using Task.WhenAll, we start both operations at the same time and wait for them both to complete. This is ideal for asynchronous I/O operations (like database queries or API calls). While one operation is waiting for a response from the network, the other can be running. This overlaps the waiting periods, drastically reducing the total time and making better use of resources.

Conclusion

  • Use Parallel for CPU-Bound Work. Is your code doing heavy calculations, image processing, or complex transformations? Parallel.ForEach is an excellent choice.
  • Use async/await for I/O-Bound Work. Is your code waiting for a database, an API call, or a file on the network? async/await with Task.WhenAll is the right tool, because it prevents threads from being blocked while waiting.
  • Always Prefer the TPL over Thread. The Task Parallel Library (Task, Parallel.ForEach) is safer, more efficient, and easier to use than creating threads manually. Avoid new Thread() unless you have a very specific reason.
  • Measure, Don’t Guess! Parallelism has a small overhead, so for very small or very fast loops it can actually be slower than a sequential loop. Always use a Stopwatch to measure your code and confirm you’re getting a real performance benefit (see the sketch below).
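
Here is a hedged sketch of that kind of measurement; DoWork is deliberately tiny, which is precisely the situation where the parallel version may lose to the sequential one:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

var numbers = Enumerable.Range(1, 1_000_000).ToArray();

var sw = Stopwatch.StartNew();
foreach (var n in numbers) DoWork(n);
sw.Stop();
Console.WriteLine($"Sequential: {sw.ElapsedMilliseconds} ms");

sw.Restart();
Parallel.ForEach(numbers, DoWork);
sw.Stop();
Console.WriteLine($"Parallel:   {sw.ElapsedMilliseconds} ms");

// Placeholder: with work this small, parallel overhead can dominate.
static void DoWork(int n)
{
    _ = Math.Sqrt(n);
}
```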

Parallelism is a powerful tool in the modern C# developer’s toolkit. By understanding the basics of the TPL, you can unlock the full potential of today’s multi-core processors and build faster, more responsive applications.

References

For more information, check out the official Microsoft documentation.
