Multitasking and Multithreading: Core Concepts

Luca Sepe ・ 3 min read


Multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time.

New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory.

Context switching

At any given time, a processor (CPU) is executing in a specific context.

This context is made up of the contents of its registers and the memory (including stack, data, and code) that it is addressing.

When the processor needs to switch to a different task, it must save its current context (so it can later restore the context and continue execution where it left off) and switch to the context of the new task.

This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking).
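Cooperative multitasking can be sketched in a few lines. The following is a toy illustration in Python (an assumption; the article names no language): each task is a generator, `yield` is the voluntary "I can be interrupted now" signal, and a round-robin loop plays the role of the supervisory software.

```python
from collections import deque

trace = []  # record the interleaving so we can observe it

def task(name, steps):
    # Each task is a generator: `yield` voluntarily gives up the processor.
    for i in range(steps):
        trace.append(f"{name}{i}")
        yield

def scheduler(tasks):
    # A toy round-robin scheduler: resume each task in turn until all finish.
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)          # restore the task's context and resume it
            queue.append(t)  # re-queue it for another turn
        except StopIteration:
            pass             # task finished; drop it

scheduler([task("A", 2), task("B", 2)])
print(trace)  # steps interleave: ['A0', 'B0', 'A1', 'B1']
```

Note that if one task never yields, no other task ever runs again; that is exactly the fragility of cooperative multitasking, and why preemptive schedulers take control by force.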

Cooperative vs Preemptive (multitasking)

| Cooperative | Preemptive |
| --- | --- |
| Use of the processor is never taken from a task; a task must voluntarily yield control of the processor before any other task can run. | The OS can take control of the processor without the task's cooperation (a task can also give it up voluntarily, as in non-preemptive multitasking). Having control taken from a task is called *preemption*. |
| All programs must cooperate for the entire scheduling scheme to work. | The OS takes control of the processor from a task when the task's time slice runs out, or when a higher-priority task becomes ready to run. |


Multithreading is the concurrent execution of multiple threads.


A thread is just a sequence of instructions that can be scheduled and executed independently by a processor.

Both processes and threads are independent sequences of execution.

The typical difference is that threads (of the same process) run in a shared memory space, while processes run in separate memory spaces.
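The shared-memory point is easy to see in code. A minimal sketch in Python (assumed here only for illustration): every thread of a process reads and writes the very same objects, with no copying or message passing.

```python
import threading

shared = {"value": 0}  # an ordinary object; every thread of this process sees it

def worker():
    # No copying, no message passing: threads mutate the same memory directly.
    shared["value"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
    t.join()  # run one at a time; the point here is shared memory, not parallelism

print(shared["value"])  # 4: all four threads updated the same dictionary
```

Separate processes, by contrast, would each get their own copy of `shared`, and an update in one process would be invisible to the others unless explicitly communicated.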

All of the threads in a process share the same memory and system resources (including quota limits).

Modern processors can execute multiple threads at once (hardware multithreading).

In a multithreaded program, the programmer is responsible for making sure that the different threads don’t interfere with each other by using these shared resources in a way that conflicts with another thread’s use of the same resource. As you might suspect, this can get a little tricky.
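Here is one way that interference can look in practice: a classic "lost update" race, sketched in Python (an assumption; the article names no language). The `time.sleep(0)` is only there to invite a context switch between the read and the write, making the race easy to trigger.

```python
import threading
import time

balance = 0  # shared state, updated without any synchronization

def deposit(n):
    global balance
    for _ in range(n):
        current = balance      # read the shared value
        time.sleep(0)          # invite a context switch mid-update
        balance = current + 1  # write it back -- this may overwrite another
                               # thread's deposit (a "lost update")

t1 = threading.Thread(target=deposit, args=(100,))
t2 = threading.Thread(target=deposit, args=(100,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # usually far less than the expected 200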


A section of code that modifies data structures shared by multiple threads is called a 'critical section'.

It is important that, while a critical section is running in one thread, no other thread be able to access that data structure.

Synchronization is necessary to ensure that only one thread can execute in a critical section at a time.

Access to any data structure or object that will be shared among threads must be synchronized:

  1. Wait on the synchronization before accessing the data structure
  2. Access the data structure - this is the critical section
  3. Unlock the synchronization so that the data can be accessed by other threads

The first step is critical: if it is omitted, any thread can access the data structure while you are accessing it.

The last step is also critical: if it is omitted, no other thread will be able to access the data even after you are done.

Using this technique on a critical section ensures that only one thread can access the data at a time.
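The three steps above can be sketched with a lock, again using Python's `threading` module as an assumed example language. The comments map each line back to the steps.

```python
import threading

lock = threading.Lock()  # the synchronization object guarding the shared data
balance = 0

def deposit(n):
    global balance
    for _ in range(n):
        lock.acquire()       # 1. wait on the synchronization
        try:
            balance += 1     # 2. access the data -- the critical section
        finally:
            lock.release()   # 3. unlock so other threads can access the data

t1 = threading.Thread(target=deposit, args=(100,))
t2 = threading.Thread(target=deposit, args=(100,))
t1.start(); t2.start()
t1.join(); t2.join()
print(balance)  # always 200: the lock serializes the critical section
```

In idiomatic Python the acquire/release pair is usually written as `with lock:`, which guarantees step 3 runs even if the critical section raises an exception; the explicit calls are shown here only to mirror the three steps.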




