DEV Community

Yassir El bakkari


Codexion

Introduction

Codexion is a concurrent system simulation written in C in which multiple threads represent coders competing for limited shared resources called dongles. Each coder runs independently in parallel, and all coders must coordinate access to these resources without direct communication.

At the system level, this project models how an operating system manages multiple processes competing for shared hardware. Each coder is implemented as a POSIX thread (pthread), and each dongle is protected using synchronization primitives such as mutexes and condition variables to ensure safe access. Since multiple threads attempt to access the same resources at the same time, the program must prevent race conditions, deadlocks, starvation, and inconsistent shared state.

Each coder repeatedly performs a cycle of actions: acquiring two dongles, compiling code, releasing resources, debugging, and refactoring. However, each coder has a strict time constraint: if they fail to compile within a given deadline, they are considered to have burned out, and the simulation stops. A separate monitor thread continuously checks the state of all coders and enforces this rule with high precision timing.

The system also introduces a scheduling mechanism that controls how dongles are assigned when multiple coders request them at the same time. Depending on the selected policy (FIFO or EDF), the order of access changes dynamically based on request time or urgency. Additionally, a cooldown mechanism delays reuse of dongles after they are released, adding another layer of resource contention.

Overall, Codexion is a low-level concurrency problem that requires careful coordination between threads, precise timing control, and efficient synchronization to ensure fairness, liveness, and correctness under strict constraints.

System Overview

This project simulates a group of coders running in parallel using threads. Each coder is an independent execution flow controlled by the operating system. All coders share limited resources called dongles, which are protected to prevent conflicts when multiple coders try to use them at the same time.

The system runs continuously in cycles. Each coder repeatedly tries to acquire two dongles, performs compiling, then releases the resources and moves to debugging and refactoring. Since resources are limited, coders may need to wait until dongles become available.

At the same time, a monitoring system observes all coders and tracks their last compiling time. If a coder does not compile within the allowed time limit, the system stops and that coder is considered burned out.

To manage fairness, the system uses a scheduling strategy (FIFO or EDF) that decides which coder gets access to a dongle when many coders request it at the same time.

Core Concepts

1. Threads (POSIX Threads)
A thread is a single flow of execution inside a process. In this project, each coder is represented by a thread created using POSIX threads.

Inside the operating system, the CPU does not run all threads at the same time. Instead, it switches between them very quickly using context switching: one thread runs for a short time, then it is paused, and another thread continues.

All threads share the same memory space. This means they can access the same data, such as shared counters and shared dongles. Because of this shared memory, synchronization is required to avoid conflicts.

2. Shared Resources (Dongles)
Dongles are shared resources used by coders to compile. Each coder needs two dongles at the same time to perform the compiling step.

Since all coders share a limited number of dongles, multiple threads may try to access the same resource simultaneously. Without control, this would lead to conflicts where two coders use the same dongle at the same time, which is not allowed.

This is why dongles must be protected using synchronization tools.

3. Mutex (Mutual Exclusion)
A mutex is a locking mechanism used to protect shared resources. Only one thread can lock a mutex at a time.

When a coder locks a dongle, other coders trying to access it will be blocked by the operating system and placed in a waiting state.

The OS manages this waiting internally using kernel queues. The blocked thread does not consume CPU time until it is allowed to continue.

Mutexes ensure that shared resources are used safely without data corruption or simultaneous access.

4. Synchronization and Critical Sections

A critical section is a part of the program where shared resources are accessed or modified.

In this project, critical sections include:

  • Taking dongles
  • Updating last compilation time
  • Incrementing compile counters
  • Printing logs

Only one thread can execute a critical section protected by a mutex at a time.

5. Scheduling (FIFO and EDF)

When multiple coders try to take the same dongle at the same time, a scheduling policy decides who gets it first.

FIFO (First In First Out):
The coder who requests the resource first is served first. Requests are handled in arrival order.

EDF (Earliest Deadline First):
Each coder has a deadline based on:
last compile time + time to burnout.
The coder with the closest deadline gets priority.

This scheduling is implemented using a priority system that organizes waiting threads.

6. Monitor System (Burnout Detection)

A separate monitor thread continuously checks the state of all coders.

It compares the current time with each coder’s last compilation time. If a coder exceeds the allowed time without compiling, the monitor triggers a burnout event and stops the simulation.

This monitor acts like a watchdog system inside the program, ensuring that no coder stays inactive for too long.

7. Lifecycle of a Coder

Each coder repeats the same cycle during execution:

  1. First, the coder tries to acquire two dongles. If successful, it enters the compiling phase.
  2. After compiling, the coder releases the dongles and moves to debugging, where it waits for a fixed amount of time.
  3. Then the coder enters refactoring, another waiting phase.
  4. After completing these steps, the coder returns again to try compiling.

This cycle continues until the system stops due to a burnout or a completion condition.

Concurrency Problems

In a multithreaded system like this project, running threads at the same time creates several hidden problems. These problems do not appear in ordinary single-threaded programs, but they become critical when many coders share the same resources.

1. Race Conditions
A race condition happens when multiple threads try to read and write the same shared data at the same time.

In this project, shared data includes:

  • last compilation time
  • compile counters
  • dongle state
  • log output

If two threads update the same variable without protection, the final value becomes unpredictable. It depends on which thread executes first, which is decided by the OS scheduler.

To prevent this, mutexes are used to ensure that only one thread modifies shared memory at a time.

2. Deadlock
A deadlock happens when threads block each other in a circular way, and none of them can continue execution.

In this project, each coder needs two dongles. If every coder locks one dongle and waits for the second one, all threads can become stuck forever.

The system prevents this by changing the order of resource acquisition depending on the coder ID (even or odd). This breaks the circular waiting condition and avoids deadlock.

3. Starvation

Starvation happens when a thread never gets access to a required resource because other threads are always prioritized before it.

In scheduling systems like FIFO or EDF, a poorly designed priority system could continuously favor some coders and block others indefinitely.

To avoid starvation, the scheduling logic ensures fairness by controlling the order of access to dongles and respecting either arrival time (FIFO) or urgency (EDF).

4. Livelock

A livelock happens when threads are not blocked, but they keep changing state without making progress.

For example, coders may repeatedly try to acquire dongles but always fail because others take them first. They stay active but never reach the compiling phase.

Careful scheduling and controlled locking reduce this behavior.

5. Resource Contention

Contention for dongles also interacts with timing: since burnout detection depends on precise timing, small delays can affect correctness.

If thread scheduling or lock acquisition takes too long, a coder may exceed the allowed time before the monitor checks it.

This is why the monitor must run frequently and efficiently, and shared state updates must be protected and fast.

Threads (Deep System Explanation)

1. What a thread really is (system level)

A thread is the smallest unit of execution the operating system can schedule on the CPU.

Inside the OS:

  • A process is a container of memory (code, heap, globals).
  • A thread is a running path inside that memory.

So when you create threads with POSIX:

  • you are not creating “new programs”
  • you are creating multiple CPU execution paths inside the same process

Each thread has:

  • its own register state (CPU state)
  • its own stack (function calls, local variables)
  • shared access to heap and global memory

The OS scheduler sees each thread as a runnable entity that competes for CPU time, so it continuously switches between threads using context switching.

2. Thread creation: pthread_create
When you call:
pthread_create(&thread, NULL, routine, arg);
Internally, the OS does the following:

  1. Allocate a thread control block (TCB)
  2. Create a new stack for the thread
  3. Store the starting function pointer (routine)
  4. Put the thread in the ready queue
  5. Mark it as runnable

At this point:

  • the thread exists
  • but it may not run immediately

The scheduler decides when it gets CPU time.

Top comments (0)