Oleg Goncharov
Inter-Process Communication (IPC) in C++: Complete Guide

A complete reference guide to all inter-process communication mechanisms for systems programmers

Introduction

Inter-Process Communication (IPC) is a fundamental concept in systems programming that enables independent processes to exchange data and coordinate their actions. In the era of multi-core processors and distributed systems, understanding IPC is critical for creating efficient, scalable, and reliable applications.

In this reference guide, we'll explore all core IPC mechanisms, including their implementation in C++, performance characteristics, pitfalls, best practices, and platform-specific considerations.


Part 1: Fundamental Concepts and IPC Classification

What is IPC and Why Do We Need It?

IPC (Inter-Process Communication) is the mechanism for data exchange between processes in an operating system.

Why is IPC necessary?

Processes in modern operating systems are isolated from each other — each has its own address space. This provides security and stability, but creates a problem: how do processes exchange data?

Typical IPC use cases:

  • Client-server architecture (web server + database)
  • Parallel computing (worker processes)
  • Microservices (inter-service communication)
  • Plugins and extensions (main app + modules)
  • GUI and backend (separation of interface and logic)
  • Event processing (notifications between processes)

Classification of IPC Mechanisms

IPC mechanisms can be classified by several criteria:

By data exchange method:

  1. Shared Memory — processes access a common memory region
  2. Message Passing — processes exchange messages through OS kernel

By synchronicity:

  1. Synchronous — sender waits for reception/processing
  2. Asynchronous — sender continues without waiting

By communication direction:

  1. Unidirectional (simplex) — data flows one way only
  2. Bidirectional (duplex) — data flows both ways

By infrastructure:

  1. Local — on same machine only
  2. Network — between machines
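The synchronicity distinction shows up directly at the file-descriptor level: a blocking read waits until data arrives, while `O_NONBLOCK` makes it return immediately. A minimal POSIX sketch (the function name is illustrative):

```cpp
#include <cerrno>
#include <fcntl.h>
#include <unistd.h>

// Returns true if a non-blocking read on an empty pipe fails with
// EAGAIN immediately instead of blocking the caller.
bool nonblocking_read_returns_eagain() {
    int fd[2];
    if (pipe(fd) == -1) return false;

    fcntl(fd[0], F_SETFL, O_NONBLOCK);  // asynchronous style: don't wait for data

    char c;
    ssize_t n = read(fd[0], &c, 1);     // the pipe is empty
    bool ok = (n == -1 && errno == EAGAIN);

    close(fd[0]);
    close(fd[1]);
    return ok;
}
```

The same flag works on FIFOs and sockets, which is how event loops avoid blocking on any single peer.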

Complete Table of All IPC Mechanisms

| Mechanism | Platforms | Complexity | Performance | Direction | Synchronization | Typical Usage |
|---|---|---|---|---|---|---|
| Files | All OS | Low | Low | Bidirectional | Manual | Simple exchange, config |
| Signals | POSIX, limited Windows | Low | High | Unidirectional | Asynchronous | Notifications, control |
| Anonymous Pipes | POSIX, Windows | Low | Medium | Unidirectional | Automatic | Parent-child |
| Named Pipes (FIFO) | POSIX, Windows | Low | Medium | Bidirectional | Automatic | Unrelated processes |
| Shared Memory | POSIX, Windows | High | Very High | Bidirectional | Manual | Large data volumes |
| Semaphores | POSIX, Windows | Medium | High | N/A | Automatic | Access synchronization |
| Message Queues | POSIX, some OS | Medium | Medium | Bidirectional | Automatic | Async exchange, priorities |
| Memory-Mapped Files (mmap) | POSIX, Windows | Medium | Very High | Bidirectional | Manual | Large files, DB, zero-copy |
| Mailslots | Windows | Low | Low | Unidirectional | Asynchronous | Broadcast messages |
| Unix Domain Sockets | POSIX, Windows (10+) | Medium | High | Bidirectional | Automatic | Local client-server |
| TCP/IP Sockets | All OS | Medium | Medium | Bidirectional | Automatic | Network communication |
| RPC/RMI | Implementation-dep | High | Medium | Bidirectional | Automatic | Distributed systems |

Part 2: Core IPC Mechanisms with Code Examples

2.1 Pipes (Channels)

What it is: Unidirectional data channel between processes (usually parent-child).

Anonymous Pipes

POSIX (Linux/macOS):

#include <unistd.h>
#include <sys/wait.h>   // wait()
#include <cstring>      // strlen()
#include <iostream>

int main() {
    int pipefd[2];  // [0] - read end, [1] - write end

    if (pipe(pipefd) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();

    if (pid == 0) {  // Child process
        close(pipefd[1]);

        char buffer[100];
        ssize_t n = read(pipefd[0], buffer, sizeof(buffer) - 1);  // leave room for '\0'
        if (n < 0) n = 0;
        buffer[n] = '\0';

        std::cout << "Received: " << buffer << std::endl;
        close(pipefd[0]);
    } else {  // Parent process
        close(pipefd[0]);

        const char* msg = "Hello from parent!";
        write(pipefd[1], msg, strlen(msg));

        close(pipefd[1]);
        wait(nullptr);
    }

    return 0;
}

Windows:

#include <windows.h>

int main() {
    HANDLE hReadPipe, hWritePipe;
    SECURITY_ATTRIBUTES sa = {sizeof(SECURITY_ATTRIBUTES), NULL, TRUE};

    if (!CreatePipe(&hReadPipe, &hWritePipe, &sa, 0)) {
        return 1;
    }

    STARTUPINFO si = {sizeof(STARTUPINFO)};
    si.hStdInput = hReadPipe;
    si.dwFlags = STARTF_USESTDHANDLES;

    PROCESS_INFORMATION pi;
    CreateProcess(/* ... */, &si, &pi);

    CloseHandle(hReadPipe);
    CloseHandle(hWritePipe);

    return 0;
}

2.2 Named Pipes (FIFO)

POSIX:

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>    // strlen()
#include <iostream>

// Writer
void writer() {
    const char* fifo_path = "/tmp/my_fifo";
    mkfifo(fifo_path, 0666);

    int fd = open(fifo_path, O_WRONLY);
    const char* msg = "Hello via FIFO!";
    write(fd, msg, strlen(msg) + 1);
    close(fd);
}

// Reader
void reader() {
    const char* fifo_path = "/tmp/my_fifo";
    int fd = open(fifo_path, O_RDONLY);

    char buffer[100];
    read(fd, buffer, sizeof(buffer));
    std::cout << "Received: " << buffer << std::endl;

    close(fd);
    unlink(fifo_path);
}

2.3 Shared Memory

POSIX Shared Memory:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>   // ftruncate(), close()
#include <cstring>    // strcpy()
#include <iostream>

struct SharedData {
    int counter;
    char message[256];
};

// Writer
void writer() {
    const char* name = "/my_shm";
    int shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    ftruncate(shm_fd, sizeof(SharedData));

    SharedData* ptr = (SharedData*)mmap(
        nullptr, sizeof(SharedData),
        PROT_READ | PROT_WRITE,
        MAP_SHARED, shm_fd, 0
    );

    ptr->counter = 42;
    strcpy(ptr->message, "Hello from shared memory!");

    munmap(ptr, sizeof(SharedData));
    close(shm_fd);
}

// Reader
void reader() {
    const char* name = "/my_shm";
    int shm_fd = shm_open(name, O_RDONLY, 0666);

    SharedData* ptr = (SharedData*)mmap(
        nullptr, sizeof(SharedData),
        PROT_READ, MAP_SHARED, shm_fd, 0
    );

    std::cout << "Counter: " << ptr->counter << std::endl;

    munmap(ptr, sizeof(SharedData));
    close(shm_fd);
    shm_unlink(name);
}

Synchronization (POSIX Semaphores):

#include <semaphore.h>
#include <fcntl.h>   // O_CREAT

// ptr is the SharedData* mapping from the example above
void synchronized_access(SharedData* ptr) {
    sem_t* sem = sem_open("/my_sem", O_CREAT, 0666, 1);

    sem_wait(sem);  // Lock
    ptr->counter++;
    sem_post(sem);  // Unlock

    sem_close(sem);
}

2.4 Message Queues

POSIX Message Queues:

#include <mqueue.h>
#include <fcntl.h>    // O_* flags
#include <cstring>    // strlen()
#include <iostream>

// Sender
void sender() {
    mqd_t mq = mq_open("/my_queue", O_CREAT | O_WRONLY, 0666, nullptr);

    const char* msg = "Hello via message queue!";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    mq_close(mq);
}

// Receiver
void receiver() {
    mqd_t mq = mq_open("/my_queue", O_RDONLY);

    struct mq_attr attr;
    mq_getattr(mq, &attr);

    char* buffer = new char[attr.mq_msgsize];
    mq_receive(mq, buffer, attr.mq_msgsize, nullptr);

    std::cout << "Received: " << buffer << std::endl;

    delete[] buffer;
    mq_close(mq);
    mq_unlink("/my_queue");
}
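The priority support mentioned in the comparison table is worth a concrete look: messages are dequeued highest-priority first, regardless of send order. A sketch, assuming Linux with POSIX message queues available; `/prio_demo` is a throwaway queue name for this example:

```cpp
#include <mqueue.h>
#include <fcntl.h>
#include <string>

// Sends two messages with different priorities and returns the text of
// the first one received — POSIX queues deliver highest priority first.
std::string highest_priority_first() {
    struct mq_attr attr{};
    attr.mq_maxmsg  = 10;
    attr.mq_msgsize = 64;

    mq_unlink("/prio_demo");  // start clean in case a previous run left it behind
    mqd_t mq = mq_open("/prio_demo", O_CREAT | O_RDWR, 0666, &attr);
    if (mq == (mqd_t)-1) return "";

    mq_send(mq, "low", 4, 1);    // priority 1
    mq_send(mq, "high", 5, 9);   // priority 9, sent second

    char buf[64];
    unsigned prio = 0;
    mq_receive(mq, buf, sizeof(buf), &prio);  // dequeues the priority-9 message

    mq_close(mq);
    mq_unlink("/prio_demo");
    return std::string(buf);
}
```

Older glibc versions require linking with `-lrt` for the `mq_*` functions.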

2.5 Signals

POSIX Signals:

#include <signal.h>
#include <unistd.h>   // getpid(), pause(), sleep()
#include <iostream>

// Note: iostream is not async-signal-safe; production handlers should
// only set a volatile sig_atomic_t flag or call write().
void signal_handler(int signum) {
    std::cout << "Received signal: " << signum << std::endl;
}

// Receiver
void receiver() {
    signal(SIGUSR1, signal_handler);
    signal(SIGUSR2, signal_handler);

    std::cout << "My PID: " << getpid() << std::endl;

    while (true) {
        pause();
    }
}

// Sender
void sender(pid_t target_pid) {
    kill(target_pid, SIGUSR1);
    sleep(1);
    kill(target_pid, SIGUSR2);
}

With data transmission (sigaction):

#include <cstring>   // memset()

void advanced_handler(int signum, siginfo_t* info, void* context) {
    std::cout << "Signal: " << signum << std::endl;
    std::cout << "Data: " << info->si_value.sival_int << std::endl;
}

void receiver_advanced() {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = advanced_handler;
    sa.sa_flags = SA_SIGINFO;

    sigaction(SIGUSR1, &sa, nullptr);
    while (true) pause();
}

void sender_advanced(pid_t target_pid) {
    union sigval value;
    value.sival_int = 42;
    sigqueue(target_pid, SIGUSR1, value);
}

2.6 Sockets

Unix Domain Sockets:

#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>   // read(), write(), close(), unlink()
#include <cstring>    // strcpy(), strlen()
#include <iostream>

// Server
void server() {
    int server_fd = socket(AF_UNIX, SOCK_STREAM, 0);

    struct sockaddr_un addr{};  // zero-initialize
    addr.sun_family = AF_UNIX;
    strcpy(addr.sun_path, "/tmp/my_socket");

    unlink("/tmp/my_socket");
    bind(server_fd, (struct sockaddr*)&addr, sizeof(addr));
    listen(server_fd, 5);

    int client_fd = accept(server_fd, nullptr, nullptr);

    char buffer[100];
    ssize_t n = read(client_fd, buffer, sizeof(buffer));

    std::cout << "Received: " << std::string(buffer, n) << std::endl;

    close(client_fd);
    close(server_fd);
}

// Client
void client() {
    int sock_fd = socket(AF_UNIX, SOCK_STREAM, 0);

    struct sockaddr_un addr{};  // zero-initialize
    addr.sun_family = AF_UNIX;
    strcpy(addr.sun_path, "/tmp/my_socket");

    connect(sock_fd, (struct sockaddr*)&addr, sizeof(addr));

    const char* msg = "Hello via Unix socket!";
    write(sock_fd, msg, strlen(msg));

    close(sock_fd);
}
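When the two processes are related, `socketpair()` gives you a pre-connected, full-duplex Unix socket pair without touching the filesystem — no `bind()`, `listen()`, or socket path needed. A sketch (the function name is illustrative):

```cpp
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Full-duplex parent<->child channel via socketpair(): the child echoes
// back whatever the parent sends, over a single pair of descriptors.
std::string socketpair_echo() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) return "";

    pid_t pid = fork();
    if (pid == 0) {                       // child: echo back what it reads
        close(sv[0]);
        char buf[32];
        ssize_t n = read(sv[1], buf, sizeof(buf));
        write(sv[1], buf, n);
        close(sv[1]);
        _exit(0);
    }

    close(sv[1]);                         // parent: send, then read the echo
    write(sv[0], "ping", 4);
    char buf[32];
    ssize_t n = read(sv[0], buf, sizeof(buf));
    close(sv[0]);
    wait(nullptr);
    return std::string(buf, n);
}
```

This is how shells and process supervisors typically wire up bidirectional control channels to children.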

TCP/IP Sockets:

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>   // close()
#include <cstring>    // strlen()

// Server
void tcp_server() {
    int server_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);

    bind(server_fd, (struct sockaddr*)&addr, sizeof(addr));
    listen(server_fd, 5);

    int client_fd = accept(server_fd, nullptr, nullptr);
    // Handle client...
    close(client_fd);
}

// Client
void tcp_client() {
    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    connect(sock_fd, (struct sockaddr*)&addr, sizeof(addr));

    const char* msg = "Hello via TCP!";
    send(sock_fd, msg, strlen(msg), 0);

    close(sock_fd);
}

2.7 Memory-Mapped Files (mmap)

POSIX mmap:

#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>   // ftruncate(), close()
#include <cstring>    // strcpy()
#include <iostream>

// Create and write
void create_mapped_file() {
    const char* filename = "/tmp/mapped_file.dat";
    int fd = open(filename, O_CREAT | O_RDWR, 0644);

    size_t size = 4096;
    ftruncate(fd, size);

    char* mapped = (char*)mmap(nullptr, size,
                               PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);

    strcpy(mapped, "Hello from mmap!");
    msync(mapped, size, MS_SYNC);

    munmap(mapped, size);
    close(fd);
}

// Read
void read_mapped_file() {
    int fd = open("/tmp/mapped_file.dat", O_RDONLY);

    size_t size = 4096;
    char* mapped = (char*)mmap(nullptr, size,
                               PROT_READ,
                               MAP_SHARED, fd, 0);

    std::cout << "Content: " << mapped << std::endl;

    munmap(mapped, size);
    close(fd);
}

Anonymous mmap (for IPC without file):

#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>

void anonymous_mmap() {
    int* shared = (int*)mmap(nullptr, 4096,
                             PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    pid_t pid = fork();

    if (pid == 0) {  // Child
        sleep(1);
        std::cout << "Child reads: " << *shared << std::endl;
    } else {  // Parent
        *shared = 42;
        wait(nullptr);
    }

    munmap(shared, 4096);
}
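Shared memory requires manual synchronization, but a full semaphore is not always necessary: for simple counters, a `std::atomic` placed inside the shared mapping works across `fork()`. A sketch, assuming `std::atomic<int>` is lock-free on your platform (true for mainstream compilers and CPUs):

```cpp
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <atomic>
#include <new>

// Parent and child each increment the same atomic counter 100000 times;
// with a plain int the final value would be unpredictable.
int shared_atomic_counter() {
    void* mem = mmap(nullptr, sizeof(std::atomic<int>),
                     PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    auto* counter = new (mem) std::atomic<int>(0);  // placement-new into the shared page

    pid_t pid = fork();
    for (int i = 0; i < 100000; ++i)
        counter->fetch_add(1, std::memory_order_relaxed);

    if (pid == 0) _exit(0);   // child is done
    wait(nullptr);            // parent waits for the child, then reads the total

    int total = counter->load();
    munmap(mem, sizeof(std::atomic<int>));
    return total;
}
```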

2.8 Semaphores

POSIX Named Semaphores:

#include <semaphore.h>
#include <fcntl.h>   // O_CREAT
#include <iostream>

// Initialize
void init_semaphore() {
    sem_t* sem = sem_open("/my_semaphore", O_CREAT, 0644, 1);
    sem_close(sem);
}

// Use
void use_semaphore() {
    sem_t* sem = sem_open("/my_semaphore", 0);

    sem_wait(sem);
    std::cout << "In critical section" << std::endl;
    sem_post(sem);

    sem_close(sem);
}

Counting Semaphore (resource pool):

void resource_pool() {
    sem_t* sem = sem_open("/connection_pool", O_CREAT, 0644, 5);

    sem_wait(sem);
    std::cout << "Connection acquired" << std::endl;

    // Use resource...

    sem_post(sem);
    sem_close(sem);
}
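When blocking is unacceptable — for example, when the holder of the semaphore may have crashed — `sem_trywait()` attempts the lock and fails immediately instead of waiting. A sketch with a throwaway semaphore name:

```cpp
#include <semaphore.h>
#include <fcntl.h>

// Non-blocking acquisition: returns true if the semaphore was free,
// false if it was already held — without ever blocking the caller.
bool try_acquire(const char* name) {
    sem_unlink(name);  // start clean in case a previous run left it behind
    sem_t* sem = sem_open(name, O_CREAT, 0644, 1);
    if (sem == SEM_FAILED) return false;

    bool got = (sem_trywait(sem) == 0);  // fails with EAGAIN instead of blocking
    if (got) sem_post(sem);              // release what we took

    sem_close(sem);
    sem_unlink(name);
    return got;
}
```

`sem_timedwait()` offers a middle ground: block, but only up to a deadline.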

2.9 Files

POSIX/Windows (same approach):

#include <fstream>
#include <iostream>
#include <thread>    // sleep_for
#include <chrono>

// Writer
void file_writer() {
    std::ofstream file("/tmp/ipc_file.txt");
    file << "Hello from writer!" << std::endl;
    file << "Counter: " << 42 << std::endl;
}

// Reader
void file_reader() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));

    std::ifstream file("/tmp/ipc_file.txt");
    std::string line;
    while (std::getline(file, line)) {
        std::cout << "Read: " << line << std::endl;
    }
}

With file locking (POSIX):

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>   // write(), close()

void file_with_lock() {
    int fd = open("/tmp/ipc_file.txt", O_WRONLY | O_CREAT, 0644);

    flock(fd, LOCK_EX);  // Exclusive lock
    write(fd, "Locked write\n", 13);
    flock(fd, LOCK_UN);  // Unlock

    close(fd);
}
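`flock()` always locks the whole file; `fcntl()` record locks can protect just a byte range, so different processes can safely update different regions of the same file concurrently. A sketch (the path is a scratch file for this example):

```cpp
#include <fcntl.h>
#include <unistd.h>

// Locks only bytes [0, 100) of the file with an exclusive write lock,
// then releases it. Other processes could lock other ranges in parallel.
bool lock_first_100_bytes(const char* path) {
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd == -1) return false;

    struct flock fl{};
    fl.l_type   = F_WRLCK;   // exclusive write lock
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 100;       // only the first 100 bytes

    bool ok = (fcntl(fd, F_SETLKW, &fl) == 0);  // blocks until the range is free

    fl.l_type = F_UNLCK;     // release the range
    fcntl(fd, F_SETLK, &fl);
    close(fd);
    return ok;
}
```

Note that both `flock()` and `fcntl()` locks are advisory: they only constrain processes that also take locks.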

2.10 Mailslots — Windows Only

Windows Mailslots:

#include <windows.h>
#include <iostream>
#include <cstring>   // strlen()

// Server (reader)
void mailslot_server() {
    HANDLE hMailslot = CreateMailslot(
        "\\\\.\\mailslot\\MyMailslot",
        0,
        MAILSLOT_WAIT_FOREVER,
        NULL
    );

    char buffer[512];
    DWORD bytesRead;

    while (true) {
        if (ReadFile(hMailslot, buffer, sizeof(buffer), &bytesRead, NULL)) {
            buffer[bytesRead] = '\0';
            std::cout << "Received: " << buffer << std::endl;
        }
    }

    CloseHandle(hMailslot);
}

// Client (writer)
void mailslot_client() {
    HANDLE hFile = CreateFile(
        "\\\\.\\mailslot\\MyMailslot",
        GENERIC_WRITE,
        FILE_SHARE_READ,
        NULL,
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL,
        NULL
    );

    const char* msg = "Hello via Mailslot!";
    DWORD bytesWritten;
    WriteFile(hFile, msg, strlen(msg), &bytesWritten, NULL);

    CloseHandle(hFile);
}

// Broadcast to all
void mailslot_broadcast() {
    HANDLE hFile = CreateFile(
        "\\\\*\\mailslot\\MyMailslot",  // * = broadcast
        GENERIC_WRITE,
        FILE_SHARE_READ,
        NULL,
        OPEN_EXISTING,
        FILE_ATTRIBUTE_NORMAL,
        NULL
    );

    const char* msg = "Broadcast message!";
    DWORD bytesWritten;
    WriteFile(hFile, msg, strlen(msg), &bytesWritten, NULL);

    CloseHandle(hFile);
}

2.11 RPC/RMI (Remote Procedure Call)

gRPC Example:

#include <grpc++/grpc++.h>
#include "greeter.grpc.pb.h"  // header generated by protoc from your .proto file

using grpc::Server;
using grpc::ServerBuilder;
using grpc::ServerContext;
using grpc::ClientContext;
using grpc::Status;

// Greeter service implementation (Greeter, HelloRequest and HelloReply
// come from the generated protobuf code)
class GreeterServiceImpl : public Greeter::Service {
    Status SayHello(ServerContext* context,
                    const HelloRequest* request,
                    HelloReply* reply) override {
        reply->set_message("Hello " + request->name());
        return Status::OK;
    }
};

// Server
void run_server() {
    std::string server_address("0.0.0.0:50051");
    GreeterServiceImpl service;

    ServerBuilder builder;
    builder.AddListeningPort(server_address, 
                            grpc::InsecureServerCredentials());
    builder.RegisterService(&service);

    std::unique_ptr<Server> server(builder.BuildAndStart());
    server->Wait();
}

// Client
void run_client() {
    auto channel = grpc::CreateChannel("localhost:50051",
                                      grpc::InsecureChannelCredentials());
    auto stub = Greeter::NewStub(channel);

    HelloRequest request;
    request.set_name("World");

    HelloReply reply;
    ClientContext context;

    Status status = stub->SayHello(&context, request, &reply);
    std::cout << "Response: " << reply.message() << std::endl;
}

Part 3: Extended Mechanism Comparison

Complete Performance Table

| Mechanism | Throughput | Latency | Data | Direction | Platforms |
|---|---|---|---|---|---|
| Shared Memory | 3.8M msg/s | 0.24 μs | Large | Bidirectional | POSIX, Win |
| mmap | ~3.5M msg/s | 0.3 μs | Very large | Bidirectional | POSIX, Win |
| Signals | ~1M msg/s | <1 μs | ~4 bytes | Unidirectional | POSIX |
| Message Queues | 68K msg/s | 14.7 μs | Up to 64KB | Bidirectional | POSIX |
| Unix Sockets | 41K msg/s | 24.5 μs | Any | Bidirectional | POSIX |
| Pipes | 37K msg/s | 27.3 μs | Streams | Unidirectional | POSIX, Win |
| Named Pipes | 26K msg/s | 38.0 μs | Streams | Bidirectional | POSIX, Win |
| TCP Sockets | 22K msg/s | 44.4 μs | Any | Bidirectional | All |
| Mailslots | ~1K msg/s | ~1 ms | Up to 424B | Unidirectional | Windows |
| Files | ~100/s | ~10 ms | Any | Bidirectional | All |
| gRPC | ~10K msg/s | ~100 μs | Any | Bidirectional | All |

Platform Compatibility Matrix

| Mechanism | Linux | macOS | Windows | FreeBSD | Android | iOS |
|---|---|---|---|---|---|---|
| Files | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Signals | ✅ | ✅ | ⚠️ Limited | ✅ | ✅ | ✅ |
| Pipes | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Named Pipes | ✅ | ✅ | ✅ (different API) | ✅ | ✅ | ✅ |
| Shared Memory | ✅ | ✅ | ✅ (different API) | ✅ | ✅ | ✅ |
| Semaphores | ✅ | ✅ | ✅ (different API) | ✅ | ✅ | ✅ |
| Message Queues | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
| mmap | ✅ | ✅ | ✅ (different API) | ✅ | ✅ | ✅ |
| Unix Sockets | ✅ | ✅ | ✅ (Win10+) | ✅ | ✅ | ✅ |
| TCP/UDP | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Mailslots | ❌ | ❌ | ✅ | ❌ | ❌ | ❌ |

Part 4: Choosing the Right IPC Mechanism

Selection Flowchart

Need maximum performance?
  ├─ On same machine?
  │  ├─ Large data (> 1 MB)?
  │  │  └─ Shared Memory or mmap
  │  └─ Small messages (< 1 KB)?
  │     └─ Unix Domain Sockets
  └─ Need network communication?
     └─ TCP/UDP Sockets

Need data persistence?
  ├─ Large files?
  │  └─ mmap
  └─ Simple exchange?
     └─ Files

Need only notifications?
  └─ Signals (POSIX)

Windows + Broadcast?
  └─ Mailslots

Asynchronous + priorities?
  └─ Message Queues

Microservices/Distributed?
  └─ gRPC / RPC

Access synchronization?
  └─ Semaphores

Parent-child process?
  └─ Anonymous Pipes

Unrelated processes locally?
  └─ Named Pipes (FIFO)

Selection Matrix by Scenario

| Scenario | Best Choice | Alternative | Avoid |
|---|---|---|---|
| HFT / High-frequency | Shared Memory | mmap | Files, Sockets |
| Large database files | mmap | Shared Memory | Pipes |
| Microservices | gRPC, REST | ZeroMQ | Shared Memory |
| Notifications | Signals | Message Queues | Files |
| Windows Broadcast | Mailslots | Network Sockets | Files |
| Data streaming | Unix Sockets | Pipes | Message Queues |
| Configuration | Files | Shared Memory | Signals |
| Service Discovery | Mailslots (Win), mDNS | TCP Sockets | Files |
| Simple parent-child | Pipes | Sockets | Shared Memory |
| Unrelated processes | Named Pipes | Message Queues | Files |

Part 5: Best Practices

Combining Mechanisms

Pattern: Data + Notifications:

// Shared Memory for data + Signals for notifications
// (shm is the mapped SharedData* and consumer_pid the receiver's PID)
void producer() {
    shm->data = calculate_data();  // Write into shared memory
    kill(consumer_pid, SIGUSR1);   // Then notify the consumer
}

void consumer() {
    signal(SIGUSR1, [](int) {
        process_data(shm->data);   // Read what the producer wrote
    });
    while (true) pause();
}

Pattern: mmap + Semaphores:

// Large files with synchronization
void safe_access() {
    sem_t* sem = sem_open("/file_sem", O_CREAT, 0644, 1);
    char* mapped = mmap(/* ... */);

    sem_wait(sem);      // Lock
    modify_data(mapped);
    msync(mapped, size, MS_SYNC);
    sem_post(sem);      // Unlock

    munmap(mapped, size);
}

Optimization Tips

  1. Choose correct block size:

    • Small messages (< 500 bytes) → Named Pipes
    • Large messages (> 10 KB) → Unix Sockets
  2. Use Zero-Copy where possible:

    • mmap for files
    • sendfile() (Linux)
    • splice() for pipes
  3. Profile and measure:

strace -e ipc <program>        # IPC system calls
perf record <program>          # Profiling
ipcs -a                        # View resources
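The sendfile() tip above can be sketched as follows — Linux-specific, moving bytes between two descriptors entirely inside the kernel, with no user-space buffer (the paths are scratch files; file-to-file sendfile needs Linux 2.6.33+):

```cpp
#include <sys/sendfile.h>
#include <fcntl.h>
#include <unistd.h>

// Writes a small source file, then copies it with sendfile():
// the data never passes through a user-space buffer.
long sendfile_demo() {
    const char* src = "/tmp/sf_src.dat";
    const char* dst = "/tmp/sf_dst.dat";

    int fd = open(src, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, "hello", 5);
    close(fd);

    int in  = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    long copied = sendfile(out, in, nullptr, 5);  // kernel-side copy

    close(in);
    close(out);
    return copied;   // number of bytes moved
}
```

For network servers the classic use is `sendfile(socket_fd, file_fd, ...)`, which is how web servers stream static files without copying them into the process.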

Conclusion

Usage Recommendations

  • For simple tasks: Files or Pipes
  • For performance: Shared Memory or mmap
  • For network: TCP Sockets or gRPC
  • For notifications: Signals (POSIX)
  • For broadcast: Mailslots (Windows)
  • For microservices: gRPC + Message Queues

Remember: choosing the right IPC mechanism can mean a 100x difference in performance!

Choose wisely, measure results, optimize your system! 🚀
