True Parallel PHP is Here: Elegant Parallelism, Worker Pools, Self-Healing Clusters & Fractal Concurrency
For years, the PHP community has repeated the same mantra: "PHP is single-threaded."
Recently, Fibers (introduced in PHP 8.1) solved the I/O problem. We can now make non-blocking HTTP requests and database queries concurrently. But what happens when you need to parse thousands of massive HTML documents, crunch complex math, or process images?
Fibers block the CPU. If you run a heavy regex on a 5MB string, your entire event loop freezes.
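To see why, here is a minimal, self-contained sketch (plain PHP 8.1+ Fibers, no libraries) showing that a CPU-heavy fiber monopolizes everything until it finishes, because Fibers are cooperative rather than parallel:

```php
<?php
$order = [];

$cpuFiber = new Fiber(function () use (&$order) {
    // Simulated heavy CPU work: no suspension points inside,
    // so nothing else can run until this loop completes.
    $sum = 0;
    for ($i = 0; $i < 1_000_000; $i++) {
        $sum += $i;
    }
    $order[] = 'cpu-done';
});

$ioFiber = new Fiber(function () use (&$order) {
    $order[] = 'io-start';
    Fiber::suspend(); // pretend we're waiting on a socket
    $order[] = 'io-done';
});

$ioFiber->start();   // runs until it suspends on "I/O"
$cpuFiber->start();  // hogs the CPU; the I/O fiber is starved
$ioFiber->resume();  // only now does the "I/O" fiber finish

print_r($order); // io-start, cpu-done, io-done
```

The "I/O" fiber cannot make progress while the CPU-bound fiber runs, which is exactly the event-loop freeze described above.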
To get true CPU parallelism, you usually have to wrestle with complex extensions (ext-parallel), write clunky proc_open wrappers where passing data feels like implementing an old-school TCP socket protocol, or rely on pcntl wrapper libraries.
I wanted something that felt as natural as writing standard async PHP. So, I built Hibla Parallel: a cross-platform, self-healing, multi-processing engine for PHP 8.4+.
Here’s a look at what makes its syntax and architecture special.
Crossing the Boundary: Value Objects & Clean Syntax
In Hibla, you don't have to manually encode/decode JSON to talk to your background processes. It handles the serialization of closures, variables, and objects seamlessly.
Pass objects in, get objects out.
use App\ValueObjects\ReportResult;
use function Hibla\{parallel, await};

// Spawn a background process, run the closure, and wait for the result
$result = await(parallel(function () {
    $data = heavy_data_processing();

    // Return a rich Value Object directly from the worker
    return new ReportResult(
        status: 'success',
        rowsProcessed: 50000,
        data: $data
    );
}));

echo get_class($result);     // "App\ValueObjects\ReportResult"
echo $result->rowsProcessed; // 50000
Under the hood, Hibla handles the OS-level IPC (Inter-Process Communication), reconstructs your objects, and preserves their types.
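Hibla's exact wire format isn't shown here, but the core trick — PHP's native serialization carried over proc_open pipes, with the object's type surviving the round trip — can be sketched in plain PHP. Everything in this snippet is illustrative, not Hibla's actual implementation:

```php
<?php
// Illustrative sketch of serialize-over-pipes IPC, not Hibla's real code.
class ReportResult
{
    public function __construct(public string $status, public int $rowsProcessed) {}
}

// Child script: read a serialized payload from STDIN, write a serialized object back.
$childCode = <<<'PHP'
class ReportResult {
    public function __construct(public string $status, public int $rowsProcessed) {}
}
$input = unserialize(stream_get_contents(STDIN));
echo serialize(new ReportResult('success', count($input)));
PHP;

$proc = proc_open(
    [PHP_BINARY, '-r', $childCode],
    [0 => ['pipe', 'r'], 1 => ['pipe', 'w']],
    $pipes
);

fwrite($pipes[0], serialize(range(1, 50))); // ship the payload to the worker
fclose($pipes[0]);

$result = unserialize(stream_get_contents($pipes[1])); // typed object comes back
fclose($pipes[1]);
proc_close($proc);

echo get_class($result);     // ReportResult
echo $result->rowsProcessed; // 50
```

The real engine layers closure serialization, framing, and error handling on top of this, but the type-preserving round trip is the same basic idea.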
Seamless Framework Bootstrapping
When you spawn a new PHP process, it starts as a blank slate. If you want to use Laravel's Eloquent ORM, Symfony's container, or your own legacy global functions inside a worker, you need to load them.
Hibla makes this a one-liner using withBootstrap(). It loads your environment into the worker seamlessly:
use Hibla\Parallel\Parallel;

$pool = Parallel::pool(size: 4)
    // Boot Laravel (or Symfony, etc.) inside every worker process automatically
    ->withBootstrap(__DIR__ . '/bootstrap/app.php', function (string $file) {
        $app = require $file;
        $app->make(Illuminate\Contracts\Console\Kernel::class)->bootstrap();
    });

// Now you can freely use your ORM and framework helpers inside the closure!
$pool->run(function () {
    return User::where('active', true)->count();
});
Persistent Worker Pools
Spawning a new process for every small task is expensive, especially if your worker needs to boot up a heavy framework every time.
Hibla solves this with persistent Worker Pools featuring a beautifully expressive, fluent API. You boot the pool and the framework once, and the workers stay alive to crunch through your queue.
$tasks = [/* 10,000 heavy items */];

foreach ($tasks as $item) {
    // Workers are reused. No framework boot penalty per task!
    $pool->run(function () use ($item) {
        return process_item($item);
    })->then(fn($result) => print("Done: $result\n"));
}

$pool->shutdown();
How Does it Compare to Python's Multiprocessing?
Python is famous for making data processing easy, but its multiprocessing library has some infamous quirks. Because of how Python spawns processes on certain platforms (like Windows and macOS), you are forced to wrap your execution logic in an if __name__ == '__main__': guard, or risk an infinite process-spawning loop.
Python (concurrent.futures):
import concurrent.futures

def process_item(item):
    return item * 2

# This guard is strictly required on Windows and macOS!
if __name__ == '__main__':
    items = [1, 2, 3, 4, 5]
    with concurrent.futures.ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(process_item, items))
    print(results)
PHP (Hibla Parallel):
Hibla handles closure AST serialization and fork-bomb prevention safely under the hood. No boilerplate guards required. Using our Promise::map utility combined with the pool's runFn() factory, it becomes a beautiful one-liner:
use Hibla\Parallel\Parallel;
use Hibla\Promise\Promise;
use function Hibla\await;

$pool = Parallel::pool(size: 4);
$items = [1, 2, 3, 4, 5];

// Map concurrently across the process pool without double-wrapping closures!
$results = await(
    Promise::map($items, $pool->runFn(fn($item) => $item * 2))
);

$pool->drain();
print_r($results);
It feels just like writing modern, asynchronous JavaScript, but you are utilizing every CPU core on your server.
Fault-Tolerant Parallelism (The Supervisor Pattern)
In distributed systems, things fail. A worker might hit an Out-Of-Memory (OOM) error, cause a segmentation fault, or trigger a fatal exit(1). In most PHP libraries, this deadlocks your queue or crashes the parent process.
Hibla brings Erlang’s "Let it Crash" philosophy to PHP. If a worker dies, Hibla immediately detects the crash, spawns a fresh replacement to maintain pool capacity, and fires an onWorkerRespawn hook so you can resubmit the lost work.
But true fault tolerance isn't just about recovering from crashes; it's also about preventing state corruption and memory leaks. Long-running PHP scripts naturally accumulate memory over time.
To solve this, Hibla gives you PHP-FPM style worker management. You can set hard limits on execution time, memory, and automatically recycle workers to keep RAM usage perfectly flat:
use Hibla\Parallel\Parallel;

$pool = Parallel::pool(size: 4)
    // 1. Proactive Limits: Kill any rogue task taking longer than 60 seconds
    ->withTimeout(60)
    // 2. Memory Sandbox: Isolate the worker's memory limit
    ->withMemoryLimit('256M')
    // 3. FPM-Style Recycling: Retire and replace the worker after 100 tasks
    //    to cleanly flush all framework state and memory leaks
    ->withMaxExecutionsPerWorker(100)
    // 4. Circuit Breaker: Shut down the pool if >5 workers crash in 1 second
    ->withMaxRestartPerSecond(5)
    // 5. Supervisor Hook: Resubmit long-running tasks to the replacement worker
    ->onWorkerRespawn(function ($pool) use ($daemonTask) {
        echo "Worker recycled or crashed! Spawning replacement & resubmitting...\n";
        $pool->run($daemonTask);
    });
With this setup, your application stays online, recycles its own memory automatically, heals itself from segfaults, and alerts you when things go critically wrong. It's production-grade process supervision, out of the box.
Real-Time Output Streaming
One of the most frustrating things about traditional PHP background processes is that echo and print get swallowed until the process finishes (or they break the IPC protocol entirely).
Hibla intercepts PHP's output buffer and streams it back to the parent process in real-time, completely non-blocking.
use function Hibla\{parallel, await};

await(parallel(function () {
    echo "Starting process...\n";
    sleep(1);
    echo "50% complete...\n";
    sleep(1);
    echo "Finished!\n";
}));
// The parent console prints these lines immediately as they happen in the worker!
Structured Message Passing
Sometimes raw console output isn't enough. If you are running a long-lived task and need to send structured data (like progress updates, partial JSON records, or value objects) back to the parent before the task finishes, Hibla provides the emit() and onMessage() APIs.
Just like return values, emitted messages cross the process boundary with their types preserved:
use Hibla\Parallel\Parallel;
use App\Messages\ProgressUpdate;
use function Hibla\emit;

Parallel::task()
    // The parent process listens for messages asynchronously
    ->onMessage(function ($msg) {
        if ($msg->data instanceof ProgressUpdate) {
            echo "Worker {$msg->pid}: {$msg->data->percent}% complete\n";
        }
    })
    ->run(function () {
        // The worker emits data back to the parent mid-execution
        emit(new ProgressUpdate(percent: 25));
        do_heavy_work();
        emit(new ProgressUpdate(percent: 100));

        return "Task Done!";
    });
This is perfect for building real-time progress bars, logging systems, or streaming data out of a massive batch job.
Exception Teleportation
Debugging distributed systems is a nightmare. Usually, if a child process dies, you just get a generic "Process exited with code 255" error.
Hibla features Exception Teleportation. If your worker throws an exception, Hibla catches it, serializes it, teleports it back to the parent, and merges the stack traces.
use function Hibla\{parallel, await};

try {
    await(parallel(function () {
        throw new \InvalidArgumentException("Invalid CSV format!");
    }));
} catch (\InvalidArgumentException $e) {
    echo $e->getMessage(); // "Invalid CSV format!"

    // The stack trace shows exactly where the parallel() call started
    // AND where the exception was thrown inside the worker!
    echo $e->getTraceAsString();
}
The Holy Grail: Fractal Concurrency
This is where Hibla flexes its architectural muscles. Because Hibla is built on top of a unified Promise and Event Loop system, it doesn't care if a Promise is waiting for an I/O Fiber or an OS-level Process.
You can seamlessly mix async I/O (Fibers) and parallel CPU work (Processes). I call this Fractal Concurrency: from the event loop's perspective, it's all just streams.
Practical Example: High-Performance Web Scraping
Imagine you need to scrape and extract data from 10,000 URLs.
- Fetching the HTML is network-bound (I/O). We should use Fibers for this.
- Parsing the DOM (running regex or heavy DOM tree parsing) is CPU-bound. If we do this in a Fiber, we block the loop. We should distribute this across our CPU cores using a Process Pool.
By combining Hibla Parallel with the newly released async-first Hibla HTTP Client, we can build a wildly efficient, non-blocking scraper in just a few lines of code:
use Hibla\Parallel\Parallel;
use Hibla\HttpClient\Http;
use Hibla\Promise\Promise;
use function Hibla\await;

// 1. Boot a persistent pool with 1 worker per CPU core
$pool = Parallel::pool(size: 8)->boot();
$urls = [/* 10,000 URLs */];

// 2. Map over the URLs with a concurrency limit of 50 active tasks
$scrapedData = await(Promise::map($urls, function (string $url) use ($pool) {
    // Step A (I/O Bound): Fetch the HTML asynchronously using a Fiber.
    // No process is spawned here. The Event Loop just handles the sockets.
    $response = await(Http::get($url));
    $html = $response->body();

    // Step B (CPU Bound): Offload the heavy HTML parsing to a background worker.
    // The Event Loop is NOT blocked while the worker parses the DOM!
    $parsedResult = await($pool->run(function () use ($html) {
        return my_heavy_dom_parser($html);
    }));

    return $parsedResult;
}, concurrency: 50));

$pool->shutdown();
echo "Successfully parsed " . count($scrapedData) . " pages!";
Why this is amazing:
The Event Loop is orchestrating HTTP socket streams and IPC process streams simultaneously. While Core 1 is parsing HTML, the main thread is downloading the next 49 pages. You get near the raw speed of Go or Node.js, entirely in PHP, without blocking a single thread.
True Cross-Platform Non-Blocking Parallel execution (Yes, Even on Windows)
Have you ever actually run a true parallel PHP script on a Windows machine?
Chances are, you haven't. Or if you did, it broke unexpectedly. Almost all existing PHP multiprocessing libraries rely on pcntl_fork(). There are two massive problems with this:
- pcntl simply does not exist on Windows. Your code immediately breaks if you share it with a teammate on a Windows dev environment.
- Even on Linux, pcntl_fork() is incredibly dangerous. It clones the entire memory space of the parent process. Shared database connections, Redis instances, and event loops get duplicated across children, leading to bizarre race conditions, corrupted connections, and segfaults.
Hibla doesn't use pcntl_fork(). It spawns fresh, perfectly isolated OS processes using proc_open(). They share nothing, meaning zero chance of corrupted database connections.
But wait, isn't spawning processes from scratch slow?
Yes, it can be. But that is exactly why Hibla features Persistent Worker Pools. Because the workers boot up once and stay alive to receive tasks over IPC streams, the process creation overhead is amortized to zero. The performance impact becomes completely negligible.
But there is a dark secret to proc_open(): Anonymous pipes on Windows ignore stream_set_blocking(false). A blocking read on a Windows pipe will freeze your entire PHP event loop instantly.
Hibla fixes this at the architectural level. When you spawn a worker, Hibla detects your OS. On Linux/macOS, it uses highly optimized kernel pipes. On Windows, it automatically swaps the transport layer to use socket descriptors, ensuring true, 100% non-blocking I/O.
Your code behaves identically on your local Windows dev machine and your production Linux server. No deadlocks. No memory-cloning hacks. Just pure parallelism.
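The transport swap described above is not magic. A simplified sketch of the idea in plain PHP (illustrative only, not Hibla's actual internals) uses stream_socket_pair() instead of anonymous pipes, picking an address family per platform:

```php
<?php
// Illustrative sketch only — not Hibla's actual internals.
// Windows anonymous pipes ignore stream_set_blocking(false),
// so a local socket pair is used as the IPC transport there instead.
function createWorkerTransport(): array
{
    $domain = PHP_OS_FAMILY === 'Windows'
        ? STREAM_PF_INET   // AF_INET pair: non-blocking mode works on Windows
        : STREAM_PF_UNIX;  // Unix-domain pair: behaves like a kernel pipe

    return stream_socket_pair($domain, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP);
}

[$parentEnd, $workerEnd] = createWorkerTransport();

// Non-blocking mode is now honored on every platform
stream_set_blocking($parentEnd, false);

fwrite($workerEnd, "ping\n");
echo fgets($parentEnd); // "ping"
```

Socket descriptors can be handed to a child via proc_open's descriptor spec just like pipes, which is why the swap is invisible to calling code.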
Final Thoughts
I built the Hibla ecosystem because I wanted the multi-processing and async power of modern languages, but with the elegant, standard syntax of modern PHP.
If you are building high-performance CLI apps, web scrapers, daemons, or background workers in PHP, I'd love for you to try it out.
Check out and learn more about the repos here:
- hiblaphp/parallel (true parallel engine)
- hiblaphp/http-client (high-performance, elegant HTTP client)
(Make sure to drop a ⭐️ on GitHub if you find it useful!)
Let me know what you think in the comments! How are you currently handling heavy background processing in your PHP apps?