Let's take a look at this typical PHP code:
function names($endpoint)
{
    $data = Http::get('data.location' . $endpoint)->json();

    $names = [];

    foreach ($data as $item) {
        $names[] = $item['name'];
    }

    return $names;
}
We send an HTTP request that returns an array of items, then we store the name of each item in a $names
array.
Executing this function will take time that's equal to the duration of the request plus the time it takes to build the array. What if we want to run this function multiple times for different data sources:
$products = names('/products');
$users = names('/users');
Running this code takes time that's equal to the duration of both functions combined:
HTTP request to collect products: 1.5 seconds
Building products names array: 0.01 seconds
HTTP request to collect users: 3.0 seconds
Building users names array: 0.01 seconds
Total: 4.52 seconds
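If you want to see those numbers for yourself, a minimal sketch is to wrap the calls with microtime() (the endpoints are still the made-up ones from above):

$start = microtime(true);

$products = names('/products'); // waits ~1.5s for the response, then ~0.01s for the loop
$users = names('/users');       // waits ~3.0s for the response, then ~0.01s for the loop

echo 'Total: ' . round(microtime(true) - $start, 2) . ' seconds'; // ~4.52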
This is called synchronous code execution, or executing one thing at a time. To make this code run faster, you may want to execute it asynchronously. So what are our options if we want to achieve that?
- Execute in different processes.
- Execute in different threads.
- Execute in coroutines/fibers/green-threads.
Execute in different processes
Running each of these function calls in a separate process gives the operating system the task of running them in parallel. If you have a multi-core processor (and we all do these days) and there are 2 idle cores sitting around, the OS will use both cores to execute the processes in parallel (at the same time). However, in most cases there are other processes running on the machine that need to use the available cores. In that case, the OS will share the CPU time between those processes. In other words, the available cores will switch between processing our two processes and other processes. In that case, our processes will be executed concurrently.
The execution time for those two processes will be something like 3.03 seconds (not 100% accurate, I know). This estimate is based on the fact that the slowest request takes 3 seconds, initiating the two network calls takes about 10 milliseconds, and each of the two loops that collect names takes another 10 milliseconds.
The execution inside the core will look like this:
Switch to process 1
Start HTTP request to collect products
Switch to process 2
Start HTTP request to collect users
Switch to process 1
If a response came, then build products names array
Switch to process 2
If a response came, then build users names array
So while the CPU waits for a response to the first request, it sends the second request, then waits until either request returns before it continues processing.
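As a rough sketch, here's how forking the two calls into child processes could look, assuming the pcntl extension is available (in a real app you'd also need a way to pass the results back to the parent, e.g. a cache, a file, or a queue):

foreach (['/products', '/users'] as $endpoint) {
    $pid = pcntl_fork();

    if ($pid === -1) {
        exit('Could not fork');
    }

    if ($pid === 0) {
        // Child process: fetch the names, store them somewhere the parent can read, then exit.
        $names = names($endpoint);
        exit(0);
    }
}

// Parent process: wait for both children to finish.
while (pcntl_waitpid(0, $status) > 0);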
Multiprocessing is an easy way to achieve asynchronous code execution in PHP. However, it's not the most performant, because creating processes is relatively expensive and each process requires its own private space in memory. Switching between processes (context switching) also has an overhead.
You could use Laravel queues and start a fixed number of workers (processes) and keep them alive to process your tasks. That way you won't have to create new processes each time you want to run something async. However, the overhead of context switching and memory allocation will still apply. Also, with workers you need to manage how you receive the results of the code executed inside them.
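For illustration, a queued job for this could look something like the sketch below. FetchNames and the cache key are made up, and storing the result in the cache is just one of the ways you could get it back out of the worker:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\Cache;

class FetchNames implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(public string $endpoint)
    {
    }

    public function handle(): void
    {
        // Runs inside a worker process; the result has to be stored
        // somewhere the rest of the app can read it later.
        Cache::put('names:' . $this->endpoint, names($this->endpoint));
    }
}

// Somewhere in the app:
FetchNames::dispatch('/products');
FetchNames::dispatch('/users');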
Multiprocessing and Laravel workers have been doing great for millions of apps. So when I say that it's not the most performant I'm speaking relative to the other options. Don't just read this and think that multiprocessing and queues are bad. Ok?
Execute in different threads
A process has its own private space in memory, and a process may have multiple threads. All threads live in the same memory space as the process, which makes spawning threads less expensive than spawning processes.
However, context switching still happens. When you have too many threads, as is the case with too many processes, everything on your machine will slow down, because the CPU cores are switching between a lot of contexts.
Also, with the same memory space being accessed by multiple threads at the same time, race conditions may happen.
In addition to all this, multithreading in PHP is not supported anymore.
Execute in coroutines/fibers/green-threads
This "thing" has many names. But let's call it "coroutines" because it's the most used term.
A coroutine is like a thread in that it shares the memory of the process it was created inside, but it's not an actual thread because the OS knows nothing about it. There's no context switching between coroutines on the OS level. The runtime controls when the switching happens, which is less expensive than the OS context switching.
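PHP 8.1 ships with Fibers, which are the low-level building block for this kind of in-process switching. A fiber doesn't schedule anything by itself; something has to resume it. A minimal sketch of the suspend/resume mechanics (no real scheduler here):

$fiber = new Fiber(function () {
    echo "Inside the fiber\n";
    Fiber::suspend(); // hand control back to whoever started us
    echo "Resumed\n";
});

$fiber->start();  // runs the fiber until the first suspend
echo "Back in the main flow\n";
$fiber->resume(); // continues the fiber after the suspend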
Let's convert our code to use coroutines. This code is for demonstration only; it won't work if you run it:
$products = [];
$users = [];
go(function () use (&$products) { $products = names('/products'); });
go(function () use (&$users) { $users = names('/users'); });
The idea behind coroutines is that the runtime is going to schedule running those callbacks concurrently. Inside each coroutine, the code may explicitly yield control to the runtime so it can run another coroutine. At any given time, only one coroutine is being executed.
So if we break down our code, it comes down to this:
go(function () {
    $data = Http::get('data.location/products')->json();
    // yields while waiting for the HTTP response
    foreach (...)
});

go(function () {
    $data = Http::get('data.location/users')->json();
    // yields while waiting for the HTTP response
    foreach (...)
});
The runtime will execute the first coroutine until it yields, then execute the second coroutine until it yields, and then go back to where it stopped in the first coroutine, until all coroutines are executed. After that, it'll continue code execution in the regular synchronous fashion.
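For a runnable flavor of this idea, here's a sketch using the Swoole extension (assuming it's installed; go() is Swoole's shorthand for creating a coroutine, and Swoole\Coroutine\run() starts the scheduler). The endpoints are still the made-up ones from before:

use Swoole\Coroutine\WaitGroup;

\Swoole\Coroutine\run(function () {
    $wg = new WaitGroup();
    $results = [];

    foreach (['/products', '/users'] as $endpoint) {
        $wg->add();

        go(function () use ($wg, $endpoint, &$results) {
            // names() must use a coroutine-aware (non-blocking) HTTP client
            // for the yielding to actually happen.
            $results[$endpoint] = names($endpoint);
            $wg->done();
        });
    }

    $wg->wait(); // resumes once both coroutines are done
});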
Now you may have two questions: first, when should the yielding happen? Second, how do we make it happen?
Inside each coroutine in our example there are two operations: one I/O-bound and one CPU-bound. Sending the HTTP request is an I/O-bound operation; we're sending a request (input) and waiting for a response (output). The loop, on the other hand, is a CPU-bound operation; we are looping over a set of records and calculating results. Calculations are done by the CPU, and that's why it's called CPU-bound.
CPU-bound work will take the same total amount of time if it all runs in the same process; the only way to make it take less time is to execute it in different processes or threads. I/O-bound work, on the other hand, can run concurrently inside the same process: while one I/O operation is waiting for output, another operation can start.
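Laravel's HTTP client actually exposes this idea already: Http::pool() (available in recent Laravel versions, built on Guzzle's asynchronous requests) sends several requests concurrently inside the same process and waits for all of them. A sketch with our made-up endpoints:

use Illuminate\Http\Client\Pool;
use Illuminate\Support\Facades\Http;

$responses = Http::pool(fn (Pool $pool) => [
    $pool->get('data.location/products'),
    $pool->get('data.location/users'),
]);

// Both responses are ready once the slowest request finishes (~3 seconds).
$productNames = array_column($responses[0]->json(), 'name');
$userNames = array_column($responses[1]->json(), 'name');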
Looking at our example, the execution inside the runtime will look like this:
Start coroutine 1
Start HTTP request to collect products
Coroutine 1 yields
Switch to coroutine 2
Start HTTP request to collect users
Coroutine 2 yields
Switch to coroutine 1
If a response came, then build products names array
Switch to coroutine 2
If a response came, then build users names array
Using coroutines, we can take the time an I/O operation spends waiting and use it to do other work. By doing so, we are running all the coroutines concurrently.
Now let's move to our second question: how do we make the yielding happen? We don't.
Different frameworks and libraries must support asynchronous execution by yielding control whenever an I/O operation is waiting. There's a popular term for this that you should know: "non-blocking I/O". Libraries that communicate with databases, caches, the filesystem, the network, etc. must be adapted to become non-blocking.
If you use a blocking library inside a coroutine, it'll never yield, and thus your coroutines will be executed synchronously. The main process will wait until the I/O operations receive their output before continuing with the rest of the program.
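Swoole, for example, lets you hook PHP's blocking built-ins so they become coroutine-aware. A sketch, assuming the Swoole extension is installed and the endpoints are still made up:

// Hook blocking built-ins (streams, sleep(), PDO, ...) so that, inside
// coroutines, they yield instead of blocking the whole process.
Swoole\Runtime::enableCoroutine();

\Swoole\Coroutine\run(function () {
    // Both downloads now overlap; file_get_contents() yields while waiting.
    go(fn () => file_get_contents('http://data.location/products'));
    go(fn () => file_get_contents('http://data.location/users'));
});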
Conclusion
There's a lot more to say about coroutines and asynchronous execution. My plan is to explore ways we can make Laravel play nicely with coroutines and share my findings in the process.
Until then, embrace PHP's synchronous code execution. It has worked very well for over 25 years. Send the I/O-bound work you want to cut from the request to a queue worker and act on the results later. This should cover many of the use cases.