DEV Community

Ash Allen

Posted on • Originally published at ashallendesign.co.uk

Measuring Performance with the "Benchmark" Class in Laravel

Introduction

When building Laravel applications, I often need to measure the execution time of different code blocks. This could be when I'm experimenting with different approaches to solving a problem or optimising existing code.

Although there are dedicated profiling and performance-monitoring tools available, I sometimes just need a rough idea of performance rather than a detailed analysis. This is where the Illuminate\Support\Benchmark class comes in handy.

The Benchmark class provides multiple simple ways to measure the execution time of a piece of code.
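Under the hood, this boils down to timing a closure with PHP's high-resolution clock. As a rough, standalone sketch of the same idea (illustrative only — this is not Laravel's actual implementation, and the `measureMs` helper name is made up):

```php
<?php

// A minimal sketch of what a measure-style helper does: run a closure
// a number of times and return the average elapsed time in
// milliseconds. Laravel's Benchmark class handles this for you.
function measureMs(callable $callback, int $iterations = 1): float
{
    $start = hrtime(true); // high-resolution time in nanoseconds

    for ($i = 0; $i < $iterations; $i++) {
        $callback();
    }

    // Convert nanoseconds to milliseconds and average across runs.
    return ((hrtime(true) - $start) / 1e6) / $iterations;
}

$ms = measureMs(fn () => usleep(1_000), iterations: 5); // each run sleeps ~1ms
```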

In this article, we'll take a look at how to use the Benchmark class in Laravel to measure the performance of different code blocks, along with some important considerations to keep in mind when using it.

Using the Benchmark class

As I mentioned, I use the Benchmark class quite often when building new features and experimenting with different approaches to solving a problem. Sometimes I have an idea of how I'm going to implement a feature, but I'm not sure it will be efficient enough. So in this scenario, I might try out a couple of different implementations and use the Benchmark class to measure their execution times. This then allows me to make a more informed decision about which approach to use based on real numbers rather than just a hunch.

Sometimes, the results show that the differences between implementations are negligible and unlikely to have a significant impact on overall performance. Other times, the results can be quite surprising, revealing that one approach is significantly faster than another. In these cases, using the Benchmark class can be incredibly useful for identifying potential bottlenecks before they become a problem in production.

You can interact with the Benchmark class in three different ways:

  • Benchmark::measure() - Measures the execution time and returns it as a float value (in milliseconds).
  • Benchmark::value() - Measures the execution time and returns both the result of the code and the execution time as a float value (in milliseconds).
  • Benchmark::dd() - Measures the execution time and dumps it using dd() as a formatted string (in milliseconds).

Let's take a look at each of these methods. We'll stick to simple examples (inspired by those in the Laravel documentation) that work really well for demonstrating the Illuminate\Support\Benchmark class.

Using "Benchmark::measure()"

The Benchmark::measure() method accepts a closure (or an array of closures, which we'll get to later) and returns the execution time as a float value (in milliseconds).

For example:

use App\Models\User;
use Illuminate\Support\Benchmark;

$executionTime = Benchmark::measure(fn () => User::find(1));

// $executionTime will be: 0.271625

// This means the code took approximately 0.27 milliseconds to execute.

It also accepts a second argument, iterations, which specifies how many times to run the benchmarked code. The returned value will be the average execution time across all iterations, which is really useful for getting a more accurate measurement, since individual runs can be affected by factors like CPU load and memory usage.

You can specify the number of iterations like this:

use App\Models\User;
use Illuminate\Support\Benchmark;

$executionTime = Benchmark::measure(
    benchmarkables: fn () => User::find(1),
    iterations: 10,
);

// $executionTime will be: 0.271625

// This means the code took approximately 0.27 milliseconds to execute on average.

You can also pass an array of closures to benchmark multiple pieces of code in one go. This is really handy for comparing the performance of different implementations side by side. For example:

use App\Models\User;
use Illuminate\Support\Benchmark;

$executionTime = Benchmark::measure(
    benchmarkables: [
        fn () => User::count(),
        fn () => User::all()->count(),
    ],
    iterations: 10,
);

// $executionTime will be: [0.5, 20.0]

// This means the first closure took approximately 0.5 milliseconds to execute,
// and the second closure took approximately 20.0 milliseconds to execute.

Using "Benchmark::value()"

The Benchmark::value() method is similar to Benchmark::measure(), but it also returns the benchmarked code's result along with the execution time.

It accepts a closure and returns an array containing two elements:

  • The return value of the closure.
  • The execution time as a float value (in milliseconds).

Here's an example:

use App\Models\User;
use Illuminate\Support\Benchmark;

[$user, $executionTime] = Benchmark::value(fn () => User::find(1));

// $user will be the App\Models\User model with ID 1.
// $executionTime will be: 0.271625
// This means the code took approximately 0.27 milliseconds to execute.

Unlike the Benchmark::measure() and Benchmark::dd() methods, Benchmark::value() does not support passing an array of closures or running multiple iterations. It's designed to measure a single piece of code and return its result along with the execution time. So this is a great option when you want to measure the performance of a specific operation and also use its result in your code without disrupting the flow.

Using "Benchmark::dd()"

The Benchmark::dd() method is a quick and easy way to measure the execution time of a piece of code and dump the result using Laravel's dd() function.

It works in pretty much the same way as the Benchmark::measure() method, but instead of returning the execution time, it dumps it directly to the screen.

It's important to note that this method returns the execution time as a formatted string (in milliseconds) rather than a raw float value; for example, "0.270ms" rather than 0.270625.

Let's look at an example of how to use it:

use App\Models\User;
use Illuminate\Support\Benchmark;

Benchmark::dd(fn () => User::find(1));

// The following will be dumped: "0.270ms"

Similar to the Benchmark::measure() method, you can also specify the number of iterations to run the benchmarked code, and the average execution time will be returned:

use App\Models\User;
use Illuminate\Support\Benchmark;

Benchmark::dd(
    benchmarkables: fn () => User::find(1),
    iterations: 10,
);

// The following will be dumped: "0.270ms"

You can also pass an array of closures to benchmark multiple pieces of code in one go:

use App\Models\User;
use Illuminate\Support\Benchmark;

Benchmark::dd(
    benchmarkables: [
        fn () => User::count(),
        fn () => User::all()->count(),
    ],
    iterations: 10,
);

// The following will be dumped: ["0.5ms", "20.0ms"]

No External Tools Required

One of the benefits of using the Illuminate\Support\Benchmark class is that you don't need to set up any external tools or services to get started. You can use it directly within your Laravel application, making it easy to integrate into your existing workflow.

As a freelance Laravel developer, I'm often brought on board to work on existing projects. Sometimes, the client might not want data being sent to an external service for privacy or security reasons. Other times, they might not have the budget to pay for a dedicated profiling tool. Similarly, it might be difficult to get the application working with an external tool due to infrastructure or networking constraints, or because of the need to install/update Composer packages or PHP extensions. Or maybe you just want to quickly measure a piece of code's performance without the hassle of setting up an external tool.

So in these cases, the Benchmark class is a great alternative.

Things To Keep in Mind

Although the Benchmark class is a handy tool for measuring performance, there are a few important considerations to keep in mind when using it.

Not a Replacement for Dedicated Profiling Tools

It's important to remember that the Benchmark class only measures execution time and should be used as a guide rather than an absolute. It doesn't provide other important performance metrics, such as memory usage, database query counts, and so on. For these reasons, it's not a like-for-like replacement for proper profiling and performance monitoring tools such as Inspector, Laravel Nightwatch, or Blackfire. So if you're looking for a more in-depth performance analysis, I'd recommend using one of those tools instead.
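That said, if all you need is a rough memory figure alongside the timing, you can capture one yourself in plain PHP (a manual sketch — memory tracking is not part of the Benchmark API):

```php
<?php

// Manually capture a rough memory delta alongside a timing.
// This is plain PHP, not part of Laravel's Benchmark class.
$memoryBefore = memory_get_usage();
$start = hrtime(true);

$data = range(1, 100_000); // the code under test

$elapsedMs = (hrtime(true) - $start) / 1e6;           // nanoseconds -> milliseconds
$bytesUsed = memory_get_usage() - $memoryBefore;      // approximate memory growth

echo sprintf("%.3fms, %d bytes\n", $elapsedMs, $bytesUsed);
```

Bear in mind this only shows memory still held at the point of measurement, so it's a rough guide at best.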

However, this doesn't mean the Benchmark class isn't useful. It's still a fantastic tool that can help you quickly experiment with different approaches and optimisations.

Local vs Production Benchmarking

Another key point to remember is that no matter how closely you match your local environment to production, there will always be differences that can affect performance. Factors such as infrastructure, hardware capabilities, network setup and latency, dataset size, and traffic patterns can all impact performance in ways that are hard to replicate locally. So always treat the results as a guide rather than an absolute when benchmarking locally. Otherwise, you might end up making decisions based on inaccurate data.

If you don't want to, or aren't able to, use an external performance monitoring tool, the Benchmark class is still a great way to quickly measure performance in a production environment. For example, you could use it to measure the performance of a new feature or optimisation after deploying it to production, and log the results for later analysis. This way, you can gain better insight into how your changes perform in the real world. But remember that the results won't be as reliable or detailed as a proper profiling tool.
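For instance, you could wrap the operation with Benchmark::value() and log the timing (a hypothetical sketch — the measured operation and log message are placeholders):

```php
use App\Models\User;
use Illuminate\Support\Benchmark;
use Illuminate\Support\Facades\Log;

// Hypothetical: record how long a query takes in production and log
// the duration for later analysis, while still using the real result.
[$count, $executionTime] = Benchmark::value(fn () => User::count());

Log::info('Benchmarked User::count()', ['duration_ms' => $executionTime]);
```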

Running Benchmarks Multiple Times

In addition to ensuring benchmarks are run multiple times per run (via the iterations parameter), I also like to run the entire benchmark multiple times.

This helps account for any performance variability caused by external factors. For instance, your machine might have been running other unrelated processes in the background while you were running the benchmark, which may have slowed things down.

As an example, let's say you run the following code locally:

use App\Models\User;
use Illuminate\Support\Benchmark;

$executionTime = Benchmark::measure(
    benchmarkables: fn () => User::find(1),
    iterations: 10,
);

This means the benchmarked code will be run 10 times, and the average execution time will be returned. But if your machine was doing something resource-intensive in the background while all 10 of those closures were running, all of the runs may appear slower than they actually are. So I like to wait 30 seconds or so after running the benchmarks, and then run them again. I might do this 2 or 3 times in total.

Most of the time, the results will be similar across all runs. But in the past, I've had instances where the first run was significantly slower than subsequent runs, particularly on my older machine, which didn't have as much processing power or memory as my current one. If I'd taken the results from just that first run, I might have made a decision based on inaccurate data.
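That routine could be sketched as something like this (hypothetical — the 30-second pause and the run count of 3 are just the numbers I tend to use):

```php
use App\Models\User;
use Illuminate\Support\Benchmark;

// Hypothetical sketch: repeat the whole benchmark a few times with a
// pause between runs, then compare the results side by side.
$results = [];

foreach (range(1, 3) as $run) {
    $results[$run] = Benchmark::measure(
        benchmarkables: fn () => User::find(1),
        iterations: 10,
    );

    sleep(30); // let any background activity settle before the next run
}

// Inspect $results and watch for runs that are clear outliers.
```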

Conclusion

In this article, we've looked at how to use the Illuminate\Support\Benchmark class in Laravel to measure the performance of different pieces of code. We've also discussed some important considerations for using it.

If you enjoyed reading this post, you might be interested in checking out my 220+ page ebook "Battle Ready Laravel" which covers similar topics in more depth.

Or, you might want to check out my other 440+ page ebook "Consuming APIs in Laravel" which teaches you how to use Laravel to consume APIs from other services.

If you're interested in getting updated each time I publish a new post, feel free to sign up for my newsletter.

Keep on building awesome stuff! 🚀
