Laravel is a very performant framework, but a standard architecture has one big flaw derived from how PHP's traditional request model works: it has to bootstrap the entire framework on every single request.
Even with optimizations, this process still takes 40 to 60 ms on my machine with PHP 8.4. Luckily, for years, the PHP and Laravel worlds have had a solution that dramatically reduces this load time: Laravel Octane and FrankenPHP. The booting time for the Laravel framework can drop to just 4 to 6 ms per request. Incredible, isn't it?
Now, if you're new to Laravel Octane or FrankenPHP, you may wonder: How is this possible?
The simple answer is that the framework is kept in memory. After FrankenPHP starts, Laravel is always ready to serve requests without a full reboot. The real explanation is more complex and out of scope for this article. If you're curious, you can read the official Laravel Octane and FrankenPHP docs for a deeper dive.
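To make that "kept in memory" idea concrete, here is a simplified sketch of what a FrankenPHP worker script for a Laravel app can look like. This is a conceptual model, not production code: the paths and the handler are illustrative, and Laravel Octane ships its own, more sophisticated worker, so you would not normally write this by hand.

```php
<?php
// Conceptual FrankenPHP worker script: boot Laravel once, then serve
// many requests from the same long-lived process.
// (Illustrative only - Laravel Octane provides its own worker.)

ignore_user_abort(true);

require __DIR__.'/vendor/autoload.php';

// The expensive part - autoloading, service providers, configuration,
// container bindings - happens exactly once, at worker startup.
$app = require __DIR__.'/bootstrap/app.php';
$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);

$handler = static function () use ($kernel) {
    // Each iteration only builds a Request object and dispatches it;
    // the framework itself is already in memory.
    $request = Illuminate\Http\Request::capture();
    $response = $kernel->handle($request);
    $response->send();
    $kernel->terminate($request, $response);
};

// frankenphp_handle_request() blocks until the next request arrives
// and returns false when the worker should shut down.
while (frankenphp_handle_request($handler)) {
    gc_collect_cycles();
}
```

In practice, `php artisan octane:install` sets up FrankenPHP for you and `php artisan octane:start` launches the workers, so the loop above stays an implementation detail.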
Read the full article at: DanielPetrica.com
Top comments (2)
It is not a PHP flaw. It is how scripting languages work. Python, Perl, Ruby, even JavaScript have the same characteristic.
PHP tries to make the cold start as fast as possible by using OPcache and JIT.
All scripting languages use a server that keeps the process running, so keeping the app in memory is nothing new. It is no more complex than that.
If your summary already has flaws, I don't think it is worth going to your website.
Thanks for the feedback. I think there is a misunderstanding regarding the focus of the article. I am not critiquing PHP as a language—I've been a PHP developer for years and love the ecosystem.
The point regarding the "rebuild" refers specifically to the framework bootstrapping process, not the bytecode compilation that OPcache handles. In a standard PHP-FPM lifecycle, even with JIT and OPcache enabled, Laravel still has to load service providers, configuration, and register bindings on every single request.
The article explores how tools like Laravel Octane and FrankenPHP allow us to load the application in memory once and serve requests via a long-running process, skipping that boot phase. In my benchmarks, this reduced overhead from ~60ms to ~10ms. The article focuses on implementing this architecture, not explaining the fundamentals of the request lifecycle.