For a long time, I made all my side projects with a custom-built PHP framework (or rather a set of libraries). At the beginning of my PHP journey (in 2011), writing these libraries was a good exercise in learning the language.
Recently, though, I started to run into all kinds of limitations:
- security concerns
- a constant need to patch edge cases
- building basic functionality instead of focusing on the product
The only frameworks I worked with before were Zend Framework 1 and Symfony 5.
With my previous project, I had first-hand experience of running a relatively high-load website. The highest traffic level I saw was 10 million page views a day and up to 10,000 concurrent sessions. On the backend, we had 7 web nodes serving PHP requests (Linux, Nginx + php-fpm, MySQL, MongoDB, PHP 7, Memcache). And being stuck with Zend Framework 1 was not fun :)
The other one, Symfony 5, I tried on a small project for a month and left with the feeling that too much magic was happening behind the scenes for my taste. And the learning curve was pretty steep.
So I decided to take a look at the current scene of PHP frameworks to get a feel for each of them and to test their performance (in case I am lucky enough to get high traffic again).
Here is the list I picked the candidates from.
The goal
I want to answer the following questions:
- How many requests can a framework serve consistently on a cheap VPS?
- What is the performance overhead of using a framework compared to plain PHP, a self-written set of libraries, or even plain HTML?
The results
How many simple HTTP requests can a framework consistently serve on a comparatively cheap DigitalOcean droplet ($40, dedicated CPU, 2 cores, 4 GB of memory)?
| Name | Requests per second | Requests per day | Latency (ms) |
|---|---|---|---|
| HTML | 11,367 | 982,127,520 | 77 |
| PHP 7.2.24 | 2,597 | 224,406,720 | 77 |
| Fatfree 3.7.0 | 2,113 | 182,544,480 | 78 |
| Phalcon 3.4.5 | 2,111 | 182,409,120 | 78 |
| Slim 4.3.0 | 1,542 | 133,248,960 | 79 |
| Yii 2.0.26 | 1,250 | 108,002,880 | 79 |
| Slim (skeleton) 4.3.0 | 1,161 | 100,347,840 | 80 |
| Symfony 5.0.1 | 402 | 34,727,040 | 82 |
| Laravel 5.8.35 | 329 | 28,455,840 | 83 |
| Laravel 6.5.2 | 318 | 27,502,560 | 85 |
| CakePHP 3.8.6 | 185 | 15,973,920 | 83 |
| CodeIgniter 4.0.0-rc3 | 173 | 14,906,880 | 89 |
| Zend (skeleton) 3.1.13 | 108 | 9,365,760 | 87 |
The setup
- 2 servers: “Web” (SF1 region) and “Client” (NYC1 region)
- Nginx 1.14.0, PHP 7.2.24 (php-fpm), Ubuntu 18.04.3.
- Both servers accept connections only from each other via HTTP
- The “Client” server opens connections to the “Web” server using wrk (v4.1.0), adding 1 more connection every 10 seconds (see the sketch after this list).
- The “Web” server's PHP CPU usage is limited to 50% (more on that later)
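The exact load-generation script is not published; a minimal sketch of what the “Client” loop could look like (the URL is a placeholder, wrk flags per v4.1.0):

```bash
#!/usr/bin/env bash
# A sketch of the "Client" loop, not the author's actual script:
# re-run wrk with one more connection every 10 seconds and log the output.
URL="http://web-server/framework-under-test"   # placeholder
for c in $(seq 1 150); do
    wrk -t1 -c"$c" -d10s --latency "$URL" >> wrk-results.log
done
```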
The rules
Testing PHP frameworks is complicated because there is a lot of ground to cover: template engines, ORMs, caching... For now, I am choosing to test a simple “Hello world!” response with PHP. I'm assuming it is an ideal setup for measuring the performance overhead of various frameworks - a way to test the cost of a framework's “takeoff.”
- The test case is a simple plain-text “Hello world!” response (a minimal example follows this list)
- So no template engines
- No ORMs
- No controllers (if possible)
- No distinction between frameworks' levels of complexity (micro, full-stack)
- I try to optimize every framework according to the “Deployment” section of its docs, if any. Usually, that means running “composer install --optimize-autoloader --no-dev” and the framework's config-caching command
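As a reference point, here is what the plain-PHP baseline boils down to (a sketch; the file name is an assumption). Each framework variant routes a request to an equivalent handler:

```php
<?php
// hello.php - the plain-PHP baseline: no autoloader, no routing, no framework.
echo 'Hello world!';
```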
I’m interested in how many HTTP requests a framework can consistently serve:
- Without a significant latency increase (<10%)
- Without errors (timeouts, connection errors)
- Using no more than 50% of the server's CPU
A note about framework performance, CPU, and memory
An unintuitive fact about frameworks is that every one of them can serve a roughly equal number of requests with roughly the same latency.
Let me repeat that: in an ideal world, with an infinite amount of computing resources, all frameworks perform almost equally.
It is not that one framework is inherently slower than another. Some frameworks are slower simply because they consume more CPU or memory, or make more reads from disk or a database.
From my experience with high-traffic websites, when running php-fpm + Nginx, it is usually the CPU we should worry about, not memory. With an example configuration of 1 CPU core and 4 GB of memory, php-fpm + Nginx can easily consume 100% of the CPU at less than 5% memory usage.
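For illustration, the relevant knobs live in the php-fpm pool config; the values below are assumptions for a small node, not the actual test settings:

```ini
; /etc/php/7.2/fpm/pool.d/www.conf (illustrative values, not the test config)
pm = dynamic
pm.max_children = 20      ; hard cap on worker processes: bounds memory use
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 8
```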
That is why I chose a $40 DigitalOcean droplet: it is the cheapest one with a dedicated, not shared, CPU.
In real life, we do not want to max out our CPUs. Doing so increases server response time (latency), which is terrible, and it leaves no headroom for traffic spikes, which is risky. Usually, for production web nodes, it is a good idea to stay well below 50% CPU usage.
For this test, I decided to limit PHP CPU usage on the “Web” server to 50%.
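The post does not say how the cap was enforced; one common way on Ubuntu 18.04 would be a systemd CPU quota on the php-fpm unit (an assumption, not the author's confirmed setup):

```bash
# Cap php-fpm at 100% of one core, i.e. ~50% of the 2-core droplet.
sudo systemctl set-property php7.2-fpm.service CPUQuota=100%
```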
The attitude
I do not imply that frameworks that perform slower are worse:
- As the results show, if you serve fewer than 10,000,000 page views a day, you are good to go with any framework.
- It is often a tradeoff: the faster it is to build something with a framework - the more syntactic sugar, the more features, the more complexity - the slower it runs.
Observations
Laravel
Laravel is the only framework that was ridiculously slow and error-prone out of the box. I could not get it to serve even 10 requests per second without multiple timeout errors and latency instantly spiking over 100 ms.
Then I looked at the slow log and discovered that Laravel had trouble with session handling (the session folders had all the necessary permissions and the right user/group assigned, of course).
After changing the SESSION_DRIVER config value from “file” to “array,” Laravel's performance increased 10x.
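For reference, that is a one-line change in the application's .env file:

```
# .env
SESSION_DRIVER=array
```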
Maybe I missed something important there, so please let me know if that is the case.
Phalcon
Phalcon is a beast. It is the only framework that ships as a Zephir/C extension. And in some cases, it is faster than plain PHP.
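For context, a Phalcon 3.x hello-world endpoint can be as small as this (a sketch of what the test app may have looked like, not the author's code; the route path is an assumption):

```php
<?php
// Phalcon ships as a compiled extension, so nothing here goes through
// a userland autoloader - the classes are already loaded into PHP.
use Phalcon\Di\FactoryDefault;
use Phalcon\Mvc\Micro;

$app = new Micro(new FactoryDefault());

$app->get('/phalcon', function () {
    echo 'Hello world!';
});

$app->handle(); // in Phalcon 3.x the URI argument is optional
```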
FatFree
This is the only framework I could not get to accept requests with a trailing slash (like http://domain.com/fatfree/). Only http://domain.com/fatfree worked.
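For context, a minimal Fat-Free app looks roughly like this (a sketch, not the author's test code); the route pattern is matched against the request path, which would be consistent with /fatfree/ not matching:

```php
<?php
require 'vendor/autoload.php'; // bcosca/fatfree installed via Composer

$f3 = \Base::instance();

// 'GET /fatfree' handles /fatfree; /fatfree/ did not work in this test.
$f3->route('GET /fatfree', function () {
    echo 'Hello world!';
});

$f3->run();
```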
Top comments (3)
Hi Pixelbot,
Great post and great work!
How did you handle the test execution and information gathering?
Hi, Daniel, thanks for the reply.
For the test, I wrote a script that added 1 new connection every 10 seconds. 1 connection is about 10 requests per second. The script also outputs data to a file. Then another script parses the file and aggregates the data. Then I put the data into Google Sheets by hand :).
After that, I chose the maximum result without errors or a latency increase for every framework.
It's very time-consuming; that's one of the reasons I did not put the code on GitHub - I wanted to see if anybody is interested in these kinds of tests before putting in more work.
Well I'm interested!
If you put the code on GitHub, I'll definitely contribute to improving it.
Saludos,