In the back of my mind, I had always thought that Perl is slow compared to compiled languages like C/C++. So it had always been on my mind that I should rewrite my simple web server from Perl into a compiled language. I finally did, and the result astonished me.
- This is my simple web server in Perl
- And this is my rewrite done in Go
Both are pretty simple.
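For readers who don't want to click through to the repositories: the Go rewrite sits on top of the standard net/http package. Below is a minimal sketch of a server of that shape; the port, path and response body are placeholders for illustration, not the exact code in my repository.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// handler writes a small fixed body, roughly what the benchmark
// fetches on every request.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello, world!\n")
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```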
How do they stack up against each other?
I had believed that the Go implementation would be much faster than the Perl code, since Go is a compiled language and its HTTP server is said to be production-grade software that lays a solid foundation for Go web applications. However, I was astonished to find that the result is just the opposite.
Here is a light-load situation, with a concurrency of 100 (the number of multiple requests to perform at a time):
We can see that it is no sweat for Perl; for Go, however, the response time has climbed to about five times its initial value (when there was almost zero load) by the end.
Here is a load situation that finally pushed the response time up for Perl:
This time, the concurrency is 500. We can see that for Perl, the situation only deteriorates at the very end. For Go, however, the response time starts out two to four times as slow as in the previous almost-zero-load case (from 5 ms to 20 ms), keeps deteriorating until it is close to 70 ms, and then jumps rapidly upwards.
So the lesson I learned today is that Perl is not only a fast prototyping language; when it comes to simple web servers like this, Perl also performs much better than Go.
PS.
The two load situations are generated by Apache Bench, using `ab -n 500 -c 100` and `ab -n 1500 -c 500` respectively. I've published the raw test results here, and their statistical summaries are included at the end of the article.
PPS.
Shameless plug: my simple web server in Perl is the core of my dbab package, a dnsmasq-based ad blocker. It does ad blocking at the DNS level -- all requests to ad sites are blocked right there at DNS, so no extensive user-space pattern matching and replacing is needed any more. It is featherweight, both in package size and CPU usage, and it can speed up browsing on mobile devices without installing any apps on them. I'm working to release a new version soon...
Now the raw test results.
Perl:
Concurrency Level: 500
Time taken for tests: 0.333 seconds
Complete requests: 1500
Failed requests: 0
Total transferred: 222000 bytes
HTML transferred: 64500 bytes
Requests per second: 4509.60 [#/sec] (mean)
Time per request: 110.875 [ms] (mean)
Time per request: 0.222 [ms] (mean, across all concurrent requests)
Transfer rate: 651.78 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.9 0 21
Processing: 1 6 0.7 6 8
Waiting: 1 6 0.7 6 8
Total: 5 7 2.4 6 25
Percentage of the requests served within a certain time (ms)
50% 6
66% 6
75% 7
80% 7
90% 7
95% 7
98% 22
99% 23
100% 25 (longest request)
Go:
Concurrency Level: 500
Time taken for tests: 0.250 seconds
Complete requests: 1500
Failed requests: 0
Total transferred: 324000 bytes
HTML transferred: 64500 bytes
Requests per second: 6007.50 [#/sec] (mean)
Time per request: 83.229 [ms] (mean)
Time per request: 0.166 [ms] (mean, across all concurrent requests)
Transfer rate: 1267.21 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 17 7.5 16 34
Processing: 6 33 16.1 31 90
Waiting: 5 28 14.9 27 87
Total: 7 49 16.3 48 111
Percentage of the requests served within a certain time (ms)
50% 48
66% 52
75% 62
80% 63
90% 68
95% 80
98% 96
99% 101
100% 111 (longest request)
Top comments (9)
Comparison is unfair. You are using the whole net/http.Server layer for Go, but you are just reading and writing directly to a socket in Perl. Rewrite your Go implementation using only the net package and I am pretty sure the results won't be the same.
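Something along these lines would be a closer match to what the Perl code is doing -- just a rough sketch, with the port and response body as placeholders:

```go
package main

import (
	"bufio"
	"log"
	"net"
)

// A fixed HTTP/1.0 reply; the body is a placeholder.
const response = "HTTP/1.0 200 OK\r\n" +
	"Content-Type: text/plain\r\n" +
	"Connection: close\r\n\r\n" +
	"Hello, world!\n"

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		// Read and discard the request line, answer, and close --
		// one connection at a time, presumably like the Perl version.
		bufio.NewReader(conn).ReadString('\n')
		conn.Write([]byte(response))
		conn.Close()
	}
}
```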
I still love writing Perl code to this day
I'm unclear how to interpret the results from ab. From the top chunk of the results (time taken for tests, requests per second, time per request, etc.), it would appear that Go is about 33% faster than Perl. But the bit at the bottom about "percentage of the requests served..." seems wildly incongruent and makes it look like Perl is orders of magnitude faster. It doesn't add up. It also doesn't logically add up for Perl to be faster than Go (which, I get, is your point). It makes me suspicious of ab's results.
I now know how to interpret those ab results, after taking a peek at the Go server's implementation -- the keyword is `goroutine`. I.e., the Go server serviced ab's requests concurrently and thus finished faster than Perl overall; that's why Go appears a bit faster than Perl in the statistics. However, when it comes to individual requests, the response times suffered badly because of the massive concurrency. I.e., the measured and reported response times that I trusted previously are true.
Good point; it looks to me like things don't add up either. That's why I trust the measured and reported response times more than the statistical numbers. But, as the other comment meant, I also agree this is an unfair comparison to Go...
Thank you for the clear confirmation of my thoughts. I think the Go implementation would be even faster if each request were handled in its own service goroutine, but then that would be unfair to the Perl version 😀
Thank you all for your code, thoughts and replies. I'll try to turn Axel's code into a goroutine-per-connection version, roughly as sketched below, and see how far it will get us.
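For the record, the change I have in mind is roughly the following sketch (not Axel's actual code; the port and response body are placeholders) -- the only real difference is that each accepted connection is served in its own goroutine:

```go
package main

import (
	"bufio"
	"log"
	"net"
)

// A fixed HTTP/1.0 reply; the body is a placeholder.
const response = "HTTP/1.0 200 OK\r\n" +
	"Content-Type: text/plain\r\n" +
	"Connection: close\r\n\r\n" +
	"Hello, world!\n"

// handleConn reads and discards the request line, writes the fixed
// response, and closes the connection.
func handleConn(conn net.Conn) {
	defer conn.Close()
	bufio.NewReader(conn).ReadString('\n')
	conn.Write([]byte(response))
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go handleConn(conn) // each connection served concurrently
	}
}
```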
Hi Axel, first of all, I do hope that Go can be faster; that's why I was rewriting the Perl code in Go. Now, I have used your code and tested again. As you can see, the test results from my machine are much different from yours. Would you blog about how you tested and your results, please? (Mine are from here.) Thanks!
See my comment on the other thread on how to interpret those ab results. I.e., I believe goroutines and concurrency will not be helpful in such a simple case, so your implementation, Axel, should be the best one. Thanks a lot for that! I could not have come up with it myself.