Mandelbrot benchmark
- Target: C, PHP, HHVM, Ruby, Python, PyPy, and our Kinx
Introduction
I heard that PHP 8 is going to support JIT.
Oh, that sounds great! It is also a good opportunity for our Kinx to show the performance of the native keyword in Kinx.
The fact that I am posting this article means the result was very good. It actually exceeded my expectations!
Note that if you do not know Kinx yet, please see the article here. I will be very happy if you are interested in it.
Before starting
Benchmark
Look at this gist (https://gist.github.com/dstogov/12323ad13d3240aee8f1) and you will find some benchmarks. I made my own versions based on them.
Note that my environment is quite different, which is why I re-ran all of the benchmarks myself. Since I could not set up exactly the same versions, I chose the closest ones available and also compared against the original results.
As a comment on the original gist pointed out, only PHP cheated by skipping the output. Printing the result just adds I/O overhead to the measurement, so I removed the output from every version.
How to measure the time
Each program uses a timer provided by its own language, which means that the time spent parsing and compiling the source code is not included in the measurement. (For C, that cannot be helped anyway.)
I noticed this because something felt strange about HHVM. The time HHVM displays is very fast, but it did not match my actual impression: the displayed result is faster than PHP, yet the actual elapsed time of HHVM is slower than PHP's.
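The gap can be seen by wrapping a run in the shell's time command, which measures the whole process lifetime including startup and compilation; that is also how the 'real' column in the results below was obtained. A minimal sketch of the idea, assuming the HHVM script is saved under the hypothetical name mandelbrot.php (the figures here are just placeholders taken from the averaged results later in this article):
$ time hhvm mandelbrot.php
PHP Elapsed 0.068
real    0m0.552s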
Output
I confirmed that, with the printing enabled, all of the programs display the following output. Our Kinx also worked as we expected.
*
*
*
*
*
***
*****
*****
***
*
*********
*************
***************
*********************
*********************
*******************
*******************
*******************
*******************
***********************
*******************
*******************
*********************
*******************
*******************
*****************
***************
*************
*********
*
***************
***********************
* ************************* *
*****************************
* ******************************* *
*********************************
***********************************
***************************************
*** ***************************************** ***
*************************************************
***********************************************
*********************************************
*********************************************
***********************************************
***********************************************
***************************************************
*************************************************
*************************************************
***************************************************
***************************************************
* *************************************************** *
***** *************************************************** *****
****** *************************************************** ******
******* *************************************************** *******
***********************************************************************
********* *************************************************** *********
****** *************************************************** ******
***** *************************************************** *****
***************************************************
***************************************************
***************************************************
***************************************************
*************************************************
*************************************************
***************************************************
***********************************************
***********************************************
*******************************************
*****************************************
*********************************************
**** ****************** ****************** ****
*** **************** **************** ***
* ************** ************** *
*********** ***********
** ***** ***** **
* * * *
Benchmark it
It is time to benchmark. First of all, let me show the source code for each language.
C
Here is the version of gcc.
$ gcc --version
gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The C code is as follows.
#include <stdio.h>
#include <sys/time.h>

#define BAILOUT 16
#define MAX_ITERATIONS 1000

int mandelbrot(double x, double y)
{
    double cr = y - 0.5;
    double ci = x;
    double zi = 0.0;
    double zr = 0.0;
    int i = 0;
    while (1) {
        i++;
        double temp = zr * zi;
        double zr2 = zr * zr;
        double zi2 = zi * zi;
        zr = zr2 - zi2 + cr;
        zi = temp + temp + ci;
        if (zi2 + zr2 > BAILOUT)
            return i;
        if (i > MAX_ITERATIONS)
            return 0;
    }
}

int main(int argc, const char * argv[]) {
    struct timeval aTv;
    gettimeofday(&aTv, NULL);
    long init_time = aTv.tv_sec;
    long init_usec = aTv.tv_usec;

    int x, y;
    for (y = -39; y < 39; y++) {
        //printf("\n");
        for (x = -39; x < 39; x++) {
            volatile int i = mandelbrot(x/40.0, y/40.0);
            //if (i==0)
            //    printf("*");
            //else
            //    printf(" ");
        }
    }
    //printf("\n");

    gettimeofday(&aTv, NULL);
    double query_time = (aTv.tv_sec - init_time) + (double)(aTv.tv_usec - init_usec)/1000000.0;
    printf("C Elapsed %0.3f\n", query_time);
    return 0;
}
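For reference, a typical way to build and run it would be something like the following. The file name and the -O2 flag are assumptions on my part; the post does not record the exact gcc options, and the timing is sensitive to the optimization level:
$ gcc -O2 -o mandelbrot mandelbrot.c
$ ./mandelbrot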
PHP/HHVM
Here is the version of PHP.
$ php --version
PHP 7.2.24-0ubuntu0.18.04.6 (cli) (built: May 26 2020 13:09:11) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.2.24-0ubuntu0.18.04.6, Copyright (c) 1999-2018, by Zend Technologies
Here is the version of HHVM.
$ hhvm --version
HipHop VM 3.21.0 (rel)
Compiler: 3.21.0+dfsg-2ubuntu2
Repo schema: ebd0a4633a34187463466c1d3bd327c131251849
There is no difference in the source code between PHP and HHVM.
<?php
define("BAILOUT", 16);
define("MAX_ITERATIONS", 1000);

class Mandelbrot
{
    function Mandelbrot()
    {
        $d1 = microtime(1);
        for ($y = -39; $y < 39; $y++) {
            for ($x = -39; $x < 39; $x++) {
                $this->iterate($x/40.0, $y/40.0);
            }
        }
        $d2 = microtime(1);
        $diff = $d2 - $d1;
        printf("PHP Elapsed %0.3f\n", $diff);
    }

    function iterate($x, $y)
    {
        $cr = $y - 0.5;
        $ci = $x;
        $zr = 0.0;
        $zi = 0.0;
        $i = 0;
        while (true) {
            $i++;
            $temp = $zr * $zi;
            $zr2 = $zr * $zr;
            $zi2 = $zi * $zi;
            $zr = $zr2 - $zi2 + $cr;
            $zi = $temp + $temp + $ci;
            if ($zi2 + $zr2 > BAILOUT)
                return $i;
            if ($i > MAX_ITERATIONS)
                return 0;
        }
    }
}

$m = new Mandelbrot();
?>
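Since the source is identical, the same file is simply passed to each runtime; for example (the file name mandelbrot.php is hypothetical):
$ php mandelbrot.php
$ hhvm mandelbrot.php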
Ruby
Here is the version of Ruby.
$ ruby --version
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux-gnu]
Here is the Ruby source code.
BAILOUT = 16
MAX_ITERATIONS = 1000

class Mandelbrot
  def initialize
    #puts "Rendering"
    for y in -39...39 do
      #puts
      for x in -39...39 do
        i = iterate(x/40.0, y/40.0)
        #if (i == 0)
        #  print "*"
        #else
        #  print " "
        #end
      end
    end
  end

  def iterate(x, y)
    cr = y - 0.5
    ci = x
    zi = 0.0
    zr = 0.0
    i = 0
    while (1)
      i += 1
      temp = zr * zi
      zr2 = zr * zr
      zi2 = zi * zi
      zr = zr2 - zi2 + cr
      zi = temp + temp + ci
      return i if (zi2 + zr2 > BAILOUT)
      return 0 if (i > MAX_ITERATIONS)
    end
  end
end

time = Time.now
Mandelbrot.new
#puts
puts "Ruby Elapsed %f" % (Time.now - time)
Python/PyPy
Here is the version of Python.
$ python --version
Python 2.7.15+
And here is the version of PyPy.
$ pypy --version
Python 2.7.13 (5.10.0+dfsg-3build2, Feb 06 2018, 18:37:50)
[PyPy 5.10.0 with GCC 7.3.0]
Here is the Python source code. The source code for PyPy is the same.
import sys, time

stdout = sys.stdout

BAILOUT = 16
MAX_ITERATIONS = 1000

class Iterator:
    def __init__(self):
        #print 'Rendering...'
        for y in range(-39, 39):
            #stdout.write('\n')
            for x in range(-39, 39):
                i = self.mandelbrot(x/40.0, y/40.0)
                #if i == 0:
                #    stdout.write('*')
                #else:
                #    stdout.write(' ')

    def mandelbrot(self, x, y):
        cr = y - 0.5
        ci = x
        zi = 0.0
        zr = 0.0
        i = 0
        while True:
            i += 1
            temp = zr * zi
            zr2 = zr * zr
            zi2 = zi * zi
            zr = zr2 - zi2 + cr
            zi = temp + temp + ci
            if zi2 + zr2 > BAILOUT:
                return i
            if i > MAX_ITERATIONS:
                return 0

t = time.time()
Iterator()
print 'Python Elapsed %.02f' % (time.time() - t)
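As with PHP and HHVM, the same file is passed to both interpreters (the file name mandelbrot.py is hypothetical):
$ python mandelbrot.py
$ pypy mandelbrot.py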
Kinx/Kinx(native)
Here is the version of Kinx.
$ kinx -v
kinx version 0.9.2
Here is the source code of normal Kinx.
const BAILOUT = 16;
const MAX_ITERATIONS = 1000;

function mandelbrot(x, y) {
    var cr = y - 0.5;
    var ci = x;
    var zi = 0.0;
    var zr = 0.0;
    var i = 0;
    while (true) {
        i++;
        var temp = zr * zi;
        var zr2 = zr * zr;
        var zi2 = zi * zi;
        zr = zr2 - zi2 + cr;
        zi = temp + temp + ci;
        if (zi2 + zr2 > BAILOUT)
            return i;
        if (i > MAX_ITERATIONS)
            return 0;
    }
}

var tmr = new SystemTimer();
var x, y;
for (y = -39; y < 39; y++) {
    #System.print("\n");
    for (x = -39; x < 39; x++) {
        var i = mandelbrot(x/40.0, y/40.0);
        #if (i==0)
        #    System.print("*");
        #else
        #    System.print(" ");
    }
}
#System.print("\n");
System.print("Kinx Elapsed %0.3f\n" % tmr.elapsed());
Here is the source code of Kinx with native. A type annotation is not necessary where the type can be inferred from the expressions, so it was enough to add just :dbl to the arguments.
const BAILOUT = 16;
const MAX_ITERATIONS = 1000;

native mandelbrot(x:dbl, y:dbl) {
    var cr = y - 0.5;
    var ci = x;
    var zi = 0.0;
    var zr = 0.0;
    var i = 0;
    while (true) {
        i++;
        var temp = zr * zi;
        var zr2 = zr * zr;
        var zi2 = zi * zi;
        zr = zr2 - zi2 + cr;
        zi = temp + temp + ci;
        if (zi2 + zr2 > BAILOUT)
            return i;
        if (i > MAX_ITERATIONS)
            return 0;
    }
}

var tmr = new SystemTimer();
var x, y;
for (y = -39; y < 39; y++) {
    #System.print("\n");
    for (x = -39; x < 39; x++) {
        var i = mandelbrot(x/40.0, y/40.0);
        #if (i==0)
        #    System.print("*");
        #else
        #    System.print(" ");
    }
}
#System.print("\n");
System.print("Kinx(native) Elapsed %0.3f\n" % tmr.elapsed());
Result
Here is the result, averaged over 10 runs and ordered from fastest to slowest. 'real' is the value reported by the time command.
language | version | time(sec) | time(real) |
---|---|---|---|
C | 7.4.0 | 0.018 | 0.046 |
PyPy | 5.10.0 | 0.020 | 0.122 |
Kinx(native) | 0.9.2 | 0.048 | 0.107 |
HHVM | 3.21.0 | 0.068 | 0.552 |
PHP | 7.2.24 | 0.182 | 0.241 |
Ruby | 2.5.1 | 0.365 | 0.492 |
Kinx | 0.9.2 | 0.393 | 0.457 |
Python | 2.7.15 | 0.564 | 0.601 |
Good! Kinx(native) is faster than HHVM. And I am happy that normal Kinx is almost the same as the Ruby VM, which I consider to be very fast.
By the way, PyPy's displayed time is remarkably fast, but its real time is almost the same as Kinx(native)'s. I guess the difference comes from optimization.
The result also shows that HHVM is slower than PHP in the real time of the time command. This is presumably because its compilation takes a long time, and it cannot be helped since that is part of the language's design. The same compilation penalty appears in Kinx(native) as well, although only slightly.
Okay, let's compare with the results in the original article. In this benchmark, the difference between environments seems to be a big factor. Only the HHVM numbers look strange... and I don't know why. In the other cases it is around 2x faster on my environment.
language | version | time(sec) | Original time(sec) | Original version |
---|---|---|---|---|
C | 7.4.0 | 0.018 | 0.022 | 4.9.2 |
PyPy | 5.10.0 | 0.020 | | |
Kinx(native) | 0.9.2 | 0.048 | | |
HHVM | 3.21.0 | 0.068 | 0.030 | 3.5.0 |
PHP | 7.2.24 | 0.182 | 0.281 | 7 |
Ruby | 2.5.1 | 0.365 | 0.684 | 2.1.5 |
Kinx | 0.9.2 | 0.393 | | |
Python | 2.7.15 | 0.564 | 1.128 | 2.7.8 |
Conclusion
Benchmarking is a lot of fun when the result is good. I have not been able to work on native much these days, but it is one of the distinctive features of Kinx, so I want to keep improving it.
See you next time.
By the way, here is the script I used for the measurement. It uses the Process class that I implemented recently. The value shown as average is what I used for the results.
using Process;

var count = 10;
var command = [$$[1], $$[2]];
var r = [];
var re = /[0-9]+\.[0-9]+/;
for (var i = 0; i < count; ++i) {
    var result = "";
    var [r1, w1] = new Pipe();
    var p1 = new Process(command, { out: w1 }).run();
    w1.close();
    while (p1.isAlive() || r1.peek() > 0) {
        var buf = r1.read();
        if (buf.length() < 0) {
            System.println("Error...");
            return 1;
        } else if (buf.length() > 0) {
            result += buf;
        } else {
            // System.println("no input...");
        }
    }
    re.reset(result);
    if (re.find()) {
        r.push(Double.parseDouble(re.group[0].string));
    }
}
var total = r.reduce(&(r, e) => r + e);
System.println("total : %8.3f" % total);
System.println("count : %8d" % r.length());
System.println("average: %8.3f" % (total / r.length()));
Thank you!