You may have noticed there is a lot of noise about ARM vs x86 lately, mainly because of the new MacBooks with Apple silicon. But if you are an AWS user, you may have noticed that Amazon has offered ARM-based EC2 instances for a while.
At the moment the 2nd generation of AWS Graviton processors is available in EC2 T4g, M6g, C6g, and R6g instances (and their variants with local NVMe-based SSD storage), which AWS claims provide up to 40% better price performance over comparable current-generation x86-based instances. From my SRE/DevOps engineer perspective, a potential 40% reduction of our AWS EC2 bill sounds very interesting.
In our company, a big portion of our EC2 fleet is used for running Elasticsearch. During 2020 we were able to move the majority of our servers to Elasticsearch 7, so the fact that the ARM and AArch64 architectures are officially supported since Elasticsearch 7.8.0 was quite interesting news for us. Elasticsearch seems like the best option to start testing ARM instances in our infrastructure because:
- it is supported (we don't have to build an ARM version on our own)
- we have a lot of Elasticsearch servers (one deployment can cover a big portion of our infrastructure)
- Elasticsearch is distributed (you can start with one server in the cluster and slowly continue the conversion)
- Elasticsearch performs many parallel tasks, so it might benefit from real physical cores instead of logical cores (simultaneous multithreading in Intel and AMD x86 chips)
I have found that there is a nice benchmark tool for Elasticsearch called esrally, with multiple benchmark tracks for different use cases. Installing and running the benchmark with default settings on Ubuntu 20.04 is quite easy.
```shell
sudo apt-get install build-essential python3-dev openjdk-11-jdk python3-pip
python3 -m pip install esrally --user

# on the x86 instances
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
./.local/bin/esrally configure
./.local/bin/esrally --distribution-version=7.8.0

# on the ARM instances
export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-arm64"
./.local/bin/esrally configure
./.local/bin/esrally --distribution-version=7.8.0
```
For my tests I decided to use the default track (benchmark) geonames with default settings. The tested EC2 instance families were T3 (Intel-based x86), T3a (AMD-based x86), and T4g (ARM-based), all of them in the medium size (2 vCPUs and 4GB of RAM) with unlimited CPU credits. The selected EBS volume for all instances was a 30GB gp3 volume with 3000 IOPS (the default IOPS count for gp3). The memory consumption of the Java process was ~1.5GB, so 4GB is enough for the recommended 50:50 ratio between heap and file cache.
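If you want to spell out the track instead of relying on the default, the race can be started like this (a sketch based on the esrally CLI of that era; newer esrally versions use the `esrally race ...` subcommand instead):

```shell
# Run the geonames track against a self-provisioned Elasticsearch 7.8.0
# (geonames is the default track, this just makes it explicit)
./.local/bin/esrally --distribution-version=7.8.0 --track=geonames
```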
| EC2 instance | CPU | vCPUs | RAM | clock speed | Price/hr* |
|---|---|---|---|---|---|
| t3.medium | Intel Xeon Platinum 8000 | 2 | 4GB | 3.1 GHz | $0.0456 |
| t3a.medium | AMD EPYC 7000 | 2 | 4GB | 2.5 GHz | $0.0408 |
| t4g.medium | AWS Graviton2 | 2 | 4GB | 2.5 GHz | $0.0368 |
* On-demand pricing, Europe (Ireland) region
Throughput: Number of operations that Elasticsearch can perform within a certain time period, usually per second.

Latency: Time period between submission of a request and receiving the complete response. It also includes wait time, i.e. the time the request spends waiting until it is ready to be serviced by Elasticsearch.

Service time: Time period between start of request processing and receiving the complete response. This metric can easily be mixed up with latency but does not include waiting time. This is what most load testing tools refer to as "latency" (although it is incorrect).
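To make the distinction between latency and service time concrete, here is a tiny illustration; the numbers are made up, only the relationship between the metrics matters:

```python
# Hypothetical request timings in milliseconds (illustrative values only).
wait_time = 12.0      # request sits in a queue before Elasticsearch picks it up
service_time = 35.0   # start of processing until the complete response

# esrally's "latency" includes the wait time; "service time" does not.
latency = wait_time + service_time

print(f"service time: {service_time} ms, latency: {latency} ms")
```

Under load the queue grows, so latency can climb even while service time stays flat.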
The results are normalized to t3.medium as 1.00; lines where throughput was the same for all 3 instances were removed.
Let's talk just about the tasks where there are some differences between the instances. The tasks with the same values are probably not CPU-bound.
We can see ~87% of the t3.medium performance when we compare t3a.medium to t3.medium. This is probably expected, since the ratio between 2.5 GHz for AMD and 3.1 GHz for Intel is ~81%. Both values are turbo CPU clock speeds, so the real clock speeds might be slightly different, and there might be a small difference in IPC. The price of t3a.medium is 90% of t3.medium, so the performance per price is slightly in favor of the Intel-based instance.
Now the interesting part: t4g.medium vs t3.medium. 30% more performance for ARM is quite surprising to me, and when we combine this with the 20% lower price, the performance per price is amazing. It is hard to say how much Elasticsearch can benefit from the real physical cores on AWS Graviton2 vs SMT on the Intel and AMD processors, but it might be an explanation for why the Graviton2 instance scores 30% higher than Intel and 50% higher than AMD.
I am looking forward to doing more tests and maybe trying to add a few Graviton2 instances into our current Elasticsearch cluster to test some real-world scenarios.