I am working on a side project called Criterion.dev that tracks application performance at the function level. The Criterion API uses a PostgreSQL database and Actix Web for the web service.
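For context, a typical API handler looks roughly like the sketch below. This is a minimal illustration, not the real code: the route, table, and column names (`/benchmarks/{id}`, `benchmarks`, `mean_ns`) are assumptions.

```rust
use actix_web::{get, web, App, HttpServer, Responder};
use sqlx::PgPool;

#[get("/benchmarks/{id}")]
async fn get_benchmark(pool: web::Data<PgPool>, id: web::Path<i64>) -> impl Responder {
    // A single indexed lookup; on the database itself this runs in well under 2 ms.
    let row: (String, f64) = sqlx::query_as("SELECT name, mean_ns FROM benchmarks WHERE id = $1")
        .bind(*id)
        .fetch_one(pool.get_ref())
        .await
        .expect("query failed");

    web::Json(serde_json::json!({ "name": row.0, "mean_ns": row.1 }))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // DATABASE_URL points at Cloud SQL (GCP) or RDS (AWS).
    let pool = PgPool::connect(&std::env::var("DATABASE_URL").unwrap())
        .await
        .expect("failed to connect to Postgres");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .service(get_benchmark)
    })
    .bind(("0.0.0.0", 8080))?
    .run()
    .await
}
```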
I am happy with the performance overall, but a stray thought made me question the numbers.
GCP Infrastructure
Compute Engine: E2-small
Cloud SQL: PostgreSQL, 1 shared vCPU, 0.6 GB RAM
API request wait time: 84ms
AWS Infrastructure
EC2: t2.micro
RDS: db.t2.micro
API request wait time: 42ms
The service started out on GCP, so I logged into that instance, connected to the SQL server, and ran the same queries the service executes by hand. They actually take less than 2 ms each, which convinced me that the network accounts for the bulk of the request wait time.
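One way to check that hypothesis from the application side is to time the full round trip of a query from the API host, which includes the network hop to Cloud SQL or RDS rather than just the server-side execution time. A rough sketch (the connection string and the `benchmarks` table are placeholders):

```rust
use sqlx::PgPool;
use std::time::Instant;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    // DATABASE_URL points at the managed database, exactly as the API sees it.
    let pool = PgPool::connect(&std::env::var("DATABASE_URL").unwrap()).await?;

    // Warm up the connection so we measure steady-state latency, not connection setup.
    sqlx::query("SELECT 1").execute(&pool).await?;

    for _ in 0..10 {
        let start = Instant::now();
        // Stand-in for one of the real service queries.
        let _row: (i64,) = sqlx::query_as("SELECT count(*) FROM benchmarks")
            .fetch_one(&pool)
            .await?;
        // Elapsed time = network round trip + server-side execution.
        println!("round trip: {:?}", start.elapsed());
    }
    Ok(())
}
```

If these round trips come back in the tens of milliseconds while the same queries run in under 2 ms on the database itself, the difference is almost entirely the path between the VM and the managed database.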
So I copied the project over to AWS, created a new subdomain, and pointed the SPA at it. The request time dropped by about 40 ms.
Have you ever seen this before? Could it be due to the shared/free instance tiers I am experimenting with?