
Avoiding Beginner Mistakes Hampering You to Scale Backend⚡️

Riken Shah on June 14, 2024

This blog covers how I unlocked performance that allowed me to scale my backend from 50K requests → 1M requests (~16K reqs/min) on minimal resource...
Achintya • Edited

Amazing blog! Can you also share the code of the backend you made?
Also, another optimization I would have added is using sqlc with pgx rather than GORM, since sqlc gives the performance of raw query execution with proper idiomatic Go models.
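
For readers who haven't used it, here is a minimal sketch of the pgx approach (connection string, table, and struct are made up for illustration); sqlc would generate similar code from your SQL instead of it being hand-written:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"
)

// User is a hypothetical model; sqlc would normally generate structs like this from SQL.
type User struct {
	ID   int64
	Name string
}

func main() {
	ctx := context.Background()

	// Open a connection pool (the DSN is a placeholder).
	pool, err := pgxpool.New(ctx, "postgres://app:secret@localhost:5432/app")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	// Raw, parameterized query: no ORM reflection, just a scan into the struct.
	var u User
	err = pool.QueryRow(ctx, "SELECT id, name FROM users WHERE id = $1", 42).
		Scan(&u.ID, &u.Name)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", u)
}
```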

Riken Shah

Thanks, Achintya!
My next set of optimizations is about pushing the app beyond ~350 RPS, for which I might need to dump GORM and opt for a faster, lighter alternative like pgx.

Sorry, I cannot share the code for the backend as it is proprietary to my work.

Nadeem Ahmad

Couldn't have worded it better!!

Riken Shah

Thanks Nadeem :D

Steven Brown

Thanks for this blog post! A lot of good information here and I'm mostly curious around the database and data set.

How large was the initial data set you're working with, or was it an empty database? At first I was thinking most operations were on a single table (incredibly simple CRUD), but when you mentioned the joins, my curiosity about the DB schema was piqued.

How many tables are joined on the queries you were able to bring down to ~50ms initially?
Are those included in the ones that went back to 200-400ms?

I'm also curious on the database settings and query structure.

Do you have the fields being returned added to the index (in the correct order) to better utilize the machine's memory, or would that make the index too large?

Thanks again!

Riken Shah

How large was the initial data set you're working with or was it an empty database?

The CRUD operation I mentioned touches multiple tables for authentication, integrity verification, the actual operation, and post-processing tasks. Most of the tables had < 10K records, but the entity on which operations were performed started with ~500K records and ended up with > 1M.

How many tables are joined on the queries you were able to bring down to ~50ms initially?

Just two, but on fairly heavy tables (> 500K and 100K records).

Are those included in the ones that went back to 200-400ms?

Yep. My suspicion for the queries taking 200-400ms is the availability of open connections. As we have capped open connections at 300, the SQL driver might wait for a connection to free up. Going to sit on this and investigate the real reason; it might just be slow query execution.
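
For context, a cap like that is typically applied on the pool that GORM exposes via database/sql. A rough sketch (the DSN and the idle/lifetime numbers are illustrative; only the 300 cap comes from the comment above):

```go
package main

import (
	"log"
	"time"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

func main() {
	// DSN is a placeholder.
	db, err := gorm.Open(postgres.Open("host=localhost user=app dbname=app"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	sqlDB, err := db.DB() // underlying *sql.DB that owns the connection pool
	if err != nil {
		log.Fatal(err)
	}
	sqlDB.SetMaxOpenConns(300)                 // hard cap on concurrent connections
	sqlDB.SetMaxIdleConns(50)                  // connections kept warm between requests
	sqlDB.SetConnMaxLifetime(30 * time.Minute) // recycle connections periodically
}
```

When all 300 connections are busy, database/sql queues the query until one frees up, which shows up as added latency rather than an error.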

I'm also curious on the database settings and query structure.

We are using AWS's managed Postgres service, Aurora, running on the base instance db.t4g.medium (2 vCPUs, 4 GB RAM, 64-bit Graviton).

Sorry, won't be able to share the query structure as this is proprietary work.

Do you have the fields being returned added to the index (in the correct order) to better utilize the machine's memory, or would that make the index too large?

Yep, I do. We suffer from slightly slower writes, but it is worth it as we are a READ-heavy app.
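
As an illustration of that trade-off, a Postgres covering index can be created from a GORM migration roughly like this (table and column names are hypothetical; INCLUDE needs Postgres 11+):

```go
package main

import (
	"log"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

func main() {
	// DSN is a placeholder.
	db, err := gorm.Open(postgres.Open("host=localhost user=app dbname=app"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// INCLUDE keeps the returned columns in the index leaf pages, so the query can be
	// answered index-only, at the cost of slightly slower writes and a bigger index.
	err = db.Exec(`CREATE INDEX IF NOT EXISTS idx_orders_user_created
	                 ON orders (user_id, created_at DESC)
	                 INCLUDE (status, total)`).Error
	if err != nil {
		log.Fatal(err)
	}
}
```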

Great questions, and thanks for reading this, Steve!

timylv

Great!

Samuel Adams

Loved every bit of this. Well detailed and informative. Keep dropping them 🙌🏼

Riken Shah

Thanks Samuel, going to write more :D

Idorenyin Obong

good article

Riken Shah

Thanks, Idorenyin ❤️

Antonio | CEO at Litlyx.com

Really good article and value shared.

Riken Shah

Thanks Antonio :)

Leo Antony

Excellent content, very informative. I'm gonna learn Grafana after this.

Riken Shah

Definitely, Grafana is too good.
Thanks, Leo for the kind words :)

Marcos Silva

Nice work! Keep going with this excellent content!

Riken Shah

Thanks Marcos, happy to see you liked it :D

Harsh Agarwal

Thanks for the article
This was really helpful. Will try to implement some of it in our Go service as well

Riken Shah

Thanks Harsh, glad you enjoyed it :D

Petra Grunheidt • Edited

Total Banger 🔥🔥🔥🔥🔥
Thanks for sharing

Riken Shah

Thanks Petra, glad you liked it :D

Kiran Naragund

Thanks for sharing this Riken Shah!
Really Helpful

Riken Shah

Thanks Kiran, glad it was helpful :D

M
Riken Shah

Yep, I've read this article; it's one of the best. I got to know about file descriptors after reading it :D
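
For anyone new to the topic: each open socket consumes a file descriptor, so the per-process limit is a common ceiling under load. A small sketch (Linux/macOS only) of checking and raising the soft RLIMIT_NOFILE limit from Go:

```go
package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("open-file limit: soft=%d hard=%d\n", lim.Cur, lim.Max)

	// Raise the soft limit up to the hard limit; OS-level caps may still reject this.
	lim.Cur = lim.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		log.Fatal(err)
	}
}
```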

kalyan k

You mentioned you have strong transaction handling in your middleware. How is this implemented? Is this in Go? Great article btw.

Riken Shah

Thanks for giving it a read Kalyan :D

How is this implemented? Is this in Go?
Yep, will share this in the next article!
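
Until that article is out, here is one common pattern for per-request transaction middleware in Go with GORM; to be clear, this is a hedged sketch and not necessarily how the author implemented it:

```go
package middleware

import (
	"context"
	"net/http"

	"gorm.io/gorm"
)

type ctxKey struct{}

// TxMiddleware opens a GORM transaction per request, stores it in the request
// context, commits after the handler returns, and rolls back on panic.
func TxMiddleware(db *gorm.DB, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tx := db.Begin()
		defer func() {
			if rec := recover(); rec != nil {
				tx.Rollback()
				panic(rec) // let an outer recovery middleware turn this into a 500
			}
		}()

		// Handlers retrieve the transaction with TxFromContext.
		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), ctxKey{}, tx)))

		if err := tx.Commit().Error; err != nil {
			tx.Rollback()
		}
	})
}

// TxFromContext returns the per-request transaction stored by TxMiddleware.
func TxFromContext(ctx context.Context) *gorm.DB {
	return ctx.Value(ctxKey{}).(*gorm.DB)
}
```

A production version would usually also wrap the ResponseWriter and roll back when the handler writes an error status, instead of committing unconditionally.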

Nirmesh Mashru

Great, Thanks for sharing

Vinod Borole

Can you share this Git project so we can also contribute?

Riken Shah

Sorry, this is proprietary work, can't share!

Tablepad

Good post, thanks!

Riken Shah

Thanks for giving it a read :D

Yudi Haryasa

Great writeup! 🔥 🔥 🔥

Ajay

Great blog! I understand the code is proprietary to your work, but could you provide a sample repository that you made? It would be helpful for reference and learning. ❤️

Andres Salgado

What part of the equation does storage play in all this fun?

Mubasher Usman

Amazing

Adrian Goodyer

Great post Riken, especially for your first one! 🚀

Very much looking forward to reading about your observability/monitoring setup.

Gerben • Edited

now my laptop alone can generate traffic of 12K-18K requests a minute.

But the graph below it says it's 12-18K per second, not per minute. Same in the "million hits" section below it.

Riken Shah • Edited

It's a time series graph; it starts slow and ramps up to 12-18K reqs.