DEV Community

Discussion on: Node.js CPU intensive 🔥

Pierre Vahlberg

Seems the bottleneck is hardware, so the solution should be hardware.

Would it be possible (and practical) to fetch and write files from, for example, S3 buckets, and then scale the processing using services such as Lambdas or EC2 instances running a tiny Node app, like below:

Your main app could parse the file list and distribute jobs, one file at a time, to a scalable "parser endpoint" that receives an S3 object path, converts the file, and puts the result in another bucket. This parser service would then scale with load on the endpoint.
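A minimal sketch of what such a parser endpoint could look like, shaped like an AWS Lambda handler. The event shape, bucket names, and the `convertFile` step are all placeholders I've made up for illustration; a real version would read and write the objects via the AWS SDK.

```javascript
// Maps an input object key to where the converted result should land,
// e.g. "incoming/foo.jar" -> "converted/foo.jar" (hypothetical layout).
function outputKeyFor(inputKey) {
  return inputKey.replace(/^incoming\//, 'converted/');
}

// Lambda-style handler: receives an S3 object path from the main app,
// converts the file, and reports where the result was written.
async function handler(event) {
  const { bucket, key } = event;
  // const data = await s3.getObject(...)   // fetch source file (sketch)
  const converted = await convertFile(key); // placeholder conversion step
  const outKey = outputKeyFor(key);
  // await s3.putObject(...)                // write result to output bucket
  return { bucket: 'converted-bucket', key: outKey, size: converted.length };
}

// Placeholder standing in for the real jar-parsing work.
async function convertFile(key) {
  return `converted contents of ${key}`;
}
```

Because each invocation handles exactly one file, the platform can fan the work out across as many instances as the load demands.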

You could probably write an MVP or POC with like 5 jar files.

Pierre Vahlberg

Your main app would then, as was suggested, await the async calls: fire off ~20 concurrent requests and wait until one is done before firing the next. Should be a fairly simple loop.

Adam Crockett Author

Looks like I'm going to need to learn some GCP; we don't use AWS where I work 😑 not that GCP is bad.

It sounds reasonable to make an app like this. Maybe I can get the whole thing running on my crappy 16 logical cores 😅 (I'm from the dual-core era, so anything above 4 sounds outstanding). Then, once I understand how my so-far cobbled-together Node app works, I can take that and port it out to GCP. It's a great suggestion actually. I did wonder how far optimization of my code would go, but as I said in other threads, it's now slower and more stable, because as you correctly point out, hardware is the bottleneck here. Still, what a learning experience this is.