DEV Community

Discussion on: How to use Dependency Injection in Functional Programming

 
Jesse Warden

I think we're talking about different things, or I'm just misunderstanding. You can, and do, make multiple outgoing HTTP REST calls from a single Lambda execution context. Even if you set your Lambda reserved concurrency to 1, ensuring you only ever get 1 Lambda execution context, and only 1 actual instance being used, you can do a Promise.all with 5 HTTP calls, and they all work at the same time. Now by "same time" I don't mean parallelism vs. concurrency, or Node.js's fake concurrency vs. Go's; I just mean that you can see it take 30ms with multiple calls going out and responding vs. doing a regular Promise chain of 1 call after another. Now, yes, you can lower the Lambda's memory to less than 512 MB, which I think gives you a super-small Raspberry Pi-class vCPU, so your network calls and code in general go much slower, but my point is you CAN do concurrency; I've seen it work.
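To sketch what I mean (fakeFetch is a made-up stand-in for real HTTP calls, with an artificial delay; not any particular library):

```javascript
// fakeFetch stands in for an outgoing HTTP call that takes `ms` to respond.
const fakeFetch = (url, ms) =>
  new Promise(resolve => setTimeout(() => resolve(`${url} ok`), ms));

const urls = ['/a', '/b', '/c', '/d', '/e'];

// Concurrent: all 5 calls in flight at once; total time ≈ the slowest call.
const concurrent = () => Promise.all(urls.map(u => fakeFetch(u, 30)));

// Sequential: a regular chain of 1 call after another; total ≈ the sum.
const sequential = async () => {
  const results = [];
  for (const u of urls) {
    results.push(await fakeFetch(u, 30));
  }
  return results;
};
```

Both return the same 5 results; the concurrent version just finishes in roughly 30ms instead of roughly 150ms, even inside a single execution context.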

What that article is talking about regarding Lambda is that "running Node.js code" doesn't act like Express.js, where you have a single EC2 server it's hosted on with some ALB throwing 50 calls at it, and Express.js keeps accepting those and handling them in turn using Node's non-blocking I/O. So while you're writing code that feels synchronous, you may have 50 routes all executing "at the same time", but because you wrote it in an immutable way and are using Express.js, you don't really know this. You also don't even know that you're 1 of 3 in a container on an ECS cluster. Whereas with Lambda, you typically have 1 ALB or API Gateway URL firing up "1 Lambda per request". So 50 requests == 50 Lambda instances. AWS's Firecracker has a way to re-use those instances, though, so you may actually get 50 requests only spawning 30 Lambdas; 20 of those are a single Lambda execution context handling 2 calls each. Meaning your handler function is called twice, but the code up top that goes const fetch = require('fetch') is only run once... because you're in the same instance.
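A rough sketch of that "top of the file runs once, handler runs per invocation" behavior (initCount/invokeCount are illustrative counters I made up, not anything AWS provides):

```javascript
// Everything at module scope runs once per execution context,
// i.e. on cold start — like `const fetch = require('fetch')`.
let initCount = 0;
let invokeCount = 0;

initCount += 1; // runs once when the instance is spun up

// The handler runs once per invocation routed to this instance.
const handler = async (event) => {
  invokeCount += 1; // runs on every request
  return { initCount, invokeCount };
};
```

If Firecracker routes 2 requests to the same instance, you'd see initCount stay at 1 while invokeCount climbs to 2.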

When you're building Serverless Lambdas, your Lambda is invoked by that event; it's not like Express.js where you "just have Express listening on some port and it just fires the wired-up function for that route every time it gets an incoming HTTP request". So Lambdas don't care who invoked them; you just have a handler that runs, you return a value, and that's it. There's no "listening for other events while your Lambda is running"; it's literally just const handler = async event => console.log("got an event:", event) ... and that's it. So with 1 execution context per call, that'd be 50 of those functions running independently. If it were 30, and 20 of them shared the same execution context, JavaScript/Node doesn't care because it's just running the handler function twice; there are no shared global variables/state between them, and that's fine.

AWS has done an amazing job of making SQS, API Gateway, ALB, SNS, etc. all "handle their own calling and retry and some batching", so your code doesn't care about any of that as opposed to the stupid process.on('SIGTERM', process.exit) crap you see in some Express/Restify code bases.

So again, that article is talking about "Your Lambda runs and stops, you can't have it sit there waiting for new API Gateway, or S3 or whatever requests triggered it". That's a fundamental of how AWS Lambda works. I have zero idea if Google/Azure work the same way.

We're on the same page with the I/O, though; again, most of my Lambdas run in 10 seconds or less. If they need more time, too bad; we just make 'em time out. That said, I have this one AppSync Lambda function that makes 14 concurrent requests that's super close to 9.8 seconds, but I blame the Go developers I'm calling and their lack of caching vs. being constructive and helping them on said cache 😜.

Yes, ok, read your other paragraph, yeah, we're on the same page there too on re-used instances.

Yeah, you can do with Node.js what they're doing with Go. Slower, sure, but take SQS or AppSync or Dynamo; all 3 support batching. Rather than "1 SQS message is parsed by 1 Lambda", you can say "no no, 200 messages are handled by this Lambda", and you either map each to a Promise to do stuff like copy data to S3, one at a time, OR do Promise.all if they can run concurrently. And it works, we've done it. Now is that Lambda somehow flattening, similar to the browser, those "200 HTTP requests into 1 TCP/IP call stack request, then unpacking on the receiving end to 200 requests to optimize network traffic"? Maybe, I have no idea, but I know writing our 200 DynamoDB messages back in Node using Promise.all takes about 3 seconds concurrently, whereas one at a time it's 20 seconds. So the end result is even if it's fake concurrency, it's working like we expect.
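The batch shape looks roughly like this (writeToDynamo is a made-up stand-in for a real DynamoDB put; a real SQS event does have a Records array, but the rest is simplified):

```javascript
// Stand-in for a real DynamoDB write; resolves with a per-record result.
const writeToDynamo = async (record) => ({ id: record.messageId, ok: true });

// One Lambda invocation receives the whole batch of SQS records,
// maps each record to a Promise, and awaits them all concurrently.
const handler = async (sqsEvent) =>
  Promise.all(sqsEvent.Records.map(writeToDynamo));
```

Swap Promise.all for a sequential for...of loop with await and you get the "one at a time" version; same results, just the 20-second flavor instead of the 3-second one.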

Jesse Warden

... ok I should play with Google and Azure more... so many toys...