DEV Community

How to use Dependency Injection in Functional Programming

Jesse Warden on January 16, 2022

Dependency Injection is a technique to make the classes in Object Oriented Programming easier to test and configure. Instead of a class instantiati...
Magne

How would you compare Elm and ReScript? And which do you prefer?

Jesse Warden • Edited

which do you prefer?

The only reason I use ReScript is because Elm doesn't officially work on the server, I use AWS at jobs not Lamdera, and Roc isn't ready for prime time yet. That said... I've started to love the "we're imperative and don't care about Category Theory nonsense" style/attitude these OCaml kids have. I've never met a community like that before; it's neat, and I'm learning a lot from them. You can learn more about my journey in my video about JavaScript to ReScript.

Ok, comparison.

Elm works in the browser, not on the server. ReScript works in both the browser and on the server. There are ways to hack Elm into a headless state, but it's not fun. It is fun to watch others do it, though.

Elm has no side effects, so all functions are pure. ReScript is like TypeScript: it just compiles to JavaScript, so it doesn't have a runtime like Elm does. Thus, ReScript's side effects operate just like JavaScript's, and there is no I/O Monad type insanity to worry about. Instead, you have side-effects-everywhere insanity, just like you do in JavaScript. This makes ReScript require more unit tests. WAYYY less than JavaScript, to be sure, but you need stupider tests around side effects that you don't need in Elm, meaning you have to focus on testing more things.
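A quick JavaScript sketch of why that matters for testing (the function names here are hypothetical, not from either language's libraries): the pure function needs one assertion, while the side-effecting one needs its effect injected and stubbed.

```javascript
// Pure: output depends only on input, so a single assertion covers it.
const addTax = (price, rate) => price * (1 + rate);

// Impure-by-default: the rate comes from the outside world. To test this
// you have to pass in (and stub) fetchRate, and cover the failure path too.
const addTaxFromApi = async (price, fetchRate) => {
  const rate = await fetchRate(); // side effect: a network call in real code
  return price * (1 + rate);
};

addTaxFromApi(100, async () => 0.5).then((total) =>
  console.log(total) // 150, but only because we stubbed the side effect
);
```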

Elm is good enough with its types to ensure no runtime exceptions/errors, ever. ReScript is even MORE strict, yet still allows unsafe things through, and has escape hatches making it dangerous if you're not careful. For example, Elm has Maybe, specifically Nothing, to handle what in JavaScript would be null or undefined. ReScript is so strict and accurate about compiling to JavaScript that it has 2 completely different modules (well, 4, but...) to handle both Undefined and Null in a typed way, including ways to convert back and forth. It really is a symptom of OCaml being a lower-level systems language sometimes, and people from that style of language thinking in exacts. That said, it's not thorough, because if you convert an undefined to ReScript's version of Maybe, called option, you'll get a None, but sometimes you'll get a Some in the null vs. undefined case. It's better to use the Js.Nullable module to safely convert when you have to deal with JavaScript values coming in, like user input or parsing weird JSON. I bet some people love this level of accuracy and claim it is needed for some reasons, but I prefer Elm's simpler way of dealing with undefined/null: erasing it from existence, vs. ReScript giving you thick gloves so you don't burn yourself playing with fire.
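To see why ReScript bothers, here's the underlying JavaScript behavior it has to model. The `toOption` sketch below only illustrates what a safe conversion in the spirit of `Js.Nullable.toOption` has to do; the `{ tag, value }` encoding is made up, not ReScript's actual compiled output.

```javascript
// JavaScript genuinely has two "empty" values, and they're not the same.
console.log(null === undefined); // false
console.log(typeof null);        // "object"
console.log(typeof undefined);   // "undefined"

// A safe conversion to an option-like value must treat BOTH as None.
const toOption = (value) =>
  value === null || value === undefined
    ? { tag: "None" }
    : { tag: "Some", value };

console.log(toOption(null));      // { tag: 'None' }
console.log(toOption(undefined)); // { tag: 'None' }
console.log(toOption(0));         // { tag: 'Some', value: 0 }
```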

Elm's compiler is fast; faster than TypeScript's. I've not used it on a gigantor project yet, but I've seen various ways to speed it up, so I'm not too worried. However, ReScript is light speed. While I tend to do teensy microservices/functions in Serverless on purpose, to ensure my code doesn't grow into a large monolith, I just LOVE how fast I can iterate in ReScript. It's unbelievable how fast the compiler is in a monorepo with like 3 Lambdas and dozens of supporting files. This is the main reason I don't want to use TypeScript compared to ReScript; it's just crazy fast.

I like Elm libraries better. Both languages assume a lot, so if you're a beginner it can be pretty overwhelming to even get started. Like, the GraphQL library for Elm doesn't tell you how to generate code; they just assume you know it's a CLI and will run cli --help, and not the default loading from GitHub. Elm's whole focus is on the beginner and making the complex simple, and many of the library authors are in education or passionate about pedagogy, so this is the exception to the rule most times. My issue with ReScript is we're still in this weird time where Reason and ReScript split, but you can still use the Reason libraries in ReScript. It's not really clear, and sometimes when I try, things don't work and there are no errors and I'm like "wtf do I do now?". This is par for the course for JavaScript stuff, though, so I give a ton of leeway to that community; Elm is the opposite, and that works well, because their libraries... always work.

The Elm compiler error messages are better, EVEN IF you don't type your functions. Yes, with types they're better, but ReScript's are nowhere near as user friendly. Even if you do data-first programming, ReScript is still like "yeah, somewhere your stuff is broke". "Hey, uh... ReScript, how about a... you know... line number to start my investigation, eh, what do you think?" "No, good luck!" Elm's all line numbers, and formatted, and colors, and pretty, and friendly, and hinty... it's just night and day.

I like how Elm has no overuse of (), no need for {} or semi-colons, and isn't as mean as Python about spacing. ReScript has MUCH less need for {} and doesn't need semi-colons either, but you still sometimes have to type it like TypeScript to get better compiler error messages and un-confuse the compiler, and I've grown to like the ML-style Elm typing better than the Java-esque inline style:

Elm:

add : Int -> Int -> Int
add a b =
  a + b

vs. ReScript's inline:

let add = (a: int, b: int): int => a + b

ReScript's style is to NOT type things, because the type inference in OCaml-style compilers is just so good, but... maybe I'm doing something wrong, but I've found it's just better to add types, because the compiler gives messages I can actually read, and sometimes it requires a type to get "unconfused" in long chains.

I like how Elm is data-last, like normal Functional Programming, and ReScript is data-first. I also don't like how ReScript has this data-last baggage and makes new packages that are data-first to be more friendly to JavaScript developers. I think it's the same stupid tactic the JavaScript devs are trying with the Hack style vs. the F# style in the new pipeline operator. If you're a functional programmer, you'll learn to love data-last, and all the literature matches. But nNnnNNNnooooOOOo, that's not how the OCaml kids jam. It just hurts my brain to switch back and forth between Elm and ReScript is all; minor nitpick.
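The difference, sketched in JavaScript (all helper names here are hypothetical): data-last curries so partially-applied steps compose without mentioning the data, while data-first puts the data up front so each step reads left to right, which is what ReScript's `->` operator leans on.

```javascript
// Data-last (classic FP / Elm style): the data is the FINAL argument.
const map = (fn) => (xs) => xs.map(fn);
const filter = (pred) => (xs) => xs.filter(pred);
const pipe = (...fns) => (x) => fns.reduce((acc, fn) => fn(acc), x);

// Steps compose into a reusable pipeline; no data in sight yet.
const doubleEvensLast = pipe(
  filter((x) => x % 2 === 0),
  map((x) => x * 2)
);

// Data-first (ReScript/Belt style): the data is the FIRST argument,
// so calls nest (or chain) left to right like a method chain.
const mapF = (xs, fn) => xs.map(fn);
const filterF = (xs, pred) => xs.filter(pred);

const doubleEvensFirst = (xs) =>
  mapF(filterF(xs, (x) => x % 2 === 0), (x) => x * 2);

console.log(doubleEvensLast([1, 2, 3, 4]));  // [ 4, 8 ]
console.log(doubleEvensFirst([1, 2, 3, 4])); // [ 4, 8 ]
```

Same result either way; the disagreement is purely about which argument position composes better.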

I like how Elm has 1 architecture and "that's it". ReScript is like "Dude, we're just a fast compiler with better types than TypeScript, and we support functional things". Which means you could use it in Angular or React even if both code bases were heavily Object Oriented. Some find that amazing; I'm like "ugh". That said, it makes it easy for me to use in Serverless, Server, and CLI style architectures, including the Browser when I have to do some things in JavaScript because Elm doesn't support them (e.g. document.cookie). This is just where Elm and ReScript aren't really comparing apples to apples; one is a language, compiler, framework, and runtime, whereas the other, ReScript, is just a language and compiler. There's no need for a runtime "because JavaScript" or for a framework "because JavaScript".

... that said, as a team to be full stack ish? They're the shit. I love it.

Magne • Edited

btw, did you consider Golang or Clojure for the backend?

Golang is more imperative than functional, so that would be a sacrifice. But it is apparently so easy it could be learned in a few days. If you absolutely must have FP, then Clojure could be an alternative, though a bit more foreign to most.

Both Clojure and Golang have best-in-class concurrency (core.async and goroutines, respectively), and are so fast they use far fewer resources than Node.

Jesse Warden • Edited

Golang: I did some Golang for 3 months and don't like it. I get why some do: it's a small language with good built-ins/standard library, a super fast compiler, a super fast language, and the concurrency is easy to grok. Perhaps if I had learned it before my FP days I might like it more, but I don't need that kind of speed in the stuff I do; mainly back-end APIs for front-ends, or Lambda functions that aren't long running nor doing a lot of concurrency.

Clojure: It's... weird. So I like the Clojure community; it has some pretty prolific people, and bloggers who've taught me stuff. But, while I respect the JVM's power, I hate configuring/using it, the JVM blows on AWS Lambda in terms of startup time (if I were doing long running Lambdas, my tune would change), and I can't live without my static/strong types.

The above is why I gave up on Elixir/Erlang (even Gleam); I refuse to use EC2's or Docker.

We've used Golang in a 12-Lambda-function orchestrated Step Function; 11 were in JavaScript, but we used Go for the one that had to run for 15 minutes, and it was a beast (parsing megs of SOAP XML and doing other things at the same time). So I respect it, but again... edge case. I never do perf-related work; I'm more a UI guy or orchestration API guy, more concerned with correctness and data munging. So while I do a lot of HTTP REST concurrency in Node.js, that's the extent of it. If I run out of resources, I just turn up the Lambda memory slider, lelz

Magne • Edited

Thanks, that's very insightful.

I refuse to use EC2's or Docker.

I presume that's because they are not serverless? Just thought I'd mention that with Google Cloud Run you can actually run a Docker container as serverless. It has quite a few benefits over Cloud Functions, one of them being that you can use at least 4 vCPUs instead of just 1 [*], and you can reuse instances concurrently [**]. Thus, if you have a service in Golang you'd be able to utilize all 4 vCPU cores concurrently. Seems like the optimal setup if one wants to maximise resource utilization and minimise cost (haven't done the exact cost calculation, tho). Given that one has a boatload of simultaneous incoming requests, and thus the need for speed, that is. Otherwise, one might as well go with Node.js (running ReScript-compiled JS) on 1 vCPU.

[*] - But you can also set the Cloud Run vCPU count to just 1: cloud.google.com/run/docs/containe... This is the better alternative for single-threaded Node.js, if you don't want to muck around with the Node Cluster module to take advantage of the extra cores.

[**] I was particularly surprised to find out that with Cloud Functions:

“One of the hidden costs of using serverless Cloud Functions is that the runtime limits the concurrent requests to each instance to one. Arguably, this can simplify some of the programming requirements because developers don’t have to worry about concurrency and usage of global variables is allowed. However, this severely underuses the efficiency of Node.js event-driven IO, which allows a single Node.js instance to serve many requests concurrently. In other words, when using Cloud Functions, the server is functionally busy during the lifetime of a single request. The result of this restricted concurrency in Cloud Functions is that the function may be scaled up if there are more requests than there are instances to handle those requests. For a service under heavy load, this can quickly result in a large amount of scaling. That scaling can have unexpected and possibly detrimental side effects.” source

Jesse Warden

Yeah, that's half of it. The other half is the Docker workflow is just miserable. I always loved how in dynamic languages you could go node index.js 50 times a minute, then once you feel it's working, go aws lambda update-function and seconds later invoke your function to test while it's deployed. Docker build, even with caching, is just slow and miserable and does NOT utilize my skillset. Like, I really don't care about Unix and apk vs. apt-get, and why I need these things installed, and what base image to extend, blah blah blah. I just don't find any of that fun. Lambda runs my code fine; I'm not worried about missing some core piece of functionality. Different story in a GitLab pipeline, oh jeez... Alpine is slow installing Ninja for ReScript, but Debian is fast.

That's really weird that Google Cloud does that. AWS doesn't have that Lambda concurrency constraint issue. I'm an AWS kid, so I'm not sure what Google Cloud or Azure has over AWS.

Jesse Warden

Yeah, AWS Lambda can go up to 10 gigs of memory and 6 vCPUs, but most of my code is I/O bound (meaning waiting on HTTP stuff), so I don't really need that much power. However, I'm not sure I'm smart enough to use 'em yet hah. Maybe someday I'll need to parse lots of stuff or something.

Magne • Edited

Ah, the workflow issue makes sense. It would be more of a hassle, indeed.

That's really weird that Google Cloud does that. AWS doesn't have that Lambda concurrency constraint issue.

It seems that AWS Lambdas work the same way:

When your function is invoked, Lambda allocates an instance of it to process the event. When the function code finishes running, it can handle another request. If the function is invoked again while a request is still being processed, another instance is allocated,

docs.aws.amazon.com/lambda/latest/...

So, theoretically, you can't use the idle time, when the Node.js instance is waiting for I/O for one request, to process new incoming requests from the same Lambda function. Most Lambda function runtime would thus be idle time (waiting for I/O). Just like with Google Cloud Functions. Resulting in massive under-utilization of resources (Lambda runtime). But you want the Lambda to do what Node is good at: process massive amounts of requests without ever idling waiting for the I/O.

Note that this is different from AWS retaining an instance of a Lambda for later reuse (i.e. "will AWS Lambda reuse function instances?"), because that happens after the original request has finished. What we want is to reuse an instance while it is being used by one request (simply switching between requests when one request is waiting for I/O).

If the above described is indeed the case, it looks like the dirty little secret of serverless deployments... AWS and Google don't really want you to reuse instances, because that reduces their total runtime which is how they bill you.

On how to circumvent it, see for instance: How We Massively Reduced Our AWS Lambda Bill With Go (Update: on second reading it seems like this is talking about doing multiple outgoing I/O requests from within the same Lambda function invocation, like you mentioned Promise.all could have handled in Node.js. But what I am talking about is reusing Lambda invocations to serve multiple new incoming requests while it would otherwise be waiting for the DB response for the first request). In Cloud Run (with containers), this reuse of idle I/O time should be possible with Node too, but without the immediate benefit of multi-core concurrency (unless resorting to a Node clustering setup).

Jesse Warden

I think we're talking about different things, or I'm just misunderstanding. You can, and do, make multiple outgoing HTTP REST calls from a single Lambda execution context. Even if you set your Lambda reserved concurrency to 1, ensuring you only ever get 1 Lambda execution context and only 1 actual instance being used, you can do a Promise.all with 5 HTTP calls, and they all work at the same time. Now, by "same time" I don't mean parallelism vs. concurrency, or Node.js's fake concurrency vs. Go's; I just mean that you can see it take 30ms with multiple calls going out and responding vs. doing a regular Promise chain of 1 call after another. Now, yes, you can lower the Lambda's memory to less than 512MB, which I think gives you a super small Raspberry Pi-grade vCPU, so your network calls and code in general go much slower, but my point is you CAN do concurrency; I've seen it work.
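A runnable sketch of that timing difference, using setTimeout as a stand-in for the outgoing HTTP calls:

```javascript
// Stand-in for an outgoing HTTP call that takes `ms` milliseconds.
const fakeCall = (ms) =>
  new Promise((resolve) => setTimeout(() => resolve(ms), ms));

// Concurrent: all five calls are in flight at once, so the total is
// roughly the slowest single call (~30ms), not the sum (~150ms).
const concurrent = async () => {
  const start = Date.now();
  await Promise.all([30, 30, 30, 30, 30].map((ms) => fakeCall(ms)));
  return Date.now() - start;
};

// Sequential: each await blocks the next, so the durations add up (~150ms).
const sequential = async () => {
  const start = Date.now();
  for (const ms of [30, 30, 30, 30, 30]) await fakeCall(ms);
  return Date.now() - start;
};
```

Same single Node process, same "one instance" constraint; only the shape of the awaits changes the wall-clock time.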

What that article is talking about regarding Lambda is that "running Node.js code" doesn't act like Express.js, where you have a single EC2/server it's hosted on with some ALB throwing 50 calls at it, and Express.js keeps accepting those and handling them in turn using Node's non-blocking I/O concept. So while you're writing code that feels synchronous, you may have 50 routes all executing "at the same time", but because you wrote it in an immutable way and are using Express.js, you don't really know this. You also don't even know that you're 1 of 3 in a container on an ECS cluster. Whereas with Lambda, you typically have 1 ALB or API Gateway URL firing up "1 Lambda per request". So 50 requests == 50 Lambda instances. AWS's Firecracker has a way to re-use those instances, though, so you may actually see 50 requests only spawning 30 Lambdas; 20 of those are a single Lambda Execution Context handling 2 calls each. Meaning your handler function is called twice, but the code up top that goes const fetch = require('fetch') is only run once... because you're in the same instance.

When you're building Serverless Lambdas, your Lambda is invoked by that event; it's not like Express.js where you "just have Express listening on some port and it just fires the wired-up function for that route every time it gets an incoming HTTP request". Lambdas don't care who invoked them; you just have a handler that runs, you return a value, and that's it. There's no "listening for other events while your Lambda is running"; it's literally just const handler = async event => console.log("got an event:", event) ... and that's it. So with 1 Execution Context per call, that'd be 50 of those functions running independently. If it were 30, and 20 of them shared the same Execution Context, JavaScript/Node doesn't care, because it's just running the handler function twice; there's no shared global variables/state between them, and that's fine.
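A tiny sketch of that split (not a deployable Lambda, just the shape): module-level code runs once per Execution Context, while the handler runs once per event.

```javascript
// Module scope: runs once per Execution Context (the cold start). This is
// where a real Lambda would do require('fetch'), open DB connections, etc.
let coldStarts = 0;
coldStarts += 1;

// Handler scope: runs once per event. If Firecracker reuses this Execution
// Context for a second event, the module code above does NOT run again.
const handler = async (event) => ({ coldStarts, echoed: event });
```

Invoking `handler` twice in the same process models two events landing on the same reused instance: both responses report `coldStarts: 1`, because the module was only initialized once.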

AWS has done an amazing job of making SQS, API Gateway, ALB, SNS, etc. all "handle their own calling and retry and some batching", so your code doesn't care about any of that as opposed to the stupid process.on('SIGTERM', process.exit) crap you see in some Express/Restify code bases.

So again, that article is talking about "Your Lambda runs and stops, you can't have it sit there waiting for new API Gateway, or S3 or whatever requests triggered it". That's a fundamental of how AWS Lambda works. I have zero idea if Google/Azure works the same way.

We're on the same page with the I/O, though, again, most of my Lambdas run in 10 seconds or less; if they need more time, too bad; we just make 'em timeout. That said, I have this one AppSync Lambda function that makes 14 concurrent requests that's super close to 9.8 seconds, but I blame the Go developers I'm calling and their lack of caching vs. being constructive and helping them on said cache 😜.

Yes, ok, read your other paragraph, yeah, we're on the same page there too on re-used instances.

Yeah, you can do with Node.js what they're doing with Go. Slower, sure, but take SQS or AppSync or Dynamo; all 3 support batching. Rather than "1 SQS message is parsed by 1 Lambda", you can say "no no, 200 messages are handled by this Lambda", and you either map each to a Promise to do stuff like copy data to S3, or do Promise.all if it can run concurrently. And it works; we've done it. Now, is that Lambda somehow flattening, similar to the browser, those "200 HTTP requests into 1 TCP/IP call stack request, then unpacking on the receiving end to 200 requests to optimize network traffic"? Maybe, I have no idea, but I know Promise.all for our 200 DynamoDB messages in Node takes about 3 seconds to write back to Dynamo concurrently, but if you do one at a time, it's 20 seconds. So the end result is, even if it's fake concurrency, it's working like we expect.
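The batch shape, sketched in Node (`writeRecord` is a hypothetical stand-in for a real DynamoDB/S3 write; only the `event.Records` array mirrors the SQS-style event):

```javascript
// Hypothetical stand-in for persisting one record (e.g. a Dynamo write).
const writeRecord = async (record) => ({ ok: true, id: record.id });

// Batch handler: one invocation receives many records ("200 messages are
// handled by this Lambda") and writes them all concurrently via Promise.all.
const batchHandler = async (event) =>
  Promise.all(event.Records.map((record) => writeRecord(record)));

batchHandler({ Records: [{ id: 1 }, { id: 2 }, { id: 3 }] }).then((results) =>
  console.log(results.length) // 3
);
```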

Jesse Warden

... ok I should play with Google and Azure more... so many toys...

Magne

I just LOVE how fast I can iterate in ReScript. ... This is the main reason I don't want to use TypeScript compared to ReScript; it's just crazy fast.

Is it the speed of the type inference that is slowing you down in TypeScript? Or is it the Webpack bundling, or TS transpiling? Curious what fast refers to: HMR, type inference, or transpiling? Or all?

Have you tried Vite?

"Vite uses esbuild to transpile TypeScript into JavaScript which is about 20~30x faster than vanilla tsc, and HMR updates can reflect in the browser in under 50ms."

vitejs.dev/guide/features.html#typ...

The only thing I don't think Vite addresses is the type inference in the IDE... but how slow are they anyway?

Jesse Warden • Edited

When I run the compiler in watch mode, TypeScript takes seconds, ReScript takes milliseconds. No webpack, no hot module reloading; just writing code so I can immediately go node theFile.js or npm test:unit.

No, I haven't tried Vite; thanks for the link. My issue, really, is... me. 10 years ago, I learned of TypeScript. As a new Flash/Flex refugee, I was looking for something that had strong types and would compile to JavaScript, since I still wanted to do web development, just not go back in time 5 years by using JavaScript compared to ActionScript 3. Back then, compilers/transpilers were fringe, JavaScript the language didn't have as many features + browser support as today, and most of the community said "You don't need classes and types". While I didn't agree, it was still hard to implement this stuff for client work if you weren't a sole contractor architecting your own projects for clients. Once Angular RC1 came out, TypeScript was more mature and solidified enough to write not just UIs but back-end code, CLIs, and libraries. Angular RC1 is also when I stopped being an OOP fan and started learning more Functional Programming. 10 years later, today, TypeScript still isn't very friendly to a Functional Programmer. The language is heavily focused on OOP developers, or heavily Object-based code bases that have a lot of internal state and side effects. Despite the herculean efforts of fp-ts, and the massive growth in the job market for TypeScript acceptance... I don't really care; I don't like it.

ReScript gives me the guarantees I want and the speed I want, with zero configuration or having to muck around with compiler settings, bikeshed with a team about which settings we should/should not use, etc. It's friendly to an FP programmer and has many FP features built into the language and standard libraries.

Magne

10 years later today, TypeScript still isn't very friendly to a Functional Programmer. The language is heavily focused on OOP developers, or heavily Object based code bases that have a lot of internal state and side effects. Despite the herculean efforts of fp-ts, and the massive growth in the job market for TypeScript acceptance...

That is a very compelling reason to go for ReScript over TypeScript indeed. TS can too easily slide into something you don't want, and it's good when everyone on a team is guided into doing the right thing.

PS: check out ts-belt if you have to use TypeScript; it's inspired by and built with ReScript.

Jesse Warden

Nice, thanks, I'll check it out.

Magne

ReScript still is like "yeah, somewhere your stuff is broke". "Hey, uh... ReScript, how about a... you know... line number to start my investigation, eh, what do you think?" "No, good luck!" Elm's all line numbers, and formatted, and colors, and pretty, and friendly, and hinty... it's just night and day.

When I try the example code at the ReScript playground and insert an error, it gives the line number..?

"
Type Errors
[E] Line 11, column 14:

The value ms can't be found
"

Jesse Warden • Edited

Welcome to the party, pal!

Magne

I meant it as a question, since you said it doesn't give the line number.

Jesse Warden

Oh haha, my bad. Yeah, sometimes it does, other times it's like "somewhere". ReScript compiler is "good", I'm just whiny and expect a lot!

Magne

Just read this again, and must say thank you again for a very thorough answer! <3 I would turn it into a blog post called "ReScript vs. Elm" if I were you!

Magne

Thank you so much for a very candid and thorough answer! :D With a video too! <3