Serverless Myths

After working with serverless technology for many years now, I have encountered a bunch of recurring myths about it: misconceptions that stem from false assumptions about the technology.

In this article, I want to talk about each of those assumptions a bit.

Serverless Leads to Vendor Lock-In

Vendor lock-in seems to be one of the greatest fears. The problem arises when you buy into a technology and can't easily leave it later. Then the supplier raises the prices, and you are done for.

I have to admit, I don't think this one is a myth, haha.

But...!

I think vendor lock-in is just another risk you take on your journey to a functioning application. You are also locked in to the skills of your team. Sure, you can hire new people, but that costs money, just like moving to a different cloud provider.

You have to ask yourself: Is the benefit of going serverless worth that risk?

The same goes for the other direction: Is the benefit of avoiding vendor lock-in worth the risk?

Many companies use Stripe and are kind of locked in to its system right now. They get great benefits from it, so the risk seems acceptable to them.

If you don't have an ops team yet, it can be a risky endeavor to hire a bunch of Kubernetes experts, and you could be faster by using serverless technology with the people you already have.

Serverless Means Function as a Service

Most misconceptions about serverless arise from that assumption. And I have to admit, we serverless advocates are to blame for this. FaaS is such a game-changer, the catalyst that made serverless go big, that we probably touted it a bit too loudly.

Is serverless expensive? Yes, if you think it's an API gateway with a bunch of functions behind it.

Is serverless slow? Yes, if you think it's an API gateway with a bunch of functions behind it.

Serverless means horizontal scaling with on-demand pricing. This can mean FaaS, but this can also mean storage. You pay for the CPU time you use, or you pay for the disk space you use.

I think FaaS is the critical part of serverless, not because you should build everything with it, but because it lets you glue services together with minimal operational overhead.

You see, if you have a bunch of managed services like S3, FaunaDB, or Auth0, they are all excellent on their own, but FaaS can multiply their value by linking them together. Sure, that was possible before; everyone could create a server that connects multiple services, but never before has the overhead been so small. You don't have to think about hardware, virtual machines, or containers: you write a function, upload it, and you're ready.
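
To make the glue idea concrete, here is a minimal sketch of a Lambda handler in TypeScript that reacts to an S3 upload and hands the new object off to another service. The downstream URL is made up, and I'm assuming a Node 18+ runtime with a global `fetch`.

```typescript
// FaaS as glue: react to an S3 upload and notify another managed service.
// There is no server, VM, or container to manage around this function.
import type { S3Event } from "aws-lambda";

export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    const key = record.s3.object.key;

    // Hand the new object off to a downstream service,
    // e.g. a search index or billing system (URL is hypothetical).
    await fetch("https://example.com/hypothetical-ingest", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ bucket, key }),
    });
  }
};
```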

Serverless Eliminates All Ops Work

Another point that leads to disillusionment is the operational burden that serverless brings.

Again, this might be an issue of the serverless advocacy task force.

The ease of operational work that comes with serverless shouldn't be seen in a vacuum.

I started with serverless a few years ago and was baffled by all the things I had to do to get my system working with high performance and adequate security. And I was confused because I read that it's so much more straightforward than everything else.

Then, a few months ago, I did some Kubernetes courses and built some systems with it. Suddenly I realized what serverless did for me.

Sure, it's no child's play to set up a production-ready system with serverless, but it's orders of magnitude less work than doing the same with Kubernetes.

The thing is, serverless isn't an end-user technology; it isn't Excel or something; it's still tech that needs IT professionals to get right.

Serverless Is Expensive

As I said before, if you think serverless is FaaS only, it can get expensive quickly.

This topic reminds me of a talk I had with a friend a few years ago. He said that AWS doesn't push CloudFront enough. People host stuff on S3 and are surprised that it's expensive, but when you put an S3 bucket behind a CloudFront distribution, things look quite different.

The same is true for everything: if you use it wrong, you won't get stellar results. And I don't want to blame developers here; this goes hand in hand with the previous point about serverless eliminating all ops. If you think everything you build with serverless will automatically be cheap, you will have a bad time.

Systems like AWS API Gateway and AWS AppSync can directly integrate with data sources, so you can save a bunch of money by leaving the FaaS out of the equation.
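
As a rough sketch of what such a direct integration can look like, here is an AppSync JavaScript resolver that reads an item straight from DynamoDB, so no Lambda function sits in the request path. The `id` argument and the DynamoDB data source wiring are assumptions on my side, not a complete setup.

```typescript
// AppSync resolver that maps a GraphQL field directly to DynamoDB GetItem.
// No FaaS invocation, no cold start, nothing extra to pay per request.
import { util, Context } from "@aws-appsync/utils";

export function request(ctx: Context) {
  return {
    operation: "GetItem",
    key: util.dynamodb.toMapValues({ id: ctx.args.id }),
  };
}

export function response(ctx: Context) {
  return ctx.result;
}
```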

Cloudflare Workers don't even need an API gateway (which usually ends up being the more significant chunk of your API bill), so you should look into other serverless cloud providers too.

Caching is also an essential part of a serverless system; if you cache your calculations, you don't have to pay FaaS costs every time.
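
Here is a small sketch of that idea on Cloudflare Workers, using the Workers cache API to serve repeated requests from the edge instead of recomputing them. `expensiveCalculation` is a hypothetical stand-in for your real logic, and I'm assuming the types from `@cloudflare/workers-types`.

```typescript
// Cache the result of an expensive computation at the edge so repeated
// requests don't pay the compute cost again.
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;

    // Serve from the edge cache if we already computed this.
    const cached = await cache.match(request);
    if (cached) return cached;

    // Otherwise do the work once and cache the result for an hour.
    const result = await expensiveCalculation(request);
    const response = new Response(JSON.stringify(result), {
      headers: {
        "content-type": "application/json",
        "cache-control": "max-age=3600",
      },
    });
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};

// Hypothetical placeholder for whatever your function actually computes.
async function expensiveCalculation(request: Request): Promise<unknown> {
  return { path: new URL(request.url).pathname, computedAt: Date.now() };
}
```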

Serverless Has Bad Performance

The dreaded cold-start is probably to blame for this more than anything else.

First, serverless isn't only FaaS, and cold-starts are a FaaS phenomenon.

Second, cold-starts aren't even a problem for all FaaS providers. Cloudflare Workers don't have cold-starts.

Third, you often don't have to use FaaS in your system as the main component. As I said, AWS API Gateway and AWS AppSync can integrate directly with HTTP or AWS services, with no need for a Lambda function, which means no cold-start.

Then we are back to the "no-ops" assumption.

If you have to use FaaS, you have to invest some time in tuning it with the right memory setting. The idea that the smallest memory setting will result in the smallest bill isn't accurate. Sometimes giving a function more memory leads to shorter runtimes, which means better performance and a smaller bill.
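
A quick back-of-the-envelope calculation shows the effect. The durations are made up, and the per-GB-second rate is the publicly listed x86 Lambda price at the time of writing, so check the current pricing page before trusting the numbers.

```typescript
// Lambda bills per GB-second, so memory and duration together drive the cost.
const PRICE_PER_GB_SECOND = 0.0000166667; // approximate x86 rate, check current pricing

function invocationCost(memoryMb: number, durationMs: number): number {
  return (memoryMb / 1024) * (durationMs / 1000) * PRICE_PER_GB_SECOND;
}

// Hypothetical measurements of the same CPU-bound function:
// more memory also means more CPU, so the run finishes much faster.
const small = invocationCost(128, 1200); // 0.15  GB-s per call
const large = invocationCost(512, 250);  // 0.125 GB-s per call

console.log({ small, large, largeIsCheaperAndFaster: large < small });
```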

Serverless Doesn't Support My Use-Case

There are still use-cases that aren't served well by serverless technology.

We learned that serverless isn't only FaaS, but even if it were, there are different FaaS providers with different features.

Maybe a Cloudflare Worker solves your use-case? It has no cold-starts, can run indefinitely, doesn't need an API gateway, and is edge deployed. And this is just one alternative to AWS Lambda. Azure and GCP have their competing products.
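
For reference, this is roughly what a minimal Worker looks like in module syntax. The fetch handler itself is the public HTTP endpoint, deployed to the edge, so there is no separate API gateway to configure; route and domain setup are left out here.

```typescript
// A minimal Cloudflare Worker: the fetch handler is the HTTP endpoint.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from the edge, you asked for ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```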

You should always check first whether a managed service can solve your problem for a sane amount of money, and only then drop down to implementing the solution yourself.

Then there are containers as a service: on-demand priced containers. AWS Fargate and Fly.io both offer container-based alternatives to FaaS.

Serverless Doesn't Support My Programming Language

Again, this is only a question if you need to use FaaS, and even with FaaS, it isn't true anymore.

AWS Lambda allows you to bring a custom runtime, and building such a runtime isn't that much more challenging than building a Lambda function.
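
To give you an idea, here is a rough sketch of the loop a custom runtime implements against the Lambda Runtime API. A real runtime ships as a `bootstrap` executable and adds error reporting; treat this as the shape of the protocol, not a production runtime.

```typescript
// Core loop of a Lambda custom runtime: poll for the next event,
// run the handler, post the result back for that invocation.
const API = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime`;

async function main(): Promise<void> {
  while (true) {
    // Long-poll the Runtime API for the next invocation.
    const next = await fetch(`${API}/invocation/next`);
    const requestId = next.headers.get("Lambda-Runtime-Aws-Request-Id");
    const event = await next.json();

    // Run whatever handler your language provides (a simple echo here).
    const result = { echoed: event };

    // Report the result back to Lambda.
    await fetch(`${API}/invocation/${requestId}/response`, {
      method: "POST",
      body: JSON.stringify(result),
    });
  }
}

main();
```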

Cloudflare Workers use WebAssembly, so if your language can compile to WebAssembly, you can run it there.

With CaaS like Fargate and Fly.io, this isn't even a question.

Conclusion

Serverless is still a young technology, but it has matured a lot in the last few years. I think we serverless advocates could promote it better in the future: less marketing-speak and more truth.

If you are stuck on your serverless journey, check whether your problem is related to one of these misconceptions. Maybe it's just a FaaS problem, or perhaps it's only an AWS Lambda problem.
