I have been part of the serverless ecosystem for some years now. I have spoken not only in Brazil but also across Latin America and in the US, sharing knowledge and use cases through talks, the Sem Servidor podcast, and the events I organize on this topic.
Over these years I have heard the same objections about serverless again and again. They usually focus on three points:
- At scale it becomes too expensive
- Vendor lock-in is a blocker
- It adds too much complexity
I will write a series of posts addressing each of them. For now, let's start with the first one.
Breaking biases
When we talk about serverless, what comes to your mind? Let me guess: function as a service (or Lambda, for those who use AWS).
That is a fair answer, and Lambda did a lot to democratize serverless. But at the same time it created a big bias: the idea that serverless is only functions as a service. In reality, serverless is a much bigger ecosystem.
Conceptually, serverless means services where you don't have to manage servers and still get automatic scalability and high availability.
Bias 1: Serverless is only Lambda. Lambda is important and works as the glue for many serverless architectures, but we cannot reduce serverless to this service alone.
Lambda is an important type of compute, but it is not the only one. For example, on AWS we can run containers on ECS with Fargate: no servers to manage, high availability, and automatic scaling. When Lambda is not a good fit, this is a strong alternative for building software with very low operational effort.
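To make this concrete, here is a minimal AWS CDK sketch (TypeScript) of a load-balanced Fargate service with auto scaling. It is an illustration only: the stack name, the container image, and the sizing values are placeholder assumptions, not a recommendation.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';

export class OrdersServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Fargate runs the container without any EC2 instances to manage.
    const service = new ecsPatterns.ApplicationLoadBalancedFargateService(this, 'OrdersService', {
      cpu: 256,             // 0.25 vCPU per task (placeholder sizing)
      memoryLimitMiB: 512,
      desiredCount: 1,
      taskImageOptions: {
        // Placeholder image; in practice this would be your own image in ECR.
        image: ecs.ContainerImage.fromRegistry('public.ecr.aws/nginx/nginx:latest'),
        containerPort: 80,
      },
    });

    // Scale on CPU so the service grows and shrinks with traffic.
    service.service
      .autoScaleTaskCount({ minCapacity: 1, maxCapacity: 10 })
      .scaleOnCpuUtilization('CpuScaling', { targetUtilizationPercent: 60 });
  }
}
```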
I am using Fargate in one of the operations I am involved with, and the time spent on day-to-day operations is close to zero: a great cost-benefit.
Bias 2: Containers are not serverless. Yes, we can run containers in a serverless way and enjoy the advantages of low or zero operations while still having scalability and availability.
And serverless is not only about compute. We also have serverless services for storage, databases, messaging, and more.
Let me guess again: you use S3 every day, like drinking coffee. Then you are already using serverless. If you have ever used SQS, that is serverless too.
Most of the time these services work very well, even at scale, without cost becoming a problem.
I am not here to defend serverless at all costs. My point is to help you think clearly: not everything makes sense for every case, and not everything is bad either.
Bias 3: Serverless is only compute. In reality, we already use many serverless services like S3, SQS, and API Gateway without even thinking of them as serverless. And when people say "serverless is too expensive", much of the time they are really only talking about Lambda.
But there is more to explore.
Architecture that makes costs work
Designing an architecture that balances infrastructure cost and operational cost (people's time) is complex. This is where human judgment still adds value, even as AI takes on a bigger share of our work.
First, remember: we cannot look only at infrastructure costs (the cloud bill). We also need to count the time the team spends keeping everything running: scaling services, configuring servers, deployments, and so on. All of this is part of the cost.
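As a back-of-the-envelope sketch of that idea, the comparison below adds engineering time to the cloud bill. Every number in it is a made-up placeholder, not a real price or a real team's rate.

```typescript
// Toy total-cost-of-ownership comparison. All figures are illustrative
// placeholders, not actual AWS prices or actual salaries.
interface MonthlyCost {
  cloudBill: number;   // USD per month on the invoice
  opsHours: number;    // engineering hours spent operating it per month
  hourlyRate: number;  // fully loaded cost of one engineering hour
}

const totalCost = ({ cloudBill, opsHours, hourlyRate }: MonthlyCost) =>
  cloudBill + opsHours * hourlyRate;

// Provisioned cluster: cheaper bill, but patching, scaling and on-call add up.
const provisioned: MonthlyCost = { cloudBill: 800, opsHours: 40, hourlyRate: 60 };

// Serverless: higher per-request prices, almost no time spent on operations.
const serverless: MonthlyCost = { cloudBill: 1200, opsHours: 4, hourlyRate: 60 };

console.log(totalCost(provisioned)); // 800 + 40 * 60 = 3200
console.log(totalCost(serverless));  // 1200 + 4 * 60 = 1440
```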
The serverless paradigm was created to be event-driven. To get the best scalability, availability, and cost, we need to bring our architecture closer to that model.
Simply lifting and shifting an old application into synchronous functions or containers will not work. If the goal is only "to be in the cloud", then other services are, of course, cheaper and a better fit.
But if the goal is to get the best from the cloud, then serverless can be your strongest ally.
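As one small example of what "closer to the event-driven model" can look like, here is a hypothetical CDK sketch (TypeScript) where an SQS queue buffers work and a function runs only when messages arrive. The names and the asset path are placeholders, not code from a real project.

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

export class OrderEventsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The queue sits between producer and consumer and absorbs spikes,
    // so nothing needs to be pre-provisioned for the peak.
    const queue = new sqs.Queue(this, 'OrdersQueue', {
      visibilityTimeout: Duration.seconds(60),
    });

    const processOrder = new lambda.Function(this, 'ProcessOrder', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/process-order'), // placeholder path
    });

    // The function is invoked only when messages arrive; idle time costs nothing.
    processOrder.addEventSource(new SqsEventSource(queue, { batchSize: 10 }));
  }
}
```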
For example, the granularity of a Lambda function can make the difference between sustainable costs and an exploding bill. Many successful Lambda adopters group the CRUD operations of a resource into a single function instead of splitting every operation into its own function.
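A rough sketch of that grouping, assuming an API Gateway HTTP API in front of a single Node.js function; the order helpers at the bottom are placeholders for real business logic.

```typescript
// One function handling all CRUD routes for a resource, instead of one
// function per operation.
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

export const handler = async (
  event: APIGatewayProxyEventV2,
): Promise<APIGatewayProxyResultV2> => {
  const { method } = event.requestContext.http;
  const id = event.pathParameters?.id;

  switch (method) {
    case 'GET':
      return { statusCode: 200, body: JSON.stringify(id ? getOrder(id) : listOrders()) };
    case 'POST':
      return { statusCode: 201, body: JSON.stringify(createOrder(JSON.parse(event.body ?? '{}'))) };
    case 'PUT':
      return { statusCode: 200, body: JSON.stringify(updateOrder(id!, JSON.parse(event.body ?? '{}'))) };
    case 'DELETE':
      deleteOrder(id!);
      return { statusCode: 204, body: '' };
    default:
      return { statusCode: 405, body: 'Method not allowed' };
  }
};

// Placeholder business logic: in a real service these would hit a database.
const listOrders = () => [];
const getOrder = (id: string) => ({ id });
const createOrder = (input: object) => ({ ...input, id: 'new-id' });
const updateOrder = (id: string, input: object) => ({ ...input, id });
const deleteOrder = (_id: string) => undefined;
```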
And don’t forget: Lambda is not the only compute option. In many real scenarios you will have containers and functions working together, orchestrated by workflows or event brokers.
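One hypothetical way that combination can look, again sketched with CDK (TypeScript): an EventBridge bus routes the same business event to a function for quick work and to a queue that a container service (not shown) consumes for heavier work. All names and paths are assumptions for illustration.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class OrderFanOutStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const bus = new events.EventBus(this, 'OrdersBus');

    // Short, spiky work fits a function...
    const notify = new lambda.Function(this, 'NotifyCustomer', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/notify'), // placeholder path
    });

    // ...while long-running processing is queued for a container service
    // (for example on Fargate, not shown here) to consume at its own pace.
    const heavyWorkQueue = new sqs.Queue(this, 'HeavyWorkQueue');

    // The broker routes the same business event to both kinds of compute.
    new events.Rule(this, 'OrderPlaced', {
      eventBus: bus,
      eventPattern: { source: ['orders'], detailType: ['OrderPlaced'] },
      targets: [new targets.LambdaFunction(notify), new targets.SqsQueue(heavyWorkQueue)],
    });
  }
}
```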
No architecture can save a bad business model
Who has never worked at a company where the pressure to reduce cloud costs was constant? Optimization is important, of course, but many times we don't stop to ask: is the business model aligned with the architecture, or the other way around?
The truth is: no architecture can save a bad business model. What works is a good architecture aligned with the business. If costs grow but revenue does not, the problem is not only technical.
Building a cloud solution is not just writing some code, a few endpoints, and a web page. That may work at the very beginning, to validate an idea, but when we talk about scale, the architecture and the business model must be aligned.
The services where cost really hurts
Some services considered serverless (not without controversy) are expensive by design, not because of misuse.
Aurora Serverless v2 is one example. It has great elasticity and value, but for small workloads the baseline cost can be too high.
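A toy calculation of why that baseline matters, with every figure an explicit placeholder rather than an actual AWS price or a promise about how your cluster is configured:

```typescript
// Why a "small" always-on serverless database still has a floor cost.
// All numbers below are placeholders for illustration; check current
// AWS pricing and the minimum capacity your cluster actually uses.
const minAcu = 0.5;            // assumed minimum capacity the cluster never scales below
const pricePerAcuHour = 0.12;  // placeholder price, not an official figure
const hoursPerMonth = 730;

// Even with zero traffic, this baseline is billed for the whole month.
const monthlyFloor = minAcu * pricePerAcuHour * hoursPerMonth;
console.log(monthlyFloor.toFixed(2)); // ~43.80 with these placeholder numbers
```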
OpenSearch Serverless is another case. It should be the obvious choice for vector search in LLM use cases, but many people prefer Postgres with vector extensions because the entry cost of OpenSearch is high.
These services need more attention to make sense, especially in early-stage startups where cost sensitivity is high.
Conclusion
We saw that serverless is not only Lambda or functions: it can also be containers. And it is not only compute; it also includes storage, messaging, and other highly managed services.
We need to avoid the trap of forcing the traditional model into serverless. Instead, we should adapt to event-driven design to get the best results, especially in cost.
And remember: the business model must be aligned with the architecture. If the factors that increase costs are not the same ones that increase revenue, no architecture (and no FinOps) will fix it.
Used the right way, serverless gives cost predictability and reduces waste, especially compared with provisioned infrastructure sized for peak load.
If you want to learn from experts and companies that are succeeding with serverless, join us on November 8th at ServerlessDays São Paulo: a full day of real experiences, myths and truths, and how to scale without surprises.
About the author
Evandro Pires is co-founder of tnkr and an entrepreneur in different areas. He worked as CTO of one of the biggest tech companies in Brazil, where he led Cloud Native, Data, AI, SRE and DevOps teams, impacting more than 1 million users every month. Recognized as the first AWS Serverless Hero in Latin America, he is also an ambassador of sls.guru in the region.
He is the creator of the Sem Servidor podcast and organizer of the Sem Servidor Conf and ServerlessDays São Paulo, initiatives that grow the serverless community in Latin America.
He has been programming since he was 12 years old, when he learned to code with his father in Clipper. He is a pizza lover and enjoys making pizzas on weekends to have fun with his family.