Serverless sounds like "no servers," but of course servers still exist behind the scenes.
The point is to take the work of provisioning and managing those servers off the developer's plate.
But why do we need it? Why does it even exist? And how did it come into existence?
The Beginning:
So in 2006, at the start of the cloud revolution, companies like AWS began renting out infrastructure on demand: virtual machines with EC2 and object storage with S3. This was a low-cost, efficient alternative to buying your own physical servers.
It was Infrastructure as a Service (IaaS).
The Simpler Era:
But even though devs could rent VMs, they still didn't want the hassle of setting up and managing those servers, and that's when Platform as a Service (PaaS) showed up.
In 2008, Google App Engine (GAE) was released with the PaaS model. Around the same time, Heroku (founded in 2007) launched publicly in 2009 and became a widely admired PaaS, letting developers quickly deploy their apps.
The Game Changer:
In 2014, AWS launched AWS Lambda, marking a major shift in cloud computing. You no longer needed to provision or manage servers yourself: Lambda ran discrete pieces of code (functions) in response to events.
Strictly speaking, this model is called Function as a Service (FaaS).
Lambda marked the birth of serverless computing, and soon other players started catching up. Developers could now write code, and the cloud providers would handle execution, scaling, and billing.
It was not only reliable but damn cheap too, which drove wider adoption among developers.
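To make the model concrete, here's a minimal sketch of what a function looks like in this style, using the Python handler signature AWS Lambda expects (the event field used here is hypothetical, just to show the shape):

```python
import json

# The platform calls this handler once per incoming event; you never start,
# stop, or scale a server yourself.
def lambda_handler(event, context):
    # "event" carries the trigger's payload (an HTTP request, a queue message,
    # a file-upload notification, ...); "context" carries runtime metadata.
    name = event.get("name", "world")  # hypothetical field, for illustration
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```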
Now we have:
Microsoft Azure Functions,
Google Cloud Functions, and
Cloudflare Workers...
All of them appeared after AWS introduced Lambda, each pushing the cost of serverless down further.
Serverless was cheap, reliable, and removed a ton of operational overhead.
And you can see for yourself how little Lambda costs on its pricing page (source: Amazon AWS).
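As a rough back-of-the-envelope sketch of that pricing model: Lambda bills per request and per GB-second of compute. The rates below are illustrative assumptions (they vary by region and architecture, and this ignores the free tier), so check the pricing page for real numbers.

```python
# Illustrative, assumed rates -- check the AWS pricing page for current values.
PRICE_PER_MILLION_REQUESTS = 0.20      # USD, assumption
PRICE_PER_GB_SECOND = 0.0000166667     # USD, assumption

def estimated_monthly_cost(requests, avg_duration_ms, memory_mb):
    """Rough monthly bill: request charges + compute (GB-second) charges."""
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    request_cost = (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# Example: 3M requests a month, 120 ms average duration, 256 MB of memory
print(f"${estimated_monthly_cost(3_000_000, 120, 256):.2f}")  # ~ $2.10
```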
Drawbacks of Serverless:
Of course, no technology is perfect. Serverless has many benefits, but it also comes with a few tradeoffs.
- Cold Start: This is the main issue with serverless. When a function hasn't been used recently, the cloud provider spins its container down to save resources, which is part of what keeps serverless cheap; the next invocation then has to wait for a fresh instance to start (see the sketch after this list). That delay isn't ideal for real-time applications like trading platforms, games, etc.
- Limited Control: Serverless gives you little control over the underlying infrastructure due to platform restrictions.
- Cost Predictability & Vendor Lock-in Concerns: If you need predictable billing (for budgeting) or want to avoid being tied to a single cloud provider, serverless can introduce some challenges.
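To make the cold-start point concrete: anything at module level (imports, opening DB connections, loading config) runs once per fresh container, and that startup work is what the first request after an idle period waits on. A minimal Python illustration, assuming a Lambda-style handler and a made-up stand-in for expensive initialization:

```python
import time

# Module-level work runs once per container start ("cold start"):
# importing libraries, opening DB connections, loading config, etc.
CONTAINER_STARTED = time.time()
heavy_config = {"loaded_at": CONTAINER_STARTED}  # stand-in for slow init work

def lambda_handler(event, context):
    # Handler-level work runs on every invocation ("warm" path) and skips
    # the initialization above as long as the container stays alive.
    return {
        "container_age_seconds": round(time.time() - CONTAINER_STARTED, 3),
        "config": heavy_config,
    }
```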
When Should You Use It?
Serverless is good when your service runs sporadically.
For example:
- Event-driven tasks like file uploads, DB change triggers, etc. (as sketched after this list).
- APIs or microservices with variable traffic.
- Prototypes, MVPs, or experimental features where you want to move quickly and minimize upfront infrastructure cost and risk.
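For instance, a file-upload trigger boils down to a handler that receives the storage event. Here's a minimal sketch assuming the shape of an S3 event notification (bucket and key fields as AWS documents them); the processing step is hypothetical:

```python
# Invoked whenever a new object lands in the bucket this function subscribes
# to; the platform scales out automatically if many files arrive at once.
def lambda_handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hypothetical processing step: generate a thumbnail, virus-scan the
        # file, index its metadata, etc.
        print(f"New upload: s3://{bucket}/{key}")
    return {"processed": len(records)}
```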
When Not to Use It:
Just because serverless seems cheap doesn't mean it's the cheapest option for every kind of task.
- Long-running or compute-intensive tasks that exceed the execution time, memory, or CPU limits of serverless platforms (Lambda, for example, caps a single invocation at 15 minutes).
- High, steady traffic where functions are invoked constantly; at that point, per-invocation billing can end up costing more than an always-on server.
- Latency-sensitive applications because of cold starts.