Let us try to understand what the terms Serverless and Containers mean:
Serverless
For an application to be termed serverless, the application and all of its components should satisfy these four characteristics:
- There are no servers to provision or manage
- The serverless environment should be able to scale out or scale in the application as the load (traffic) increases or decreases
- The environment should enable our application to be highly available
- The environment should be highly cost-optimized, i.e. you do not pay when your application is idle
If a service fulfills all the above criteria, then it is essentially a serverless service. Some well-known serverless offerings from AWS are Lambda (compute), DynamoDB (NoSQL database), and API Gateway (managed reverse proxy for APIs).
But for the sake of this discussion, let us focus on just the compute service, AWS Lambda.
AWS Lambda
To explain briefly, AWS Lambda is a compute service offered by AWS that enables you to run code without provisioning or managing any servers. We will learn about other characteristics of Lambda later in this post.
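To make this concrete, here is a minimal sketch of a Lambda function in Python. The handler name and the event fields are illustrative; Lambda simply invokes whichever handler you configure, passing it the event payload and a context object:

```python
# handler.py - a minimal sketch of an AWS Lambda handler in Python.
# The event shape ("name") is illustrative, not a fixed Lambda schema.
import json

def handler(event, context):
    # Lambda passes the triggering event as a dict-like payload
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```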
Now let us see what containers are:
Containers
A container is a single cohesive unit that packages code and all of its dependencies together. When provided with an appropriate container runtime environment, every instance of the container behaves exactly the same, so the application runs reliably when deployed across multiple computing environments.
For example: a container image built on a developer's laptop will run exactly the same way on a production server, as long as both run the same container runtime. This lets us deploy and run our application in any environment that supports the appropriate container runtime.
Container runtime: the set of programs and processes required to run a packaged unit of code and its dependencies (called a container image).
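To see that runtime contract in action, here is a small sketch using the Docker SDK for Python (installed with `pip install docker`); the image tag and command are just examples:

```python
# Sketch: given the same image, the container behaves the same on any
# host with a compatible runtime. The image tag below is illustrative.
import docker

client = docker.from_env()  # connect to the local container runtime

# Run the same packaged unit (image) anywhere a runtime is available;
# the output is identical regardless of the host machine.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('same everywhere')"],
)
print(output.decode())  # -> same everywhere
```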
But simply running a single container is not enough. Most of the time, our applications are too complex to run in a single container. A typical application may have a frontend container talking to a backend container, which might store data in some database container. For these kinds of applications, there are several considerations we need to account for.
For example: The group of containers should be able to talk to each other over a network. They should be able to scale up and down depending on traffic to each.
To manage these aspects, we use container orchestrators, which take care of running the application across containers for us.
Some popular container orchestrators are:
- Docker Swarm
- Kubernetes
- AWS EKS
- AWS ECS
Now that we have some idea of what each term means, let us compare Serverless and Containers on a few factors:
Runtime Environments
For Serverless (AWS Lambda),
- Infrastructure is completely managed by the cloud provider
- Scales in and out automatically
- We do not need to worry about OS patching or software upgrades of the underlying infrastructure
- Cannot install any custom application (e.g. a web server like Apache or a reverse proxy like Nginx)
- Can install libraries/code dependencies
- Memory in AWS Lambda is limited to a maximum of 10 GB
- Compute time is also limited to 15 minutes (900 seconds). If the workload exceeds this limit, we get an exception and the Lambda instance stops the computation immediately (see the sketch after this list)
- Aims to solve a particular problem without the hassle of installing software or managing infrastructure: just start writing business logic and focus on solving the problem
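Because of that hard cap, a long-running job has to watch the clock and checkpoint before the deadline. Here is a minimal sketch of such a guard; `get_remaining_time_in_millis()` is part of the Lambda context object, while `process_chunk` and `save_checkpoint` are hypothetical helpers:

```python
# A sketch of guarding against the 15-minute limit. The handler checks
# the remaining time before each unit of work and checkpoints early
# instead of letting Lambda terminate the invocation mid-computation.

def process_chunk(chunk):
    ...  # hypothetical: one bounded unit of work

def save_checkpoint(chunk):
    ...  # hypothetical: persist progress, e.g. to S3 or DynamoDB

def handler(event, context):
    for chunk in event["chunks"]:
        # Bail out and checkpoint if fewer than 30 seconds remain
        if context.get_remaining_time_in_millis() < 30_000:
            save_checkpoint(chunk)
            return {"status": "partial", "resume_from": chunk}
        process_chunk(chunk)
    return {"status": "complete"}
```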
For Containers (AWS EKS),
- With EKS, AWS manages the Kubernetes control plane, but the worker node infrastructure is managed by the user
- It is the user's responsibility to manage the worker nodes on EC2. All the components related to EC2, like AMI rehydration, EC2 scaling, and high availability, are up to the user to configure
- The user decides on the VM size, memory, and other aspects of each node (see the sketch after this list)
- Can use containers to package any custom/third-party application (e.g. MongoDB, MySQL, Nginx, etc.)
- Memory is managed just like the configuration of an EC2 instance
- Can choose between different classes of EC2 instances (t, c, m, etc.)
- Gives users a plethora of choices, from customizing the hardware to running any software/application
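As a concrete sketch of that control, this is roughly how one might create an EKS managed node group with boto3, choosing the instance class, disk size, and scaling bounds. The cluster name, subnets, and IAM role ARN below are placeholders:

```python
# Sketch: the user picks instance type, disk, and scaling bounds for
# EKS worker nodes. All names, subnets, and the role ARN are placeholders.
import boto3

eks = boto3.client("eks")
eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="general-purpose",
    instanceTypes=["m5.large"],      # user-chosen EC2 instance class
    diskSize=50,                     # GiB of EBS storage per node
    scalingConfig={"minSize": 2, "desiredSize": 3, "maxSize": 6},
    subnets=["subnet-aaa", "subnet-bbb"],
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",
)
```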
Where do they fit?
For Serverless (AWS Lambda),
- The main use case where AWS Lambda shows its true power is event-driven architecture. It has built-in integrations with many AWS services, e.g. AWS S3, AWS SNS, AWS SQS, etc.
- For example: if we want to process a file as soon as it is put into S3, we can set an object-created (Put) event notification from AWS S3 to an SNS topic, and set that SNS topic as the trigger for an AWS Lambda. As soon as a file is uploaded to AWS S3, it publishes to the SNS topic, which in turn triggers the AWS Lambda. The Lambda can then pick up the file path from the notification and process it (see the sketch after this list)
- It is better suited when traffic is sporadic and unpredictable. Since AWS Lambda automatically scales in and out based on traffic, it is cost-effective: during periods with no traffic we pay almost nothing, since our Lambda is not invoked at all
- AWS Lambda is suited for microservices as long as they do not depend on third-party software. Code dependencies, however, can be installed
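Here is a minimal sketch of the handler side of that S3 -> SNS -> Lambda flow; `process()` is a hypothetical placeholder for the actual business logic:

```python
# Sketch of the S3 -> SNS -> Lambda flow described above. The handler
# unwraps the SNS envelope to recover the original S3 event, then reads
# the uploaded object.
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def process(data):
    ...  # hypothetical: the real file-processing logic goes here

def handler(event, context):
    for record in event["Records"]:
        # SNS delivers the S3 notification as a JSON string in Message
        s3_event = json.loads(record["Sns"]["Message"])
        for s3_record in s3_event["Records"]:
            bucket = s3_record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            process(body)
```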
For Containers (AWS EKS),
- The main use case where containers are better than AWS Lambda is when we want a faster migration to the cloud. Since we can run any third-party application in containers, we can easily spin up our choice of web server or database in the cloud. This is not possible with AWS Lambda
- They are better suited for continuous and predictable workloads. Since some minimum number of pods is always running on the worker nodes, along with the control plane, we incur costs even when there is no traffic
- Containers are very well suited for microservices, as they can package and run any kind of third-party application or software
Now let us look at how each of them scales under traffic.
Scaling
For Serverless (AWS Lambda),
- For every concurrent request, AWS Lambda spawns a new instance; each instance processes a single request at a time. After an AWS Lambda instance has finished processing one request, it becomes available to process the next one
- A major disadvantage of AWS Lambda is the cold-start time incurred when a Lambda instance is spawned for the first time. The cold-start time can be significant for a high-performance or latency-critical application
- To mitigate this disadvantage, we can configure provisioned concurrency for AWS Lambda so that some instances of the Lambda are always kept in a warm state (see the sketch after this list)
- Lambda scaling is limited to 1,000 instances per region per account by default. Therefore, out of the box it can support only up to 1,000 concurrent requests. However, this can be increased to several thousand instances by requesting a quota increase
- AWS Lambda is not suited for long-running workloads because it has an execution time limit of 15 minutes (900 seconds)
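As a sketch, provisioned concurrency can be configured with a single boto3 call; the function name and alias below are placeholders, and note that it must target a published version or alias, not `$LATEST`:

```python
# Sketch: keeping a pool of warm Lambda instances to avoid cold starts.
import boto3

lam = boto3.client("lambda")
lam.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="prod",                    # alias pointing at a version
    ProvisionedConcurrentExecutions=10,  # instances kept warm
)
```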
For Containers (AWS EKS),
- A pod can handle multiple requests before another pod needs to be spun up to service additional requests
- When traffic increases further, a new worker node is spawned and new pods are deployed on it
- Here too, whenever a new pod or worker node is spun up we experience a cold-start delay, but it is still better than incurring a cold start for every concurrent request
- Here we incur more cost for under-utilized resources. Suppose each worker node can run 3 pods and the traffic requires 4 pods to satisfy the SLA. In that case we need a second worker node running just 1 pod. Since a worker node is essentially an EC2 instance, we pay for the whole instance even though we are using only 1 pod's worth of its resources (a worked example follows below)
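A quick worked version of that arithmetic, assuming 3 pods fit on each worker node:

```python
# Worked example of the under-utilization point above.
import math

pods_per_node = 3
pods_needed = 4

nodes = math.ceil(pods_needed / pods_per_node)  # -> 2 nodes
capacity = nodes * pods_per_node                # -> 6 pod slots paid for
idle = capacity - pods_needed                   # -> 2 slots sit idle
print(f"{nodes} nodes, {idle} of {capacity} pod slots idle "
      f"({idle / capacity:.0%} of paid-for capacity unused)")
```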
Conclusion
Having looked at all these points about serverless and containers, we can conclude that one is not strictly better than the other. It depends on the use case.
Think Serverless
When traffic is sporadic and unpredictable, or when we have a short-lived job to run. It is cost-optimized but compromises on the flexibility of the kinds of applications it can run.
Think Containers
When traffic is continuous and predictable. Even though it may cost more in some cases, it provides complete flexibility in the kind of hardware and software/applications we want to run.