Fargate vs Lambda: The Battle of the Future

Fargate vs Lambda has recently been a trending topic in the serverless space. Fargate and Lambda are two popular serverless computing options available within the AWS ecosystem. While both tools offer serverless computing, they differ regarding use cases, operational boundaries, runtime resource allocations, price, and performance. This blog aims to take a deeper look into the Fargate vs Lambda battle.

What is AWS Fargate?

AWS Fargate is a serverless computing engine offered by Amazon that enables you to efficiently manage containers without the hassle of provisioning servers and the underlying infrastructure. With cluster capacity management, infrastructure management, patching, and resource provisioning out of the way, you can focus on delivering better applications faster. AWS Fargate works with Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS), supporting a range of container use cases such as machine learning applications, microservices architectures, on-premises app migration to the cloud, and batch processing tasks.

Without AWS Fargate

  1. Developers build container images
  2. Define EC2 instances and deploy them
  3. Provision memory and compute resources and manage them
  4. Create separate VMs to isolate applications
  5. Run and manage applications
  6. Run and manage the infrastructure
  7. Pay EC2 instance usage charges

When AWS Fargate is implemented:

  1. Developers build container images
  2. Define compute and memory resources
  3. Run and manage apps
  4. Pay compute resource usage charges


In the Fargate vs Lambda context, Fargate is the serverless compute option in AWS used when you already have containers for your application and simply want to orchestrate them more easily and quickly. It works with Elastic Kubernetes Service (EKS) as well as Elastic Container Service (ECS).

EKS and ECS have two types of computing options:

  1. EC2 type: With this option, you must deal with the complexity of configuring instances/servers, which can be a challenge for inexperienced users. You set up your own EC2 instances and place containers on those servers, with some help from the ECS or EKS configuration.

  2. Fargate type: This option removes the server management burden; you simply define the compute and memory configuration your tasks need and can easily update or increase it within Fargate's limits (see the sketch after this list).
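
To make the Fargate type concrete, here is a minimal boto3 sketch of creating an ECS service on the Fargate launch type; the cluster, task definition, subnet, and security group identifiers are placeholders, and you would swap `launchType` to `"EC2"` for the instance-backed option.

```python
import boto3

ecs = boto3.client("ecs")

# Create a service on the Fargate launch type: no EC2 instances to provision,
# you only point the service at a task definition and a VPC configuration.
ecs.create_service(
    cluster="demo-cluster",                # placeholder cluster name
    serviceName="web-app-svc",
    taskDefinition="web-app",              # an existing task definition family
    desiredCount=2,
    launchType="FARGATE",                  # use "EC2" to run on self-managed instances
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0abc1234"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```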

What is Serverless?

Before delving deep into the serverless computing battle of Lambda vs Fargate or Fargate vs Lambda, it’s important first to gain a basic understanding of the serverless concept. Serverless computing is a technology that enables developers to run applications without needing to provision server infrastructure. The cloud provider will provide the backend infrastructure on-demand and charge you according to a pay-as-you-go model.

The term “serverless” might be misleading for some people. It’s important to note that serverless technology doesn’t imply the absence of servers. Rather, the cloud provider manages the server infrastructure, allowing developers to concentrate their efforts on an app’s front-end code and logic. Resources are spun up when a function is invoked and terminated when the function stops, and billing is based on how long those resources actually execute. Therefore, operational costs are optimized because you don’t pay for idle resources.

With serverless technology, you can say goodbye to capacity planning, administrative burdens, and maintenance. Furthermore, you get high availability and disaster recovery at no extra cost, as well as auto-scaling down to zero. Finally, you pay only for the resources you actually use, with billing measured at a granularity as fine as one millisecond.

What is AWS Lambda?

AWS Lambda is an event-driven serverless computing service. Lambda runs predefined code in response to an event or action, enabling developers to perform serverless computing. The service was developed by Amazon and first released in 2014. It supports major programming languages such as C#, Python, Java, Ruby, Go, and Node.js, as well as custom runtimes. Some popular use cases of Lambda include updating a DynamoDB table, uploading data to S3 buckets, and running events in response to IoT sensor data. Pricing is based on execution duration, billed in one-millisecond increments. Moreover, Lambda lets you run Docker container images of up to 10 GB stored in Amazon ECR.

When you compare Fargate vs Lambda, Fargate is for containerized applications running for days, weeks, or years. Lambda is designed specifically to handle small portions of an application, such as a function. For instance, a function that clears the cache every 6 hours and lasts for 30 seconds can be executed using Lambda.

Fargate vs Lambda: Key Differences


A Typical AWS Lambda Architecture
AWS Lambda is a Function-as-a-Service (FaaS) offering that helps developers build event-driven apps: AWS events trigger Lambda functions, which form the app’s compute layer.

What are the three core components of Lambda architecture?

1) Function: A function is a piece of code written by developers to perform a task, together with details of the runtime environment it executes in. The runtime environments are based on the Amazon Linux AMI and contain all required libraries and packages. Capacity and maintenance are handled by AWS. (A minimal handler sketch follows this list.)

  • Code Package: The packaged code containing the binaries and assets required for the code to run. The maximum size is 250 MB uncompressed, or 50 MB compressed.

  • Handler: The starting point of the invoked function running a task based on parameters provided by event objects.

  • Event Object: A parameter provided to the Handler to perform the logic for an operation.

  • Context Object: Facilitates interaction between the function code and the execution environment. The data available for Context Objects include:

i. AWS Request ID
ii. Remaining time for the function to time out
iii. Logging statements to CloudWatch

2) Configuration: Rules that specify how a function is executed.

  • IAM Roles: Assigns permissions for functions to interact with AWS services.

  • Network Configuration: Specifies rules to run functions inside a VPC or outside a VPC.

  • Version: Lets you publish function versions and roll back to previous ones.

  • Memory Dial: Controls resource allocations to functions.

  • Environment Variables: Values injected into the code during the runtime.

  • Timeout: The maximum time a function is allowed to run.

3) Event Source: The event that triggers the function.

  • Push Model: Functions triggered via S3 objects, API Gateway and Amazon Alexa.

  • Pull Model: Lambda pulls events from DynamoDB or Kinesis.
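
As a rough illustration of how the Function, Handler, Event Object, and Context Object fit together, here is a minimal Python handler; the record-counting logic is purely hypothetical.

```python
import json

def handler(event, context):
    """Entry point Lambda invokes for each event (the Handler above)."""
    # The event object carries the trigger's payload, e.g. S3 or Kinesis records.
    records = event.get("Records", [])

    # The context object exposes execution-environment details.
    print(f"request id: {context.aws_request_id}")
    print(f"ms remaining before timeout: {context.get_remaining_time_in_millis()}")

    # Do the actual work here; print statements land in CloudWatch Logs.
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```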

A Typical AWS Fargate Architecture
What are the four core components of the AWS Fargate architecture?

1) Task Definition: A JSON file that describes one or more of the containers that make up your application (see the sketch after this list).

2) Task: Instantiation of a task definition at a cluster level.

3) Cluster: A logical grouping of tasks or services in Amazon ECS.

4) Service: A process that runs and maintains a specified number of tasks in an Amazon ECS cluster, based on a task definition.
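
To show how these components relate in practice, here is a minimal boto3 sketch that registers a Fargate-compatible task definition; the family name, image URI, and execution role ARN are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition that a Fargate task or service can instantiate.
ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",                  # required for Fargate tasks
    cpu="512",                             # 0.5 vCPU
    memory="1024",                         # 1 GB; must pair with a valid CPU value
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "portMappings": [{"containerPort": 80}],
            "essential": True,
        }
    ],
)
```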

Fargate vs Lambda: Performance

As far as performance is concerned in the AWS Fargate vs Lambda debate, AWS Fargate is the winner, as it runs on dedicated resources. Lambda has certain limitations when it comes to allocating compute and memory resources. AWS allocates CPU in proportion to the selected amount of RAM, meaning the user cannot customize CPU resources separately. Moreover, the maximum available memory for Lambda functions is 10 GB, whereas Fargate allows up to 120 GB of memory. Furthermore, Fargate lets you choose up to 16 vCPUs.

Another notable issue is that a Lambda function has a maximum run time of 15 minutes per invocation. On the other hand, with no runtime limitations, the Fargate environment is always in a warm state.

Fargate workloads must be packaged into containers, which pushes start-up time to around 60 seconds. This is a long time compared to Lambda functions, which can start within 5 seconds. Fargate lets you launch 20 tasks per second using the ECS RunTask API, and 500 tasks per service in 120 seconds with the ECS service scheduler (a RunTask sketch follows).
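
For reference, launching tasks directly through the RunTask API looks roughly like the boto3 call below; the cluster, task definition, and network identifiers are placeholders, and each call accepts up to 10 tasks.

```python
import boto3

ecs = boto3.client("ecs")

# Launch a batch of Fargate tasks directly (subject to the throttles noted above).
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="web-app",
    launchType="FARGATE",
    count=5,                               # RunTask accepts up to 10 tasks per call
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0abc1234"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```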

That said, scaling the environment during unexpected request spikes, together with health monitoring, tends to add a bit of delay to start-up time.

Lambda Cold Starts
When Lambda receives a request to execute a task, it starts by downloading the code from S3 buckets and creating an execution environment based on the predefined memory and its corresponding compute resources. If there is any initialization code, Lambda runs it before the handler code. The time required to download the code and prepare the execution environment counts as the cold start duration. After executing the code, Lambda freezes the environment so that the same function can run quickly if invoked again; however, each additional concurrent invocation needs its own execution environment and therefore its own cold start, and updating the code also triggers a cold start. Typical cold starts fall between 100 ms and 1 second. In light of the foregoing, Lambda falls short in the Lambda vs Fargate race regarding cold starts, but Provisioned Concurrency is a way to reduce them (a configuration sketch follows).
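
As a minimal sketch, Provisioned Concurrency can be enabled on a published version or alias with a single boto3 call; the function name and alias below are hypothetical, and provisioned environments are billed even while idle.

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep five execution environments initialized ahead of traffic so those
# invocations skip the cold start entirely.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="clear-cache",            # placeholder function name
    Qualifier="live",                      # an alias or published version
    ProvisionedConcurrentExecutions=5,
)
```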

The runtime choice will also have an impact on Lambda cold starts. For instance, Java runtime involves multiple resources to run the JVM environment, which delays the start. On the other hand, C# or Node.js runtime environments offer lower latencies.

Lambda SnapStart for Java is a newer feature that resumes new execution environments from cached snapshots instead of initializing them from scratch, which improves startup latency. However, this feature is only available for the Java 11 managed runtime, and there are other limitations as well: it does not support provisioned concurrency, AWS X-Ray, Amazon EFS, or the arm64 architecture, and you cannot use more than 512 MB of ephemeral storage. (A sketch of enabling SnapStart follows.)
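
Assuming a reasonably recent boto3 release, enabling SnapStart is a small configuration change followed by publishing a version, roughly as sketched below; the function name is a placeholder.

```python
import boto3

lambda_client = boto3.client("lambda")

# Turn on SnapStart so published versions resume from a cached snapshot
# instead of re-initializing the JVM from scratch.
lambda_client.update_function_configuration(
    FunctionName="java-order-service",     # placeholder Java 11 function
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# SnapStart applies to published versions, not $LATEST, so publish one.
lambda_client.publish_version(FunctionName="java-order-service")
```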

Fargate Cold Starts
Fargate takes time to provision resources and start a task. Once the environment is up and running, containers get dedicated resources and run the code as defined.

Fargate vs Lambda: Support

AWS Fargate works as an operational layer of a serverless computing architecture to manage Docker-based ECS or Kubernetes-based EKS environments. For ECS, you define container tasks in JSON text files, and other runtime environments are supported as well. Fargate offers more capacity and deployment control than Lambda, since Lambda is limited to 10 GB of ephemeral storage, 10 GB container images, and 250 MB (unzipped) for packages deployed via S3 buckets.

Lambda natively supports major programming languages such as Python, Go, Ruby, C#, Node.js, and Java (others, such as PHP, require custom runtimes), and works with code compilation tools such as Maven and Gradle. That said, Lambda only supports Linux-based container images.

With Fargate, you can develop Docker container images locally using Docker Compose and run them in Fargate without worrying about compatibility issues. Since development and architecture remain independent of Fargate, it outperforms Lambda in this particular category.

When more control over the container environment is the key requirement, AWS Fargate is definitely the right choice.

Fargate vs Lambda: Costs

When comparing Fargate vs Lambda costs, it is important to note that both tools serve different purposes. While Lambda is a Function-as-a-Service, Fargate is a serverless computing tool for container-based workloads.

Lambda costs are billed in milliseconds. AWS Lambda charges $0.20 per 1 million requests, plus $0.0000166667 per GB-second of duration for the first 6 billion GB-seconds per month. Duration costs vary with the allocated memory: for instance, 128 MB of memory costs $0.0000000021 per ms, and 10 GB costs $0.0000001667 per ms.

For example, consider a function with 10 GB of memory (which comes with roughly 6 vCPUs) and a concurrency of one that is always running. The monthly cost would be $432.50. If the concurrency is two, the price doubles; if the environment runs only half the day, the price is halved. If it runs for 10 minutes per day, the cost would be $9.10 per month.

If you consider the same configuration in Fargate, the prices are noticeably lower:

  • Fargate charges a flat rate of $0.04048 per vCPU per hour ($29.145 per month)

  • $0.004445 per GB per hour ($3.20 per month)

So, 10 GB of memory with 6 vCPUs running continuously for a month at a concurrency of one would cost about $206.87 (a rough cost sketch follows). Moreover, Fargate separates CPU from memory, allowing you to choose a right-sized configuration, so you can save costs by reducing the CPUs depending on your needs. At a concurrency of 10, the difference grows even larger. Another advantage of Fargate is Spot pricing, which offers a discount of up to 70% for interruption-tolerant workloads.
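
As a rough sketch of the arithmetic behind the always-on comparison (30-day month, the list prices quoted above, request charges ignored):

```python
# Always-on, concurrency of one, 30-day month; list prices vary by region.
HOURS = 24 * 30
SECONDS = HOURS * 3600

# Lambda: 10 GB of memory billed per GB-second.
lambda_cost = 10 * SECONDS * 0.0000166667                      # ≈ $432 / month

# Fargate: vCPU and memory are billed separately.
fargate_cost = 6 * 0.04048 * HOURS + 10 * 0.004445 * HOURS     # ≈ $207 / month

print(f"Lambda:  ${lambda_cost:,.2f}")
print(f"Fargate: ${fargate_cost:,.2f}")
```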


Notice that Lambda costs drop below Fargate’s as idle time grows. In light of the foregoing, we can conclude that Lambda is more suitable for workloads that are idle for long periods; as a rough rule, Lambda is cost-effective if the resources are busy a quarter of the time or less. Lambda is also the best choice when you need to scale fast or isolate security concerns from the app code. By contrast, Fargate suits cloud environments with minimal idle time. We think the best option is to implement Infrastructure as Code (IaC) and begin with Lambda; when workloads increase, you can switch to Fargate with little friction.

Fargate vs Lambda: Ease of Use

Lambda is easy to set up and operate as there are minimal knobs to adjust compared to Fargate. More abstraction implies less operational burden. However, it also implies limited flexibility. Lambda comes with a rich ecosystem that offers fully automated administration. You can use the management console or the API to call and control functions synchronously or asynchronously, including concurrency. The runtime supports a common set of functionalities and allows you to switch between different frameworks and languages.


As far as operational burdens go, Lambda is easier compared to EC2. Fargate stands between Lambda and EC2 in this category, leaning closer towards Lambda. That said, EC2 offers more flexibility in configuring and operating the environment, followed by Fargate and Lambda.

Fargate vs Lambda: Community

Both AWS Fargate and Lambda are a part of the AWS serverless ecosystem. As such, both tools enjoy the same level of community support. Both services offer adequate support for new and advanced users, from documentation and how-to guides to tutorials and FAQs.

Fargate vs Lambda: Cloud Agnostic

Each cloud vendor manages serverless environments differently. For instance, C# functions written for AWS will not work on Google Cloud. Consequently, developers must consider cloud-agnostic design if multi-cloud or hybrid-cloud architectures are involved. Moving between cloud vendors involves considerable expense and operational impact, so vendor lock-in is a big challenge for serverless functions. To mitigate it, we suggest using the open-source Serverless Framework offered by Serverless Inc.

Moreover, implementing hexagonal architecture is a good idea because it allows you to move code between different serverless cloud environments.

Fargate vs Lambda: Scalability

In terms of Lambda vs Fargate scalability, Lambda is known as one of the best scaling technologies available in today’s market. Rapid scaling and scaling to zero are its two key strengths: Lambda scales almost instantly from zero to thousands of concurrent executions and back down to zero, making it a good choice for low workloads, test environments, and workloads with unexpected traffic spikes. As far as Fargate is concerned, container scaling depends on launching additional tasks, which takes longer.

Furthermore, Fargate doesn’t natively scale down to zero, so you’ll have to shut down Fargate tasks outside business hours to save on operational costs (a scheduled-scaling sketch follows). Tasks such as configuring auto-scaling and updating base container images also add to the maintenance burden.
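
One common workaround, sketched below with boto3 and Application Auto Scaling, is a scheduled action that drops an ECS service's desired count to zero outside business hours; the cluster name, service name, and schedule are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# The ECS service to control, identified as service/<cluster>/<service-name>.
resource_id = "service/demo-cluster/web-app-svc"

# Register the service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,
    MaxCapacity=4,
)

# Scale the service down to zero tasks every weekday evening.
autoscaling.put_scheduled_action(
    ServiceNamespace="ecs",
    ScheduledActionName="scale-to-zero-nightly",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    Schedule="cron(0 20 ? * MON-FRI *)",   # 20:00 UTC, Monday through Friday
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
)
```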

Fargate vs Lambda: Security

Lambda and Fargate are inherently secure as part of the AWS ecosystem. You can secure the environment using the AWS Identity and Access Management (IAM) service. Similarly, both tools abstract away the underlying infrastructure, whose security is managed by AWS. The difference between the two lies in how IAM is configured: Lambda lets you assign an IAM role to each function or service, while Fargate assigns roles at the task level (or pod level on EKS). Fargate tasks run in an isolated computing environment in which CPU and memory are not shared with other tasks.

Similarly, Lambda functions run in a dedicated execution environment. Also, Fargate offers more control over the environment and more secure touchpoints than Lambda.

When to Use Fargate or Lambda?

AWS Lambda Use Cases:

  • Operating serverless websites
  • Massively scaling operations
  • Real-time processing of high volumes of data
  • Predictive page rendering
  • Scheduled events for tasks and data backups
  • Parsing user input and cleaning up backend data to keep website response times fast
  • Analyzing log data on-demand
  • Integrating with external services
  • Converting documents into the user-requested format on-demand

Real-Life Lambda Use Cases

Serverless Websites: Bustle
One of the best use cases for Lambda is operating serverless websites. By hosting frontend apps on S3 buckets and using CloudFront content delivery, organizations can manage static websites and take advantage of the Lambda pricing model.

Bustle is a news, entertainment, and fashion website for women. The company was having difficulties scaling its application, and server management, monitoring, and automation were becoming a significant administrative burden. The company therefore decided to move to AWS Lambda with API Gateway and Amazon Kinesis to run serverless websites. Now, the company doesn’t have to worry about scaling, and its developers can deploy code at an extremely low cost.

Event-driven Model for Workloads with Idle Times: Thomson Reuters
Companies whose workloads are idle most of the time can benefit from Lambda’s serverless model. A notable example is Thomson Reuters, one of the world’s most trusted news organizations. The company wanted to build its own analytics engine, and the small team working on the project wanted minimal administrative overhead while still being able to scale elastically during breaking news. Reuters chose Lambda: the service receives data from Amazon Kinesis and automatically loads it into a master dataset in an S3 bucket, with Lambda triggered by its integrations with Kinesis and S3. As a result, Reuters enjoys high scalability at the lowest possible cost.

Highly Scalable Real-time Processing Environment: Realtor.com
AWS Lambda enables organizations to scale resources instantly while processing tasks in real time cost-effectively. Realtor.com is a leader in the real estate market. As its business moved into the digital world, the company started experiencing exponential traffic growth and needed a solution to update ad listings in real time.

Realtor.com chose AWS for its cloud operations. The company uses Amazon Kinesis Data Streams to collect and stream ad impressions. The internal billing system consumes this data using Amazon Kinesis Firehose, and the aggregate data is sent to the Amazon Redshift data warehouse for analysis. The application uses AWS Lambda to read Kinesis Data Streams and process each event. Realtor.com is now able to massively scale operations cost-effectively while making changes to ad listings in real-time.

AWS Fargate Use Cases

AWS Fargate is the best choice for managing container-based workloads with minimal idle times.

  • Build, run, and manage APIs, microservices, and applications using containers to enjoy speed and immutability
  • Highly scalable container-based data processing workloads
  • Migrate legacy apps running on EC2 instances without refactoring or rearchitecting them
  • Build and manage highly scalable AI and ML development environments

Real-life Use Cases

Samsung
Samsung is a leader in the electronics category. The company operates an online portal called “Samsung Developers,” which consists of the SmartThings Portal for the Internet of Things (IoT), the Bixby Portal for voice-based control of mobile services, and Rich Communication Services (RCS) for mobile messaging. The company was using Amazon ECS to manage the online portal. After the re:Invent 2017 event, Samsung was inspired to adopt Fargate for operational efficiency. After migrating to AWS Fargate, the company no longer needed dedicated operators and administrators to manage the portal’s web services.

Now, geographically distributed teams simply create new container images, upload them to ECR, and move them to the test environment on Fargate. Developers can therefore focus more on code and frequent deployments, and administrators can focus more on performance and security. Compute costs were reduced by 44.5%.

Quola Insurtech Startup
Quola is a Jakarta-based insurance technology startup. The company developed software that automates claim processing using AI and ML algorithms to eliminate manual physical reviews. Quola chose AWS cloud and Fargate to run and manage container-based workloads. Amazon Simple Queue Service (SQS) is used for the message-queuing service. With Fargate, Quola is able to scale apps seamlessly. When a new partner joined the network, data transactions increased from 10,000 to 100,000 in a single day. Nevertheless, the app was able to scale instantly without performance being affected.

Vanguard Financial Services
Vanguard is a leading provider of financial services in the US. The company moved its on-premise operations to the AWS cloud in 2015 and now manages 1000 apps that run on microservices architecture. With security being a key requirement in the financial industry, Vanguard operates in the secure environment of Fargate. With Fargate, the company could offer seamless computing capacity to its containers and reduce costs by 50%.

Considerations when Moving to a Serverless Architecture

Inspired by the amazing benefits of serverless architecture, many businesses are aggressively embracing the serverless computing model. Here are the steps to migrate monolith and legacy apps to a serverless architecture.

a) Monolith to Microservices: Most legacy apps are built on a monolith architecture. In that case, the first step is to break the large monolith into smaller, modular microservices, each of which performs a specific task or function.

b) Implement each Microservice as a REST API: The next step is to identify which microservices are the best fit and implement each one as a REST API, with API endpoints as resources. Amazon API Gateway is a fully managed service that can help you here.

c) Implement a Serverless Compute Engine: Implement a serverless compute engine such as Lambda or Fargate and move the business logic to the serverless tool such that AWS provisions resources every time a function is invoked.

d) Staggered Deployment Strategy: Migrating microservices to the serverless architecture can be done in a staggered process. Identify the right services and then build, test, and deploy them. Continue this process to smoothly and seamlessly move the entire application to the new architecture.

Considerations for Moving to Amazon Lambda

Migrating legacy apps to Lambda is not a difficult job. If your application is written in any Lambda-supported language, you can simply refactor the code and migrate the app to Lambda. You only need to make some fundamental changes, such as replacing the dependency on local storage with S3 or updating authentication modules. When Fargate vs Lambda security is considered, Lambda has fewer touchpoints to secure than Fargate.

If you are using the Java runtime, keep in mind that the size of the runtime environment and its resources can result in more cold starts than with Node.js or C#. Another key point to consider is memory allocation: Lambda’s maximum memory allocation is currently 10 GB. If your application requires more compute and memory resources, Fargate is a better choice.

Considerations for Moving to AWS Fargate

While AWS manages resource provisioning, customers still need to handle network security tasks. For instance, when a task is created, AWS creates an Elastic Network Interface (ENI) in the VPC and automatically attaches each task ENI to its corresponding subnet. Therefore, managing the connectivity between the ENI and its touch points is the customer’s sole responsibility. More specifically, you need to manage ENI access to AWS EC2, CloudWatch, Apps running on-premise or other regions, Egress, Ingress, etc. Moreover, audit and compliance aspects must be carefully managed, which is why Fargate is not preferred for highly regulated environments.

Conclusion

The Fargate vs Lambda battle is getting more and more interesting as the gap between container-based and serverless systems is getting smaller with every passing day. There is no silver bullet when deciding which service is the best. With the new ability to deploy Lambda functions as Docker container images, more organizations seem to lean towards Lambda. On the other hand, organizations that need more control over the container runtime environment are sticking with Fargate.

This blog was first posted on ClickIT: https://www.clickittech.com/devops/fargate-vs-lambda/?utm_source=devto&utm_medium=referal
