Kyle Higginson

Analysing Cold Starts on Node Lambda

I decided to take a dive into the Lambda cold start issue to see how long a normal cold start is and what configurations lead to higher cold start times.

TL;DR
The biggest contributor to increased Lambda cold start time is the total size of the Lambda's deployment package in S3: the more code in your Lambda and the more libraries you include in the package, the longer the cold start. Memory, VPC, region, layers, and instruction set architecture have minimal effect on cold start times.

Quick Definition of a Lambda Cold Start

When a Lambda function receives a request, the service first prepares an execution environment. Preparing the execution environment involves the following tasks:

  • Download the code from an internal AWS S3 bucket.
  • Create an environment with the memory, runtime, and specified configuration.

The Experiment

Lambda Configuration

I used the following as the default configuration for the Lambdas, then changed one configuration per test scenario:

  • Runtime: Node.js 14
  • Architecture: X86_64
  • Region: eu-west-1
  • Memory: 128MB
  • Not inside VPC

The Lambda made a simple HTTP call and returned the data from the downstream endpoint, using Node's built-in https module.

How I Gathered Metrics

I used AWS X-Ray to observe and gather metrics for the Lambda executions. X-Ray measures the initialization, invocation, and overhead time for each Lambda request.

For this experiment, we are interested in initialization, as this is what is more commonly known as the cold start. It is the metric I will refer to as the cold start time throughout this post.
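The same number also shows up outside X-Ray: Lambda prints an `Init Duration` field in the CloudWatch `REPORT` log line for every cold-started request. A small sketch for pulling it out of a log line (the example line below is made up):

```javascript
// Extract "Init Duration" (cold start time in ms) from a Lambda
// REPORT log line; returns null for warm starts, which omit the field.
function initDurationMs(reportLine) {
  const match = reportLine.match(/Init Duration: ([\d.]+) ms/);
  return match ? parseFloat(match[1]) : null;
}

// Example REPORT line (values are illustrative):
const line = 'REPORT RequestId: abc-123 Duration: 12.34 ms ' +
  'Billed Duration: 13 ms Memory Size: 128 MB ' +
  'Max Memory Used: 57 MB Init Duration: 173.42 ms';
console.log(initDurationMs(line)); // 173.42
```

This is handy for aggregating cold start times across many invocations without opening the X-Ray console.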

The Results

Let's take a look at some figures. Below, I show the results of my testing when analysing cold start times for the following configurations:

  • Memory
  • Instruction set architecture
  • Region
  • VPC
  • Libraries
  • Lambda Layers

Memory

Does increasing the memory of the Lambda reduce the cold start time?

I deployed the Lambda with memory allocations ranging from 128MB to 10240MB.


The short answer is no, not really. Increasing the memory available to the Lambda saved at most 15 milliseconds of cold start time.

Instruction set architecture

Is there a difference in cold starts between arm64 and x86_64 architecture?


The short answer again is no, not really. I found that arm64 provided only a 1ms decrease in cold start time.

Region

Do some regions have shorter cold start times than others?
I tested the cold start times in 3 AWS regions: eu-west-1, us-east-1 and ca-central-1.


Again, the short answer is no: there was no significant difference between regions. None of the regions I tested showed noticeably slower or faster cold start times.

VPC

I've heard before that Lambdas inside a VPC have slower cold start times compared to Lambdas not in a VPC. Did I see this when testing?


Again, no. Whether a Lambda is in a VPC or not does not seem to affect the cold start time: the Lambda inside the VPC took only 1ms more to initialize, which is not a significant difference. (This matches AWS's 2019 improvements to VPC networking for Lambda, which removed most of the old ENI-attachment penalty.)

Libraries

Does the number of libraries included in your Lambda package increase the cold start time?

I used the following 5 packages to test this scenario, adding one more package per test:

  • axios
  • winston
  • lodash
  • moment
  • ramda


Finally, I found a configuration that increases cold start times: the number of libraries included in the Lambda package does affect the time to initialize the Lambda environment.

Without any libraries, the cold start time is 173ms, a fairly short time. But when we include 5 packages in the Lambda, that time jumps to 515ms, an extra 342ms. For an API, that is a significant difference, and one the consumer of the API would notice.

This makes sense when you think about it. Part of the cold start is downloading the Lambda package from S3, so a larger package takes longer to download, and Node then has more code to load at require time, leading to a longer cold start.

Lambda Layer

Does having a layer attached to the Lambda affect the cold start time?


Again, there wasn't much difference in cold start times when adding a layer to the Lambda: without a layer, the cold start was only 2.5ms quicker.

Conclusion

When using the Node.js 14 runtime with the default configuration, you can expect cold start times of around 170ms.

The only really significant contributor to cold start times was the number of packages used within the Lambda. The other configurations made very little difference individually, but may add up in combination; for example, a Lambda inside a VPC with multiple layers attached could show a noticeable difference.

Summary of all the data collected:

Memory
A Lambda with 2048MB memory had a cold start 15ms slower than a Lambda with 8192MB or 10240MB memory.

Instruction set architecture
x86_64 had a cold start time of 173ms, while arm64 was 1ms quicker at 172ms.

Region
us-east-1 had the highest cold start at 179.5ms; ca-central-1 had the quickest at 168.5ms.

VPC
A Lambda inside a VPC is 1ms slower than a Lambda not inside a VPC.

Libraries
A Lambda with no libraries had a 342.5ms quicker cold start than a Lambda with 5 libraries.

Lambda Layer
A Lambda without a layer was 2.5ms quicker than a Lambda with 1 layer.

Provisioned Concurrency
I will also point out that I experimented with provisioned concurrency, which does result in a 0ms initialization time. If cold start time is a concern for you, this may be worth looking into.
