Taavi Rehemägi for Dashbird

Originally published at dashbird.io

Lower Your AWS Lambda Bill by Increasing Memory Size — yep!

When we specify the memory size for a Lambda function, AWS will allocate CPU proportionally. For example, a 256 MB function will receive twice the processing power of a 128 MB function. That looks simple and straightforward, but...

I had this question: is there an ideal memory size that minimizes the cost of running a given task on Lambda?

To answer that, I ran the same task at multiple memory sizes to check whether such a cost/memory sweet spot exists.
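
Lambda bills compute in GB-seconds, so the cost of a run is roughly memory size times duration (plus a flat per-request fee). Here's a minimal sketch of that arithmetic, using assumed us-east-1 prices (check the current pricing page) and made-up durations:

```python
# Back-of-the-envelope Lambda cost model. Prices and durations are
# assumptions for illustration; check the current AWS pricing page.
GB_SECOND_PRICE = 0.0000166667    # USD per GB-second (assumed)
REQUEST_PRICE = 0.20 / 1_000_000  # USD per request (assumed)

def invocation_cost(memory_mb: float, duration_ms: float) -> float:
    """Approximate cost of a single invocation, ignoring billing granularity."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * GB_SECOND_PRICE + REQUEST_PRICE

# If doubling the memory (and therefore the CPU share) halves the duration,
# the compute part of the bill stays flat; if the duration drops by more
# than half, the bigger function is actually cheaper per invocation.
print(invocation_cost(128, 800))  # hypothetical run at 128 MB
print(invocation_cost(256, 350))  # hypothetical, faster run at 256 MB
```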

Benchmark Lambdas

I created two Lambda functions to run this test:

  • Fibonacci: basic code that generates a sequence of... you guessed it, Fibonacci numbers! It's just a low-memory, CPU-intensive task.
  • Benchmarker: invokes the Fibonacci function (or any other function) multiple times, switching memory sizes; in the end, it averages out the results to determine which memory size optimizes speed and cost.

The code is open sourced, in case you'd like to test your own Lambdas. The results presented below will certainly vary according to the function you test, so I encourage you to download the Benchmarker Lambda and run it for yourself.
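
For reference, here's a minimal sketch of what a CPU-bound Fibonacci handler could look like, assuming a Python runtime and an event shaped like `{"n": 30}`; the open-sourced code may differ in its details:

```python
# Minimal sketch of a CPU-bound Fibonacci Lambda handler (assumed event
# shape: {"n": 30}). Low memory use, all CPU.
def lambda_handler(event, context):
    n = int(event.get("n", 30))
    sequence = [0, 1]
    while len(sequence) < n:
        sequence.append(sequence[-1] + sequence[-2])
    return {"n": n, "sequence": sequence}
```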

Test Parameters

  • AWS region: us-east-1 (N. Virginia)
  • Memory sizes tested (MB): 128, 256, 512, 768, 1024, 1280, 1536, 1792, 2048, 2304, 2560, 2752, 3008
  • Fibonacci was invoked 20 times for each memory size
  • Invocations ran in batches of 10 concurrent requests to speed up the process (a rough sketch of this flow follows the list)
  • On each invocation, the Fibonacci function built a sequence of the first 30 Fibonacci numbers
  • Cold starts were ignored to standardize duration results
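
Below is a rough sketch of that flow in Python with boto3, not the exact open-sourced Benchmarker: it assumes the target function is named `fibonacci` and accepts a `{"n": 30}` payload, sets each memory size, waits for the update to finish, invokes in concurrent batches, and drops cold starts by looking for an `Init Duration` entry in the returned log tail.

```python
import base64
import concurrent.futures
import json
import re

import boto3

# Rough sketch of the benchmarking flow. The function name and payload shape
# are assumptions; the sizes and counts mirror the test parameters above.
lambda_client = boto3.client("lambda", region_name="us-east-1")

FUNCTION_NAME = "fibonacci"  # assumed name of the target Lambda
MEMORY_SIZES = [128, 256, 512, 768, 1024, 1280, 1536, 1792,
                2048, 2304, 2560, 2752, 3008]
INVOCATIONS = 20
CONCURRENCY = 10

def invoke_once(_):
    """Invoke the target once and return its duration in ms, or None for cold starts."""
    response = lambda_client.invoke(
        FunctionName=FUNCTION_NAME,
        Payload=json.dumps({"n": 30}),
        LogType="Tail",  # ask Lambda to return the tail of the execution log
    )
    report = base64.b64decode(response["LogResult"]).decode()
    if "Init Duration" in report:  # cold start -> ignore, as in the test setup
        return None
    match = re.search(r"Duration: ([\d.]+) ms", report)
    return float(match.group(1)) if match else None

results = {}
for memory in MEMORY_SIZES:
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME, MemorySize=memory)
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        durations = [d for d in pool.map(invoke_once, range(INVOCATIONS)) if d]
    results[memory] = sum(durations) / len(durations) if durations else None

print(results)
```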

Test Results

[Chart: Average cost (USD) per million executions of the Fibonacci sequence builder (n=30)]

The sweetest spots in terms of cost were:

  • 768 MB: that's the cheapest we can get for this task on Lambda. Why isn't 128 MB cheaper? Because it takes so much longer to process that it ends up more expensive in total!
  • 2048 MB: although the price is ~3% higher than 768 MB, it runs 2.5x faster; in some cases, it might be worth spending the extra pennies to speed up the processing.

It's counter-intuitive that a task running with 768 MB can cost less than the same task running with 128 MB, but it means we can actually lower our AWS bill by increasing memory size in some cases. Of course, we need to know the minimum memory our function requires before changing our settings. We created [Dashbird](https://dashbird.io/features/aws-lambda-serverless-monitoring/) to make it easier to profile Lambda memory usage and identify thresholds for this kind of benchmarking analysis.
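
A back-of-the-envelope calculation makes the crossover concrete: compute cost scales with memory size times duration, and 768 MB buys 6x the CPU of 128 MB, so the smaller setting loses whenever it runs more than 6x longer. The durations below are purely illustrative, not the measured results:

```python
GB_SECOND_PRICE = 0.0000166667  # assumed us-east-1 rate per GB-second

def compute_cost(memory_mb, duration_ms):
    """Duration-based cost of one invocation, ignoring the per-request fee."""
    return (memory_mb / 1024) * (duration_ms / 1000) * GB_SECOND_PRICE

# Illustrative durations only: if 128 MB needs 7 s for work that 768 MB
# (with 6x the CPU) finishes in 1 s, the bigger function is cheaper.
print(compute_cost(128, 7000))  # ~$0.0000146 per invocation at 128 MB
print(compute_cost(768, 1000))  # ~$0.0000125 per invocation at 768 MB
```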

The sharp rise in the slope of the line (chart above) at higher memory sizes caught my attention. From that point on, it has been reported (although not officially documented) that Lambda provides two cores. My hypothesis is that the processing power is split between the cores and, since my job used only one core, the test was effectively punishing the dual-core settings. That's something to look at more closely in a future test, with a task that can take advantage of multiple cores.
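
A possible follow-up test (not part of this benchmark) is a handler that deliberately spreads work across however many vCPUs the runtime reports, sketched below with plain `multiprocessing.Process` objects, since `multiprocessing.Pool` has historically not worked inside Lambda because `/dev/shm` is unavailable:

```python
import multiprocessing
import time

# Hypothetical multi-core workload for a future test: burn CPU on every
# vCPU the execution environment reports, instead of on a single core.
def burn_cpu(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def lambda_handler(event, context):
    cores = multiprocessing.cpu_count()  # reported vCPUs at this memory size
    start = time.time()
    workers = [multiprocessing.Process(target=burn_cpu, args=(5_000_000,))
               for _ in range(cores)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()
    return {"cores": cores, "seconds": round(time.time() - start, 3)}
```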

[Chart: Average duration (milliseconds) for running the Fibonacci sequence builder (n=30)]

In terms of duration, the chart above seems to hold no surprises, but I did find something consistently odd in the results: 2048 MB always performs faster than 2304 and 2560 MB, which is unexpected. Zooming in on the highest memory sizes, we can see the difference.

[Chart: Average duration (milliseconds) for the Fibonacci sequence builder (n=30), zoomed in on the highest memory sizes]

It might be negligible, since it represents roughly 2% of extra execution time. Nonetheless, if we're running this function millions of times or if latency is critical, those extra milliseconds can be relevant.

Understanding exactly which factors are playing a role in producing these unexpected results is hard. Lambda infrastructure is sort of a black box. Maybe there are differences in hardware serving each request, which would introduce some undesirable variability in our tests. The bottom line is: if you want to optimize your Lambda usage for either the fastest execution or lowest cost, you should definitely benchmark your functions.

We've released the benchmarking function so that you can deploy and test your own Lambda functions for yourself.

Save Money with Real-Time Lambda Cost Tracking

To reap the benefits and save even more on Lambda with a performance monitoring tool tailored specifically for AWS Lambda, you can sign up for Dashbird today, a free end-to-end Serverless monitoring platform.

Get detailed overviews of your Lambda functions, how healthy and efficient they are, and how much each one is costing. With full Lambda metric knowledge, you'll be able to identify trends to help save money.

Dashbird helps you build and operate complex Serverless applications by monitoring, providing observability and insights, and real-time error alerts.

  • Quick 3-minute setup. Start here!
  • Zero code changes
  • Start debugging, monitoring, and receiving alerts immediately
  • Centralized, easy-to-access data
  • Monitoring for errors, cold starts, and anomalies 
  • Automatic alerts already set up
  • Customized insights for optimization
  • Complex data visualized
  • No security or performance implications
