Introduction
In part 1 of the series, we explained the ideas behind AWS Lambda Managed Instances and introduced our sample application. In part 2, we explained what a Lambda Capacity Provider is and how to create it using AWS SAM. In this article, we'll cover how to create Lambda functions and attach them to the already created Capacity Provider.
Create Lambda function with Lambda Managed Instance compute type
If we create a Lambda function with the Lambda Managed Instance (LMI) compute type through the AWS Lambda service page, we now see an "Additional configuration" option with two choices for the compute type:
- Lambda (default) - this is how we created Lambda functions until now
- Lambda managed instance (new) - the new option with LMI
So we need to choose the compute type in advance, and we can't change this choice later. This is also due to the different concurrency models, which we'll cover later in this article.
Let's use AWS SAM as IaC. You can find a code example of our sample application in my GitHub aws-lambda-java-25-lmi repository; the relevant infrastructure-as-code part is the AWS SAM template.
Let's look at the relevant parts that we defined in the Globals section; these settings apply to all Lambda functions defined in the AWS SAM template:
```yaml
Globals:
  Function:
    Runtime: java25
    MemorySize: 2048
    CapacityProviderConfig:
      Arn: !Sub ${MyCapacityProvider.Arn}
      ExecutionEnvironmentMemoryGiBPerVCpu: 2
      PerExecutionEnvironmentMaxConcurrency: 32
    FunctionScalingConfig:
      MinExecutionEnvironments: 1
      MaxExecutionEnvironments: 3
```
We use the java25 managed runtime and set the memory size to 2048 MB (currently the minimum supported RAM). Defining the CapacityProviderConfig means that we use the "Lambda managed instance" compute type. The following runtimes currently support Lambda Managed Instances:
- Java: Java 21 and later
- Python: Python 3.13 and later
- Node.js: Node.js 22 and later
- .NET: .NET 8 and later
with more runtime support to come later.
Here are the properties of the CapacityProviderConfig that we can set:
Arn defines the Amazon Resource Name of the existing capacity provider (see part 2).
ExecutionEnvironmentMemoryGiBPerVCpu defines the amount of memory in GiB allocated per vCPU for execution environments. The valid range is from 2.0 (a 2:1 ratio) to 8.0 (8:1). We set it to 2.
PerExecutionEnvironmentMaxConcurrency defines the maximum number of concurrent execution environments that can run on each compute instance. We set it to 32, which is also the recommended default for the Java runtime. See https://docs.aws.amazon.com/lambda/latest/dg/lambda-managed-instances-runtimes.html for more information on each runtime, including runtime-specific considerations such as multithreading.
The PerExecutionEnvironmentMaxConcurrency configuration marks the fundamental difference between the two compute types. With the Lambda default compute type, there is only one request per execution environment (Firecracker microVM) at a time, but we can set the Lambda function's memory anywhere between 128 MB and 10 GB. With the LMI compute type, we can (and should) allow multiple concurrent requests per execution environment (32 in our case), but we currently have to set at least 2048 MB of memory for the Lambda function. We can monitor the memory consumed to start our application and the (rough) additional memory per request, then tune the memory and PerExecutionEnvironmentMaxConcurrency settings to achieve the best price-performance.
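To make this tuning concrete, here is a minimal back-of-the-envelope sketch of the sizing math. The baseline and per-request memory figures are hypothetical example values, not measurements from the sample application; you would replace them with numbers observed from your own monitoring:

```python
# Rough sizing sketch for LMI concurrency vs. memory.
# All figures below are hypothetical example values; measure your own
# application's memory consumption before tuning real settings.

def max_safe_concurrency(memory_mb: int, app_baseline_mb: int,
                         per_request_mb: int) -> int:
    """Estimate how many concurrent requests fit into one execution
    environment after subtracting the application's baseline memory."""
    headroom_mb = memory_mb - app_baseline_mb
    return max(0, headroom_mb // per_request_mb)

# Example: a 2048 MB execution environment, a Java application needing
# ~700 MB at rest, and roughly 40 MB of extra memory per in-flight request.
estimate = max_safe_concurrency(memory_mb=2048,
                                app_baseline_mb=700,
                                per_request_mb=40)
print(estimate)  # 33 - close to the recommended Java default of 32
```

Such an estimate only bounds the memory side; CPU contention may force a lower PerExecutionEnvironmentMaxConcurrency in practice, which is why measuring under realistic load is essential.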
FunctionScalingConfig defines the scaling behavior for a Lambda Managed Instances function, including the minimum and maximum number of execution environments that can be provisioned.
MinExecutionEnvironments is an optional setting that defines the minimum number of execution environments that can be provisioned for the function. Valid range is between 0 and 15000. We set it to 1.
MaxExecutionEnvironments is an optional setting that defines the maximum number of execution environments that can be provisioned for the function. Valid range is between 0 and 15000. We set it to 3.
Please read more about Scaling Lambda Managed Instances.
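With the LMI-specific settings placed in the Globals section, the individual function definitions stay short. Here is a minimal sketch of what one function resource could look like; the resource name, handler class, and API path are hypothetical and not taken from the sample repository:

```yaml
Resources:
  # Hypothetical function; CapacityProviderConfig and
  # FunctionScalingConfig are inherited from the Globals section,
  # so only function-specific properties are needed here.
  GetProductByIdFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.GetProductByIdHandler::handleRequest
      CodeUri: .
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /products/{id}
            Method: GET
```

Any of the global settings can also be overridden per function, for example to give a single function a different PerExecutionEnvironmentMaxConcurrency.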
When creating or updating the function with sam deploy (-g), a function version is published automatically. The function version becomes active on the capacity provider instances once published.
After we create the Lambda function(s), we can view the capacity provider and function scaling settings in the "Configuration" section:
I set both MinExecutionEnvironments and MaxExecutionEnvironments to 0 to "disable" the Lambda functions so that they don't incur any costs.
If you instead set these values to 1 and 3 respectively, as in the IaC code above, you can see regular EC2 instances running on the Amazon EC2 service page:
We can distinguish them by the Operator property, which starts with scaler.lambda. We don't have any rights to connect to these EC2 instances used by the LMI, or even to stop them.
If we now switch to the "Capacity providers" configuration and select our capacity provider with the name CapacityProviderForJava25LMI and go to the "Function versions" tab, we can see all Lambda functions and their versions linked to this capacity provider:
Conclusion
In this article, we explained how to create Lambda functions and attach them to a capacity provider. In the next part of the series, we'll cover the remaining topics like monitoring, (currently) unsupported features, current challenges, and pricing.
If you like my content, please follow me on GitHub and give my repositories a star!
Please also check out my website for more technical content and upcoming public speaking activities.