Here’s a list of the most common AWS Lambda issues and ways to fix them.
Lack of memory
The only resource you can configure in AWS Lambda is memory size, and this value also determines the CPU share allocated to the function. If all of the allocated memory is used, the execution may take longer to complete, or the function may even time out. First, analyze the circumstances under which the issue happens to find the possible cause: it could be a performance issue in your code that appears only under certain conditions, a quirk of the underlying framework’s memory management, and so on. Or it could simply be that the function needs more memory, in which case increasing the memory size will fix the issue.
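One way to confirm a memory problem is the REPORT line Lambda writes to CloudWatch Logs after every invocation, which includes the allocated memory size and the peak memory actually used. A minimal sketch of checking that ratio; the sample log line below is made up, but the field names match the real REPORT format:

```python
import re

# Example REPORT line as Lambda writes it to CloudWatch Logs
# (the request id and numbers are made up for illustration).
report = ("REPORT RequestId: test Duration: 1250.32 ms "
          "Billed Duration: 1300 ms Memory Size: 256 MB Max Memory Used: 251 MB")

def memory_pressure(report_line: str) -> float:
    """Return the fraction of allocated memory the invocation used."""
    size = int(re.search(r"Memory Size: (\d+) MB", report_line).group(1))
    used = int(re.search(r"Max Memory Used: (\d+) MB", report_line).group(1))
    return used / size

ratio = memory_pressure(report)
if ratio > 0.9:  # close to the limit: consider raising the memory size
    print(f"Memory pressure: {ratio:.0%}")
```

A sustained ratio near 100% is a strong hint that the function is memory-bound and a larger memory size is worth trying.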
Lack of processing power
If the allocated memory is not fully used but the Lambda execution is slow or even ends with timeouts, the function may lack processing power. This is usually the case when it handles CPU-intensive workloads. By adding more memory, the function gets more processing power. Even if the memory then seems overprovisioned, a CPU-intensive workload will benefit from the increased processing power. Adding more memory can even lower costs, since the code executes much faster.
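To see why more memory can cost less, compare GB-second charges: Lambda bills memory size multiplied by billed duration, so if doubling the memory more than halves the duration, the invocation gets cheaper. A rough sketch; the per-GB-second price and the durations below are assumed example values, not current AWS pricing or real measurements:

```python
# Assumed example price per GB-second, not current AWS pricing.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_s: float) -> float:
    """Cost of one invocation: GB allocated x seconds billed x price."""
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_SECOND

# Hypothetical measurements: doubling memory (and thus CPU share)
# cuts the duration by more than half for a CPU-bound workload.
small = invocation_cost(memory_mb=512, duration_s=8.0)
large = invocation_cost(memory_mb=1024, duration_s=3.0)
print(f"512 MB: ${small:.7f}  1024 MB: ${large:.7f}")
```

Here the 1024 MB configuration bills 3 GB-seconds per invocation versus 4 GB-seconds at 512 MB, so the faster configuration is also the cheaper one.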
Timeout
You can configure an AWS Lambda function to run for up to 15 minutes. As a best practice, set the timeout value based on the expected execution time to prevent the function from running longer than intended; when the specified timeout is reached, AWS Lambda terminates the execution. The usual suspects for a timeout are lack of memory, lack of processing power, or a call to a service that takes too long to complete. Assigning a proper memory size avoids the memory and processing power issues. If you are making calls to other services from Lambda, make sure they are performant enough to respond in a timely manner.
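A defensive pattern against slow downstream services is to derive each call’s client timeout from the time the function has left, so the dependency fails fast and you can log an error instead of letting Lambda kill the whole invocation. A minimal sketch; `FakeContext` is a stand-in for local testing, while in production you would use the real Lambda context object’s `get_remaining_time_in_millis()`:

```python
class FakeContext:
    """Stand-in for the Lambda context object (assumption for local testing)."""
    def __init__(self, remaining_ms: int):
        self._remaining_ms = remaining_ms

    def get_remaining_time_in_millis(self) -> int:
        return self._remaining_ms

def downstream_timeout(context, safety_margin_ms: int = 500) -> float:
    """Timeout (in seconds) to pass to an HTTP/DB client, keeping a margin
    so the function can report the failure before Lambda terminates it."""
    budget_ms = context.get_remaining_time_in_millis() - safety_margin_ms
    return max(budget_ms, 0) / 1000.0

# With 3 seconds left, give the downstream call at most 2.5 seconds.
print(downstream_timeout(FakeContext(remaining_ms=3000)))  # 2.5
```

The resulting value can be passed as the timeout parameter of whatever client library the function uses for its downstream calls.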
Hitting the concurrent executions limit
AWS Lambda’s default limit is 1,000 concurrent executions across all functions in a given region per account. Concurrent executions are the number of executions of your function code happening at any given time. A sudden spike can exhaust this limit, and all further requests to any of your Lambda functions in the region will be throttled. Because of the sudden increase in requests, new instances of the function have to be created, and each new instance adds execution delay from its cold start, so more and more instances are needed to handle the requests. Using provisioned concurrency can help fix this. Additionally, the function may be calling a service that cannot scale to handle the spike, which adds further delay to the execution time and reduces how many requests can be handled. If you are making calls to other services from Lambda, make sure they can scale as Lambda scales.
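To estimate whether the 1,000-execution limit is at risk, Little’s law gives the steady-state concurrency: requests per second multiplied by average execution time in seconds. A quick sketch; the traffic numbers are hypothetical:

```python
def estimated_concurrency(requests_per_second: float, avg_duration_s: float) -> float:
    """Little's law: steady-state concurrency = arrival rate x avg duration."""
    return requests_per_second * avg_duration_s

ACCOUNT_LIMIT = 1000  # default per-region concurrent execution limit

# Hypothetical spike: 800 req/s with a 2-second average duration.
needed = estimated_concurrency(800, 2.0)
print(f"Estimated concurrency: {needed:.0f}")
if needed > ACCOUNT_LIMIT:
    print("Spike would exceed the account limit: expect throttling")
```

The formula also shows why a slow downstream dependency makes throttling worse: it raises the average duration, which raises the concurrency the same traffic consumes.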
Hitting the limit of outgoing connections
There is a limit to the number of concurrent outgoing connections a Lambda function can open. In my experience, when more than roughly 700 concurrent outgoing HTTP requests were started, the function could not handle them all at once. Limiting the number of concurrent connections to stay below that threshold fixed the issue.
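One way to enforce such a cap is a bounded thread pool, so no more than a fixed number of outgoing requests are in flight at once no matter how many are queued. A sketch with the standard library; `fetch` is a placeholder for whatever HTTP call your function makes, and the 600-worker cap is chosen to stay below the roughly 700-connection threshold observed above, which is an observation, not a documented limit:

```python
from concurrent.futures import ThreadPoolExecutor

# Stay below the ~700 concurrent connections observed to cause failures.
MAX_CONCURRENT_CONNECTIONS = 600

def fetch(url: str) -> str:
    # Placeholder for a real HTTP call (e.g. urllib.request.urlopen(url)).
    return f"response for {url}"

def fetch_all(urls):
    # The pool size caps how many outgoing requests run at the same time,
    # regardless of how many URLs are queued.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_CONNECTIONS) as pool:
        return list(pool.map(fetch, urls))

results = fetch_all([f"https://example.com/item/{i}" for i in range(5)])
print(len(results))  # 5
```

The same idea works with an `asyncio.Semaphore` if the function uses an async HTTP client.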
Hope the above helps you resolve your AWS Lambda issues.