Originally posted on cloudacademy.com
So, ...no servers?
Yeah, I checked and there are definitely no servers.
Well…the cloud service providers do need servers to host and run the code, but we don't have to worry about them. The operating system, how and when to run the instances, scalability, and the rest of the architecture are all managed by the provider.
But that doesn't mean there's no management at all. It's a common misconception that in a serverless paradigm, we don't have to care about monitoring, testing, securing, and the other details we are used to managing in other paradigms. So let's explore the main characteristics that we need to take into consideration when building a serverless solution.
First, why serverless?
One of the great advantages of serverless is that you only pay for what you use. This is commonly known as "zero-scale," which means that when you don't use it, the function can be scaled down to zero replicas so it stops consuming resources — not only network I/O, but also CPU and RAM — and then scaled back up to the required number of replicas when it is needed.
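To make "pay for what you use" concrete, here is a back-of-the-envelope cost estimate for a single function. The per-request and per-GB-second prices below are illustrative assumptions, not quotes; check the current AWS Lambda pricing page for real numbers.

```python
# Illustrative Lambda cost sketch. Both prices are ASSUMPTIONS
# for the sake of the arithmetic, not current AWS pricing.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667  # USD, assumed

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate a month's bill for one function: compute + requests."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# 1M invocations per month, 200 ms average duration, 512 MB of memory:
print(round(monthly_cost(1_000_000, 200, 512), 2))  # → 1.87
```

The point of the exercise: billing tracks invocations and duration, so a function that is never triggered costs nothing, unlike an always-on instance.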
An AWS Lambda function can be triggered by an API Gateway event, a modification to a DynamoDB table, or even a change to an S3 object, as described in What Are AWS Lambda Triggers? But to really save money with serverless, you need to take into consideration all of the services that a Lambda function needs in order to work. Serverless architecture provides many advantages, but it also introduces new challenges. In this article, we'll cover best practices for building a serverless solution.
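As a minimal sketch of the S3-trigger case: a handler receives a notification event whose `Records` list describes the modified objects. The event shape follows the documented S3 notification format; the logging and return value are illustrative.

```python
import json

def lambda_handler(event, context):
    # An S3 notification can batch several records; collect the object keys.
    keys = [record["s3"]["object"]["key"] for record in event["Records"]]
    print(f"Received {len(keys)} object(s): {keys}")
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}
```

API Gateway and DynamoDB stream triggers use the same handler signature; only the shape of `event` changes.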
To deep dive into building, deploying, and managing the serverless framework, check out Cloud Academy's Serverless Training Library. It's loaded with content and hands-on labs to give you the practical experience you need to integrate serverless architecture into your cloud IT environment.
A common mistake is to confuse zero administration with zero monitoring. In a serverless environment, we still need to pay attention to metrics, and these will be a bit different from traditional ones like CPU, memory, disk size, etc. Lambda CloudWatch Metrics provides very useful metrics for every deployed function. According to the AWS documentation, these metrics include:
- Invocation Count: Measures the number of times a function is invoked in response to an event or invocation API call.
- Invocation Duration: Measures the elapsed time from when the function code starts executing to when it stops executing.
- Error Count: Measures the number of invocations that failed due to errors in the function (response code 4XX).
- Throttled Count: Measures the number of Lambda function invocation attempts that were throttled due to invocation rates exceeding the customer's concurrent limits (error code 429).
- Iterator Age: Measures the age of the last record for each batch of records processed. Age is the difference between the time the Lambda received the batch, and the time the last record in the batch was written to the stream. This is present only if you use Amazon DynamoDB stream or Kinesis stream.
- DLQ Errors: Counts the messages that Lambda failed to handle. If the function is configured with a DLQ, a failed event can be sent again to the Lambda function, generate a notification, or just be removed from the queue.
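These metrics live in the `AWS/Lambda` CloudWatch namespace, so they can also be pulled programmatically. Below is a sketch using boto3 (assumed to be installed and configured with AWS credentials); the summing helper is a plain function with no AWS dependency.

```python
from datetime import datetime, timedelta, timezone

def total_datapoints(datapoints, stat="Sum"):
    """Add up the chosen statistic across CloudWatch datapoints."""
    return sum(dp[stat] for dp in datapoints)

def fetch_invocations(function_name, hours=1):
    """Total invocation count for one function over the last N hours."""
    import boto3  # deferred so the pure helper above works without boto3
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="Invocations",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=end - timedelta(hours=hours),
        EndTime=end,
        Period=300,          # one datapoint per 5 minutes
        Statistics=["Sum"],
    )
    return total_datapoints(resp["Datapoints"])
```

Swapping `MetricName` to `Errors`, `Duration`, `Throttles`, or `IteratorAge` retrieves the other metrics in the list above.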
Besides the default metrics, there are plenty of monitoring services, such as Dashbird, Datadog, and Logz.io, that can be integrated to provide additional metrics and better log visualization.
Right now, everything seems very clear and straightforward, right? We have some new metrics and configurations to learn, but it is pretty similar to our traditional structures.
But what about tests? Can we even make local tests for serverless?
Since we don't manage the infrastructure anymore, can we run it locally? If so, how can we do that?...
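One simple starting point, sketched here before diving deeper: a Lambda handler is just a function, so it can be invoked locally with a hand-made event. The handler and event below are hypothetical.

```python
# A hypothetical handler, written the same way it would be deployed.
def lambda_handler(event, context):
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Fake API Gateway-style event; this handler doesn't use the context.
fake_event = {"queryStringParameters": {"name": "serverless"}}
response = lambda_handler(fake_event, None)
print(response["body"])  # → Hello, serverless!
```

This only exercises the function's logic, not the surrounding services, which is where the harder questions begin.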
keep reading on cloudacademy.com