The hype around serverless computing is everywhere, with some describing it as the most awesome thing ever. However, this enthusiasm often leads to serverless being applied as a one-size-fits-all solution, even in cases where it may not be the most appropriate choice.
While I’m not opposed to serverless, it’s important to emphasize that it’s not a remedy for every challenge in the software world. Serverless isn’t inherently problematic; in fact, it can be an excellent solution when applied to the right scenarios. Many of the challenges associated with serverless can also be managed effectively with the proper approach.
Outlined below are some key limitations of serverless that should be considered when deciding whether it’s the right solution.
Limitations of Serverless
- Leaky Abstractions and Complexity: Serverless platforms aim to simplify development by abstracting away server management, but this abstraction can be leaky. Developers still need to understand how the underlying infrastructure works to use serverless effectively.
- For example, database connections are often broken by default in serverless environments because each function invocation typically gets a new execution context. This can disrupt connection pooling in traditional databases like MySQL or PostgreSQL, exhausting connection limits and causing failures. To mitigate this, developers need workarounds such as connection pooling libraries, RDS Proxy (in the case of AWS), or cloud databases that are less sensitive to connection counts.
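The usual mitigation can be sketched as follows: create the connection once, at module scope, so warm invocations reuse it instead of opening a fresh one each time. Here `connect` is a stand-in for a real driver call (e.g. `psycopg2.connect`), instrumented only to show how often a connection is actually opened.

```python
CONNECTION_COUNT = 0

def connect():
    """Stand-in for a real database driver call; counts opened connections."""
    global CONNECTION_COUNT
    CONNECTION_COUNT += 1
    return {"id": CONNECTION_COUNT}

# Module scope runs once per execution context (cold start), not per invocation.
_conn = None

def get_connection():
    global _conn
    if _conn is None:          # only connect on a cold start
        _conn = connect()
    return _conn

def handler(event, context=None):
    conn = get_connection()    # warm invocations reuse the cached connection
    return conn["id"]

# Simulate three invocations within the same (warm) execution context:
results = [handler({}) for _ in range(3)]
```

All three invocations share connection 1; without the module-level cache, each would have opened its own. Note this helps only while the execution context stays warm, which is exactly the unpredictability discussed under cold starts below.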
- Challenging Developer Experience: Testing and debugging in serverless environments can be difficult. While unit tests can verify logic, they may not accurately reflect the function's behavior in a real-world environment. End-to-end testing often requires manual effort or deploying to a shared development environment, which can be disruptive to other teams.
- Observability also presents challenges. Gaining clear insight into function execution often requires adding specific tracing and monitoring code. Tracing a request's flow through multiple serverless functions and message queues is particularly difficult, and developers need to design deliberately to avoid observability anti-patterns.
- Cold Starts: Cold starts, the latency incurred when a function is invoked for the first time or after a period of inactivity, are a significant performance drawback. While cloud providers are working to mitigate cold starts, they remain a factor.
- The AWS documentation notes that the Lambda service may retain an execution context for an unpredictable amount of time. This uncertainty makes it challenging to rely on warm starts for performance-critical applications. Developers might need to employ workarounds like provisioned concurrency, which comes at an additional cost.
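The context-retention behavior is easy to observe from inside a function: module-level state survives between warm invocations but is gone after the platform recycles the context. A small sketch, using a module global as the cold-start marker:

```python
import time

# Module-level state persists across warm invocations within one execution
# context; it is reinitialized only when the platform creates a new context.
_context_started_at = None

def handler(event, context=None):
    global _context_started_at
    cold = _context_started_at is None
    if cold:
        # Expensive one-time initialization (clients, config, caches) goes here.
        _context_started_at = time.time()
    return {"cold_start": cold}

first = handler({})    # fresh context: cold start
second = handler({})   # same context: warm start
```

When, or whether, a subsequent invocation lands on the same context is the part the developer cannot control, which is why latency-sensitive services fall back to provisioned concurrency.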
- Vendor Lock-In: Serverless functions heavily depend on the cloud provider's ecosystem, including SDKs and services. This creates vendor lock-in, making it difficult to migrate applications to a different cloud platform without significant code rewriting.
- For instance, an Azure function utilizing Azure's message queuing system would require substantial modification to run on AWS Lambda due to the differences in SDKs and service APIs.
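One common way to soften this lock-in is to keep business logic behind a provider-neutral interface, with a thin adapter per cloud. A sketch under that assumption, where `InMemoryQueue` doubles as a test fake and the provider adapters (names hypothetical, e.g. `SqsQueue`, `AzureServiceBusQueue`) would wrap the respective SDKs:

```python
from abc import ABC, abstractmethod
from typing import List, Optional

class QueueClient(ABC):
    """Provider-neutral interface; business logic depends only on this."""
    @abstractmethod
    def send(self, message: str) -> None: ...
    @abstractmethod
    def receive(self) -> Optional[str]: ...

class InMemoryQueue(QueueClient):
    """Test double. A per-provider adapter would implement the same two
    methods using boto3 or the Azure SDK, isolating vendor-specific code."""
    def __init__(self) -> None:
        self._items: List[str] = []
    def send(self, message: str) -> None:
        self._items.append(message)
    def receive(self) -> Optional[str]:
        return self._items.pop(0) if self._items else None

def enqueue_order(queue: QueueClient, order_id: str) -> None:
    # Business logic never touches a provider SDK directly.
    queue.send(f"order:{order_id}")

q = InMemoryQueue()
enqueue_order(q, "42")
msg = q.receive()
```

This does not eliminate lock-in (triggers, IAM, and deployment tooling remain provider-specific), but it confines the rewrite to the adapters rather than the application code.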
- Concurrency Limitations: Serverless platforms impose concurrency limits on function execution. In AWS, the default limit is 1,000 concurrent executions per account. This can restrict the performance of applications that require high throughput.
- An AWS account could potentially require 8,470 concurrency units to handle the same throughput as demonstrated in the TechEmpower benchmarks, which would necessitate creating multiple accounts to handle the load. Workarounds like using multiple AWS accounts or optimizing function execution time might be necessary to address concurrency constraints.
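The concurrency a workload needs follows from Little's law: in-flight executions ≈ request rate × average duration. A back-of-envelope sketch, with illustrative numbers chosen to land on the 8,470 figure above (the real benchmark rate and duration may differ):

```python
import math

def required_concurrency(requests_per_second: float, avg_duration_s: float) -> int:
    """Little's law: concurrent in-flight executions = arrival rate x duration."""
    return math.ceil(requests_per_second * avg_duration_s)

# Illustrative: 84,700 requests/sec at an average of 100 ms per invocation.
needed = required_concurrency(84_700, 0.100)

# Against a default limit of 1,000 concurrent executions per AWS account,
# ceiling division gives the number of accounts that load would demand.
accounts_needed = math.ceil(needed / 1_000)
```

The same formula also shows the other lever: halving the average execution time halves the concurrency required, which is why optimizing function duration is listed alongside multi-account setups as a workaround.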
- Unpredictable Costs: Serverless pricing models are based on usage, which can lead to unpredictable costs, especially for applications with variable workloads or those that interact with multiple metered services. Unexpected surges in traffic or the use of additional cloud services can result in surprisingly high bills.
- One widely reported case involved an app developer who received a $100,000 bill within a couple of days after the app became unexpectedly popular. Developers need to plan cost management strategies, such as throttling, optimizing function performance, and selecting appropriate pricing plans, to avoid unexpected cost overruns.
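A rough cost model makes the traffic sensitivity concrete. The sketch below uses the commonly quoted AWS Lambda on-demand rates for x86 ($0.20 per million requests, $0.0000166667 per GB-second); treat these as assumptions and check the current pricing page, and note it ignores the free tier and any downstream metered services.

```python
def lambda_monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int,
                        price_per_million_requests: float = 0.20,
                        price_per_gb_second: float = 0.0000166667) -> float:
    """Back-of-envelope Lambda bill: request charge + compute (GB-second) charge."""
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return round(request_cost + gb_seconds * price_per_gb_second, 2)

# A quiet month: 10M invocations, 200 ms average, 512 MB of memory.
quiet_month = lambda_monthly_cost(10_000_000, 200, 512)

# The same function under a 100x traffic spike.
viral_month = lambda_monthly_cost(1_000_000_000, 200, 512)
```

Because the bill scales linearly with invocations, a 100x surge is a 100x bill, which is exactly the failure mode that throttling and concurrency caps are meant to bound.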
Having discussed some of the limitations associated with serverless, let’s now explore its key advantages and when a serverless approach may be worth considering.
Advantages of Serverless
- Speed to Market: Serverless allows developers to deploy applications quickly without managing servers or infrastructure. This speed is advantageous for rapid prototyping, MVP development, and scenarios where time-to-market is critical.
- Cost-Effectiveness for Specific Use Cases: Serverless can be cost-effective for specific workloads, particularly those with low throughput or infrequent execution. Examples include:
- Internal applications
- Background processes
- Telemetry collection
- Developer environments
- Queue-based systems
- Out-of-band processing
- Small batch processing jobs with short execution times
- Static site generation (using platforms like Vercel and Netlify)
- Scalability for Bursty Traffic: Serverless excels at handling unpredictable, bursty traffic patterns. The platform automatically scales resources up and down based on demand, ensuring responsiveness during traffic spikes without the need for manual intervention.
Use Cases of Serverless
Outlined below are some use cases where serverless has a clear advantage.
- Latency-Insensitive Applications: Serverless is a suitable choice for applications where latency is not a primary concern. Examples include background tasks, data processing, and internal tools where occasional cold starts are acceptable.
- Queue-Based Systems: Serverless functions are well-suited for processing tasks triggered by messages in a queue. They can scale automatically based on queue length, ensuring efficient processing without the need for constant server availability.
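A queue-triggered function can be sketched as follows, shaped like an AWS SQS batch event. Each record is processed independently, and the IDs of failed messages are reported back via the `batchItemFailures` contract so the platform retries only those; `process` is a hypothetical stand-in for the business logic.

```python
import json

def process(message: dict) -> None:
    """Hypothetical business logic; rejects malformed messages."""
    if "order_id" not in message:
        raise ValueError("malformed message")

def handler(event, context=None):
    """SQS-style batch handler with partial-batch failure reporting."""
    failures = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            process(body)
        except Exception:
            # Report only this message for retry; the rest are considered done.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

# Simulated batch: one valid message, one that fails to parse.
event = {"Records": [
    {"messageId": "1", "body": json.dumps({"order_id": 7})},
    {"messageId": "2", "body": "not json"},
]}
result = handler(event)
```

Because the queue decouples producers from the function, the platform can scale the number of concurrent handler invocations with queue depth, with no server sitting idle when the queue is empty.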
- Out-of-Band Processing: Tasks that can be performed asynchronously, outside the main application flow, are good candidates for serverless. Examples include sending emails, generating reports, or processing data after a user action is complete.
- Low-Throughput Applications: Applications with low request volumes, such as developer environments or internal tools, benefit from the cost-effectiveness of serverless, as they are only billed for actual usage.
- Build Systems and Static Site Generation: Serverless platforms like Vercel and Netlify are commonly used for static site generation and build processes. They simplify deployment and offer efficient scaling for website content.
- Handling Bursty Traffic: Applications that experience unpredictable surges in traffic, such as e-commerce sites during sales events or online gaming platforms, can leverage the automatic scaling capabilities of serverless to handle peak loads without over-provisioning infrastructure.