Modern APIs are increasingly moving away from monolithic architectures towards smaller, independent services.
Serverless applications are currently at the cutting edge of this trend. Cloud providers now offer hosting for tiny, independent functions that run within the host’s own managed environment. Clients don’t need to manage the underlying infrastructure or worry about considerations such as scaling or security. Examples include AWS Lambda, Azure Functions and Google Cloud Functions.
Since these services are charged in direct proportion to the resources used, it’s highly important to make sure resources aren’t wasted. In this article, we’ll look at Java serverless memory optimization to find out how best to keep our running costs to a minimum.
Key Concepts Behind Serverless Applications
Serverless applications are functions that run in the host’s infrastructure in response to a trigger. Instead of paying for an environment, the client pays for actual usage. For the right type of application, this results in lower cloud costs, and a reduced requirement for in-house skills. Deployment is fast, and scaling is simple.
They’re ideal for small, on-request tasks, such as uploading a file to the server or responding to a click on a website. They’re designed for short-running tasks, and most cloud providers place a limit of 10 to 15 minutes run time per request. State is not predictably maintained between requests. They therefore don’t work for long-running batch jobs, complex operations or stateful tasks. These small functions are often known as lambdas.
To optimize serverless functions, we need to understand a little of how they’re managed within the host’s environment. The first time a function is triggered, the host must fire up a new JVM. The function itself may have start-up tasks, such as loading reference information. This is known as a cold start, and may take 3 to 10 seconds. When the task completes, the function remains loaded for a short time in its current state. If it’s triggered again during this time, the host will simply re-use this instance, resulting in a warm start, improving response time drastically.
Cold starts can severely affect performance for intermittently used functions, but they rarely occur for functions with a high traffic volume. Various techniques have been developed to mitigate the cold start problem, such as AWS SnapStart.
We have no way of knowing, when we write the function, whether a given invocation will be a cold start or a warm start, so we need to beware of stale state carried over from a previous warm invocation. It's perfectly possible, for example, for a memory leak to accumulate across warm starts and eventually cause an OutOfMemoryError.
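The warm-start hazard above can be illustrated with a minimal sketch. The handler class and method names below are hypothetical (no provider SDK is used); the point is that static state initialized during a cold start survives into every warm invocation of the same instance:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical handler sketch: the static list is created once per cold
// start and reused on every warm start, so it keeps growing across
// invocations unless it is bounded or cleared.
public class OrderHandler {

    // Survives between warm invocations of the same instance.
    private static final List<String> cache = new ArrayList<>();

    public String handleRequest(String orderId) {
        cache.add(orderId); // grows on every warm invocation
        return "processed " + orderId + " (cache size: " + cache.size() + ")";
    }
}
```

On a platform that reuses instances, two consecutive requests served by the same warm instance would see the cache grow from 1 to 2 entries; over thousands of warm invocations an unbounded structure like this becomes a genuine memory leak.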
How Are Serverless Charges Calculated?
Charges are directly proportional to the amount of memory configured for the function and the total time it runs during the accounting period. Cold starts are included in the run time. Two ways to reduce charges, therefore, are to reduce memory usage, and to reduce the run time for each trigger.
So, reducing memory is always good, right? Wrong! Sometimes reducing memory can actually increase charges:
Some tasks may run faster if they have more memory available, thereby reducing the run time.
Cloud providers generally allocate other resources, such as CPU time and network bandwidth, in proportion to the configured memory rather than allowing them to be configured (and charged) separately. Again, extra CPU and bandwidth may result in decreased run time.
It's difficult to calculate the optimal memory size that balances reduced memory charges against increased run-time charges.
Java serverless memory optimization usually requires a repeated cycle of configuration, analyzing diagnostics and fine tuning for best results.
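The trade-off can be made concrete with a toy cost model. The formula (memory in GB × run time in seconds × a unit price) reflects how most providers bill, but the rate constant below is an illustrative assumption, not any provider's actual price:

```java
// Toy billing model: charge = configured memory (GB) * run time (s) * rate.
// PRICE_PER_GB_SECOND is an assumed example rate, not a real provider price.
public class LambdaCostModel {

    static final double PRICE_PER_GB_SECOND = 0.0000166667;

    static double costPerInvocation(int memoryMb, double durationSeconds) {
        return (memoryMb / 1024.0) * durationSeconds * PRICE_PER_GB_SECOND;
    }

    public static void main(String[] args) {
        // Suppose a CPU-bound function runs in 2.0 s at 512 MB, but the extra
        // vCPU share at 1024 MB brings it down to 0.8 s:
        double small = costPerInvocation(512, 2.0);   // 1.0 GB-second
        double large = costPerInvocation(1024, 0.8);  // 0.8 GB-seconds
        System.out.printf("512 MB: %.10f  1024 MB: %.10f%n", small, large);
        // Here, doubling the memory actually lowers the cost per invocation.
    }
}
```

Under these assumed numbers, the larger allocation is cheaper per invocation because the shorter run time outweighs the higher memory rate; with different workloads the balance can tip the other way, which is why measurement matters.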
Java Serverless Memory Optimization: Finding the Sweet Spot
There are four things we need to do to make sure our memory configuration is optimized to keep cloud costs to the minimum:
Minimize memory footprint;
Configure accurately;
Test thoroughly;
Monitor carefully and adjust as needed.
We’ll look at each of these in turn.
1. Reducing Java Memory Footprint for Faster Serverless Performance
Any excess memory used by the application means we’re paying cloud costs for something we don’t need. Yes, every kilobyte does count! Here are some areas to look at.
Eliminate memory leaks. See this article to learn how to check whether memory is leaking, and diagnose and fix the problem: Java Memory Leaks: The Definitive Guide.
Use memory-saving coding techniques. An incredible amount of heap memory in most Java applications is, in fact, wasted by poor coding. See this article: How Much Memory Is My Application Wasting?
Reduce memory-heavy dependencies. Third-party libraries are often very resource-hungry. Opt for lightweight, lambda-friendly frameworks such as Micronaut or Quarkus.
Don’t write complex, multi-function lambdas. A lambda should be simple and deal with a single task.
Consider using GraalVM. This allows us to compile the application ahead of time to a native image, which results in a tiny footprint and a much faster cold start.
2. Setting Optimal Memory and CPU Allocation for Java Functions
There are several areas to look at when configuring serverless applications.
Configuring the JVM:
- The heap size must be configured accurately using the -Xms (initial heap size) and -Xmx (maximum heap size) command-line arguments. For lambdas, the initial and maximum sizes should be set equal, since we don't want to waste precious run time (and money!) resizing the heap during a cold start. If the heap is set too low, the GC becomes inefficient and we risk OutOfMemoryError; if it's set too high, the extra memory is simply wasted and still incurs costs. Alternatively, it may be a good idea to use -XX:MaxRAMPercentage to scale the heap according to the available RAM. Lambdas usually run in a container whose cgroup limits are derived from the lambda settings, so setting the heap as a percentage of available RAM avoids having to reconfigure the JVM every time the lambda's memory allocation changes.
- Choose and configure the most suitable garbage collection algorithm. Since memory sizes less than 3-4GB are unlikely to be allocated more than one vCPU, serial GC works best for these tasks (-XX:+UseSerialGC). For memory sizes larger than this, G1GC is usually the best choice (-XX:+UseG1GC).
- Enable GC logging. This adds very little overhead, and the GC log is one of the most valuable troubleshooting artifacts. Since persistent storage is not easily available, the logs should be sent to stdout, which is preserved in the cloud logs. Use tagging so the GC entries can be easily filtered out of those logs. Typically, the argument is -Xlog:gc*:stdout:time,level,tags.
Configuring the lambda: Useful options include:
- Set the required memory size: remember, the heap only accounts for about 70% of the total JVM memory requirements;
- Set a realistic timeout. If, for some reason, the application hangs, we don’t want to pay high costs for running time;
- Enable diagnostics: these are essential for fine-tuning, monitoring, and raising alerts.
Since it’s so difficult to predict optimum offsets between memory and CPU requirements, the best way to approach this is to configure, test, monitor and compare performance with different settings.
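As part of that configure-test-monitor cycle, it's worth logging what the JVM actually sees inside the container at startup, so the configured heap and CPU settings can be verified against the lambda's memory allocation. A minimal sketch (the 70% figure in the comment assumes -XX:MaxRAMPercentage=70 was passed):

```java
// Prints the JVM's effective resource view inside the container, so heap
// and CPU allocation can be checked against the configured lambda settings.
public class JvmSettingsCheck {

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("Max heap (MB):  " + maxHeapMb);
        System.out.println("Available CPUs: " + rt.availableProcessors());
        // With -XX:MaxRAMPercentage=70, the max heap should be roughly 70%
        // of the container's cgroup memory limit; a mismatch usually means
        // the flag was overridden or the JVM ignored the cgroup limit.
    }
}
```

Running this once per cold start (or in the local emulator) makes it obvious when a memory change at the lambda level didn't propagate to the JVM as expected.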
3. How to Benchmark and Test Java Serverless Memory Usage
Lambdas must be thoroughly tested, both for functionality and for performance. Cloud providers usually offer useful testing environments, including:
Downloadable local emulators. These are perfect for unit and functional testing.
Cloud-based staging/test environments. These are useful for load testing, performance monitoring, and testing alongside other services.
Beta testing in the live environment. We can restrict the initial release to a trusted user base to make sure everything is working as it should.
4. Observability for Java Serverless: Memory, GC, and Performance Metrics
Monitoring and diagnostics play a key role in Java serverless memory optimization.
However, it’s not usually possible to use standard diagnostic tools in a serverless environment, so we have to adapt our strategy.
Cloud providers usually supply their own set of monitoring tools, allowing us to obtain information on:
Memory usage;
Execution duration;
Cold starts;
Downstream latency;
Retry behavior.
This lets us adapt our configuration to achieve peak efficiency, and also lets us proactively detect any developing problems.
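Beyond the provider's built-in metrics, a function can emit its own measurements as structured lines on stdout, where the cloud log pipeline can filter and chart them. This is a sketch of the pattern only; the JSON field names are illustrative, not any provider's metric schema:

```java
import java.util.Locale;

// Sketch: emit one structured metric per line to stdout so the cloud log
// pipeline can filter it. Field names here are illustrative assumptions.
public class MetricLogger {

    static String metricLine(String name, double value, String unit) {
        return String.format(Locale.ROOT,
                "{\"metric\":\"%s\",\"value\":%.2f,\"unit\":\"%s\"}",
                name, value, unit);
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println(metricLine("heap_used", usedMb, "MB"));
    }
}
```

Emitting heap usage at the end of each invocation like this makes slow leaks across warm starts visible as a steadily climbing line in the logs, without any extra infrastructure.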
For more diagnostics, we can enable Java Flight Recorder in the JVM and use Java Mission Control to interpret results. This is especially useful in stage testing and beta testing.
If JMX is enabled during testing on a local emulator, we can link to the running application with profiling tools. This lets us observe behavior, take heap dumps, and carry out other diagnostic tasks. During early testing, it’s a good idea to submit a heap dump to a tool such as HeapHero, which not only lets us explore memory usage but also gives a full analysis of wasted memory.
In both live and test environments, GC logs should be regularly monitored using a tool such as GCeasy. These logs let us detect problems before they affect live performance. Memory leaks, GC overload and key performance indicators can all be picked up from the logs in time to prevent production issues.
Conclusion
Java serverless memory optimization is tricky. Achieving the best performance usually requires fine-tuning configurations while monitoring regularly. There is always a trade-off between keeping memory low and reducing run time by adding more resources. This is difficult to estimate and is best achieved by carefully monitoring and adjusting settings.
We can keep the application’s memory footprint low by eliminating memory leaks and waste, keeping functions small, and using lightweight frameworks. It’s also worth considering creating pre-compiled object code using facilities such as GraalVM.
Serverless applications are only cost-effective if resources are kept to a minimum, so it’s worth putting effort into achieving the best results.