Lukasz Seremak
Serverless Functions in Java – Does It Even Make Sense?

Java is rarely the first choice for serverless functions. Languages like JavaScript, Python, and Go dominate the space thanks to their faster startup times compared to applications running on the JVM.

In this article series, I’ll explore cases where writing serverless functions in Java can be not only feasible but actually beneficial. We'll also examine how to optimize cold start time and what techniques can help build efficient serverless solutions using Java.

What Exactly Is a Serverless Function?

First, let's break down the term function. In mathematics, a function defines the relationship between an input value (x) and an output value (f(x)), where for any given x, there’s always a unique result. We can express this as:

f(x) = x² + 3

So, if x = 2, we get f(2) = 7.
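The same idea translates directly into code. Below is the math function as a pure Java method – deterministic and stateless, just like a serverless function should be (the class and method names are mine, purely for illustration):

```java
// A pure, deterministic function: the same x always yields the same f(x),
// mirroring the stateless contract of a serverless function.
public class MathFunction {
    static int f(int x) {
        return x * x + 3;
    }

    public static void main(String[] args) {
        System.out.println(f(2)); // 7
    }
}
```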

A serverless function operates on a similar principle – it takes an input, processes it, and returns a defined result. It doesn't retain history or store state (stateless) – it runs only when invoked, executes the required computation, and then disappears, consuming zero system resources when idle.

This is, of course, a simplified view, but it captures the essence of what serverless functions truly are.

How Does a Serverless Function Work?

Imagine you're running an online store and need a mechanism to calculate discounted prices on demand. Instead of deploying a full application server, you can create a serverless function that:

  1. Takes the base price and discount percentage as input.
  2. Calculates the final discounted price using a predefined formula.
  3. Returns the result to the user.
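The three steps above can be sketched as plain Java logic – the piece a Lambda handler would wrap. The names and the validation rule here are illustrative, not part of any AWS API:

```java
// Core discount logic a serverless handler would delegate to.
// calculateFinalPrice and its validation rule are illustrative choices.
public class DiscountCalculator {
    static double calculateFinalPrice(double basePrice, double discountPercent) {
        if (basePrice < 0 || discountPercent < 0 || discountPercent > 100) {
            throw new IllegalArgumentException("invalid price or discount");
        }
        return basePrice * (1 - discountPercent / 100.0);
    }

    public static void main(String[] args) {
        System.out.println(calculateFinalPrice(200.0, 25.0)); // 150.0
    }
}
```

Keeping the business logic in a plain method like this also makes it trivial to unit-test without any cloud infrastructure.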

Serverless functions can be triggered in multiple ways:

  • Via an API, when a user sends an HTTP request.
  • Through a scheduled job, for tasks like daily data analysis.
  • In response to an event, such as a new file added to a database or a message arriving in a queue.

The Magic Begins – Scaling from Zero to... Sky Is the Limit

One of the biggest advantages of serverless functions is automatic scaling – from zero when there's no traffic, to thousands of concurrent instances when demand spikes.

How Does This Work in Practice?

Let's return to our discount calculator example. Imagine your store allows customers to enter a product price to see the discount applied. During the day, traffic is low – just a few requests per hour. But in the evening, your store runs a flash sale, and suddenly thousands of users flood in to check prices.

With a traditional infrastructure, you’d need to:

  1. Predict traffic spikes (which is tricky – it could be 500 users or 50,000).
  2. Manually scale up resources to handle peak demand.
  3. Pay for all the infrastructure, even if traffic turns out lower than expected.

With serverless functions, it's different:

  • If a single request arrives, only one instance of the function runs.
  • If the store receives 10,000 requests in a few minutes, the cloud automatically spins up as many instances as needed, each handling one request at a time.
  • When the sale ends and users stop checking prices, all instances shut down, meaning zero resource consumption and zero cost.

You Only Pay for What You Use

Thanks to the pay-as-you-go model, you don’t pay for idle infrastructure – only for actual function execution time. If no users access your application all day, your cost is zero. If 100,000 requests flood in over a single hour, functions run only as needed, without wasting resources.
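To make pay-as-you-go concrete, here's a back-of-envelope cost model in Java. The per-GB-second rate and the workload numbers are illustrative placeholders, not real cloud pricing:

```java
// Rough pay-as-you-go cost model: you pay per invocation-duration-memory,
// not per server. All numbers below are illustrative, not actual pricing.
public class CostEstimate {
    static double computeCost(long invocations, double avgDurationSec,
                              double memoryGb, double pricePerGbSecond) {
        return invocations * avgDurationSec * memoryGb * pricePerGbSecond;
    }

    public static void main(String[] args) {
        // 100,000 requests, 200 ms each, 512 MB of memory:
        System.out.printf(java.util.Locale.ROOT, "$%.4f%n",
                computeCost(100_000, 0.2, 0.5, 0.0000166667));
    }
}
```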

Sky Is the Limit – Effortless Scaling Beyond Traditional Infrastructure

Classic applications require manual scaling or autoscalers, predicting demand and managing resources. Serverless removes this overhead. If 100, 1,000, or even 100,000 instances are needed, the system handles it automatically. While cloud providers impose some limits, for most use cases, serverless enables scaling that traditional setups struggle to match.

Why Reality Isn’t So Perfect

Serverless functions sound too good to be true – they scale automatically, require zero server management, and you only pay for execution. Sounds amazing, right? Well… not everything is that perfect. Serverless comes with challenges and limitations that developers should be aware of.

Cold Start – JVM’s Biggest Pain Point

One of the biggest drawbacks of serverless functions is the cold start, which refers to the time needed to spin up a function after a period of inactivity. Why does this matter?

  • Interpreted languages (Python, JavaScript) start almost instantly, while Java, running on the JVM, must first load and initialize the runtime and application classes, which can take several seconds.
  • For interactive applications, even a few-second delay is problematic, especially if functions handle user-facing API requests.
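One widely used mitigation is to do heavy initialization once per container rather than once per request, so only the first (cold) invocation pays for it. The sketch below simulates this with a stub – SlowClient stands in for a real HTTP or database client:

```java
// Cold-start mitigation pattern: expensive setup runs once, at class-load
// time, and warm invocations reuse the result. SlowClient is a stand-in
// for a real client whose construction would take noticeable time.
public class WarmHandler {
    static int initCount = 0;

    static class SlowClient {
        SlowClient() { initCount++; } // imagine seconds of setup here
        String call(String input) { return "processed:" + input; }
    }

    // Created once per container, reused by every warm invocation.
    private static final SlowClient CLIENT = new SlowClient();

    public static String handle(String input) {
        return CLIENT.call(input);
    }
}
```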

Execution Time Limits

Most cloud providers set a maximum execution time for serverless functions:

  • AWS Lambda – Up to 15 minutes.
  • Azure Functions – Up to 10 minutes (Consumption Plan).
  • Google Cloud Functions – Up to 9 minutes (1st gen); 2nd gen functions allow longer timeouts.

This means serverless isn’t suited for long-running processes – for complex tasks, a dedicated application server or distributed processing system is often a better choice.

Stateless Nature

Serverless functions are stateless – every time they run, they start in a fresh environment. They can’t retain data between executions, so if persistent storage is required, you need to use:

  • Databases (e.g., DynamoDB, Firebase).
  • Queues and message streams (e.g., Kafka, SQS).
  • Cloud storage (e.g., S3, Cloud Storage).
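Since the function itself holds nothing between invocations, any persistent state must flow through one of those external services. The sketch below simulates this with an in-memory Map standing in for a real store such as DynamoDB:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A stateless handler keeps no fields between calls: every piece of
// persistent state is read from and written to an external store.
// The Map here is only a stand-in for a real database like DynamoDB.
public class StatelessCounter {
    static String handle(Map<String, Integer> store, String userId) {
        int seen = store.merge(userId, 1, Integer::sum); // write-through to the store
        return "request #" + seen + " for " + userId;
    }

    public static void main(String[] args) {
        Map<String, Integer> store = new ConcurrentHashMap<>();
        System.out.println(handle(store, "alice")); // request #1 for alice
        System.out.println(handle(store, "alice")); // request #2 for alice
    }
}
```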

Debugging and Diagnostics

Debugging serverless functions can be challenging, as there’s no direct access to the infrastructure. Instead, developers must rely on:

  • Cloud provider logs.
  • Performance monitoring tools to analyze issues in real time.
  • Local testing frameworks like AWS SAM or LocalStack for simulating execution.

Serverless Discount Calculator Project

As part of this series, we’ll implement a serverless application for dynamic discount calculations using AWS Lambda, API Gateway, and SQS.

Architecture Overview

Our system follows an event-driven architecture with key components:

  1. REST API Gateway (AWS API Gateway) – Handles PUT /calculatePrice, forwarding user ID and base price to Lambda.
  2. AWS Lambda – Discount Calculation
    • Fetches personal discount via external REST API.
    • Retrieves global discount from another API (which updates dynamically).
    • Computes the final discount and returns it via API Gateway.
    • Sends a message to AWS SQS with discount details.
  3. AWS Lambda – Data Storage
    • Listens to SQS queue, storing data in the database.
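Here is a rough sketch of the discount-composition step in Lambda #1. Combining the personal and global discounts sequentially is my assumption, not a fixed rule; fetching the discounts and the actual SqsClient.sendMessage call are stubbed out, with only the payload shown:

```java
import java.util.Locale;

// Sketch of the calculation Lambda's core logic. Applying the personal
// discount first and the global discount on top is an assumed combination
// rule; a real service might compose them differently.
public class PriceFlow {
    static double finalPrice(double basePrice, double personalPct, double globalPct) {
        double afterPersonal = basePrice * (1 - personalPct / 100.0);
        return afterPersonal * (1 - globalPct / 100.0);
    }

    // The payload a real handler would publish to the SQS queue.
    static String sqsMessage(String userId, double price) {
        return String.format(Locale.ROOT,
                "{\"userId\":\"%s\",\"finalPrice\":%.2f}", userId, price);
    }
}
```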

(Architecture diagram: API Gateway → discount-calculation Lambda → SQS → storage Lambda → database)


If you're interested, subscribe to my profile – the next part of this series is coming soon! 🚀
