Vadym Kazulkin for AWS Heroes

Serverless applications with Java and Aurora DSQL - Part 1: Introduction and sample application setup and configuration

Introduction

In this article series, we'll cover how to develop Serverless applications on AWS using Amazon Aurora DSQL as the database of choice and the managed Java 21 runtime on AWS Lambda. I've already covered how to develop a similar application with DynamoDB in the article series about Lambda SnapStart, and also covered the usage of Aurora Serverless v2 with Data API and RDS Proxy in the article series Data API for Amazon Aurora Serverless v2.

Throughout this series, we'll develop the sample application and measure the performance of the Aurora DSQL database using different SQL CRUD queries, as well as the performance of the Lambda functions. Of course, we'll use Lambda SnapStart and priming techniques to improve Lambda performance (especially to decrease cold starts). We'll also use Strands Agents with Amazon Aurora DSQL to talk to our application using natural language.

Introduction to Amazon Aurora DSQL

Here I'll provide a collection of sources that will help you understand the architecture and trade-offs behind this database.

Sample Application with Java and Aurora DSQL

For our experiments, we will use our sample application.

First of all, we need to create an Aurora DSQL cluster. As there is currently no AWS SAM support for this, we'll create the cluster manually and reference its Cluster ID in the template later. We'll do it in the us-east-1 AWS region and start with a single-region cluster first (we'll explore the multi-region option later).

Go to Aurora DSQL Clusters and then click on "Create cluster". Select single-region and then click "Create cluster" once again.

You can, of course, also use the AWS CLI and the dsql create-cluster command.
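If you prefer to script the cluster creation, the dsql module of the AWS SDK for Java (which we'll add to the project later anyway) also exposes the control-plane API. The following is a minimal sketch, assuming the AWS SDK for Java v2 with the dsql module on the classpath; the exact request and response fields may differ:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dsql.DsqlClient;
import software.amazon.awssdk.services.dsql.model.CreateClusterResponse;

public class CreateDsqlCluster {
    public static void main(String[] args) {
        // Control-plane client for Aurora DSQL in us-east-1
        try (DsqlClient dsql = DsqlClient.builder().region(Region.US_EAST_1).build()) {
            // Create a single-region cluster with deletion protection enabled
            CreateClusterResponse response = dsql.createCluster(request -> request.deletionProtectionEnabled(true));
            System.out.println("Cluster ID: " + response.identifier());
        }
    }
}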

After a short period of time, your DSQL cluster will be created.

Please copy the Cluster ID (not the endpoint URL); we'll need it later.

Now we need to connect to the created cluster and create the database tables.

For creating a multi-region Aurora DSQL cluster, you will need to define the peer cluster region (for example, us-east-2) and the witness region (for example, us-west-2).

Please follow the instructions here or here to connect to the PostgreSQL-compatible Aurora DSQL cluster, and then execute the following SQL commands to create the orders and order_items database tables. Note that Aurora DSQL builds indexes asynchronously, which is why the index statements below use the CREATE INDEX ASYNC syntax.

CREATE TABLE orders (
  id int PRIMARY KEY,
  user_id int NOT NULL,
  total_value int NOT NULL,
  status varchar(255) NOT NULL,
  created timestamp
);

CREATE TABLE order_items (
  id int PRIMARY KEY,
  product_id int NOT NULL,
  order_id int NOT NULL,
  value int NOT NULL,
  quantity int NOT NULL
);

CREATE INDEX ASYNC order_items_order_id_idx ON order_items (order_id);
CREATE INDEX ASYNC order_created_idx ON orders (created);

In this application, we create orders and order items and retrieve them by their ID, using Amazon Aurora DSQL as the relational database for the persistence layer. We use Amazon API Gateway, which makes it easy for developers to create, publish, maintain, monitor and secure APIs, and AWS Lambda to execute code without the need to provision or manage servers. We also use AWS SAM, which provides a shorthand syntax optimised for defining infrastructure as code (hereafter IaC) for serverless applications.

We have several Lambda functions defined in the AWS SAM template.yaml: CreateOrderFunction, GetOrderByIdFunction, GetOrdersByCreatedDatesFunction and UpdateOrderStatusByOrderIdFunction (others may be added later).

Now let's explore how to connect our application to the created Aurora DSQL Cluster.

First of all, we declare the AuroraDSQLClusterId input parameter in the template.yaml:

Parameters:  
  AuroraDSQLClusterId:
    Description: "Aurora DSQL Cluster ID"
    Type: String

So, the first time we execute sam deploy -g, we'll need to pass the Aurora DSQL Cluster ID (see above).

Then we declare several global Lambda function environment variables (valid for all Lambda functions):

Globals:
  Function:
    CodeUri: target/aws-lambda-java-21-with-aurora-dsql-and-pgjdbc-1.0.0-SNAPSHOT.jar
    Runtime: java21
    Environment:
      Variables:
        AURORA_DSQL_CLUSTER_ENDPOINT: !Sub ${AuroraDSQLClusterId}.dsql.${AWS::Region}.on.aws
        REGION: !Sub ${AWS::Region}


The AURORA_DSQL_CLUSTER_ENDPOINT environment variable constructs the Aurora DSQL endpoint URL from the AuroraDSQLClusterId input parameter, following the standard schema for DSQL endpoint URLs. The REGION environment variable sets the AWS region we deploy our application to (us-east-1).

Before we look at how to use them in the source code, let's also give all Lambda functions the permission to execute SQL queries against the Aurora DSQL database (here we use the CreateOrderFunction Lambda function as an example):

  CreateOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: CreateOrderWithJava21Lambda
      Handler: software.amazonaws.example.order.handler.CreateOrderHandler::handleRequest
      Policies:
        - Version: '2012-10-17' # Policy Document
          Statement:
            - Effect: Allow
              Action:
                - dsql:DbConnectAdmin
              Resource: !Sub arn:aws:dsql:${AWS::Region}:${AWS::AccountId}:cluster/${AuroraDSQLClusterId}
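For context, each function references its handler class via the Handler property. The actual CreateOrderHandler implementation is in the sample repository; a hypothetical sketch of its shape, based on the standard aws-lambda-java-events types, might look like this:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class CreateOrderHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request, Context context) {
        // In the real handler: deserialize the order from request.getBody(),
        // persist it via OrderDao and return the generated order id in the body.
        return new APIGatewayProxyResponseEvent().withStatusCode(200).withBody(request.getBody());
    }
}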

Now let's take a look at the source code of our application.

Among others, we need to define several key dependencies in the pom.xml:

<dependency>
   <groupId>software.amazon.awssdk</groupId>
   <artifactId>dsql</artifactId>
</dependency>
<dependency>
   <groupId>org.postgresql</groupId>
   <artifactId>postgresql</artifactId>
   <version>42.7.6</version>
</dependency>       
<dependency>
   <groupId>com.zaxxer</groupId>
   <artifactId>HikariCP</artifactId>
   <version>6.3.0</version>
</dependency>

We use the dsql artifact, which is part of the AWS SDK for Java v2 (no explicit version is given, as it's presumably managed via the SDK's bill of materials), the PostgreSQL JDBC database driver and the HikariCP datasource pool for managing database connections.

Let's explore the DsqlDataSourceConfig class, where we manage the datasource and the JDBC connection.

First of all, we read the REGION and AURORA_DSQL_CLUSTER_ENDPOINT environment variables defined in the SAM template previously, construct the JDBC_URL and create a DsqlUtilities object:

private static final String REGION = System.getenv("REGION");

private static final DsqlUtilities utilities = DsqlUtilities.builder()
    .region(Region.of(REGION.toLowerCase()))
    .credentialsProvider(DefaultCredentialsProvider.create())
    .build();

private static final String AURORA_DSQL_CLUSTER_ENDPOINT = System.getenv("AURORA_DSQL_CLUSTER_ENDPOINT");

private static final String JDBC_URL = "jdbc:postgresql://"
    + AURORA_DSQL_CLUSTER_ENDPOINT
    + ":5432/postgres?sslmode=verify-full&sslfactory=org.postgresql.ssl.DefaultJavaSSLFactory";

Then we create the HikariCP datasource pool for managing database connections like this:

private static HikariDataSource hds;

static {
    final HikariConfig config = new HikariConfig();
    config.setUsername("admin");
    config.setJdbcUrl(JDBC_URL);
    config.setMaxLifetime(1500 * 1000); // pool connection expiration time in milliseconds, default is 30 minutes
    config.setMaximumPoolSize(1); // default is 10

    String authToken = getAuthTokenForAdminUser();
    config.setPassword(authToken);
    hds = new HikariDataSource(config);
}

We set the maximum pool size to 1 (the default is 10) and the maximum lifetime of a pooled connection to 1500 seconds (25 minutes).
It's important to understand how the password for obtaining the JDBC connection is generated, which we set with config.setPassword(authToken). For this we use the getAuthTokenForAdminUser method, which uses the previously created DsqlUtilities object. We set the expiration time of the authentication token to 90 minutes; you might decide to set a shorter one (the default is 15 minutes).

private static String getAuthTokenForAdminUser() {
    // Token expiration is set to 90 minutes; the default is 900 seconds (15 minutes)
    return utilities.generateDbConnectAdminAuthToken(builder -> builder
        .hostname(AURORA_DSQL_CLUSTER_ENDPOINT)
        .region(Region.of(REGION.toLowerCase()))
        .expiresIn(Duration.ofMinutes(90)));
}

Then we have the OrderDao class with a bunch of methods like createOrder, getOrderById and updateOrderStatusByOrderId, which are used by the Lambda functions (see the package software.amazonaws.example.order.handler for their usage). All these methods use the JDBC API and invoke the getConnection method of the OrderDao class to obtain the JDBC connection from the DsqlDataSourceConfig class.

private static final Connection getConnection() throws SQLException {
   return DsqlDataSourceConfig.getPooledConnection();
}
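To make the flow concrete, here is a hypothetical sketch of what a createOrder method in the OrderDao class could look like with plain JDBC. The column names follow the orders table created above, while the Order accessors are assumptions; check the repository for the actual implementation:

public void createOrder(Order order) throws SQLException {
    final String insertOrder = "INSERT INTO orders (id, user_id, total_value, status, created) VALUES (?, ?, ?, ?, ?)";
    // try-with-resources returns the connection to the pool on close
    try (Connection connection = getConnection();
            PreparedStatement statement = connection.prepareStatement(insertOrder)) {
        statement.setInt(1, order.getId());
        statement.setInt(2, order.getUserId());
        statement.setInt(3, order.getTotalValue());
        statement.setString(4, order.getStatus());
        statement.setTimestamp(5, Timestamp.from(Instant.now()));
        statement.executeUpdate();
    }
}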

This is how the implementation of the getPooledConnection method for the Hikari pool looks in the DsqlDataSourceConfig class. We refresh the authentication token before each connection request, as the pool may need to open a new physical connection after the previous token has expired:

public static Connection getPooledConnection() throws SQLException {
    String authToken = getAuthTokenForAdminUser();
    hds.setPassword(authToken);
    return hds.getConnection();
}

Of course, we can decide against using a datasource pool in a single-threaded Function-as-a-Service application and instead simply construct and manage the JDBC connection on our own. For this, we can use the getJDBCConnection method of the DsqlDataSourceConfig class. Don't forget to remove the Hikari-related code from this class and to delete the dependency on it in the pom.xml.

private static Connection jdbConnection;

public static Connection getJDBCConnection() throws SQLException {
    // Re-create the connection lazily if it hasn't been opened yet or has been closed
    if (jdbConnection == null || jdbConnection.isClosed()) {
        Properties props = new Properties();
        props.setProperty("user", "admin");
        String authToken = getAuthTokenForAdminUser();
        props.setProperty("password", authToken);
        jdbConnection = DriverManager.getConnection(JDBC_URL, props);
    }
    return jdbConnection;
}

Then you only need to change the implementation of the getConnection method of the OrderDao class to use the JDBC connection instead of the Hikari pool.

private static final Connection getConnection() throws SQLException {
     return DsqlDataSourceConfig.getJDBCConnection();
}

In order to build and deploy the sample application, we need the following local installations: Java 21, Maven, AWS CLI and SAM CLI.

To build the application, run the Maven command mvn clean package (function.zip is created and stored in the target subdirectory); to deploy it, run the SAM command sam deploy -g. We'll see our customised Amazon API Gateway URL in the output. We can use it to create orders and order items and retrieve them by ID. The interface is secured with an API key, which we have to send as an HTTP header: "X-API-Key: a6ZbcDefQW12BN56WEDS7" (see the MyApiKey definition in template.yaml). To create a new order, we can use the following curl query:

curl -X PUT -d '{"userId":12345,"totalValue":350, "orderItems":[{"productId":230, "value":100,"quantity":3},{"productId":233, "value":250,"quantity":3}] }' -H "X-API-Key: a6ZbcDefQW12BN56WEDS7" https://{$API_GATEWAY_URL}/prod/orders

Aurora DSQL doesn't currently support sequences, which is why, for simplicity, I generate a random order id between 1 and 10000000 in the backend. The generated order id is displayed in the API Gateway response, so use it to retrieve the order later. You can rewrite the application a bit and use a UUID instead, but please also change the database schema, as it uses int as the column type for the primary key order id in the orders database table.
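A minimal sketch of how such a random order id can be generated (the actual helper in the sample application may differ):

import java.util.concurrent.ThreadLocalRandom;

// nextInt's upper bound is exclusive, so add 1 to cover the full range [1, 10000000]
int orderId = ThreadLocalRandom.current().nextInt(1, 10_000_000 + 1);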

For example, to query an existing order with ID=1, we can use the following curl query:

curl -H "X-API-Key: a6ZbcDefQW12BN56WEDS7" https://{$API_GATEWAY_URL}/prod/orders/1

In both cases, we need to replace {$API_GATEWAY_URL} with the individual Amazon API Gateway URL returned by the sam deploy -g command. We can also find this URL by navigating to our API in the Amazon API Gateway service in the AWS console.

Conclusion

In this article, we learned the concepts behind Amazon Aurora DSQL and explored how to create a single-region PostgreSQL-compatible DSQL cluster and how to connect to it from the sample application, which creates and retrieves orders and order items using API Gateway and Lambda with the managed Java 21 runtime.

In the next part of the series, we'll perform some Aurora DSQL performance measurements using different CRUD statements.
