Nithin Bharadwaj

Java Database Access Without the ORM Overhead: 5 Practical SQL Techniques That Actually Work

As a best-selling author, I invite you to explore my books on Amazon. Don't forget to follow me on Medium and show your support. Thank you! Your support means the world!

Let's talk about getting data from a database in a Java application. Sometimes, the big, fancy tools—the full Object-Relational Mappers (ORMs)—feel like using a rocket launcher to open a door. They're powerful, but they add a lot of complexity and can sometimes get in your way. You might want to just write SQL, know exactly what's being sent to your database, and keep things fast and clear.

This is where lightweight SQL libraries come in. They sit between writing raw JDBC code yourself (which is very tedious) and using a full ORM (which can be opaque). They give you helpful tools without hiding the database from you. I want to share five practical methods I use to make database work simpler, safer, and faster.

First, let's talk about a common headache: writing a SQL query as a string in your Java code. You make a typo, or you change a column name in the database, and you only find out when your application crashes at runtime. It's frustrating. We can do better by making the compiler check our SQL for us.

Some libraries let you build your queries using real Java methods and classes that represent your tables and columns. If you try to reference a column that doesn't exist, your code won't even compile. This immediate feedback is a game-changer. It turns database errors from runtime surprises into compile-time fixes.

Here’s what this looks like. Instead of typing "SELECT * FROM orders WHERE customer_id = ?", you work with objects.

// A query built with type-safe references
List<Order> orders = ctx.select()
    .from(ORDERS)
    .where(ORDERS.CUSTOMER_ID.eq(customerId)
        .and(ORDERS.STATUS.eq(OrderStatus.PENDING)))
    .orderBy(ORDERS.CREATED_AT.desc())
    .fetchInto(Order.class);

The beauty here is that ORDERS, CUSTOMER_ID, and STATUS are code generated from your actual database schema: ORDERS is a generated class, and its fields represent the real columns. Your IDE can autocomplete them. You can also build complex queries dynamically, adding conditions only when needed, and it's still all type-safe.

// Building a WHERE clause piece by piece
Condition condition = DSL.noCondition();
if (statusFilter != null) {
    condition = condition.and(ORDERS.STATUS.eq(statusFilter));
}
if (minAmount != null) {
    condition = condition.and(ORDERS.TOTAL.greaterThan(minAmount));
}

List<Order> results = ctx.selectFrom(ORDERS)
    .where(condition)
    .fetchInto(Order.class);

This approach keeps your SQL visible and under your control, but with the safety net of the Java compiler. I've found it drastically reduces simple bugs in database code.

The second technique is about reducing repetitive code. Basic JDBC involves a lot of boilerplate: getting a connection, creating a statement, binding parameters, iterating a result set, handling exceptions, and always, always closing resources. It's easy to make a mistake, like forgetting to close a connection, which can lead to resource leaks.

Modern libraries offer a more declarative style. You can define what you want to do—find an order, save a customer—and the library handles the routine steps for you. You focus on the what, and it handles much of the how.

One way is through annotated interfaces. You write an interface method and put a SQL annotation on it. The library implements the method at runtime.

// Define your data access as an interface
@RegisterBeanMapper(Order.class)
public interface OrderRepository {

    @SqlQuery("SELECT * FROM orders WHERE customer_id = :custId")
    List<Order> findForCustomer(@Bind("custId") String customerId);

    @SqlUpdate("UPDATE orders SET status = :status WHERE id = :id")
    void updateStatus(@Bind("id") String orderId, @Bind("status") String newStatus);
}

You then get an instance of this interface from the library and just call the methods. No try-catch-finally blocks in sight for these simple operations. For more complex logic, you can drop back into a more manual, fluent style within a controlled handle, which is like a supercharged JDBC connection wrapper.
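
Getting that instance takes only a couple of lines. Here's a minimal sketch using Jdbi's SQL Object plugin (the annotation and fluent APIs in this section are Jdbi's; the dataSource is assumed to come from a connection pool, which we'll cover shortly):

// Bootstrapping the repository interface (a sketch)
// (imports: org.jdbi.v3.core.Jdbi, org.jdbi.v3.sqlobject.SqlObjectPlugin)
Jdbi jdbi = Jdbi.create(dataSource);
jdbi.installPlugin(new SqlObjectPlugin());

// Jdbi implements the interface for us at runtime
OrderRepository orders = jdbi.onDemand(OrderRepository.class);
List<Order> customerOrders = orders.findForCustomer("cust-42"); // hypothetical id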

// Using a handle for a custom, complex query
List<Order> bigOrders = jdbi.withHandle(handle -> {
    return handle.createQuery("""
            SELECT o.*, c.name 
            FROM orders o
            JOIN customers c ON o.customer_id = c.id
            WHERE o.total > :minTotal
            """)
        .bind("minTotal", 1000.00)
        .mapToBean(Order.class)
        .list();
});

This fluent style keeps the SQL front and center but removes nearly all the clutter. The withHandle method also guarantees that the database connection is properly closed when you're done, which is a huge relief.

My third point is about a critical resource: database connections. Creating a connection is slow. You don't want your application to create a new one for every single query. Conversely, you can't just leave hundreds of connections open forever; your database will struggle.

You need a pool—a collection of reusable connections that your application can borrow and return. This isn't a library for writing SQL, but it's essential for performance. A good pool manages the lifecycle intelligently.

I configure my connection pool with a few key settings. I tell it the minimum and maximum number of connections to hold. I set a timeout so a request doesn't wait forever if all connections are busy. I also make sure it tests connections before giving them out, in case the database restarted.

// Setting up a robust connection pool
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost/mydb");
config.setUsername("app");
config.setPassword("secret");

config.setMaximumPoolSize(20); // Don't overload the DB
config.setMinimumIdle(5);      // Keep a few ready
config.setConnectionTimeout(30000); // 30 second wait max
config.setIdleTimeout(600000); // Close idle connections after 10 minutes

// Statement caching helps with repeated queries (property names are
// driver-specific; these are the PostgreSQL driver's, while the
// well-known cachePrepStmts pair belongs to MySQL Connector/J)
config.addDataSourceProperty("preparedStatementCacheQueries", "250");
config.addDataSourceProperty("preparedStatementCacheSizeMiB", "5");

HikariDataSource dataSource = new HikariDataSource(config);

Then, I inject this dataSource into my lightweight SQL libraries. They will draw connections from this pool. Proper pooling is invisible when it works well, but you notice immediately when it's missing—your app becomes slow and unstable under load.
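
Concretely, each library's entry point accepts the pooled DataSource directly. The fluent snippets in this article match Jdbi's API and the type-safe ones jOOQ's, so the handoff is a sketch like this:

// Both libraries draw connections from the pool via the DataSource (a sketch)
// (imports: org.jdbi.v3.core.Jdbi, org.jooq.DSLContext, org.jooq.SQLDialect, org.jooq.impl.DSL)
Jdbi jdbi = Jdbi.create(dataSource); // same bootstrap as in the Jdbi examples
DSLContext ctx = DSL.using(dataSource, SQLDialect.POSTGRES); // jOOQ-style entry point

From here on, every query borrows a connection from the pool and returns it automatically.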

The fourth technique is for when you need to change a lot of data at once. Imagine inserting 10,000 new order items. Doing it one at a time with 10,000 separate database network trips is incredibly slow. Batching is the answer.

Batching means grouping multiple SQL commands together and sending them to the database in one go. The database can process them much more efficiently. You can do this with plain JDBC, and the lightweight libraries make it even easier.

Here's the basic JDBC pattern. You prepare a statement, add many sets of parameters to it, and then execute the whole batch.

String sql = "INSERT INTO log_entries (message, level) VALUES (?, ?)";
try (PreparedStatement stmt = connection.prepareStatement(sql)) {

    int count = 0;
    for (LogEntry entry : entries) {
        stmt.setString(1, entry.getMessage());
        stmt.setString(2, entry.getLevel());
        stmt.addBatch(); // Add this set of parameters to the batch

        // Execute every 1000 entries to manage memory
        if (++count % 1000 == 0) {
            stmt.executeBatch();
        }
    }
    // Execute any remaining entries
    stmt.executeBatch();
}

The libraries can abstract this further. With a type-safe query library, you can often just give it a list of objects and tell it to insert them all. The library constructs and executes the optimal batch command for you. This can improve performance by orders of magnitude for bulk data operations.
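
For instance, jOOQ can batch-insert a whole list of generated records in a single call, and Jdbi exposes the same idea through its PreparedBatch API. A sketch of the latter, reusing the log_entries table and entries list from the JDBC example above:

// Batch insert through the library instead of raw JDBC (a sketch)
// (import: org.jdbi.v3.core.statement.PreparedBatch)
jdbi.useHandle(handle -> {
    PreparedBatch batch = handle.prepareBatch(
        "INSERT INTO log_entries (message, level) VALUES (:message, :level)");

    for (LogEntry entry : entries) {
        batch.bind("message", entry.getMessage())
             .bind("level", entry.getLevel())
             .add(); // Queue this row in the batch
    }
    batch.execute(); // Send the whole batch in one go
});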

Finally, we need to talk about getting data out of the database. A ResultSet is a low-level, row-by-row cursor. You almost always want to convert those rows into the objects your business logic understands—an Order, a User, a Product.

Doing this conversion manually in every method is more boilerplate. The solution is a mapper. A mapper's job is to take a row from a ResultSet and turn it into an object. Lightweight libraries come with simple, automatic mappers that work if your object fields match your column names.
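
With Jdbi, for example, you register the built-in reflective mapper once, and any query can then map rows whose column names match your bean's properties. A minimal sketch:

// Register the built-in bean mapper once (a sketch)
// (import: org.jdbi.v3.core.mapper.reflect.BeanMapper)
jdbi.registerRowMapper(BeanMapper.factory(Order.class));

List<Order> pending = jdbi.withHandle(handle ->
    handle.createQuery("SELECT * FROM orders WHERE status = :status")
          .bind("status", "PENDING")
          .mapTo(Order.class) // Resolved via the registered mapper
          .list());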

But sometimes you need more control. Perhaps your query joins tables, or you need to combine data in a special way. Writing your own mapper gives you that power.

// A custom mapper that handles a complex result
public class OrderWithCustomerMapper implements RowMapper<OrderSummary> {

    @Override
    public OrderSummary mapRow(ResultSet rs, int rowNum) throws SQLException {
        // Extract data using the column aliases from the query
        // (JDBC column labels don't include table prefixes like "o.")
        String orderId = rs.getString("order_id");
        BigDecimal total = rs.getBigDecimal("total");
        String customerName = rs.getString("customer_name");

        // Create a new object, perhaps a Record for immutability
        return new OrderSummary(orderId, customerName, total);
    }
}

// Using the custom mapper
List<OrderSummary> summaries = jdbcTemplate.query(
    "SELECT o.id AS order_id, o.total, c.name AS customer_name " +
    "FROM orders o JOIN customers c ON o.customer_id = c.id",
    new OrderWithCustomerMapper()
);

For very large results that don't fit in memory, you can use a streaming approach. Instead of mapping all rows to a list upfront, you process each row as it comes from the database. This is crucial for data export or reporting features.

// Stream results to avoid loading everything into memory
jdbi.useHandle(handle -> {
    handle.createQuery("SELECT * FROM very_large_table")
          .setFetchSize(5000) // Hint to stream in chunks
          .mapToMap() // Get each row as a Map
          .useStream(stream -> {
              stream.forEach(row -> {
                  // Process one row at a time
                  processRow(row);
              });
          });
});

These five ideas—type-safe SQL, declarative APIs, connection pooling, batching, and flexible mapping—form a solid foundation. They give you clarity and control. You write the SQL you need, you get performance where it counts, and you keep your code clean and maintainable. It’s a pragmatic approach that has served me well in building applications that are efficient and easy to understand.

📘 Check out my latest ebook for free on my channel!

Be sure to like, share, comment, and subscribe to the channel!


101 Books

101 Books is an AI-driven publishing company co-founded by author Aarav Joshi. By leveraging advanced AI technology, we keep our publishing costs incredibly low—some books are priced as low as $4—making quality knowledge accessible to everyone.

Check out our book Golang Clean Code available on Amazon.

Stay tuned for updates and exciting news. When shopping for books, search for Aarav Joshi to find more of our titles. Use the provided link to enjoy special discounts!

Our Creations

Be sure to check out our creations:

Investor Central | Investor Central Spanish | Investor Central German | Smart Living | Epochs & Echoes | Puzzling Mysteries | Hindutva | Elite Dev | Java Elite Dev | Golang Elite Dev | Python Elite Dev | JS Elite Dev | JS Schools


We are on Medium

Tech Koala Insights | Epochs & Echoes World | Investor Central Medium | Puzzling Mysteries Medium | Science & Epochs Medium | Modern Hindutva
