Anas Anjaria

How Slow Queries Lead to HikariCP Connection Timeouts

Originally published on Medium. Lightly revised for Dev.to


Cover Image Credits: Caspar Camille Rubin on Unsplash


If you've used HikariCP for connection pooling in Java or Scala apps, you've likely seen this dreaded exception:

java.sql.SQLTransientConnectionException: Connection is not available, request timed out after 30000ms.

This error can occur for several reasons, but in my case, it was triggered by long-running queries blocking the connection pool.

I’ve seen this pattern repeatedly in our production system, and I’d like to walk you through what’s happening when this error surfaces and how to address it.

Understanding the Exception: What’s Happening?

When this exception occurs, it’s essentially telling you that HikariCP couldn’t provide a connection from its pool within the given timeout window.

In many of the cases I’ve worked on, the root cause was queries taking too long to execute. A slow query holds its connection for the entire time it runs, so fewer connections are available for new incoming queries. Those queries queue up waiting for a free connection, and if none becomes available within the configured connectionTimeout, HikariCP gives up and throws the exception above. As a rough back-of-the-envelope example: a pool of 10 connections running queries that each take 5 seconds can serve at most 2 queries per second; any sustained load above that builds a queue that eventually hits the timeout.

How slow queries can lead to connection pool exhaustion and timeouts.

To make this easier to visualize, I’ve created a simple illustration that demonstrates the concept of query buildup due to busy connections.

⚠️ Disclaimer: This illustration is a simplified conceptual idea and doesn't reflect HikariCP's exact implementation.

Reproducing the Problem with a Minimal Example

To better understand this behavior, I created a minimal example using Scala and HikariCP. The goal was to simulate a scenario where a long-running query blocks the pool, causing other queries to time out.

Here’s the setup:

  • A single connection in the pool.
  • Two parallel queries run using separate threads.
  • One of them simulates a long-running task with pg_sleep(4), while the other is short but must wait for the connection.

import java.sql.SQLTransientConnectionException
import java.time.LocalDateTime
import java.util.concurrent.Executors

import com.google.common.util.concurrent.ThreadFactoryBuilder
import com.zaxxer.hikari.{HikariConfig, HikariDataSource}
import org.scalatest.BeforeAndAfterAll
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec

import scala.concurrent.duration.Duration
import scala.concurrent.{Await, ExecutionContext, Future}

class HikariCPDemoSpec extends AnyWordSpec with Matchers with BeforeAndAfterAll {

  private val databaseUrl = "jdbc:postgresql://localhost:5432/demo"
  private val driver = "org.postgresql.Driver"
  private val connectionTimeoutInMillis = 3000

  private val config = new HikariConfig()
  ...
  config.setMaximumPoolSize(1)
  config.setConnectionTimeout(connectionTimeoutInMillis)

  private val ds = new HikariDataSource(config)

  private val executorService = Executors.newFixedThreadPool(2, new ThreadFactoryBuilder().setNameFormat(s"app-thread-pool-%d").build())
  private implicit val executionContext: ExecutionContext = ExecutionContext.fromExecutor(executorService)

  private def executeQuery(sleepSeconds: Int, callback: () => Unit): Future[Unit] = Future {
    printWithThreadName("Executing query with sleep: " + sleepSeconds)
    // Blocks here until a pooled connection becomes free, or fails with
    // SQLTransientConnectionException once connectionTimeout is exceeded.
    val connection = ds.getConnection
    try {
      val stmt = connection.prepareStatement(s"SELECT pg_sleep($sleepSeconds)")
      stmt.execute()
      callback()
    } finally {
      connection.close()
    }
  }

  private def printWithThreadName(message: String): Unit = {
    val threadName = Thread.currentThread.getName
    println(s"${LocalDateTime.now} - [$threadName] - $message")
  }

  "Demo" should {
    "throw an exception when long running query block a connection" in {
      intercept[SQLTransientConnectionException] {
        // Both computations will start immediately in parallel, since we have two threads in the pool.
        // However, the long-running query will block the other query since we only have one connection in the pool.
        val longQuery = executeQuery(4, () => printWithThreadName("Long task completed."))
        val blockedQuery = executeQuery(1, () => printWithThreadName("Quick task completed."))

        val r = (for {
          _ <- longQuery
          _ <- blockedQuery
        } yield ())
        Await.result(r, Duration.Inf)
      }
    }
  }
}

The result? A SQLTransientConnectionException, just like we saw in production.

This snippet illustrates a real-world pattern:

Even a short query can fail if all connections are held by slow or blocking queries.

🧠 Lesson learned: It’s not just how many queries you run, but how long each one holds a connection that impacts your pool.

How to Mitigate the Issue (Based on What I’ve Learned)

Here’s what has worked for me to mitigate these connection timeout issues:

1. Optimize Your Queries

Focus on optimizing any slow-running queries first. Use database tools like EXPLAIN to identify inefficiencies, such as missing indexes or suboptimal joins, that are slowing things down.
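
For example, here’s a minimal sketch of how you might inspect a plan from inside the application, reusing the ds data source from the demo above (the orders table and the customer_id filter are hypothetical placeholders for your own slow query):

// Print PostgreSQL's execution plan for a suspect query.
// "orders" and "customer_id" are placeholders - substitute your own slow query here.
val connection = ds.getConnection
try {
  val stmt = connection.createStatement()
  val plan = stmt.executeQuery("EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42")
  while (plan.next()) {
    println(plan.getString(1)) // each row is one line of the plan
  }
} finally {
  connection.close()
}

Keep in mind that ANALYZE actually executes the query, so be careful with statements that modify data. A sequential scan over a large table in that output is usually a hint that an index is missing.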

2. Adjust HikariCP’s Pool Settings

While you’re optimizing your slow queries, you can temporarily adjust HikariCP’s connectionTimeout and related pool settings. This provides a buffer to support your customers and avoid service disruptions during peak load, acting as a short-term workaround until your queries are fully optimized.
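
As a rough illustration (the numbers below are arbitrary; the right values depend on your workload and on how many connections your database can actually handle):

import com.zaxxer.hikari.{HikariConfig, HikariDataSource}

val config = new HikariConfig()
config.setJdbcUrl("jdbc:postgresql://localhost:5432/demo")
// Let waiting callers queue a bit longer than the 30-second default while slow queries are being fixed.
config.setConnectionTimeout(60000) // milliseconds
// A somewhat larger pool can absorb short bursts - but only if the database has spare capacity.
config.setMaximumPoolSize(20)
val ds = new HikariDataSource(config)

Treat this strictly as a stopgap: a longer timeout or a bigger pool hides the symptom, it doesn’t fix the slow query.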

3. Use Query Timeouts

Implement query-level timeouts to prevent any single query from hogging connections for too long. This can help free up resources and reduce the chances of running into the timeout error.
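
With plain JDBC this is a one-line change on the statement. For instance, inside the executeQuery helper from the demo above (the 5-second limit is just an example value):

val stmt = connection.prepareStatement(s"SELECT pg_sleep($sleepSeconds)")
// Ask the driver to cancel the statement if it runs longer than 5 seconds,
// so the connection is returned to the pool instead of being held indefinitely.
stmt.setQueryTimeout(5)
stmt.execute()

If the limit is exceeded, the driver throws an SQLTimeoutException, which your application can catch and handle explicitly instead of letting other requests pile up behind the pool.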

Conclusion

Based on my experience, this error frequently surfaces when slow queries overwhelm the available connections in the pool. By optimizing your queries, tuning the connection pool settings, and implementing safeguards like query timeouts, you can significantly reduce the frequency of these timeouts.

Have you encountered similar issues with HikariCP? How did you solve them? Feel free to share your experience in the comments below.


📘 I write about PostgreSQL, DevOps, backend engineering, and real-world performance tuning.

🔗 Find more of my work, connect on LinkedIn, or explore upcoming content: all-in-one
