Let’s talk about what happens when your application needs to talk to a database. Every chat, every request for data, starts with a connection. If you make a new connection every single time, it’s like hanging up the phone after every sentence and dialing the number again. It’s slow, it wastes time, and it puts unnecessary strain on your database.
That’s where the idea of a connection pool comes in. Think of it as a team of pre-established phone lines kept open and ready. When your code needs to query the database, it simply picks up an available line. When the call is done, the line goes back on the rack, still open and ready for the next conversation. This simple shift is often the single biggest improvement you can make to your database performance.
In Go, while the standard database/sql package provides a basic pool, we can build something smarter. We can add features like statement caching, health checks, and detailed monitoring. This is what we’ll build together.
We start by defining the structure of our pool. It needs to hold connections, manage their lifecycle, and keep track of how it's performing.
type DBPool struct {
	mu          sync.RWMutex
	connections chan *DBConnection
	maxConns    int
	idleTimeout time.Duration
	queryCache  *QueryCache
	stats       PoolStats
}
The connections channel acts as our ready-to-use stack of database connections. Using a channel here gives us a thread-safe way to hand out and receive connections. The queryCache will store prepared SQL statements so we don't have to re-prepare them every time. PoolStats is a simple counter to help us understand our performance.
A single connection in our pool is more than just the raw database driver object. We wrap it to add useful metadata.
type DBConnection struct {
	db            *sql.DB
	lastUsed      time.Time
	activeQueries int32
	healthChecked bool
}
lastUsed lets us know if a connection has been sitting idle for too long. activeQueries could be used for advanced load balancing. This wrapper gives us the hooks we need for intelligent management.
Creating the pool involves setting up the channel and creating an initial set of connections so the first queries aren’t slowed down by setup time.
func NewDBPool(dsn string, maxConns int, idleTimeout time.Duration) (*DBPool, error) {
	pool := &DBPool{
		connections: make(chan *DBConnection, maxConns),
		maxConns:    maxConns,
		idleTimeout: idleTimeout,
		queryCache: &QueryCache{
			stmts:   make(map[string]*sql.Stmt),
			maxSize: 100,
		},
	}
	// Create half the max connections upfront.
	for i := 0; i < maxConns/2; i++ {
		conn, err := pool.createConnection(dsn)
		if err != nil {
			return nil, err
		}
		pool.connections <- conn
	}
	go pool.healthCheck()
	go pool.monitorStats()
	return pool, nil
}
We start background goroutines for maintenance as soon as the pool is born: one checks the health of idle connections, and the other periodically prints stats so we can see what’s happening.
When your code needs to run a query, it first asks the pool for a connection. This GetConnection method is the gatekeeper.
func (p *DBPool) GetConnection(ctx context.Context) (*DBConnection, error) {
	select {
	case conn := <-p.connections:
		if time.Since(conn.lastUsed) > p.idleTimeout {
			if err := p.healthCheckConnection(conn); err != nil {
				atomic.AddUint64(&p.stats.queryErrors, 1)
				conn.db.Close() // discard the stale, broken connection
				return p.createNewConnection(ctx)
			}
		}
		atomic.AddUint64(&p.stats.connectionsReused, 1)
		conn.lastUsed = time.Now()
		return conn, nil
	default:
		// No idle connection available: create one on the fly.
		return p.createNewConnection(ctx)
	}
}
It first tries to grab an existing connection from the channel. If one is available instantly, it checks if it’s been idle too long. If it has, a quick health check (a simple PING) ensures it’s still alive. If the channel is empty, the default case triggers, and we create a brand new connection on the fly. This prevents your code from waiting if all connections are busy.
After the query is done, you must give the connection back. ReleaseConnection handles this.
func (p *DBPool) ReleaseConnection(conn *DBConnection) {
	conn.lastUsed = time.Now()
	select {
	case p.connections <- conn:
		// Successfully returned to the pool.
	default:
		// The pool is full. Close this surplus connection.
		conn.db.Close()
		// Adding ^uint64(0) (i.e. -1) decrements the unsigned counter.
		atomic.AddUint64(&p.stats.connectionsCreated, ^uint64(0))
	}
}
The select with a default is clever here. If the pool’s channel is full (we’ve hit maxConns), we simply close the connection instead of trying to push it in. This keeps the pool size stable and prevents a connection leak.
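Both GetConnection and ReleaseConnection boil down to Go's non-blocking channel operations. A minimal, database-free sketch of the same pattern, using ints as stand-in "connections":

```go
package main

import "fmt"

// tryAcquire takes an idle item if one exists; -1 stands in for
// "no idle connection, create a fresh one".
func tryAcquire(pool chan int) int {
	select {
	case c := <-pool:
		return c
	default:
		return -1
	}
}

// tryRelease returns the item to the pool unless it is already full,
// in which case the caller would close the surplus connection.
func tryRelease(pool chan int, c int) bool {
	select {
	case pool <- c:
		return true
	default:
		return false
	}
}

func main() {
	pool := make(chan int, 2) // the capacity plays the role of maxConns
	fmt.Println(tryAcquire(pool))    // -1: empty pool, caller creates one
	fmt.Println(tryRelease(pool, 1)) // true
	fmt.Println(tryRelease(pool, 2)) // true
	fmt.Println(tryRelease(pool, 3)) // false: full, caller closes instead
	fmt.Println(tryAcquire(pool))    // 1: reused from the pool
}
```

Because both operations have a default case, neither the acquiring nor the releasing goroutine ever blocks, which is exactly the behavior the pool relies on.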
Now, let's run a query. The naive way is to prepare and execute a statement every single time. This is inefficient. Databases work hard to parse your SQL, validate it, and create an optimal execution plan. Doing this repeatedly is wasteful.
We can cache those prepared statements. Our ExecuteQuery method handles this.
func (p *DBPool) ExecuteQuery(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error) {
	start := time.Now()
	stmt, err := p.getCachedStatement(ctx, query)
	if err != nil {
		return nil, err
	}
	rows, err := stmt.QueryContext(ctx, args...)
	if err != nil {
		atomic.AddUint64(&p.stats.queryErrors, 1)
		return nil, err
	}
	atomic.AddUint64(&p.stats.queriesExecuted, 1)
	p.recordQueryLatency(time.Since(start))
	return rows, nil
}
It gets a prepared statement from our cache, then executes it with the given arguments. Notice we also record the start time. This lets us track query latency, a critical metric for finding slow spots.
The real magic is inside getCachedStatement.
func (p *DBPool) getCachedStatement(ctx context.Context, query string) (*sql.Stmt, error) {
	p.queryCache.mu.RLock()
	stmt, exists := p.queryCache.stmts[query]
	p.queryCache.mu.RUnlock()
	if exists {
		atomic.AddUint64(&p.stats.cacheHits, 1)
		return stmt, nil
	}
	atomic.AddUint64(&p.stats.cacheMisses, 1)
	// Miss: prepare the statement on a pooled connection and cache it.
	conn, err := p.GetConnection(ctx)
	if err != nil {
		return nil, err
	}
	defer p.ReleaseConnection(conn)
	stmt, err = conn.db.PrepareContext(ctx, query)
	if err != nil {
		return nil, err
	}
	p.queryCache.mu.Lock()
	p.queryCache.stmts[query] = stmt
	p.queryCache.mu.Unlock()
	return stmt, nil
}
We use a read lock first to check if the statement is already cached. A cache hit is fast and cheap. A cache miss means we need to prepare the statement fresh, which we then store in the map for next time. This simple cache can improve the speed of repeated queries dramatically.
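One subtlety of the read-lock-then-write-lock sequence: two goroutines can miss at the same moment and both prepare the same statement. The standard fix is to re-check the map after taking the write lock. Here is the pattern in isolation, with plain strings standing in for *sql.Stmt:

```go
package main

import (
	"fmt"
	"sync"
)

type cache struct {
	mu    sync.RWMutex
	items map[string]string
	preps int // how many times we actually "prepared" (for illustration)
}

func (c *cache) get(key string, prepare func() string) string {
	// Fast path: shared read lock.
	c.mu.RLock()
	v, ok := c.items[key]
	c.mu.RUnlock()
	if ok {
		return v
	}
	// Slow path: exclusive lock, then re-check in case another
	// goroutine filled the entry while we waited.
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.items[key]; ok {
		return v
	}
	c.preps++
	v = prepare()
	c.items[key] = v
	return v
}

func main() {
	c := &cache{items: make(map[string]string)}
	q := "SELECT id FROM users"
	c.get(q, func() string { return "stmt:" + q })
	c.get(q, func() string { return "stmt:" + q })
	fmt.Println(c.preps) // prepared only once despite two lookups
}
```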
Applications often need to perform operations in a transaction—a group of queries that must all succeed or all fail together. Our ExecuteTransaction method makes this safe and straightforward.
func (p *DBPool) ExecuteTransaction(ctx context.Context, txFunc func(*sql.Tx) error) error {
	conn, err := p.GetConnection(ctx)
	if err != nil {
		return err
	}
	defer p.ReleaseConnection(conn)
	tx, err := conn.db.BeginTx(ctx, &sql.TxOptions{
		Isolation: sql.LevelReadCommitted,
		ReadOnly:  false,
	})
	if err != nil {
		return err
	}
	defer func() {
		// Use r, not p, to avoid shadowing the pool receiver.
		if r := recover(); r != nil {
			tx.Rollback()
			panic(r)
		}
	}()
	err = txFunc(tx)
	if err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}
It gets a connection, starts a transaction, and then calls your custom function txFunc, passing the transaction object. If your function returns an error, the transaction is rolled back. If it panics, our defer function catches it and rolls back before re-throwing the panic. If all goes well, it commits. This pattern ensures you never forget to roll back on an error.
Sometimes you need to insert thousands of rows. Doing it one by one with individual INSERT statements is terribly slow. Batch inserts are the answer.
func (p *DBPool) BatchInsert(ctx context.Context, table string, columns []string, data [][]interface{}) error {
	// ... (build query prefix and parameter lists)
	for i, row := range data {
		// ... (append row's values to params and its placeholder group to placeholders)
		if (i+1)%1000 == 0 || i == len(data)-1 {
			batchQuery := query + strings.Join(placeholders, ",")
			rows, err := p.ExecuteQuery(ctx, batchQuery, params...)
			if err != nil {
				return err
			}
			rows.Close() // an INSERT yields no rows, but Close frees the connection
			params = params[:0]
			placeholders = placeholders[:0]
		}
	}
	return nil
}
This builds a single SQL query with multiple value sets, like INSERT INTO table (col1, col2) VALUES ($1, $2), ($3, $4), .... We execute in chunks of 1000 rows to avoid creating a gigantic SQL string or overwhelming the database with one massive operation. This method can be hundreds of times faster than single-row inserts.
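The elided query-building step is pure string work. One way to sketch it for PostgreSQL-style placeholders, where the numbering must continue across rows, is (buildInsert is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"strings"
)

// buildInsert produces an INSERT with one numbered placeholder group
// per row, e.g. INSERT INTO t (a,b) VALUES ($1,$2),($3,$4).
func buildInsert(table string, columns []string, rows int) string {
	groups := make([]string, 0, rows)
	n := 1
	for r := 0; r < rows; r++ {
		ph := make([]string, len(columns))
		for c := range columns {
			ph[c] = fmt.Sprintf("$%d", n)
			n++
		}
		groups = append(groups, "("+strings.Join(ph, ",")+")")
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
		table, strings.Join(columns, ","), strings.Join(groups, ","))
}

func main() {
	fmt.Println(buildInsert("users", []string{"id", "name"}, 2))
	// INSERT INTO users (id,name) VALUES ($1,$2),($3,$4)
}
```

MySQL uses `?` instead of `$n`, so the numbering logic would simplify there; the chunking at 1000 rows also keeps you under database limits on parameters per statement.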
A connection pool can’t be a "set it and forget it" component. Connections can go bad due to network issues, database restarts, or timeouts. We need a health check.
func (p *DBPool) healthCheck() {
	ticker := time.NewTicker(time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		// Drain the channel of currently idle connections.
		connections := make([]*DBConnection, 0, p.maxConns)
	drain:
		for {
			select {
			case conn := <-p.connections:
				connections = append(connections, conn)
			default:
				break drain // a bare break would only exit the select
			}
		}
		for _, conn := range connections {
			if err := p.healthCheckConnection(conn); err != nil {
				conn.db.Close() // discard the broken connection
			} else {
				p.connections <- conn // return healthy connection to pool
			}
		}
	}
}
Every minute, this goroutine drains the pool's channel, pings each connection, and only puts the healthy ones back. Broken connections are closed and removed. This ensures your application doesn't suddenly try to use a dead connection, which would cause a query to fail for no apparent reason.
Finally, we need to see what’s happening. The monitorStats goroutine gives us a live view.
func (p *DBPool) monitorStats() {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		stats := p.GetStats()
		fmt.Printf("Idle connections: %d/%d | Queries: %d/s | Cache: %.1f%% | Errors: %d\n",
			len(p.connections), p.maxConns,
			stats.QueriesPerSecond,
			float64(stats.CacheHits)/float64(stats.CacheHits+stats.CacheMisses)*100,
			stats.QueryErrors)
	}
}
Seeing a low cache hit percentage tells you your queries are too varied and caching isn’t helping. A high error count points to database or network problems. This feedback is essential for running a reliable service.
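One practical detail when computing that cache percentage: on a fresh pool, hits and misses are both zero, and the raw formula divides by zero and prints NaN. Wherever the rate is derived, a small guard is worth having; a sketch:

```go
package main

import "fmt"

// hitRate returns the cache hit percentage, guarding the case
// where no queries have touched the cache yet.
func hitRate(hits, misses uint64) float64 {
	total := hits + misses
	if total == 0 {
		return 0
	}
	return float64(hits) / float64(total) * 100
}

func main() {
	fmt.Printf("%.1f%%\n", hitRate(90, 10)) // 90.0%
	fmt.Printf("%.1f%%\n", hitRate(0, 0))   // 0.0%, not NaN
}
```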
Let's see it all work together in a simple main function.
func main() {
	dsn := "host=localhost port=5432 user=postgres password=secret dbname=test"
	pool, err := NewDBPool(dsn, 20, 5*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	defer func() {
		// Simplified shutdown: a production pool would stop the
		// background goroutines first so nothing sends on a closed channel.
		close(pool.connections)
		for conn := range pool.connections {
			conn.db.Close()
		}
	}()
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	rows, err := pool.ExecuteQuery(ctx, "SELECT id, name FROM users WHERE active = $1", true)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id int
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("User: %d - %s\n", id, name)
	}
	stats := pool.GetStats()
	fmt.Printf("Total queries: %d, Cache efficiency: %.1f%%\n",
		stats.QueriesExecuted,
		float64(stats.CacheHits)/float64(stats.CacheHits+stats.CacheMisses)*100)
}
This example shows the entire lifecycle: creating the pool, executing a query with a context timeout, processing results, and finally checking the statistics. The deferred function closes the pooled connections when the program ends; a production pool would also stop the background goroutines before closing the channel.
Building a system like this teaches you about resource management, concurrency, and performance optimization. You move from thinking about single database calls to managing a whole ecosystem of interactions. The goal is to make the database a fast, reliable partner in your application, not a bottleneck.
The techniques shown here—pooling, caching, batching, and monitoring—are not unique to Go. They are universal concepts for building efficient data-driven applications. Implementing them yourself gives you fine-grained control and a much deeper understanding of what’s happening between your code and your database. This understanding is what lets you build systems that are not just functional, but fast and resilient under real-world load.