How to monitor your PostgreSQL database with metrics collection, log aggregation, and beautiful dashboards
Introduction
If you're running PostgreSQL in production, you need to know what's happening under the hood. Slow queries, lock contention, vacuum issues – these can quietly kill your application's performance. But setting up proper observability doesn't have to be complicated.
In this post, I'll show you how to build a complete PostgreSQL observability stack using Docker Compose, Grafana Alloy, Prometheus, Loki, and Grafana. All running locally, all open-source.
The Architecture
Here's what we're building:
Components:
- PostgreSQL - Our database
- Grafana Alloy - Collects metrics from PostgreSQL and ships logs to Loki
- Prometheus - Stores time-series metrics
- Loki - Log aggregation (like Prometheus, but for logs)
- Grafana - Dashboards for everything
Prerequisites
You need:
- Docker
- Docker Compose

That's it.
The Code
Let's look at the key files. First, our Docker Compose setup:
```yaml
services:
  postgres:
    image: postgres:18.1
    volumes:
      - ./postgres.conf:/etc/postgresql/postgresql.conf
      - ./init-exporter.sh:/docker-entrypoint-initdb.d/init-exporter.sh
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_EXPORTER_PASSWORD=${POSTGRES_EXPORTER_PASSWORD}
    command: ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

  grafana-alloy:
    image: grafana/alloy:v1.11.3
    ports:
      - "12345:12345"
    volumes:
      - "./config.alloy:/etc/alloy/config.alloy:ro"
      - "./secrets/postgres_db_url.txt:/var/secrets/postgres_db_url.txt:ro"

  prometheus:
    image: prom/prometheus:v3.9.1
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:11.6
    ports:
      - "3000:3000"
```
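The compose file reads credentials from a `.env` file and reads the exporter's connection string from a secrets file, neither of which is shown above. Here is a minimal sketch of what they might contain; the values, database name, and the `sslmode=disable` parameter are assumptions for a local setup:

```bash
# .env (example values, change the passwords)
POSTGRES_USER=app
POSTGRES_PASSWORD=changeme
POSTGRES_DB=app
POSTGRES_EXPORTER_PASSWORD=changeme-too

# secrets/postgres_db_url.txt would then hold a single line like:
# postgresql://postgres_exporter:changeme-too@postgres:5432/app?sslmode=disable
```

The hostname `postgres` works because Docker Compose puts all services on a shared network where the service name resolves.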
The init script enables pg_stat_statements and creates the exporter user:
```bash
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
    CREATE ROLE postgres_exporter WITH LOGIN PASSWORD '$POSTGRES_EXPORTER_PASSWORD';
    GRANT pg_monitor TO postgres_exporter;
    GRANT SELECT ON pg_stat_statements TO postgres_exporter;
EOSQL
```
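One gotcha: `CREATE EXTENSION` alone isn't enough, because `pg_stat_statements` only collects data when its library is preloaded at server start. That's what the mounted `postgres.conf` is for. The full file isn't shown here, but it presumably includes something like this assumed excerpt:

```ini
# postgres.conf (excerpt) -- pg_stat_statements must be preloaded,
# or querying the view fails with "must be loaded via shared_preload_libraries"
shared_preload_libraries = 'pg_stat_statements'

# optional: adds I/O timing data to per-query statistics
track_io_timing = on
```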
The Alloy configuration collects PostgreSQL metrics:
```alloy
prometheus.exporter.postgres "postgres_db" {
  data_source_names = [local.file.postgres_db_url.content]
  enabled_collectors = [
    "database",
    "locks",
    "stat_bgwriter",
    "stat_statements", // Query performance metrics
    "stat_activity_autovacuum",
    "long_running_transactions",
  ]
}
```
```alloy
prometheus.scrape "postgres_db" {
  // scrape the exporter component defined above
  targets         = prometheus.exporter.postgres.postgres_db.targets
  job_name        = "postgres_db_metrics"
  scrape_interval = "15s"
  forward_to      = [prometheus.remote_write.send_to_prometheus.receiver]
}
```
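The pipeline above references two components that aren't shown: `local.file.postgres_db_url` and `prometheus.remote_write.send_to_prometheus`. A minimal sketch of what they could look like, assuming Prometheus is reachable at its compose service name and was started with `--web.enable-remote-write-receiver`:

```alloy
// Read the exporter's connection string from the mounted secret file
local.file "postgres_db_url" {
  filename  = "/var/secrets/postgres_db_url.txt"
  is_secret = true
}

// Push scraped samples to Prometheus over remote write
prometheus.remote_write "send_to_prometheus" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}
```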
Getting Started
- Clone the repository
- Copy `.env.example` to `.env` and set your passwords
- Run `docker-compose up -d`
- Access Grafana at `http://localhost:3000` (admin/admin)

What Metrics Do You Get?
Once running, you can query things like:
```promql
# Top 10 most called queries
topk(10, rate(pg_stat_statements_calls_total[5m]))

# Queries consuming the most total execution time
topk(10, rate(pg_stat_statements_total_time_seconds_total[5m]))

# Database connections over time
pg_stat_database_numbackends
```
You also get lock contention metrics, vacuum progress, buffer statistics - everything you need to debug performance issues.
The Dashboard
In Grafana, create a new dashboard and add panels using your PostgreSQL metrics. You'll see query performance, database statistics, and can correlate with logs from Loki.
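Rather than clicking through the UI each time, you can also provision the data sources from a file mounted into the Grafana container under `/etc/grafana/provisioning/datasources/`. A sketch, where the filename, service URLs, and the Loki port are assumptions:

```yaml
# datasources.yaml -- auto-provision Prometheus and Loki in Grafana
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    access: proxy
    isDefault: true
  - name: Loki
    type: loki
    url: http://loki:3100
    access: proxy
```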
Why This Setup?
A few reasons this works well:
- Alloy instead of postgres_exporter - Grafana Alloy is Grafana's modern collector, and it handles both metrics and log collection in one agent instead of two separate processes.
- pg_stat_statements - This extension is gold for understanding query performance. Enable it, and you can see exactly which queries are slowest.
- Loki for logs - Instead of grepping through files, you get structured logs in Grafana that you can filter and query.
- Everything in Docker - No system packages to install. Spin it up, tear it down, reproduce issues.

What's Next?
This is a practice project, so here's where you could extend it:
- Add alerts for slow queries or long-running transactions
- Set up retention policies in Prometheus
- Add more PostgreSQL exporters for connection pooling metrics
- Deploy to Kubernetes with proper secrets management

Conclusion
Observability doesn't have to be complex. With Docker and open-source tools, you can have a production-grade monitoring setup running locally in minutes.

The code is on GitHub - clone it, play with it, break it. That's the best way to learn.
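The alerting extension mentioned above can start as a plain Prometheus rule file. A sketch using a metric from this post; the threshold and filename are assumptions, and you'd load the file via `rule_files` in `prometheus.yml`:

```yaml
# alerts.yaml -- example Prometheus alerting rule
groups:
  - name: postgres
    rules:
      - alert: TooManyConnections
        # total backends across all databases; tune the threshold
        # to your max_connections setting
        expr: sum(pg_stat_database_numbackends) > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PostgreSQL connection count is high"
```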
GitHub link: https://github.com/Shawmeer/postgres-observability-stack
Check more blogs at: https://khanalsamir.com
Have questions or ran into issues? Drop a comment below.
