Why Your TimescaleDB Queries Are Slow (Even With Perfect Indexes)
You have tuned your indexes, verified your execution plan, and confirmed that PostgreSQL is reading exactly the rows it should. Yet your query still takes half a second. EXPLAIN ANALYZE reports an execution time of just 2ms. Where did the other 498ms go?
The answer is hiding in an easily overlooked line near the bottom of the EXPLAIN ANALYZE output: Planning Time. On TimescaleDB hypertables with many chunks, planning time can dwarf execution time by orders of magnitude.
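For illustration, the tail of an EXPLAIN ANALYZE run against a many-chunk hypertable looks something like this (plan lines trimmed, timings invented to match the scenario above):

```
 Limit  (cost=... rows=10) (actual time=2.031..2.034 rows=10 loops=1)
   ...
 Planning Time: 498.120 ms
 Execution Time: 2.104 ms
```

Most people read the plan tree and the Execution Time and stop there. The Planning Time line is where queries like this lose their half second.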
The Mechanics of Chunk Constraint Exclusion
A TimescaleDB hypertable is not a single table. It is a virtual layer on top of potentially thousands of real PostgreSQL tables called chunks. Each chunk covers a specific time range, defined by a CHECK constraint.
When you issue a query, the PostgreSQL planner must evaluate every chunk's CHECK constraint to figure out which chunks could possibly contain matching rows. This process is called constraint exclusion. For a hypertable with 100 chunks, 100 constraints are evaluated. For 4,000 chunks, 4,000 constraints are evaluated. The cost is linear.
Each individual constraint check is cheap -- a few microseconds. But multiply that across thousands of chunks and microseconds become hundreds of milliseconds.
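You can see the constraints the planner has to evaluate by inspecting any chunk directly. The chunk name below is illustrative -- yours will differ -- and the result shape is typical rather than exact:

```sql
-- Show the CHECK constraint that defines one chunk's time range
SELECT conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE conrelid = '_timescaledb_internal._hyper_1_1_chunk'::regclass
  AND contype = 'c';
-- Typical result (shape, not exact values):
-- constraint_1 | CHECK (event_timestamp_utc >= '2024-01-01 00:00:00+00'
--                   AND event_timestamp_utc <  '2024-01-02 00:00:00+00')
```

The planner compares your WHERE clause against one such expression per chunk, every time it plans the query.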
-- How many chunks does each hypertable have?
SELECT
hypertable_name,
count(*) AS total_chunk_count
FROM timescaledb_information.chunks
GROUP BY hypertable_name
ORDER BY total_chunk_count DESC;
If any of these counts is in the thousands, you have a planning problem, whether you have noticed it yet or not.
Benchmark Results
To quantify the impact, the same dataset was loaded into two hypertables with different chunk intervals. One used 15-minute chunks (producing ~2,880 chunks over 30 days) and the other used 1-day chunks (producing ~30 chunks). The same aggregation query with a one-hour time predicate was run against both:
EXPLAIN ANALYZE
SELECT
device_id,
AVG(metric_value) AS avg_value,
COUNT(*) AS event_count
FROM experiment_planning_overhead
WHERE event_timestamp_utc >= now() - INTERVAL '1 hour'
GROUP BY device_id
ORDER BY avg_value DESC
LIMIT 10;
| Configuration | Chunks | Planning Time | Execution Time |
|---|---|---|---|
| 15-min interval | ~2,880 | 50-200 ms | 2 ms |
| 1-day interval | ~30 | 1-5 ms | 2 ms |
Execution time was identical -- the same rows were scanned in both cases. The planner simply spent 10-100x longer deciding which chunks to skip.
At the extreme end with 4,322 chunks, planning time reached 443ms for a query that executed in 2ms.
What Happens Without a Time Predicate
Remove the WHERE clause on the time column and constraint exclusion cannot eliminate anything. The planner evaluates all chunks, generates an Append node for all of them, and the executor scans every single one:
-- Without time predicate: full table scan across all chunks
SELECT device_id, AVG(metric_value) AS avg_value
FROM experiment_planning_overhead
GROUP BY device_id
LIMIT 10;
Now both planning time and execution time explode. This is the worst case: thousands of index scans, one per chunk, each opening and closing the chunk's heap file.
Why Dashboards Suffer Most
A monitoring or analytics dashboard typically loads 10-20 panels at once. Each panel fires an independent query. If every query pays 400ms in planning overhead, the dashboard takes 4-8 seconds before any data is read.
Users experience this as sluggishness that does not improve with better hardware, more memory, or additional indexes. The bottleneck is purely CPU-bound constraint evaluation in the planner, and throwing resources at it does not help.
Connection pooling can make it worse. Prepared statements are per-session, so if your pool hands out fresh sessions or discards session state on checkout, each query pays the full planning cost again.
Four Fixes
1. Always include time predicates
This is the single highest-impact change. Every query against a hypertable should include a WHERE clause on the partitioning column:
-- Misses constraint exclusion entirely
SELECT * FROM sensor_readings WHERE device_id = 42;
-- With a 1-day chunk interval, limits planning to ~7 chunks
SELECT * FROM sensor_readings
WHERE device_id = 42
AND recorded_at >= now() - INTERVAL '7 days';
Even if your application logic does not need the time bound, add one. It transforms planning from an O(all_chunks) operation to O(matching_chunks).
2. Cap chunk count with retention policies
The most reliable way to keep chunk counts bounded:
SELECT add_retention_policy('sensor_readings', drop_after => INTERVAL '30 days');
With a 1-day chunk interval and 30-day retention, chunk count stays around 30. Without retention, it grows indefinitely.
3. Increase chunk_time_interval
If your chunks are too small, increase the interval for future data:
SELECT set_chunk_time_interval('sensor_readings', INTERVAL '1 day');
Existing chunks are unaffected and will age out through retention. To consolidate historical data immediately, create a new hypertable with the correct interval and migrate the data.
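A minimal migration sketch, reusing the sensor_readings table from earlier (assumes you can pause or dual-write during the swap, and that the table fits an offline copy):

```sql
-- New table with the same columns, indexes, and defaults
CREATE TABLE sensor_readings_new (LIKE sensor_readings INCLUDING ALL);

-- Make it a hypertable with the coarser chunk interval
SELECT create_hypertable('sensor_readings_new', 'recorded_at',
                         chunk_time_interval => INTERVAL '1 day');

-- Copy historical data, then swap names atomically
INSERT INTO sensor_readings_new SELECT * FROM sensor_readings;

BEGIN;
ALTER TABLE sensor_readings RENAME TO sensor_readings_old;
ALTER TABLE sensor_readings_new RENAME TO sensor_readings;
COMMIT;
```

For large tables you would copy in time-bounded batches rather than one INSERT, but the shape is the same.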
4. Use prepared statements
Prepared statements let PostgreSQL cache a plan. By default the planner builds a custom plan for the first five executions, then may switch to a cached generic plan -- after which subsequent calls skip planning entirely:
PREPARE device_query(int, interval) AS
SELECT AVG(metric_value)
FROM sensor_readings
WHERE device_id = $1
AND recorded_at >= now() - $2;
EXECUTE device_query(42, '7 days');
This eliminates planning overhead for repeated queries -- ideal for application code and dashboards -- but does not help ad-hoc queries.
The Rule of Thumb
Every WHERE clause on a hypertable should include a time predicate. Not because it is a nice optimization, but because without it, the planner does work proportional to your entire history on every single query. Bound the time range, cap the chunk count with retention, and planning overhead stays in single-digit milliseconds where it belongs.