Real-time dashboards, financial summaries, and trend graphs all need fast aggregation. But when you're dealing with millions of rows, even a simple SUM or COUNT across a time range can grind your system to a halt.
If you're using PostgreSQL and struggling with this, I've got good news:
You don't need to move to Redshift, BigQuery, or TimescaleDB.
You can roll up and query large time-based data quickly, using PostgreSQL alone.
This blog breaks down how we built cumulative rollups and time-bucketed summaries at scale, without sacrificing flexibility or performance.
The Problem
Imagine a table like this:
transactions (
  id SERIAL,
  user_id UUID,
  amount NUMERIC,
  amount_type INT,    -- 0 = bet, 1 = win (used by the rollup job below)
  action_type TEXT,
  created_at TIMESTAMP,
  ...
)
You want to build a dashboard that shows:
- Daily revenue totals
- 30-minute betting volumes
- Weekly unique users
But you can't scan millions of rows every time a user opens a report.
- Real-time aggregate queries become slow and unpredictable
- Caching is hard when the data is dynamic
- GROUP BY with date truncation doesn't scale well
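To make that last point concrete, here is the kind of naive query we mean; a sketch against the table above (the 30-day window is my own choice):
-- Scans every matching raw row on each dashboard load
SELECT
  date_trunc('day', created_at) AS day,
  SUM(amount) AS daily_revenue
FROM transactions
WHERE created_at >= now() - interval '30 days'
GROUP BY day
ORDER BY day;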
The Solution: Rollups with Time Buckets
The fix? Pre-compute your aggregates into time buckets.
Step 1: Create a Rollup Table
Instead of querying raw data, store pre-aggregated values:
CREATE TABLE transaction_rollups (
bucket_start TIMESTAMP,
user_id UUID,
total_bet NUMERIC,
total_win NUMERIC,
action_type TEXT,
PRIMARY KEY (bucket_start, user_id, action_type)
);
Each row represents one time bucket (e.g., 30 minutes or 1 hour) for a given user and action type.
Step 2: Run Periodic Rollup Jobs (Every 5-15 Min)
Use a cron job or background queue (like Bull in Node.js) to run:
INSERT INTO transaction_rollups (bucket_start, user_id, total_bet, total_win, action_type)
SELECT
  -- date_bin (PostgreSQL 14+) floors timestamps to arbitrary intervals;
  -- date_trunc only accepts fixed units like 'hour', not '30 minutes'.
  -- (Before v14, one workaround: to_timestamp(floor(extract(epoch from created_at) / 1800) * 1800).)
  date_bin('30 minutes', created_at, TIMESTAMP '2000-01-01') AS bucket,
  user_id,
  SUM(CASE WHEN amount_type = 0 THEN amount ELSE 0 END) AS total_bet,
  SUM(CASE WHEN amount_type = 1 THEN amount ELSE 0 END) AS total_win,
  action_type
FROM transactions
-- Align the window start to a bucket boundary so the upsert never
-- overwrites a complete bucket with a partial recount.
WHERE created_at >= date_bin('30 minutes', (now() - interval '1 hour')::timestamp, TIMESTAMP '2000-01-01')
GROUP BY bucket, user_id, action_type
ON CONFLICT (bucket_start, user_id, action_type)
DO UPDATE SET
  total_bet = EXCLUDED.total_bet,
  total_win = EXCLUDED.total_win;
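Since the job filters on created_at, the raw table benefits from an index on that column; a minimal sketch (the index name is my own):
CREATE INDEX IF NOT EXISTS idx_transactions_created_at
  ON transactions (created_at);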
You can truncate older raw data later or use rollups for long-range analytics.
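If you'd rather schedule inside the database than in Node.js, the pg_cron extension can run the job for you; a sketch, assuming pg_cron is installed (the job name is my own, and the $$...$$ body stands in for the full INSERT statement above):
-- Requires: CREATE EXTENSION pg_cron;
SELECT cron.schedule(
  'transaction-rollups',  -- arbitrary job name
  '*/10 * * * *',         -- every 10 minutes
  $$ INSERT INTO transaction_rollups ... $$  -- paste the rollup statement here
);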
Step 3: Querying Is Now Fast & Predictable
Need the last 24 hours of betting volume, grouped by 30-minute intervals?
SELECT
bucket_start,
  SUM(total_bet) AS sum_bet
FROM transaction_rollups
WHERE action_type = 'bet'
AND bucket_start >= now() - interval '1 day'
GROUP BY bucket_start
ORDER BY bucket_start;
It's fast because:
- You're scanning one pre-aggregated row per user and bucket (48 buckets per day at 30-minute granularity) instead of thousands of raw transactions
- No joins or heavy computation at runtime
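If your dashboards always filter on action_type first, one extra index can help, since the primary key index leads with bucket_start; a sketch, with a name of my own:
CREATE INDEX IF NOT EXISTS idx_rollups_action_bucket
  ON transaction_rollups (action_type, bucket_start);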
Bonus: Running Cumulative SUMs for Charts
Want a cumulative sum for the graph?
SELECT
bucket_start,
SUM(SUM(total_bet)) OVER (ORDER BY bucket_start) AS cumulative_bet
FROM transaction_rollups
WHERE action_type = 'bet'
AND bucket_start >= now() - interval '1 day'
GROUP BY bucket_start
ORDER BY bucket_start;
PostgreSQL's SUM(...) OVER makes this super smooth: the inner SUM(total_bet) aggregates per bucket, and the window turns those per-bucket totals into a running total.
Why Not Just Use date_trunc() on Raw Tables?
While date_trunc('hour', created_at) looks simple, it forces PostgreSQL to:
- Scan all rows
- Compute aggregation on the fly
- Often skip a plain index on created_at, because the column is wrapped in a function
Rollups shift the work to off-peak time, letting you deliver blazing-fast reports.
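For completeness, an expression index can restore index use for the date_trunc query, though each request still aggregates raw rows at runtime; a sketch (the index name is my own; this works because created_at is TIMESTAMP without time zone, which keeps date_trunc immutable):
CREATE INDEX IF NOT EXISTS idx_transactions_created_hour
  ON transactions (date_trunc('hour', created_at));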
Tooling Tips
- Use materialized views if you prefer built-in caching, but be careful with refresh times (see the first sketch after this list).
- If you need real-time + historical data, combine rollups with the latest raw data using a UNION ALL (second sketch below).
- Schedule the job every 5-10 mins using cron.
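A minimal materialized-view sketch, assuming the same transactions table (the view name is my own):
-- Pre-computed daily revenue, refreshed on whatever schedule you choose
CREATE MATERIALIZED VIEW daily_revenue AS
SELECT
  date_trunc('day', created_at) AS day,
  SUM(amount) AS revenue
FROM transactions
GROUP BY day;
-- A unique index allows CONCURRENTLY, so refreshes don't block readers
CREATE UNIQUE INDEX ON daily_revenue (day);
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_revenue;
And a sketch of the rollup + raw UNION ALL pattern: closed buckets come from the rollup table, while the still-open current bucket is computed from raw rows on the fly:
-- Closed buckets from the pre-aggregated table
SELECT bucket_start, SUM(total_bet) AS sum_bet
FROM transaction_rollups
WHERE action_type = 'bet'
  AND bucket_start >= now() - interval '1 day'
  AND bucket_start < date_bin('30 minutes', now()::timestamp, TIMESTAMP '2000-01-01')
GROUP BY bucket_start
UNION ALL
-- The current, still-filling bucket from raw rows
SELECT
  date_bin('30 minutes', created_at, TIMESTAMP '2000-01-01') AS bucket_start,
  SUM(CASE WHEN amount_type = 0 THEN amount ELSE 0 END) AS sum_bet
FROM transactions
WHERE action_type = 'bet'
  AND created_at >= date_bin('30 minutes', now()::timestamp, TIMESTAMP '2000-01-01')
GROUP BY bucket_start
ORDER BY bucket_start;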
Final Thoughts
Rollups are a must-have when you're:
- Handling high-traffic applications
- Running financial or gaming systems
- Building analytics dashboards
With just PostgreSQL, you can:
- Scale reporting
- Reduce DB load
- Keep your users happy
Tip: If you plan ahead, you can support multiple granularities (hourly, daily, weekly) all from the same strategy.
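For instance, a sketch of deriving daily figures from the 30-minute rollups rather than re-scanning raw rows:
-- Roll the fine-grained buckets up into daily totals
SELECT
  date_trunc('day', bucket_start) AS day,
  action_type,
  SUM(total_bet) AS daily_bet,
  SUM(total_win) AS daily_win
FROM transaction_rollups
GROUP BY day, action_type
ORDER BY day;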
Let's Connect
Have questions about PostgreSQL, time-series optimizations, or real-time dashboards?
Let's talk on LinkedIn or drop a comment.