Offset pagination is slow mainly because the database still has to process (read and skip) all the rows before the offset, even though it doesn’t return them.
What happens under the hood
A typical offset query looks like:
```sql
SELECT *
FROM transactions
ORDER BY created_at
LIMIT 50 OFFSET 100000;
```
Step-by-step execution
- The database sorts rows by created_at
- It reads the first 100,050 rows
- It throws away the first 100,000
- It returns only 50 rows

The work done grows linearly with the offset size.
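The skip is easy to observe. Here is an illustrative sketch using Python's built-in sqlite3 (the table and data are hypothetical stand-ins for `transactions`): the engine walks past every row before the offset and only then starts returning rows.

```python
import sqlite3

# Hypothetical stand-in for the article's `transactions` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, created_at INTEGER)")
conn.executemany(
    "INSERT INTO transactions (created_at) VALUES (?)",
    [(i,) for i in range(1, 1001)],
)

# OFFSET 100 means the engine reads and discards 100 rows before
# returning any -- the first returned row sits just past the skipped ones.
rows = conn.execute(
    "SELECT id, created_at FROM transactions ORDER BY created_at LIMIT 5 OFFSET 100"
).fetchall()
print(rows[0])  # → (101, 101)
```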
Why it gets slower as page number increases
- Rows are scanned, not jumped. Databases cannot jump directly to row 100,000; they must:
- Traverse the index or table
- Count rows until the offset is reached
Even with indexes, the engine still walks through index entries.
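You can see this in a query plan. As an illustration (SQLite here, but other engines behave similarly), even with an index on `created_at` the plan is a scan along the index; OFFSET just walks further along it:

```python
import sqlite3

# Hypothetical schema mirroring the article's `transactions` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.execute("CREATE INDEX idx_tx_created_at ON transactions(created_at)")

# EXPLAIN QUERY PLAN reveals an index *scan*, not a jump to row 100,000.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM transactions ORDER BY created_at LIMIT 50 OFFSET 100000"
).fetchall()
for row in plan:
    print(row[-1])  # e.g. "SCAN transactions USING INDEX idx_tx_created_at"
```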
- Sorting cost increases
If the ORDER BY column:
- Is not indexed, the DB must sort everything first
- Is indexed, the index traversal still scales with the offset

Large offsets mean more CPU and memory.
- Disk I/O increases. For large datasets:
- Earlier rows may no longer be in cache
- Disk reads increase
- Latency spikes unpredictably

This is especially painful in high-traffic fintech APIs.
- Concurrency makes it worse. When multiple users request page 2000, page 5000, and page 10000 at once:
- Each query repeats the same expensive skip work
- No results are reused between requests
Better alternative: Keyset (Cursor) Pagination
Instead of skipping rows, you continue from the last seen value.
```sql
SELECT *
FROM transactions
WHERE created_at > :last_seen
ORDER BY created_at
LIMIT 50;
```
Why this is fast
- Uses index directly
- No skipping
- Every page costs roughly the same regardless of depth: one index seek plus 50 rows
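A minimal sketch of the cursor flow, again using sqlite3 with hypothetical data: the client remembers the last `created_at` it saw and passes it back, so the next query seeks straight to the continuation point instead of skipping rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, created_at INTEGER)")
conn.execute("CREATE INDEX idx_tx_created_at ON transactions(created_at)")
conn.executemany(
    "INSERT INTO transactions (created_at) VALUES (?)",
    [(i,) for i in range(1, 201)],
)

def fetch_page(last_seen, limit=50):
    # Seek directly to the first row after the cursor -- no skipped rows.
    return conn.execute(
        "SELECT id, created_at FROM transactions "
        "WHERE created_at > ? ORDER BY created_at LIMIT ?",
        (last_seen, limit),
    ).fetchall()

page1 = fetch_page(0)
cursor = page1[-1][1]   # last created_at seen becomes the next page's cursor
page2 = fetch_page(cursor)
print(page2[0])         # → (51, 51): page 2 continues exactly where page 1 ended
```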
This is the standard approach for:
- Financial transactions
- Audit logs
- Infinite scroll APIs
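One practical caveat (not covered above): if `created_at` is not unique, paginating on it alone can skip or repeat rows that share a timestamp. The usual fix is to add a unique tie-breaker such as `id` and compare on the pair. A sketch, assuming a database that supports row-value comparisons (SQLite 3.15+, PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, created_at INTEGER)")
# Deliberately duplicated timestamps: every three rows share one created_at.
conn.executemany(
    "INSERT INTO transactions (created_at) VALUES (?)",
    [(i // 3,) for i in range(3, 33)],
)

def fetch_page(last_created, last_id, limit=4):
    # Row-value comparison resumes after the exact (created_at, id) pair,
    # so rows sharing a timestamp are neither skipped nor repeated.
    return conn.execute(
        "SELECT id, created_at FROM transactions "
        "WHERE (created_at, id) > (?, ?) "
        "ORDER BY created_at, id LIMIT ?",
        (last_created, last_id, limit),
    ).fetchall()

page1 = fetch_page(0, 0)
last_id, last_created = page1[-1]
page2 = fetch_page(last_created, last_id)
print(page1[-1], page2[0])  # page 2 picks up mid-timestamp without a gap
```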
When OFFSET is acceptable
OFFSET pagination is okay when:
- Tables are small
- Page depth is limited (e.g., admin dashboards)
- You need random page access (page 3, page 7)
If users can scroll deeply or your data grows unbounded, don't use OFFSET.