Mohamed Idris

Cursor Pagination Step by Step

Recap: the other way

In the previous post we built offset pagination (?page=2): the server runs LIMIT 10 OFFSET 10. Simple, but it has two pain points:

  1. Deep pages are slow. OFFSET 1000000 makes the database walk and throw away a million rows.
  2. New rows shift the pages. If someone inserts a row while you scroll, page 2 might repeat a row from page 1 or skip one.

Cursor pagination fixes both. Twitter, Instagram, GitHub, Slack — almost every infinite-scroll feed uses it.

Cute mental model

You are reading a long book. You do not say "give me page 47". You put a bookmark on the last word you read. Next time you open the book, you flip straight to the bookmark and keep going.

  • The bookmark = the cursor
  • "Keep going" = WHERE id > bookmark
  • You never count pages. You just move forward from where you stopped.

That is the whole idea.

Step 1: what exactly is a cursor?

A cursor is just a value that points to a specific row. It is usually:

  • the row's id, or
  • the row's createdAt timestamp, or
  • a combination of both for safety

It is the answer to "where did I leave off?"

If page 1 returned rows with ids [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], the cursor is 10. Page 2 means "rows after id 10".
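In code, "where did I leave off?" is nothing to compute: it is just the id of the last row in the page you served. A minimal TypeScript sketch (the `Row` shape is illustrative):

```typescript
interface Row {
  id: number;
}

// The cursor is simply the id of the last row in the page we just served.
function cursorOf(page: Row[]): number | undefined {
  return page.length > 0 ? page[page.length - 1].id : undefined;
}

const page1: Row[] = Array.from({ length: 10 }, (_, i) => ({ id: i + 1 }));
console.log(cursorOf(page1)); // 10, so "page 2" means rows with id > 10
```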

Step 2: the SQL is dead simple

Offset pagination:

SELECT * FROM workers ORDER BY id ASC LIMIT 10 OFFSET 1000000;
-- database walks 1,000,010 rows, throws away 1,000,000

Cursor pagination:

SELECT * FROM workers WHERE id > 1000000 ORDER BY id ASC LIMIT 10;
-- database uses the index on id, jumps straight there, reads 10 rows

Note: in the offset query, 1000000 is a count (how many rows to skip). In the cursor query, 1000000 is an id value (the last row we saw, assuming ids are contiguous). Same number, totally different meaning.

Both return the same rows. The cursor version is fast no matter how deep you are, because the database can use the index on id to jump directly.

That is the speed win in one picture.
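You can see the cost difference without a database. This sketch simulates both strategies over an in-memory sorted array: offset walks past every skipped row, while cursor does what a B-tree index does, a binary search straight to the right spot. Names and numbers are illustrative.

```typescript
// rows stands in for an index-ordered table; ids are 1..1,000,010 ascending.
const rows = Array.from({ length: 1_000_010 }, (_, i) => ({ id: i + 1 }));

// Offset: walk (and discard) `offset` rows before reading `limit` rows.
function offsetPage(offset: number, limit: number) {
  let scanned = 0;
  for (let i = 0; i < offset; i++) scanned++; // the wasted walk
  const data = rows.slice(offset, offset + limit);
  return { data, scanned: scanned + data.length };
}

// Cursor: binary-search the sorted ids (roughly what an index lookup does),
// then read `limit` rows from there.
function cursorPage(cursor: number, limit: number) {
  let lo = 0, hi = rows.length, scanned = 0;
  while (lo < hi) { // find the first index whose id > cursor
    const mid = (lo + hi) >> 1;
    scanned++;
    if (rows[mid].id > cursor) hi = mid;
    else lo = mid + 1;
  }
  const data = rows.slice(lo, lo + limit);
  return { data, scanned: scanned + data.length };
}

console.log(offsetPage(1_000_000, 10).scanned); // about 1,000,010 rows touched
console.log(cursorPage(1_000_000, 10).scanned); // about 30 rows touched (log2 of 1e6, plus 10)
```

Same ten rows come back from both; only the amount of work differs.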

Step 3: the request shape

Client sends:

GET /workers?limit=10
GET /workers?limit=10&cursor=10
GET /workers?limit=10&cursor=20

First request has no cursor (start from the beginning). Each response gives the client the next cursor to use.

Server returns:

{
  "data": [ ... 10 workers ... ],
  "nextCursor": 20
}

When nextCursor is null or missing, you have hit the end.
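The contract is easy to pin down as TypeScript types, together with the client loop that follows nextCursor until it disappears. Type and function names here are my own, not part of any official API.

```typescript
// Illustrative types for the response contract.
interface Worker {
  id: number;
  name: string;
}

interface WorkersPage {
  data: Worker[];
  nextCursor?: number; // absent/undefined on the last page
}

// The client loop: keep requesting pages until nextCursor is gone.
async function fetchAll(
  getPage: (cursor?: number) => Promise<WorkersPage>
): Promise<Worker[]> {
  const all: Worker[] = [];
  let cursor: number | undefined;
  do {
    const page = await getPage(cursor);
    all.push(...page.data);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return all;
}
```

The client never inspects the cursor; it just echoes it back.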

Step 4: building it from scratch (NestJS + Prisma flavor)

async get(parameters: { cursor?: number; limit: number }) {
  const { cursor, limit } = parameters;

  const workers = await this.prisma.worker.findMany({
    take: limit + 1,                          // ask for one extra (we will see why)
    ...(cursor !== undefined && {             // explicit check: a falsy cursor like 0 still counts
      cursor: { id: cursor },
      skip: 1,                                // skip the cursor row itself
    }),
    orderBy: { id: "asc" },
  });

  const hasMore = workers.length > limit;
  const data = hasMore ? workers.slice(0, limit) : workers;
  const nextCursor = hasMore ? data[data.length - 1].id : undefined;

  return { data, nextCursor };
}

Three little tricks in there. Let us unpack them.

Trick 1: ask for limit + 1 rows

In the offset post, we ran a separate count query to know if a next page exists. Here we use a cheaper trick: ask for one extra row.

  • If you wanted 10 and got 11 back, there is more. Return the first 10 to the client and throw the 11th away (its only job was to prove "more exists").
  • If you wanted 10 and got 10 or fewer, you are at the end.

The next cursor is the id of the last row you returned (the 10th), not the 11th. The 11th gets discarded.

No second query. No count. Just one extra row.
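Here is the limit + 1 trick in isolation, against a plain in-memory array instead of Prisma. Same logic, no database:

```typescript
interface Worker {
  id: number;
}

// The limit + 1 trick: fetch one extra row, use it only as a "more exists"
// signal, then throw it away before responding.
function page(rows: Worker[], limit: number, cursor?: number) {
  const after = cursor === undefined ? rows : rows.filter((r) => r.id > cursor);
  const fetched = after.slice(0, limit + 1);         // ask for limit + 1
  const hasMore = fetched.length > limit;            // did the extra row show up?
  const data = hasMore ? fetched.slice(0, limit) : fetched;
  const nextCursor = hasMore ? data[data.length - 1].id : undefined;
  return { data, nextCursor };
}
```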

Trick 2: cursor: { id: cursor } plus skip: 1

Prisma's cursor option means "start from the row with this id". But it includes that row. We already returned it last time, so we skip it with skip: 1. (Yes the word "skip" is back, but skipping 1 row is free, unlike skipping 1,000,000.)

If you wrote raw SQL you would just do WHERE id > cursor and there would be no skip needed. Same idea, different ORM.

Trick 3: take the last id as the next cursor

The cursor for next time is just the id of the last row you returned. Easy.

Step 5: stable ordering matters even more here

Cursor pagination assumes the rows are ordered by something that does not change and is unique. id is perfect: it is unique, it is indexed, and it never changes.

createdAt is risky on its own because two rows can share the same timestamp. If two rows have createdAt = '2025-01-01 12:00:00' and your cursor lands between them, one might be skipped. The fix is a tiebreaker: order by (createdAt, id). The id breaks ties.

Rule of thumb: cursor on something unique, immutable, and indexed.
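The tiebreaker can be sketched as a plain comparison, the in-memory equivalent of WHERE (createdAt, id) > (:createdAt, :id) ORDER BY createdAt, id. Data and shapes here are illustrative:

```typescript
interface Row {
  id: number;
  createdAt: string; // ISO-like timestamp; these strings compare correctly
}

// Composite cursor: "strictly after (createdAt, id)".
// createdAt decides first; id breaks the tie when timestamps are equal.
function afterCursor(row: Row, cursor: { createdAt: string; id: number }): boolean {
  if (row.createdAt !== cursor.createdAt) return row.createdAt > cursor.createdAt;
  return row.id > cursor.id;
}

const rows: Row[] = [
  { id: 7, createdAt: "2025-01-01 12:00:00" },
  { id: 8, createdAt: "2025-01-01 12:00:00" }, // same timestamp as id 7
  { id: 9, createdAt: "2025-01-01 12:00:01" },
];

// Cursor landed on id 7: id 8 shares its timestamp but is NOT skipped.
const next = rows.filter((r) => afterCursor(r, { createdAt: "2025-01-01 12:00:00", id: 7 }));
console.log(next.map((r) => r.id)); // [ 8, 9 ]
```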

Step 6: why it does not lose or duplicate rows

Imagine 30 rows, page size 10. You fetch page 1 (ids 1 to 10). Someone inserts a new row with id 31. Then you fetch page 2.

Offset version: OFFSET 10 LIMIT 10. The new row 31 is at the end, so the offset still works here. But if the new row were inserted at the start (or rows were sorted by createdAt desc), the offsets would shift and you would re-see a row or miss one.

Cursor version: WHERE id > 10 LIMIT 10. You always continue from id 10. Inserts before id 10 do not matter. Inserts after id 10 just appear in their natural place. No drift.

This is why feeds (Twitter, Slack, Instagram) use cursors. Data is constantly being added at the top.
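You can reproduce the drift in a few lines. This sketch models a newest-first feed (where offset drifts worst), inserts a row between page fetches, and shows the offset page repeating a row while the cursor page does not:

```typescript
interface Post {
  id: number;
}

// Newest-first feed: ids descending, the setting where offset drifts worst.
let feed: Post[] = Array.from({ length: 30 }, (_, i) => ({ id: 30 - i }));

const byOffset = (offset: number, limit: number) => feed.slice(offset, offset + limit);
const byCursor = (cursor: number, limit: number) =>
  feed.filter((p) => p.id < cursor).slice(0, limit); // descending order: "after" means id <

const page1 = byOffset(0, 10);             // ids 30..21
const cursor = page1[page1.length - 1].id; // 21

feed = [{ id: 31 }, ...feed];              // a new post lands at the top mid-scroll

const offsetPage2 = byOffset(10, 10);      // ids 21..12: repeats id 21!
const cursorPage2 = byCursor(cursor, 10);  // ids 20..11: no repeat, no gap
```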

Step 7: the next link, HATEOAS style

Same pattern as the offset post: the server gives the client a ready-to-call URL.

function nextLink(req: Request, nextCursor?: number) {
  if (!nextCursor) return undefined;
  const url = new URL(`${req.protocol}://${req.get("Host")}${req.originalUrl}`);
  url.searchParams.set("cursor", nextCursor.toString());
  return url.toString();
}

Client just keeps following links.next until it is missing. Done.

Step 8: encoded cursors (a small upgrade)

Heads up: this step is a side topic, not a core concept. If you just want to understand cursor pagination, you can skip to Step 9. Read this later when you care about API design polish.

Returning a raw id like cursor=42 works, but it tells the client too much about your database. A common upgrade is to base64-encode the cursor:

const encoded = Buffer.from(JSON.stringify({ id: 42 })).toString("base64");
// "eyJpZCI6NDJ9"

The client treats it as an opaque string. They cannot tell if it is an id, a timestamp, or a combination. This lets you change the cursor's structure later without breaking the API.
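For completeness, a sketch of the decoding side: validate before you trust it. The { id: number } shape is an assumption here; match whatever you actually encoded.

```typescript
// Counterpart to the encode snippet above: decode and validate an opaque cursor.
// The { id: number } shape is an assumption; it must mirror what you encoded.
function decodeCursor(encoded: string): { id: number } | undefined {
  try {
    const parsed = JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
    if (typeof parsed?.id === "number" && Number.isInteger(parsed.id)) return parsed;
  } catch {
    // fall through: malformed base64 or JSON
  }
  return undefined; // treat as "no cursor", or return a 400, your call
}

console.log(decodeCursor("eyJpZCI6NDJ9")); // { id: 42 }
console.log(decodeCursor("not-a-cursor")); // undefined
```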

For an interview answer, mention this as a "production-grade detail" and you will impress.

Step 9: trade-offs you should be ready to recite

| Need | Use |
| --- | --- |
| Page numbers in the UI ("Page 3 of 24") | Offset |
| Jump to an arbitrary page | Offset |
| Small to medium datasets | Offset is fine |
| Infinite scroll / live feeds | Cursor |
| Huge tables (millions of rows) | Cursor |
| Stable results when data is being inserted | Cursor |
| Going backward easily | Offset (cursor needs extra work) |

Neither is "better". They are tools for different jobs.

Step 10: the gotchas

  • You cannot easily jump to page 47. Cursor pagination is forward-only by default. To paginate backward you need to hand out both before and after cursors (this is what GraphQL Relay-style pagination does).
  • You still need ORDER BY. Same as offset. Without ordering, "after this row" has no meaning.
  • Filters change the cursor's meaning. If the client sends a different filter, the old cursor is meaningless. Treat cursors as valid only for the same query parameters.
  • Do not invent a cursor from nothing. If a request comes in with a malformed cursor, return an error or ignore it and start from the beginning. Do not let the client confuse your indexes.
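One way to enforce the "same query parameters" rule (my own sketch, not a standard) is to bake the filter string into the opaque cursor and reject mismatches on decode:

```typescript
// Sketch: bind a cursor to the filters it was issued for, so a cursor from
// one query cannot be replayed against another. Scheme is illustrative.
function encodeCursor(id: number, filters: string): string {
  return Buffer.from(JSON.stringify({ id, f: filters })).toString("base64");
}

function decodeCursor(encoded: string, filters: string): number | undefined {
  try {
    const parsed = JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
    if (typeof parsed?.id !== "number") return undefined; // malformed: ignore it
    if (parsed.f !== filters) return undefined;           // different filters: cursor is meaningless
    return parsed.id;
  } catch {
    return undefined; // not even valid base64/JSON: start over (or return a 400)
  }
}

const c = encodeCursor(42, "status=active");
console.log(decodeCursor(c, "status=active"));   // 42
console.log(decodeCursor(c, "status=archived")); // undefined: stale cursor rejected
```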

Putting it next to the offset version

If you read the offset post, here is the side-by-side. Same goal, different mechanics:

Offset:  client sends ?page=N    server runs LIMIT size OFFSET (N-1)*size
Cursor:  client sends ?cursor=X  server runs WHERE id > X LIMIT size+1

Offset:  needs separate count query (or a peek) to know if more exists
Cursor:  asks for limit+1 rows, the extra row tells you

Offset:  slow on deep pages, drifts when data is inserted
Cursor:  constant speed, drift-proof

Recap in one screen

  • A cursor is a bookmark — usually an id — pointing to the last row you saw.
  • The query becomes WHERE id > cursor LIMIT size + 1.
  • Ask for one extra row to detect "is there more" without a count query.
  • Order by something unique, immutable, and indexed (usually id).
  • Constant time, drift-proof, perfect for infinite scroll.
  • Cost: no random page jumps, no "page 3 of 24" UI.
  • Encode the cursor (base64) in production so the client treats it as opaque.

If an interviewer asks "offset or cursor?", the right answer is "depends on the use case" followed by the trade-off table above. That is the senior answer.
