
Pagination at Scale: Keyset vs Offset

Why OFFSET 100,000 kills DB performance. Master cursor-based pagination for large-scale datasets.

Sachin Sarawgi · April 20, 2026 · 5 min read

Pagination at Scale

Pagination looks simple until your table has tens of millions of rows, users sort by multiple fields, and product asks for infinite scroll with strict latency SLOs.

Many APIs start with LIMIT ... OFFSET ... because it is easy to implement:

  • page 1 -> LIMIT 20 OFFSET 0
  • page 2 -> LIMIT 20 OFFSET 20
  • page 5000 -> LIMIT 20 OFFSET 99980

This works for small datasets, but offset-based pagination becomes expensive and unstable at scale.
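The offset pattern above can be sketched in a few lines. This is a minimal illustration using an in-memory SQLite database and a hypothetical items table; the table name and schema are placeholders, but the LIMIT/OFFSET mechanics are the same in any relational store.

```python
import sqlite3

# Hypothetical items table; any relational store paginates the same way.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO items (title) VALUES (?)",
                 [(f"item-{i}",) for i in range(100)])

def fetch_page(page, page_size=20):
    # page is 1-based: page 1 -> OFFSET 0, page 2 -> OFFSET 20, ...
    offset = (page - 1) * page_size
    return conn.execute(
        "SELECT id, title FROM items ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset),
    ).fetchall()

page1 = fetch_page(1)
page2 = fetch_page(2)
```

Simple and correct for shallow pages; the problems only show up as the offset grows.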

Why OFFSET gets slower over time

The database still needs to walk past skipped rows before returning the page.

Even with an index, OFFSET 100000 often means:

  1. scan index entries for 100,020 rows
  2. discard the first 100,000
  3. return only 20

That extra work creates:

  • higher CPU and I/O on DB nodes
  • bigger latency tail (p95 and p99)
  • noisy-neighbor risk for other workloads

In other words, deep pagination is not "20 rows of work"; it is often "100,020 rows of work".

Correctness issue with OFFSET: drifting pages

Offset pages can duplicate or miss records if new rows are inserted between requests.

Example:

  • User loads page 1 sorted by created_at DESC
  • New rows arrive
  • User requests page 2 with OFFSET 20

Now the items that were around positions 19-25 have shifted, so page 2 overlaps page 1 or skips rows entirely. For user-facing feeds, this feels broken.
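The drift is easy to reproduce. The sketch below (in-memory SQLite, hypothetical feed table, newest-first ordering) fetches page 1, inserts two new rows, then fetches page 2 with OFFSET 20; the inserts push two already-seen rows back into page 2.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feed (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO feed (body) VALUES (?)",
                 [(f"post-{i}",) for i in range(40)])

def page(offset, limit=20):
    # Newest first, like a feed sorted by created_at DESC.
    return [r[0] for r in conn.execute(
        "SELECT id FROM feed ORDER BY id DESC LIMIT ? OFFSET ?",
        (limit, offset))]

page1 = page(0)    # ids 40..21

# Two new posts arrive between the user's two requests.
conn.executemany("INSERT INTO feed (body) VALUES (?)", [("new-1",), ("new-2",)])

page2 = page(20)   # the offset now skips the two new rows' worth of items
overlap = set(page1) & set(page2)   # rows the user sees twice
```

The overlap contains the two rows that were shoved from the tail of page 1 into the head of page 2.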

Keyset pagination (cursor pagination)

Keyset pagination uses the last seen sort key as the anchor for the next query, instead of counting rows to skip.

If sorted by (created_at DESC, id DESC):

  • first page: WHERE true ORDER BY created_at DESC, id DESC LIMIT 20
  • next page: WHERE (created_at, id) < (:last_created_at, :last_id) ORDER BY created_at DESC, id DESC LIMIT 20

The DB can jump directly to the correct range through the index.
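A minimal working version of this two-query pattern, again using an in-memory SQLite table (the events table and timestamps are invented for the demo). Row-value comparison `(created_at, id) < (?, ?)` is supported in PostgreSQL and in SQLite 3.15+.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO events (created_at) VALUES (?)",
    [(f"2026-04-{(i % 20) + 1:02d}",) for i in range(100)],
)

def keyset_page(cursor=None, limit=20):
    # cursor is the (created_at, id) pair of the last row on the previous page.
    if cursor is None:
        rows = conn.execute(
            "SELECT created_at, id FROM events "
            "ORDER BY created_at DESC, id DESC LIMIT ?", (limit,)).fetchall()
    else:
        # Anchor the scan at the last seen key instead of counting skipped rows.
        rows = conn.execute(
            "SELECT created_at, id FROM events "
            "WHERE (created_at, id) < (?, ?) "
            "ORDER BY created_at DESC, id DESC LIMIT ?",
            (*cursor, limit)).fetchall()
    next_cursor = rows[-1] if rows else None
    return rows, next_cursor

page1, cur = keyset_page()
page2, _ = keyset_page(cur)
```

Note that both queries use the exact same ORDER BY; only the anchoring predicate changes between first and subsequent pages.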

Benefits of keyset pagination

  • Near-constant query cost across pages
  • Better cache and index locality
  • More stable user experience under concurrent writes
  • Lower latency variance for very large datasets

Choosing stable sort keys

A keyset strategy is only as good as your ordering key.

Good rules:

  • Use immutable or rarely changed columns for cursor position
  • Make ordering deterministic with a tiebreaker (usually id)
  • Match index order to query order exactly

Bad example:

  • sorting only by updated_at (not unique) without tiebreaker

Good example:

  • sorting by updated_at DESC, id DESC
  • cursor carries both values
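The bad and good examples above can be demonstrated directly. In this sketch (in-memory SQLite, invented docs table where many rows share a timestamp), paginating on updated_at alone silently skips every remaining row that shares the boundary timestamp, while the (updated_at, id) cursor does not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, updated_at TEXT)")
# Ten rows share one timestamp, ten share another.
conn.executemany("INSERT INTO docs (updated_at) VALUES (?)",
                 [("2026-04-20",)] * 10 + [("2026-04-19",)] * 10)

# Broken: cursor on updated_at alone. Page 1 ends mid-group, and
# "updated_at < ?" skips every remaining row with the same timestamp.
first = conn.execute(
    "SELECT updated_at, id FROM docs "
    "ORDER BY updated_at DESC LIMIT 5").fetchall()
broken_next = conn.execute(
    "SELECT updated_at, id FROM docs WHERE updated_at < ? "
    "ORDER BY updated_at DESC LIMIT 5", (first[-1][0],)).fetchall()

# Fixed: (updated_at, id) cursor with the id tiebreaker.
first2 = conn.execute(
    "SELECT updated_at, id FROM docs "
    "ORDER BY updated_at DESC, id DESC LIMIT 5").fetchall()
fixed_next = conn.execute(
    "SELECT updated_at, id FROM docs WHERE (updated_at, id) < (?, ?) "
    "ORDER BY updated_at DESC, id DESC LIMIT 5", first2[-1]).fetchall()
```

The broken query jumps straight to the older timestamp group, losing five rows; the fixed query resumes inside the shared-timestamp group.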

Index design for keyset queries

If the query is:

WHERE tenant_id = ? AND (created_at, id) < (?, ?) ORDER BY created_at DESC, id DESC LIMIT 50

Then the index should typically be:

(tenant_id, created_at DESC, id DESC)

Without index alignment, keyset pagination loses most of its performance advantage.
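You can verify the alignment with the database's plan output. A sketch using SQLite's EXPLAIN QUERY PLAN (the orders table and index name are invented; in PostgreSQL you would use EXPLAIN ANALYZE the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders ("
             "id INTEGER PRIMARY KEY, tenant_id INTEGER, created_at TEXT)")
# Composite index matching the query's filter and sort order exactly.
conn.execute("CREATE INDEX idx_orders_keyset "
             "ON orders (tenant_id, created_at DESC, id DESC)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM orders "
    "WHERE tenant_id = ? AND (created_at, id) < (?, ?) "
    "ORDER BY created_at DESC, id DESC LIMIT 50",
    (1, "2026-04-20", 1000)).fetchall()
detail = plan[0][-1]   # e.g. "SEARCH orders USING ... idx_orders_keyset ..."
```

If the plan shows a full table scan plus a sort step instead of a search through the index, the index and query order are not aligned.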

Cursor format and API contract

Do not expose raw DB internals directly. Use an opaque cursor token:

  • JSON payload with sort keys + filter hash + optional expiry
  • base64url encode it
  • sign it (HMAC) to prevent tampering
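A minimal sketch of such a token using only the standard library; the payload fields and the signing key are placeholders, and a real service would load the key from secret storage and rotate it.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"rotate-me"  # hypothetical server-side signing key

def encode_cursor(payload: dict) -> str:
    # Canonical JSON body followed by a fixed-length 32-byte HMAC-SHA256 tag.
    body = json.dumps(payload, separators=(",", ":"), sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(body + sig).decode().rstrip("=")

def decode_cursor(token: str) -> dict:
    raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4))
    body, sig = raw[:-32], raw[-32:]
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered or foreign cursor")
    return json.loads(body)

token = encode_cursor({"created_at": "2026-04-20T10:00:00Z", "id": 12345})
```

The client treats the token as opaque; any modification fails the constant-time HMAC check on the way back in.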

A typical response format:

  • items: current page
  • nextCursor: token for next page
  • hasMore: boolean

For reverse navigation, add prevCursor with inverted query direction.

Handling filters and multi-tenant data

Cursors must be bound to their filter context. If the filters change, the old cursor should become invalid.

Include in cursor metadata:

  • tenant/account scope
  • search/filter params checksum
  • sort mode

If mismatch is detected, return a clear validation error and ask client to restart from first page.
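One way to implement the checksum binding; the field name `f` and the cursor shape are invented for the sketch, and the hash is truncated only to keep tokens short.

```python
import hashlib
import json

def filter_checksum(filters: dict) -> str:
    # Canonical JSON so key order does not change the hash.
    canon = json.dumps(filters, separators=(",", ":"), sort_keys=True)
    return hashlib.sha256(canon.encode()).hexdigest()[:16]

def validate_cursor(cursor: dict, current_filters: dict) -> None:
    if cursor.get("f") != filter_checksum(current_filters):
        # Filters changed mid-pagination: force a restart from the first page.
        raise ValueError("cursor does not match current filters; restart pagination")

filters = {"tenant": "acme", "status": "paid"}
cursor = {"created_at": "2026-04-20", "id": 42, "f": filter_checksum(filters)}
```

Because the checksum is computed over canonical JSON, reordering query parameters on the client does not invalidate the cursor, but changing any filter value does.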

Keyset drawbacks and trade-offs

Keyset is not perfect:

  • No direct "jump to page 237" semantics
  • Cursor complexity is higher than offset
  • Frontend needs to treat pages as a sequence, not absolute page numbers

For admin panels that need random page jumps over small datasets, offset can still be acceptable.

Hybrid strategy used in production

A practical approach:

  • Offset for shallow pages (e.g., first 5 pages) in low-volume backoffice UIs
  • Keyset for user feeds, transaction history, logs, and high-volume APIs

For analytics-like experiences that need random access, consider a search engine or materialized snapshots instead of deep offset scanning.

Migration plan: OFFSET to keyset

  1. Keep existing endpoint behavior unchanged
  2. Add optional cursor parameters (cursor, limit)
  3. Return both nextCursor and legacy pagination metadata temporarily
  4. Migrate clients incrementally
  5. Remove deep offset usage after adoption

Measure:

  • query plan changes (EXPLAIN ANALYZE)
  • p95/p99 latency
  • DB CPU and buffer cache hit ratio

Failure modes to guard against

  • Non-deterministic ordering: duplicates across pages
  • Mutable sort fields: item jumps between pages
  • Unsigned cursors: tampering and data leakage
  • Cursor not tied to filters: inconsistent results
  • Missing tiebreaker: skipped rows

Practical decision rule

Use offset when:

  • dataset is small
  • traffic is low
  • random page jumps are required

Use keyset when:

  • data volume or write rate is high
  • feed/history experience matters
  • strict latency SLOs exist

At scale, pagination is not just API design; it is a database cost-control mechanism.


Written by Sachin Sarawgi

Engineering Manager and backend engineer with 10+ years building distributed systems across fintech, enterprise SaaS, and startups. CodeSprintPro is where I write practical guides on system design, Java, Kafka, databases, AI infrastructure, and production reliability.
