System Design · Advanced · Case Study · Part 7 of 29 in Backend Systems Mastery

System Design: Designing a Global Distributed Rate Limiter

How to prevent service abuse at scale. A deep dive into Rate Limiting algorithms (Token Bucket, Sliding Window), Redis Lua scripting, and capacity math.

Sachin Sarawgi · April 20, 2026 · 6 min read


In a distributed environment, a single malicious script, a misconfigured client, or a massive traffic spike can easily overwhelm your backend servers, bringing down your entire business.

A Rate Limiter is the shield that protects your infrastructure. It dictates how many requests a specific user, IP address, or API key is allowed to make within a specific time window.

Designing a rate limiter that operates globally across thousands of microservices, with sub-millisecond latency and perfect accuracy, is a classic test of a senior engineer's ability to handle concurrency and distributed state.


1. Capacity Estimation (The Math)

Let's assume we are building a rate limiter for a public API like Stripe or Twitter.

Assumptions:

  • DAU (Daily Active Users): 10 Million.
  • Traffic: Each user makes an average of 50 requests per day (10M × 50 = 500M requests/day, or roughly 5,800 QPS on average). Assume a peak of 20,000 requests/sec.
  • Limits: Free tier allows 100 requests/minute. Premium tier allows 1,000 requests/minute.

Memory Storage Estimates: To track the rate limit, we need to store data in memory (Redis). For a simple counter approach, we need to store:

  • user_id (8 bytes)
  • count (4-byte integer)
  • timestamp (8 bytes)

Total = 20 bytes per user.

For 10 Million users actively making requests in the same minute: 10M * 20 bytes = 200 Megabytes. Conclusion: The memory footprint is tiny. A single Redis node can easily hold this data, but we will need a Redis Cluster to handle the 20,000 QPS network throughput.
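The arithmetic above is worth sanity-checking. A minimal back-of-envelope script (the constants are the assumptions stated earlier; variable names are illustrative):

```python
# Back-of-envelope capacity math for the rate limiter.
DAU = 10_000_000             # daily active users
REQUESTS_PER_USER = 50       # average requests per user per day
BYTES_PER_USER = 8 + 4 + 8   # user_id + count + timestamp = 20 bytes

daily_requests = DAU * REQUESTS_PER_USER        # 500M requests/day
average_qps = daily_requests / 86_400           # seconds per day
memory_bytes = DAU * BYTES_PER_USER

print(f"Average QPS: {average_qps:,.0f}")            # ~5,787
print(f"Memory: {memory_bytes / 1_000_000:.0f} MB")  # 200 MB
```

The gap between ~5,800 average QPS and the 20,000 peak is why the Redis layer is sized for throughput, not for memory.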


2. High-Level Architecture

The rate limiter should sit as close to the edge as possible to prevent malicious traffic from ever reaching your application servers. It is typically integrated directly into the API Gateway.

graph TD
    Client1[Client App] --> GW[API Gateway]
    Client2[Malicious Bot] --> GW
    
    subgraph Rate Limiting Subsystem
        GW -->|1. Check Limit| Redis[(Redis Cluster)]
        Redis -->|2. Allow or Deny| GW
    end
    
    GW -.->|3a. HTTP 429 Too Many Requests| Client2
    GW -->|3b. Forward Request| Services[Backend Microservices]
    
    style GW fill:#1e40af,stroke:#fff,stroke-width:2px,color:#fff
    style Redis fill:#b91c1c,stroke:#fff,stroke-width:2px,color:#fff

3. The Core Algorithms

You cannot design a rate limiter without selecting an underlying mathematical algorithm. Here are the three most common.

Algorithm A: Fixed Window Counter (The Flawed Approach)

You divide time into fixed windows (e.g., 12:00 to 12:01). You increment a counter for every request. If the counter hits 100, reject. When the minute rolls over, reset to 0.

  • The Problem: Traffic spikes at the edges. A user could send 100 requests at 12:00:59 and another 100 requests at 12:01:01. They just pushed 200 requests in 2 seconds, completely bypassing the intended 100 req/min limit and crashing your database.
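A minimal in-memory sketch makes the edge-burst flaw concrete (pure Python for illustration; in production the counter lives in Redis):

```python
from collections import defaultdict

class FixedWindowLimiter:
    """Naive fixed-window counter (illustrative sketch, not production code)."""

    def __init__(self, limit: int, window_seconds: int = 60):
        self.limit = limit
        self.window = window_seconds
        self.counters = defaultdict(int)   # (user, window index) -> count

    def allow(self, user_id: str, now: float) -> bool:
        key = (user_id, int(now // self.window))
        if self.counters[key] >= self.limit:
            return False
        self.counters[key] += 1
        return True

# The flaw: 100 requests at 12:00:59 and 100 more at 12:01:01 all pass,
# because they land in two different fixed windows.
limiter = FixedWindowLimiter(limit=100)
burst1 = sum(limiter.allow("u1", now=59.0) for _ in range(100))
burst2 = sum(limiter.allow("u1", now=61.0) for _ in range(100))
print(burst1 + burst2)  # 200 -- double the intended 100 req/min
```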

Algorithm B: Sliding Window Log (The Expensive Approach)

Instead of a fixed counter, you store the exact timestamp of every single request the user makes in a Redis Sorted Set. When a new request arrives, you delete all timestamps older than 1 minute, and count the remaining timestamps.

  • The Problem: It uses a massive amount of memory. Storing 1,000 individual timestamps for a premium user takes 1000 * 8 bytes = 8KB per user, compared to 20 bytes for a counter.
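For illustration, here is the same logic in memory, with a deque standing in for the Redis sorted set (production code would run ZREMRANGEBYSCORE to evict, ZCARD to count, and ZADD to record, as one atomic step):

```python
from collections import deque

class SlidingWindowLogLimiter:
    """Sliding-window log (in-memory sketch; production uses a Redis
    sorted set: ZREMRANGEBYSCORE to evict, ZCARD to count, ZADD to add)."""

    def __init__(self, limit: int, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        self.logs = {}   # user_id -> deque of request timestamps

    def allow(self, user_id: str, now: float) -> bool:
        log = self.logs.setdefault(user_id, deque())
        while log and log[0] <= now - self.window:
            log.popleft()               # evict timestamps outside the window
        if len(log) >= self.limit:
            return False
        log.append(now)
        return True

limiter = SlidingWindowLogLimiter(limit=100)
for _ in range(100):
    limiter.allow("u1", now=59.0)
# Two seconds later the log still holds 100 recent timestamps, so the
# edge burst that defeated the fixed window is correctly rejected:
print(limiter.allow("u1", now=61.0))  # False
```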

Algorithm C: Token Bucket (The Industry Standard)

This is the algorithm used by Amazon API Gateway and Stripe. Imagine a bucket associated with a user. The bucket has a capacity of C tokens.

  • The bucket "refills" at a constant rate of R tokens per minute. In practice this is computed lazily from the last-refill timestamp when a request arrives, rather than by a background process.
  • When a request arrives, it takes 1 token out of the bucket.
  • If the bucket is empty, the request is dropped.

Why it's the best: It requires very little memory (just the current token count and the last refill timestamp), and it naturally absorbs short bursts, up to the bucket capacity, while still enforcing the average rate.
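A minimal token bucket with lazy refill, as a sketch under the assumptions above (C is `capacity`, R is `refill_per_sec`):

```python
class TokenBucket:
    """Token bucket with lazy refill (illustrative sketch)."""

    def __init__(self, capacity: float, refill_per_sec: float, now: float = 0.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity      # start with a full bucket
        self.last_refill = now

    def allow(self, now: float) -> bool:
        # Lazy refill: credit tokens for elapsed time, capped at capacity.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 100-token capacity, refilled at 100 tokens/minute:
bucket = TokenBucket(capacity=100, refill_per_sec=100 / 60)
burst = sum(bucket.allow(now=0.0) for _ in range(120))
print(burst)               # 100 -- the burst is capped at the bucket capacity
print(bucket.allow(1.0))   # True -- ~1.67 tokens refilled after one second
```

Note that only two numbers (`tokens` and `last_refill`) persist per user, which is where the 20-bytes-per-user estimate comes from.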


4. The Deep Dive: Distributed Concurrency and Lua Scripts

Here is where junior engineers fail the interview. Let's assume you are using Redis to store the Token Bucket data.

Your API Gateway receives a request, reads the token count (GET tokens), sees count = 5, decrements it to 4, and saves it (SET tokens 4).

The Read-Modify-Write Race Condition

In a distributed system, you have 50 API Gateway instances processing requests concurrently. If Gateway A and Gateway B both receive a request for the same user at the exact same millisecond, they both run GET tokens. They both see 5. They both calculate 4, and they both run SET tokens 4.

Two requests were processed, but the token count only went down by 1. The user has successfully bypassed the rate limiter.
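The lost update can be shown deterministically by interleaving the two gateways by hand (a toy sketch, with a dict standing in for Redis):

```python
# Read-modify-write lost update, interleaved by hand.
store = {"tokens": 5}          # stands in for Redis

read_a = store["tokens"]       # Gateway A: GET tokens -> 5
read_b = store["tokens"]       # Gateway B: GET tokens -> 5

store["tokens"] = read_a - 1   # Gateway A: SET tokens 4
store["tokens"] = read_b - 1   # Gateway B: SET tokens 4 (overwrites A)

print(store["tokens"])  # 4 -- two requests served, only one token consumed
```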

The Solution: Redis Lua Scripting

To solve the race condition without using slow distributed locks, we use Redis Lua Scripts.

Redis executes Lua scripts atomically. Because Redis is single-threaded, while a Lua script is running, no other Redis command from any other server can execute.

You write a tiny Lua script that performs the GET, does the math to check the bucket, and performs the SET. You send this script from the API Gateway to Redis. It executes in microseconds, completely eliminating the race condition.
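A sketch of what that script can look like, the kind you would register once via redis-py's `register_script`. The key layout and argument order here are illustrative assumptions, and the Lua string is not executed in this snippet; the pure-Python `bucket_decision` mirror shows the arithmetic the script performs atomically:

```python
# Illustrative token-bucket Lua script (key layout and ARGV order are
# assumptions, not a canonical implementation).
TOKEN_BUCKET_LUA = """
local capacity = tonumber(ARGV[1])
local refill   = tonumber(ARGV[2])  -- tokens per second
local now      = tonumber(ARGV[3])

local state  = redis.call('HMGET', KEYS[1], 'tokens', 'ts')
local tokens = tonumber(state[1]) or capacity
local ts     = tonumber(state[2]) or now

tokens = math.min(capacity, tokens + (now - ts) * refill)
local allowed = 0
if tokens >= 1 then
  tokens = tokens - 1
  allowed = 1
end
redis.call('HSET', KEYS[1], 'tokens', tokens, 'ts', now)
return allowed
"""

def bucket_decision(tokens, ts, now, capacity, refill):
    """Pure-Python mirror of the script's math, for illustration."""
    tokens = min(capacity, tokens + (now - ts) * refill)
    if tokens >= 1:
        return True, tokens - 1
    return False, tokens

allowed, remaining = bucket_decision(tokens=5, ts=0.0, now=0.0,
                                     capacity=100, refill=100 / 60)
print(allowed, remaining)  # True 4
```

With redis-py you would call `r.register_script(TOKEN_BUCKET_LUA)` once and invoke the returned callable with `keys=[...]` and `args=[...]` on every request; Redis caches the script by SHA, so only the hash travels over the wire after the first call.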


5. Scaling to Global Traffic

What happens when your API is deployed in US-East, Europe, and Asia? Do you have one global Redis cluster in the US?

If a user in Tokyo hits the Asia API Gateway, and the Gateway has to perform a Lua script execution against a Redis cluster in Virginia, the network latency will be ~200ms. Adding 200ms of latency to every single API request is unacceptable.

The Asynchronous Sync Pattern (Local + Global)

To solve global latency, you must deploy a Redis cluster in every region.

  • The Asia Gateway checks the Asia Redis cluster (1ms latency).
  • But what if the user routes traffic to Asia, uses up their 100 requests, and then routes traffic to Europe to get another 100 requests?

The solution is Eventual Consistency. Each regional Redis cluster enforces the limit locally. Every few seconds, a background worker synchronizes the counters across all global regions using a message broker like Kafka.

Yes, a malicious user might briefly exceed their limit if they rapidly switch continents before the sync occurs. But in system design, accepting a small, bounded inaccuracy (a few percent) to guarantee ~1ms local latency is the correct engineering trade-off.
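The reconciliation step itself is simple arithmetic: each region reports its local counter, and the sync worker publishes the global sum back to every region (region names and cadence below are illustrative):

```python
def reconcile(regional_counts: dict) -> int:
    """Global usage for one user = sum of the per-region counters."""
    return sum(regional_counts.values())

# Counters each regional Redis cluster reports for one user, one window:
regional = {"us-east": 60, "eu-west": 55, "ap-northeast": 10}
global_usage = reconcile(regional)
print(global_usage)  # 125 -- over the 100/min limit, even though no single
                     # region saw more than 100 requests
```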


Summary Checklist for the Interview

When designing a rate limiter, ensure you cover:

  1. Choose Token Bucket (or a sliding-window counter, a hybrid of the fixed-window and log approaches) to optimize memory.
  2. Defend Redis as the storage layer due to its in-memory speed.
  3. Explicitly solve the distributed race condition using Atomic Lua Scripts.
  4. Address multi-region latency by proposing local Redis clusters with asynchronous global synchronization.