Pagination is one of the most common requirements when building APIs. Almost every real-world application needs to return data in chunks instead of sending everything at once.
In this post, I’ll show you a clean and modern way to implement pagination in FastAPI using Python’s itertools.batched() function (new in Python 3.12), without manual slicing or complex index calculations.
✅ Why Pagination Matters
Without pagination:
- Responses become very large
- Performance decreases
- Memory usage increases
- Mobile clients suffer
With pagination:
- Faster API responses
- Better user experience
- Scalable backend design
✅ Project Idea
We’ll build a simple /posts endpoint that:
- Loads posts from a JSON file
- Supports page-based pagination
- Uses batched() instead of manual slicing
- Returns clean, structured API responses
✅ Example JSON Database
Here’s a sample of the JSON data we’ll be using:
```json
{
  "posts": [
    {
      "id": 1,
      "userId": 101,
      "title": "Learning Python Basics",
      "content": "Today I started learning Python and wrote my first script.",
      "likes": 23,
      "comments": 2,
      "created_at": "2025-01-01T10:03:11Z"
    },
    {
      "id": 2,
      "userId": 102,
      "title": "Exploring SQLAlchemy ORM",
      "content": "SQLAlchemy relationships were confusing at first but now they're amazing.",
      "likes": 40,
      "comments": 5,
      "created_at": "2025-01-02T14:12:33Z"
    },
    {
      "id": 3,
      "userId": 103,
      "title": "How I Built My First API",
      "content": "Used FastAPI to build a simple notes app today.",
      "likes": 55,
      "comments": 7,
      "created_at": "2025-01-03T09:45:20Z"
    }
  ]
}
```
✅ The FastAPI Pagination Code
Here is the full implementation of our paginated endpoint:
```python
from fastapi import FastAPI
from itertools import batched  # requires Python 3.12+
import json

# Load all posts from the JSON "database" into memory at startup
with open('data/fake_db.json', 'r') as f:
    posts_db = json.load(f)['posts']

app = FastAPI()

@app.get('/posts')
def show_posts(page_num: int = 0, posts_len: int = 3):
    # Split the posts into pages of posts_len, then return the requested page
    sliced_posts = [
        list(posts) for posts in batched(posts_db, n=posts_len)
    ][page_num]
    return {'posts': sliced_posts}
```
✅ How It Works
Let’s break it down step by step:
1️⃣ Load the JSON Database
We load all posts into memory:
```python
with open('data/fake_db.json', 'r') as f:
    posts_db = json.load(f)['posts']
```
2️⃣ Use batched() for Clean Chunking
Instead of manual slicing like this:
```python
posts[start:end]
```
We use:
```python
batched(posts_db, n=posts_len)
```
This automatically splits the list into groups of size posts_len (the final group may be shorter if the list doesn't divide evenly).
3️⃣ Select the Page
We convert the batches into a list and select the requested page:
```python
[list(posts) for posts in batched(posts_db, n=posts_len)][page_num]
```
This makes the logic:
- Clean
- Readable
- Easy to maintain
✅ Example API Requests
🔹 Get First Page (Default)

```
GET /posts
```

🔹 Get Second Page With 5 Posts

```
GET /posts?page_num=1&posts_len=5
```
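To make the query parameters concrete, here is a plain-Python simulation of what those two requests would return, assuming a hypothetical database of seven posts (no live HTTP call involved):

```python
from itertools import batched  # Python 3.12+

posts_db = [{'id': i} for i in range(1, 8)]  # stand-in for the JSON posts

def show_posts(page_num: int = 0, posts_len: int = 3) -> dict:
    # Same logic as the endpoint: batch, then pick the requested page.
    pages = [list(batch) for batch in batched(posts_db, posts_len)]
    return {'posts': pages[page_num]}

# GET /posts  -> first page of 3 posts
print(show_posts())
# GET /posts?page_num=1&posts_len=5  -> second page of 5, only 2 posts remain
print(show_posts(page_num=1, posts_len=5))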
✅ Example API Response
```json
{
  "posts": [
    {
      "id": 4,
      "userId": 104,
      "title": "Another Post",
      "content": "This is another example post.",
      "likes": 10,
      "comments": 1,
      "created_at": "2025-01-04T12:00:00Z"
    }
  ]
}
```
✅ Benefits of This Approach
✅ No manual index math
✅ Cleaner Python code
✅ Easier to read and debug
✅ Works great with small & medium datasets
✅ Perfect for demos, tutorials, and prototypes
⚠️ Important Note for Production
This approach is great for learning and small projects, but it has limits: itertools.batched() requires Python 3.12+, the entire dataset is loaded and re-batched on every request, and an out-of-range page_num raises an unhandled IndexError. For large datasets, use database-level pagination (LIMIT & OFFSET) instead of loading everything into memory.
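For reference, here is a minimal sketch of database-level pagination with LIMIT/OFFSET, using an in-memory SQLite database as a stand-in (the table and fetch_page helper are illustrative, not part of the project above):

```python
import sqlite3

# In-memory demo: the database returns only one page of rows per query,
# so the full dataset never has to be loaded into application memory.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)')
conn.executemany('INSERT INTO posts (title) VALUES (?)',
                 [(f'Post {i}',) for i in range(1, 8)])

def fetch_page(page_num: int, page_len: int) -> list:
    # OFFSET skips the rows belonging to earlier pages.
    cur = conn.execute(
        'SELECT id, title FROM posts ORDER BY id LIMIT ? OFFSET ?',
        (page_len, page_num * page_len),
    )
    return cur.fetchall()

print(fetch_page(1, 3))  # [(4, 'Post 4'), (5, 'Post 5'), (6, 'Post 6')]
```

The same pattern works with any SQL backend; ORMs like SQLAlchemy expose it via limit() and offset().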
✅ Conclusion
Using itertools.batched() with FastAPI gives you a very elegant way to implement pagination without messy slicing logic. It’s a great technique to keep your API clean and maintainable.
If you enjoyed this article, feel free to like, share, and follow for more backend & Python tips 🚀