Advanced Database Optimization: Solving N+1 Queries and Pagination
In our previous guide, we covered how database indexing acts as the search engine for your tables. But as your application grows, simply having an index isn't enough. Even a perfectly indexed database can crawl to a halt if your code interacts with it inefficiently.
In 2026, a "senior" developer isn't just someone who can write a query -- it's someone who knows how to keep that query fast when 10,000 users hit it at once. Whether you are following the Database Roadmap or building a Modular Monolith, understanding these bottlenecks is critical.
In this guide, we'll tackle the two silent performance killers: The N+1 Query Problem and Inefficient Pagination.
TL;DR (60 seconds)
- Fix N+1 with SQL joins or ORM eager loading.
- Use cursor (keyset) pagination on an indexed, stable sort column like `created_at` plus `id`.
- Avoid deep `OFFSET` for large datasets; it gets slower as pages grow.
1. The N+1 Query Problem in SQL and ORMs
The N+1 query problem is one of the most common bottlenecks in modern web development. It occurs when your application makes one query to fetch a list of items, and then executes additional queries for every single item in that list to fetch related data.
I once watched a feed endpoint jump from 200ms to 4s right after launch because a single author lookup lived inside a loop. The code looked innocent. The database did not agree.
The Scenario: A Social Media Feed
Imagine you want to display 10 posts and the name of the author for each post.
The "Bad" Way (N+1):
- Query 1: `SELECT * FROM posts LIMIT 10;` (fetches the 10 posts).
- Queries 2-11: For each of those 10 posts, your code runs `SELECT name FROM users WHERE id = post.author_id;`.
If you have 10 posts, you make 1 + 10 = 11 queries. If you display 50 posts, you make 51 queries. This creates massive network latency and hammers your database with unnecessary work.
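To make the cost concrete, here is a small TypeScript sketch (not real Prisma — a hypothetical in-memory store with a query counter) showing how a lookup inside the loop turns 10 posts into 11 round trips:

```typescript
// Hypothetical in-memory store; queryCount models database round trips.
type User = { id: number; name: string };
type Post = { id: number; title: string; authorId: number };

const users: User[] = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Lin" },
];
const posts: Post[] = Array.from({ length: 10 }, (_, i) => ({
  id: i + 1,
  title: `Post ${i + 1}`,
  authorId: (i % 2) + 1,
}));

let queryCount = 0;
const fetchPosts = (limit: number): Post[] => {
  queryCount += 1; // one round trip for the list ("1")
  return posts.slice(0, limit);
};
const fetchUser = (id: number): User | undefined => {
  queryCount += 1; // one round trip per post -- this is the "+N"
  return users.find((u) => u.id === id);
};

// The N+1 pattern: fetch the list, then fetch each author inside the loop.
const feed = fetchPosts(10).map((post) => ({
  title: post.title,
  author: fetchUser(post.authorId)?.name,
}));
// queryCount is now 11: 1 list query + 10 author queries.
```

Swap in 50 posts and the counter hits 51 — the growth is linear in the page size, which is exactly what you see in production logs.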
The Solution: Eager Loading (and a Join)
Instead of fetching related data one by one, fetch everything in a single, efficient operation.
- Using SQL Joins: Combine the tables at the database level so you get all the data in one trip.
```sql
SELECT p.id, p.title, p.created_at, u.name AS author_name
FROM posts p
JOIN users u ON u.id = p.author_id
ORDER BY p.created_at DESC
LIMIT 10;
```
- Using Prisma (Eager Loading): Modern ORMs handle this with an `include` statement:
```typescript
const posts = await prisma.post.findMany({
  take: 10,
  include: { author: true }, // This solves the N+1 problem automatically!
});
```

Eager loading reduces round trips, but nested relationships can still trigger N+1 if you fetch deeper data inside loops or forget to include it up front. If performance suddenly drops, query logging will usually reveal the pattern. See Prisma's relation queries and query logging for practical examples.
Quick detection tip: enable query logging during development and look for repeated, nearly identical queries.
```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient({ log: ["query"] });
```

2. Pagination Strategies: Offset vs. Cursor (Keyset)
Sending 10,000 rows to your React or Next.js frontend is a recipe for a crashed browser. You must paginate, but the method you choose depends on your data size.
Strategy A: Offset Pagination (The Beginner Way)
This is the most common method, using LIMIT and OFFSET.
- SQL Example: `SELECT * FROM products LIMIT 20 OFFSET 40;` (this skips 40 rows and gives you the next 20).
The Bottleneck: The database must still "read" through the first 40 rows before discarding them. When you reach OFFSET 100000, the query becomes incredibly slow because the database has to scan 100,000 rows just to find the start point.
Best For: Small datasets or admin panels where users rarely go past page 5.
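As a rough mental model (a simplification — it assumes the database walks every skipped row, with no index-only shortcuts), the work per page is proportional to `offset + limit`:

```typescript
// Simplified cost model for offset pagination: rows the database must
// touch to produce one page of `limit` results.
const rowsScanned = (offset: number, limit: number): number => offset + limit;

const page3 = rowsScanned(40, 20); // 60 rows touched for 20 results
const deepPage = rowsScanned(100_000, 20); // 100,020 rows touched for 20 results
```

The page size stays constant, but the scan cost grows with how deep the user paginates.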
Strategy B: Cursor (Keyset) Pagination (The Pro Way)
Instead of skipping rows, you tell the database: "Give me 20 rows that come after the last cursor I saw". This is often called keyset pagination.
The Logic:
- Fetch 20 items.
- Remember the ID (the "cursor") of the 20th item.
- The Next Query: `SELECT * FROM products WHERE id > last_id LIMIT 20;`
For real-world feeds, use a stable sort column with a tie-breaker (like created_at plus id):
```sql
SELECT *
FROM products
WHERE created_at > :last_created_at
   OR (created_at = :last_created_at AND id > :last_id)
ORDER BY created_at, id
LIMIT 20;
```

| Feature | Offset Pagination | Cursor Pagination |
|---|---|---|
| Performance | Slows down on deep pages | Constant speed, even on deep pages |
| Stability | Results can shift or duplicate when rows are inserted or deleted | Very stable for real-time data |
| UI Type | Page Numbers (1, 2, 3...) | Infinite Scroll / "Load More" |
Tradeoff: Cursor pagination does not support jumping straight to page 50. If you need random access, offset is still the simplest choice for small datasets.
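To show the mechanics end to end, here is a self-contained TypeScript sketch of keyset pagination over an in-memory array, using the same `(created_at, id)` cursor as the SQL above (the names `Row` and `nextPage` are hypothetical, not a library API):

```typescript
type Row = { id: number; createdAt: number };

const rows: Row[] = [
  { id: 1, createdAt: 100 },
  { id: 2, createdAt: 100 }, // same timestamp: the id tie-breaker matters
  { id: 3, createdAt: 200 },
  { id: 4, createdAt: 300 },
];

function nextPage(
  data: Row[],
  cursor: { createdAt: number; id: number } | null,
  limit: number
): Row[] {
  // ORDER BY created_at, id
  const sorted = [...data].sort(
    (a, b) => a.createdAt - b.createdAt || a.id - b.id
  );
  // WHERE created_at > :last OR (created_at = :last AND id > :last_id)
  const after = cursor
    ? sorted.filter(
        (r) =>
          r.createdAt > cursor.createdAt ||
          (r.createdAt === cursor.createdAt && r.id > cursor.id)
      )
    : sorted;
  return after.slice(0, limit); // LIMIT :limit
}

const page1 = nextPage(rows, null, 2); // rows with ids 1 and 2
const last = page1[page1.length - 1]; // remember the cursor
const page2 = nextPage(rows, { createdAt: last.createdAt, id: last.id }, 2); // ids 3 and 4
```

Note how rows 1 and 2 share a timestamp: without the `id` tie-breaker, page 2 could skip or repeat one of them.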
3. High-Performance Query Tips for 2026
To truly optimize your data layer, follow these rules of thumb:
- Stop Using `SELECT *`: Fetching 50 columns when you only need `username` and `email` wastes memory and bandwidth.
- Check Your "Explain" Plan: Use the `EXPLAIN ANALYZE` command to see if your query is actually using your indexes or doing a "Full Table Scan". The PostgreSQL docs on EXPLAIN are a great starting point.
- Index Your Cursor: If you paginate by `created_at`, add an index on `created_at` (or a composite index on `(created_at, id)`) to keep keyset pagination fast.
- Connection Pooling: In serverless environments, use a pooler or data proxy (for Prisma, see Prisma Accelerate or the Data Proxy).
- Use Constraints: Rely on the database (NOT NULL, UNIQUE, Foreign Keys) to keep data clean; a well-structured database is a fast database.
- Batch Where It Makes Sense: If your API is chatty, group related reads into a single query or endpoint to reduce round trips.
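As a sketch of the batching idea, here is a hypothetical `getUsersByIds` helper that resolves many author ids in one pass — the in-memory analogue of issuing a single `WHERE id IN (...)` query instead of one query per id:

```typescript
type User = { id: number; name: string };

const users: User[] = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Lin" },
];

// One "query" for all ids at once; duplicates collapse into the same lookup.
function getUsersByIds(ids: number[]): Map<number, User> {
  const wanted = new Set(ids);
  return new Map(
    users.filter((u) => wanted.has(u.id)).map((u) => [u.id, u])
  );
}

const authors = getUsersByIds([1, 2, 1]); // three references, one batch
```

The caller then reads authors out of the map, so rendering 50 posts costs one batched lookup instead of 50 single-row queries.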
FAQ
Q: When is offset pagination OK?
A: Small datasets, internal dashboards, or when users rarely go beyond a few pages.
Q: What index do I need for cursor pagination?
A: Index the column you sort on. For feeds, a composite index on (created_at, id) usually keeps it fast and stable.
Q: How do I detect N+1 queries?
A: Enable query logging, then watch for the same query repeating dozens of times with different IDs.
Q: Why do I need a tie-breaker like id?
A: If two rows share the same created_at, the tie-breaker keeps ordering stable and prevents duplicates or skips.
Final Thoughts
Optimization isn't about making code complex; it's about being intentional. Start by fixing N+1 queries; it's the biggest quick win for application speed.
This naturally builds on our Database Roadmap and Indexing Guide.
What's the first app you're going to optimize? Whether it's an infinite-scroll social feed or a massive e-commerce catalog, drop a comment below -- I'd love to hear your strategy. If you want, share a query or schema and I'll suggest indexes.
