Akshay Kurve

Common Database Mistakes That Kill Performance (And How to Avoid Them)

When your app starts slowing down, most developers blame the backend, the frontend, or even the server. But more often than not, the real culprit is your database design and usage.

The tricky part? These mistakes don't usually show up when you're building locally. They appear later — when traffic grows, data scales, and queries start choking.

Industry surveys consistently point the finger at the database: a large share of application performance problems trace back to queries and schema design, yet only a minority of developers regularly analyze their query performance.

In this article, we'll break down the most common database mistakes that quietly destroy performance — in a way that's easy to understand even if you're just getting started.



1. Fetching More Data Than You Need

The Mistake

Using queries like:

```sql
SELECT * FROM users;
```

This might feel convenient, but it's inefficient.

Why It Hurts

| Issue | Impact |
| --- | --- |
| Network overhead | Unnecessary data transfer |
| Memory usage | Higher server memory consumption |
| Response time | Slower API responses |
| Bandwidth costs | Increased cloud expenses |

The Fix

Only select what you need:

```sql
SELECT id, name, email FROM users;
```

Real-World Tip

If you're building APIs, always think:

"What exactly does the client need?" — nothing more.
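The same discipline can be enforced in application code. As a sketch (all names here are hypothetical, not from any particular library), an explicit column allowlist keeps handlers from drifting back to `SELECT *`:

```javascript
// Hypothetical helper: build a SELECT from an explicit column allowlist.
const USER_COLUMNS = ['id', 'name', 'email'];

function buildUserSelect(requested = USER_COLUMNS) {
  // Drop anything not on the allowlist so callers can't widen the query.
  const columns = requested.filter((c) => USER_COLUMNS.includes(c));
  return `SELECT ${columns.join(', ')} FROM users`;
}
```

A call like `buildUserSelect(['id', 'email', 'password_hash'])` silently drops the unknown column and still produces `SELECT id, email FROM users`.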



2. Missing Indexes (or Using Them Wrong)

The Mistake

Not adding indexes on frequently queried columns.

Why It Hurts

Without indexes, the database does a full table scan — checking every row.

Think of it like searching for a name in a phonebook without alphabetical order.

Performance comparison:

| Table Size | Without Index | With Index |
| --- | --- | --- |
| 10,000 rows | ~50ms | ~1ms |
| 1,000,000 rows | ~5,000ms | ~5ms |
| 10,000,000 rows | ~50,000ms | ~10ms |

The Fix

Add indexes on:

  • WHERE conditions
  • JOIN columns
  • ORDER BY fields
  • Foreign keys

```sql
CREATE INDEX idx_users_email ON users(email);

-- Composite index for multiple columns
CREATE INDEX idx_orders_user_date ON orders(user_id, created_at);
```
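One detail worth knowing about composite indexes: column order matters. As a rule (the leftmost-prefix rule for B-tree indexes), the index above can serve queries that filter on `user_id` alone or on both columns together, but not queries that filter on `created_at` alone:

```sql
-- Served by idx_orders_user_date (leftmost column matches)
SELECT * FROM orders WHERE user_id = 10;
SELECT * FROM orders WHERE user_id = 10 AND created_at > '2025-01-01';

-- Not served by idx_orders_user_date (skips the leftmost column)
SELECT * FROM orders WHERE created_at > '2025-01-01';
```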

Important Caveat

Too many indexes can slow down:

  • INSERT operations
  • UPDATE operations
  • DELETE operations

Rule of Thumb

Index read-heavy fields, not everything.



3. N+1 Query Problem

The Mistake

```javascript
// BAD: N+1 Problem
const users = await getUsers();

for (const user of users) {
  const posts = await getPostsByUser(user.id);
}
```

Why It Hurts

| Users | Queries Executed |
| --- | --- |
| 10 | 11 queries |
| 100 | 101 queries |
| 1,000 | 1,001 queries |

This explodes quickly — the N+1 pattern is still one of the most common causes of slow APIs.

The Fix

Option 1: Use JOINs

```sql
SELECT users.*, posts.*
FROM users
LEFT JOIN posts ON users.id = posts.user_id;
```

Option 2: Batch Loading

```javascript
// GOOD: Batch loading
const users = await getUsers();
const userIds = users.map(u => u.id);
const posts = await getPostsByUserIds(userIds);
```
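The batch query itself can be a single statement (for example `WHERE user_id = ANY($1)` in PostgreSQL), after which the rows still need to be matched back to their users in memory. A minimal grouping step, assuming each post row carries a `userId` field:

```javascript
// Group batch-loaded posts by their author. Rows are plain objects
// with a userId field; adjust the key name to match your schema.
function groupPostsByUser(posts) {
  const byUser = new Map();
  for (const post of posts) {
    const list = byUser.get(post.userId) ?? [];
    list.push(post);
    byUser.set(post.userId, list);
  }
  return byUser;
}
```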

Option 3: ORM Eager Loading

```javascript
// Prisma
const users = await prisma.user.findMany({
  include: { posts: true }
});

// Drizzle ORM
const users = await db.query.users.findMany({
  with: { posts: true }
});
```



4. Poor Database Schema Design

The Mistake

| Problem | Issue |
| --- | --- |
| Over-normalization | Complex joins everywhere |
| Over-denormalization | Duplicated data, inconsistency |
| No relationships | Data integrity problems |

Why It Hurts

  • Harder queries to write and maintain
  • Slower reads/writes
  • Maintenance nightmare
  • Scaling difficulties

The Fix

Balance is key:

```text
Normalize for data integrity
          ↕
Denormalize for performance (when needed)
```

Example

Before (5-table join):

```sql
SELECT o.id, u.name, p.title, c.name, s.status
FROM orders o
JOIN users u ON o.user_id = u.id
JOIN products p ON o.product_id = p.id
JOIN categories c ON p.category_id = c.id
JOIN statuses s ON o.status_id = s.id;
```

After (strategic denormalization):

```sql
-- Store computed/cached fields
SELECT id, user_name, product_title, category_name, status
FROM orders_view;
```



5. Ignoring Query Performance Analysis

The Mistake

Running queries without checking how they perform.

Why It Hurts

You don't know:

  • Which query is slow
  • Why it's slow
  • How to fix it

The Fix

Use EXPLAIN/ANALYZE:

```sql
-- PostgreSQL
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 10;

-- MySQL
EXPLAIN SELECT * FROM orders WHERE user_id = 10;
```

Understanding the output:

```text
Seq Scan on orders     → Full table scan (bad)
Index Scan on orders   → Using index (good)
Index Only Scan        → Best case scenario
```

Modern Tools (2026)

| Tool | Best For |
| --- | --- |
| pgMustard | PostgreSQL query analysis |
| Arctype | Visual query builder |
| DataGrip | Multi-database IDE |
| Metis | Automated query optimization |
| Postgres.ai | AI-powered optimization |

Mindset Shift

Don't guess. Measure.



6. Using Database as a Dumping Ground

The Mistake

Storing everything in your primary database:

  • Application logs
  • Large JSON blobs
  • File metadata without cleanup
  • Analytics events

Why It Hurts

| Problem | Impact |
| --- | --- |
| Table bloat | Slower queries |
| Backup size | Longer backup/restore |
| Storage costs | Higher cloud bills |
| Query complexity | Harder to maintain |

The Fix

Use the right tool for the job:

| Data Type | Better Storage |
| --- | --- |
| Logs | Elasticsearch, Loki, CloudWatch |
| Analytics | ClickHouse, TimescaleDB |
| Files | S3, Cloudflare R2 |
| Cache | Redis, Valkey |
| Sessions | Redis, DynamoDB |

Implement data lifecycle:

```sql
-- Archive old data, then delete it, in one transaction so no rows
-- can be lost (or double-counted) between the two steps
BEGIN;

INSERT INTO orders_archive
SELECT * FROM orders WHERE created_at < NOW() - INTERVAL '2 years';

DELETE FROM orders WHERE created_at < NOW() - INTERVAL '2 years';

COMMIT;
```



7. Not Using Pagination

The Mistake

```sql
SELECT * FROM posts;
```

If there are 100,000 rows — you're in trouble.

Why It Hurts

  • Huge memory usage
  • Slow API responses
  • Bad user experience
  • Wasted bandwidth

The Fix

Option 1: Offset Pagination (Simple)

```sql
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 10 OFFSET 0;
```

Warning: offset pagination degrades the deeper you page — `OFFSET 100000` forces the database to scan and discard 100,000 rows before returning anything.

Option 2: Cursor-based Pagination (Recommended)

```sql
-- First page
SELECT * FROM posts
ORDER BY id DESC
LIMIT 10;

-- Next pages (using the last seen id)
SELECT * FROM posts
WHERE id < 12345
ORDER BY id DESC
LIMIT 10;
```
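On the application side, the cursor for the next page is just the last `id` of the current page. A small sketch (the field name and page-size convention are assumptions, not part of any API):

```javascript
// Compute the cursor for the next page from the rows just returned.
// A short page means we reached the end, so return null.
function nextCursor(rows, pageSize) {
  if (rows.length < pageSize) return null;
  return rows[rows.length - 1].id;
}
```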

Option 3: Keyset Pagination with an ORM

```javascript
// Cursor-based pagination with Prisma
const posts = await prisma.post.findMany({
  take: 10,
  skip: 1,                     // skip the cursor row itself
  cursor: { id: lastPostId },
  orderBy: { id: 'desc' }
});
```



8. Running Expensive Queries on Every Request

The Mistake

Recomputing the same heavy query again and again.

```javascript
// Runs on every request
app.get('/dashboard', async (req, res) => {
  const stats = await db.query(`
    SELECT COUNT(*), SUM(amount), AVG(amount)
    FROM orders
    WHERE created_at > NOW() - INTERVAL '30 days'
  `);
  res.json(stats);
});
```

Why It Hurts

  • CPU overload
  • Increased latency
  • Poor scalability

The Fix

Implement caching layers:

```javascript
// With Redis caching (Upstash client shown; other clients work similarly)
import { Redis } from '@upstash/redis';

// Reads UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN from env
const redis = Redis.fromEnv();

app.get('/dashboard', async (req, res) => {
  const cacheKey = 'dashboard:stats';

  // Check cache first
  let stats = await redis.get(cacheKey);

  if (!stats) {
    stats = await db.query(`...`);
    await redis.set(cacheKey, stats, { ex: 300 }); // cache for 5 minutes
  }

  res.json(stats);
});
```
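If Redis isn't in your stack yet, even a tiny in-process TTL cache captures the idea — a sketch only, since it lives in one process and won't be shared across servers or survive restarts:

```javascript
// Minimal in-process TTL cache (sketch). Fine for a single process;
// switch to Redis/Valkey once you run multiple instances.
function createTtlCache() {
  const store = new Map();
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expiresAt) { // lazily evict stale entries
        store.delete(key);
        return undefined;
      }
      return entry.value;
    },
    set(key, value, ttlMs) {
      store.set(key, { value, expiresAt: Date.now() + ttlMs });
    },
  };
}
```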

Caching Tools (2026)

| Tool | Use Case |
| --- | --- |
| Redis/Valkey | General caching |
| Upstash | Serverless Redis |
| Dragonfly | Redis alternative |
| Query result cache | Database-level caching |



9. No Connection Pooling

The Mistake

Opening a new database connection for every request.

```javascript
// BAD: New connection per request
import { Client } from 'pg';

app.get('/users', async (req, res) => {
  const client = new Client(connectionString);
  await client.connect();
  const result = await client.query('SELECT * FROM users');
  await client.end();
  res.json(result.rows);
});
```

Why It Hurts

| Issue | Impact |
| --- | --- |
| Connection overhead | ~20–50ms per connection |
| Database limits | Max connections exhausted |
| Memory usage | Each connection uses memory |
| Stability | Crashes under high load |

The Fix

```javascript
// GOOD: Connection pooling
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                      // Maximum connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

app.get('/users', async (req, res) => {
  const result = await pool.query('SELECT * FROM users');
  res.json(result.rows);
});
```

Connection Pooling Tools (2026)

| Tool | Best For |
| --- | --- |
| PgBouncer | PostgreSQL |
| ProxySQL | MySQL |
| Prisma Accelerate | Serverless |
| Neon | Built-in pooling |
| Supabase | Supavisor pooling |



10. Ignoring Transactions

The Mistake

Running multiple related queries separately.

```javascript
// BAD: No transaction
await db.query('UPDATE accounts SET balance = balance - 100 WHERE id = 1');
// What if the server crashes here?
await db.query('UPDATE accounts SET balance = balance + 100 WHERE id = 2');
```

Why It Hurts

  • Data inconsistency
  • Partial updates
  • Hard to debug issues
  • Financial errors

The Fix

```sql
BEGIN;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

COMMIT;
-- Or ROLLBACK if something fails
```

With Prisma:

```javascript
await prisma.$transaction([
  prisma.account.update({
    where: { id: 1 },
    data: { balance: { decrement: 100 } }
  }),
  prisma.account.update({
    where: { id: 2 },
    data: { balance: { increment: 100 } }
  })
]);
```
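With plain node-postgres, the same pattern is worth wrapping in a small helper — a sketch, assuming a pg-style pool whose clients expose `query` and `release`:

```javascript
// Run `fn` inside BEGIN/COMMIT, rolling back on any error.
// `pool` is any pg-style pool: connect() → client with query/release.
async function withTransaction(pool, fn) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await fn(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err; // let the caller decide how to handle the failure
  } finally {
    client.release(); // always return the connection to the pool
  }
}
```

Usage then looks like `await withTransaction(pool, (tx) => tx.query('UPDATE ...'))` — both updates succeed or neither does.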

Benefit

Ensures ACID properties — either everything succeeds or nothing does.



11. Overusing ORMs Without Understanding SQL

The Mistake

Relying fully on ORM without knowing what queries it generates.

```javascript
// Looks innocent...
const users = await User.findAll({
  include: [Posts, Comments, Likes, Followers]
});
// ...but can generate a four-way JOIN selecting dozens of columns
```

Why It Hurts

  • Hidden N+1 problems
  • Inefficient queries
  • Hard-to-debug performance issues
  • Unexpected scaling problems

The Fix

1. Learn basic SQL

```sql
-- Know what your ORM is generating
SELECT users.*, posts.*, comments.*
FROM users
LEFT JOIN posts ON users.id = posts.user_id
LEFT JOIN comments ON posts.id = comments.post_id;
```

2. Enable query logging

```javascript
// Prisma
const prisma = new PrismaClient({
  log: ['query', 'info', 'warn', 'error'],
});

// Drizzle
const db = drizzle(client, { logger: true });
```

3. Use raw queries when needed

```javascript
// Sometimes raw SQL is better
const result = await prisma.$queryRaw`
  SELECT u.id, u.name, COUNT(p.id) as post_count
  FROM users u
  LEFT JOIN posts p ON u.id = p.user_id
  GROUP BY u.id
`;
```



12. Not Monitoring Database Performance

The Mistake

"No alerts, no metrics, no logs."

Why It Hurts

Problems go unnoticed until users complain.

```text
Day 1:  Query takes 50ms    [OK]
Day 30: Query takes 200ms   [Degrading]
Day 90: Query takes 2000ms  [Critical]
Day 91: Users complaining   [Too Late]
```

The Fix

Track these metrics:

| Metric | Why It Matters |
| --- | --- |
| Slow queries | Find bottlenecks |
| Query execution time | Track trends |
| Connection pool usage | Prevent exhaustion |
| CPU and memory | Resource planning |
| Lock waits | Concurrency issues |
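The cheapest place to start is the database's own slow-query log. In PostgreSQL, for instance, a single setting logs every statement slower than a threshold (the 250ms value here is just an example — pick one that fits your latency budget):

```sql
-- Log any statement that takes longer than 250 ms
ALTER SYSTEM SET log_min_duration_statement = '250ms';
SELECT pg_reload_conf();
```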

Monitoring Tools (2026)

| Tool | Type |
| --- | --- |
| Datadog | Full APM |
| Grafana + Prometheus | Open source |
| pganalyze | PostgreSQL specific |
| PlanetScale Insights | MySQL |
| Neon Dashboard | Serverless Postgres |
| Sentry | Error tracking |



My Thoughts

Database performance issues are rarely caused by one big mistake.
They're usually the result of small inefficiencies piling up over time.

```text
Small inefficiencies → Compound over time → Major performance issues
```

Key Takeaways

If you remember just a few things, make it these:

| Priority | Action |
| --- | --- |
| 1 | Fetch only what you need |
| 2 | Index smartly |
| 3 | Avoid N+1 queries |
| 4 | Use pagination |
| 5 | Measure performance, don't guess |
| 6 | Design your schema thoughtfully |

The earlier you build good habits, the easier it becomes to scale your application without headaches later.



Bonus: Quick Checklist

Use this checklist for your next code review:

```markdown
## Database Performance Checklist

- [ ] Am I using `SELECT *` anywhere?
- [ ] Do my frequent queries have indexes?
- [ ] Am I accidentally doing N+1 queries?
- [ ] Are large responses paginated?
- [ ] Have I checked query performance using EXPLAIN?
- [ ] Am I caching repeated queries?
- [ ] Am I using connection pooling?
- [ ] Are related operations wrapped in transactions?
- [ ] Do I have database monitoring set up?
- [ ] Have I logged ORM-generated queries?
```


Discussion

If you found this helpful, let me know in the comments.

What's the biggest database mistake you've encountered in production?


If you're building backend systems, mastering these fundamentals will give you an edge most developers don't have.

