
Ella


Why PostgreSQL Ignores Your Index (Sometimes), Entry #2

I Thought My Index Would Fix Everything

I added the index.

Ran the query.

And… nothing changed.

Same slow response. Same frustration.

That was the moment I realized something uncomfortable:

PostgreSQL doesn’t care about your index.

It cares about something else entirely.


The Question PostgreSQL Is Actually Answering

When you run a query, PostgreSQL is not asking:

“Do I have an index?”

It’s asking:

“What is the cheapest way to get this data?”

That’s it.


Meet EXPLAIN ANALYZE

If you’ve ever run:

EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 42;

You’ve seen something like:

Index Scan using idx_orders_user_id on orders  (cost=0.29..8.31 rows=5 width=64) (actual time=0.025..0.031 rows=5 loops=1)
  Index Cond: (user_id = 42)

Here’s how to read it:

  • Index Scan using idx_orders_user_id
    → PostgreSQL used your index

  • Index Cond: (user_id = 42)
    → Condition applied inside the index

  • actual time
    → What really happened when the query ran (measured, not estimated)
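When you want more detail, EXPLAIN takes options. BUFFERS is the one worth learning first (it is a standard option; orders is the same hypothetical table as above):

```sql
-- BUFFERS adds page-level I/O stats: how much came from cache vs disk.
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE user_id = 42;
```

The "shared hit" vs "read" numbers in that output tell you whether the query was served from memory or actually touched disk.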

But here’s where things get interesting…


The Counter-Intuitive Truth

Indexes are not always faster.

Let that sink in.

Indexes are only faster when they reduce total work.

And sometimes… they don’t.


How PostgreSQL Really Decides

It follows a simple (but powerful) mental model:

Step 1: How many rows do I need?

  • Few rows → Index Scan
  • Most rows → Sequential Scan

Step 2: What does it cost to get them?

Index scan means:

  • Jump to index
  • Jump to table
  • Repeat (random access)

That “jumping” is expensive.
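PostgreSQL even puts numbers on that jumping. Two planner settings encode how expensive it assumes each kind of page read is (these are real settings; the values in the comments are the defaults):

```sql
-- The planner's relative I/O cost assumptions.
SHOW seq_page_cost;     -- default 1.0: reading pages in order
SHOW random_page_cost;  -- default 4.0: a random "jump" is assumed ~4x pricier
```

On fast SSDs, a common tuning step is lowering random_page_cost (e.g. toward 1.1), which makes the planner more willing to pick index scans.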


A Warehouse Analogy

Imagine a warehouse.

Two ways to find items:

Option A: Sequential Scan

Walk every aisle. Check every box.

Option B: Index Scan

Use a system to jump directly to shelves.

Now here’s the twist:

  • If you need 5 items → jumping is faster
  • If you need 80% of the warehouse → just walk

Jumping becomes slower than walking.

That’s exactly how PostgreSQL thinks.


When PostgreSQL Ignores Your Index

Even if your index exists, PostgreSQL may skip it when:

1. The Table Is Small

Scanning everything is cheaper than using the index.
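You can watch this happen with a toy table (the table and data here are made up for illustration):

```sql
-- Hypothetical tiny table: all 20 rows fit in a single page.
CREATE TABLE tiny (id int PRIMARY KEY, name text);
INSERT INTO tiny SELECT g, 'row ' || g FROM generate_series(1, 20) AS g;
ANALYZE tiny;

EXPLAIN SELECT * FROM tiny WHERE id = 7;
-- Expect: Seq Scan on tiny, Filter: (id = 7), despite the primary-key index.
-- Reading one page beats traversing the index and then reading the page anyway.
```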

2. Your Filter Isn’t Selective

WHERE status = 'active'

If 80% of rows are “active”, the index barely narrows anything down. Jumping index-to-table for most of the rows costs more than reading the table straight through, so the planner skips the index entirely.
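You can measure selectivity yourself before blaming the index (again, orders and status are hypothetical names):

```sql
-- What fraction of the table does each status value cover?
SELECT status,
       round(count(*) * 100.0 / sum(count(*)) OVER (), 1) AS pct
FROM orders
GROUP BY status;
-- If 'active' comes back around 80%, an index on status won't help this filter.
```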


3. Index Access Is More Expensive

Indexes require:

  • Reading index pages
  • Jumping to table pages (random I/O)

PostgreSQL asks:

“Is all this jumping worth it?”

If not → Seq Scan wins


The Cost Model

PostgreSQL calculates cost based on:

  • Number of rows expected
  • Filter selectivity
  • Random vs sequential I/O
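You can see those cost estimates directly, and even force a side-by-side comparison. enable_seqscan is a real planner setting, meant for debugging plans rather than for production use (orders is still our hypothetical table):

```sql
EXPLAIN SELECT * FROM orders WHERE status = 'active';
-- Likely a Seq Scan, with its cost=... estimate printed in the first line.

SET enable_seqscan = off;   -- heavily penalize seq scans for this session
EXPLAIN SELECT * FROM orders WHERE status = 'active';
-- Now the index plan appears; compare its cost estimate with the seq scan's.
RESET enable_seqscan;
```

If the forced index plan shows a higher cost, you are looking at the planner's reasoning in plain numbers: it wasn't ignoring your index, it priced it out.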

It does NOT follow rules like:

  • “Index exists → use it”

Instead:

“Which plan does the least total work?”


The Insight Most Developers Miss

Here’s the shift that changed everything for me:

Stop thinking in terms of speed. Start thinking in terms of work.

PostgreSQL is not optimizing for:

  • elegance
  • structure
  • your expectations

It’s optimizing for:

minimum effort to get the result


Why This Matters (Especially If You're Scaling)

At small scale:

  • Everything works
  • Mistakes are hidden

At large scale:

  • Wrong assumptions = slow queries
  • Slow queries = real problems

Understanding this early gives you an edge most devs don’t have.


Part of a Bigger Series

This is part of my Advanced Backend Learning Series where I break down concepts I’m actively learning—without the usual fluff.




If this clicked for you, the next posts will go deeper into query optimization.

You didn’t do anything wrong.

Your index didn’t “fail.”

You just learned the real rule:

Indexes are not faster. They are sometimes cheaper.

And PostgreSQL will always choose cheap over clever.


If this post taught you something:

  • Drop a comment with something that confused you about databases
  • Or follow the series—next post gets even more practical

Because the goal isn’t to memorize PostgreSQL.

It’s to think like it.
