
Drilon Hametaj


3 Database Indexes Turned a 30-Second Query Into 0.4 Seconds

A client called me: "The order list takes 30 seconds to load." Their custom ERP was crawling. The hardware was fine. The problem was the code. Specifically, one query.

Here's what happened and how I fixed it.

The Setup

Stack: Node.js backend, PostgreSQL database, React frontend. The application was a custom ERP for a manufacturing company — orders, clients, products, invoicing.

The orders table had 2.1 million rows accumulated over 8 years. The query that powered the main order list page was doing a full table scan. Every. Single. Time.

The Query (Before)

SELECT o.id, o.order_date, o.status, o.total,
       c.company_name, c.vat_number
FROM orders o
JOIN clients c ON o.client_id = c.id
WHERE o.status IN ('pending', 'processing', 'shipped')
  AND o.order_date >= '2025-01-01'
ORDER BY o.order_date DESC
LIMIT 50 OFFSET 0;

Looks innocent, right? Let's see what EXPLAIN ANALYZE says:

Seq Scan on orders o  (cost=0.00..89543.21 rows=2100000)
  Filter: (status = ANY(...) AND order_date >= '2025-01-01')
  Rows Removed by Filter: 1987432
Planning Time: 0.2ms
Execution Time: 29847ms

Sequential scan on 2.1 million rows. No indexes on status, order_date, or client_id (the foreign key).
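Missing foreign-key indexes like this one are easy to detect from the system catalogs. A sketch (it checks only the leading index column, which is sufficient for single-column foreign keys like client_id):

```sql
-- FK constraints with no index whose leading column matches
-- the constraint's first referencing column
SELECT c.conrelid::regclass AS fk_table,
       c.conname            AS fk_constraint
FROM pg_constraint c
WHERE c.contype = 'f'
  AND NOT EXISTS (
    SELECT 1
    FROM pg_index i
    WHERE i.indrelid = c.conrelid
      AND split_part(i.indkey::text, ' ', 1) = c.conkey[1]::text
  );
```

Running something like this against a legacy schema usually surfaces more than one missing FK index.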

The Fix: 3 Indexes

-- Index 1: composite index for the WHERE clause
CREATE INDEX idx_orders_status_date
ON orders (status, order_date DESC);

-- Index 2: foreign key index (should ALWAYS exist)
CREATE INDEX idx_orders_client_id
ON orders (client_id);

-- Index 3: covering index for the most common query pattern
CREATE INDEX idx_orders_list_covering
ON orders (status, order_date DESC)
INCLUDE (id, total, client_id);

Index 3 is the secret weapon — a covering index. PostgreSQL can satisfy the entire query from the index alone, without touching the table (an "index-only scan"). The INCLUDE clause adds columns that aren't part of the search key but ARE part of the SELECT. One caveat: since index 3 has the same key columns as index 1 plus the INCLUDE payload, index 1 becomes redundant once index 3 exists — you can drop it to save write overhead.
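One operational detail: index-only scans depend on the visibility map, so on a table with recent writes the planner may still fetch heap pages until a VACUUM runs. It's worth verifying after creating the index (query adapted from the article, with the JOIN dropped to isolate the orders side):

```sql
-- Refresh the visibility map and planner statistics
VACUUM ANALYZE orders;

-- "Heap Fetches: 0" in the output confirms a true index-only scan
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, o.order_date, o.status, o.total, o.client_id
FROM orders o
WHERE o.status IN ('pending', 'processing', 'shipped')
  AND o.order_date >= '2025-01-01'
ORDER BY o.order_date DESC
LIMIT 50;
```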

After

Index Only Scan using idx_orders_list_covering on orders o
  Index Cond: (status = ANY(...) AND order_date >= '2025-01-01')
Planning Time: 0.3ms
Execution Time: 0.4ms

From 29,847ms to 0.4ms. Same data. Same server. Same query. Just proper indexing.

The Broader Pattern

In my experience building management software for SMBs, slow queries are the #1 performance complaint. And 80% of the time, the fix is the same:

  1. Run EXPLAIN ANALYZE on every slow query
  2. Add indexes for your WHERE clauses — if you filter by it, index it
  3. Always index foreign keys — PostgreSQL does NOT auto-index FKs (unlike MySQL)
  4. Consider covering indexes for your most critical queries
  5. Monitor with pg_stat_user_indexes to see which indexes are actually used

Common Mistakes I See

Too many indexes: every index slows down INSERT/UPDATE. Index what you query, not everything.

Indexing low-cardinality columns alone: a boolean is_active column that is true for 99% of rows doesn't benefit from a standalone index. But it CAN benefit as part of a composite index.
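For heavily skewed flags there's a third option worth knowing: a partial index, which stores only the rare rows and so stays tiny. A sketch (the is_active column here is hypothetical, not part of the article's schema):

```sql
-- Index only the ~1% of inactive rows; queries filtering on
-- is_active = false can use it, and it adds near-zero write cost
CREATE INDEX idx_orders_inactive
ON orders (order_date)
WHERE is_active = false;
```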

Ignoring index order in composites: (status, order_date) is NOT the same as (order_date, status). Put the equality conditions first, range conditions last.
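Applied to this article's query: status is an equality/IN condition and order_date is a range, so status must lead. A sketch of the contrast:

```sql
-- Matches the WHERE clause: seek to each status value, then
-- scan only the matching date range within it
CREATE INDEX idx_orders_status_first ON orders (status, order_date DESC);

-- Weak for this query: the range column leads, so the scan walks
-- the entire date range and re-filters every row by status
CREATE INDEX idx_orders_date_first ON orders (order_date DESC, status);
```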

Never running VACUUM/ANALYZE: PostgreSQL needs updated statistics to choose the right query plan. If your stats are stale, even good indexes get ignored.
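Two of these mistakes — carrying unused indexes and running on stale statistics — can be checked directly from PostgreSQL's statistics views. A sketch (the usage counters accumulate since the last stats reset):

```sql
-- Indexes never used since the last stats reset: drop candidates
SELECT relname AS table_name, indexrelname AS index_name, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY relname;

-- When each table's statistics were last refreshed
SELECT relname, last_analyze, last_autoanalyze, last_autovacuum
FROM pg_stat_user_tables
ORDER BY relname;
```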

The Takeaway

Before blaming the hardware, the framework, or the database engine — check your indexes. The most impactful performance fix I've ever shipped took 3 lines of SQL and 10 minutes of analysis.


I build custom management software for Italian SMBs. Performance, security, and clean architecture are non-negotiable. Let's connect.

