Your Slow Query Has a Story. EXPLAIN Reads It Out Loud.
Last month I spent two hours staring at a WooCommerce site that took 11 seconds to load its shop page. The hosting company said "upgrade your plan." The site owner said "it was fast six months ago." Query Monitor showed 47 database queries, and one of them was eating 9 of those 11 seconds.
The query itself looked innocent - a JOIN between wp_posts and wp_postmeta filtering by product price. Nothing fancy. But when I put EXPLAIN in front of it, the story was right there: MySQL was reading 1.2 million rows to return 24 products.
That's what EXPLAIN does. It doesn't fix anything. It shows you the execution plan - how MySQL decided to run your query, which indexes it picked (or didn't), how many rows it expects to touch, and where the bottleneck lives. Once you can read that output, you stop guessing and start fixing.
Most WordPress developers never touch EXPLAIN. They install a caching plugin and hope for the best. That works until it doesn't - and when it doesn't, you're back to wondering why your site is still slow even with full-page caching enabled.
This guide is what I wish someone had given me years ago. Not a MySQL textbook - a practical reference for the queries WordPress actually runs.
How to Run EXPLAIN (Three Ways)
Before we get into reading the output, you need to actually run it. There are three practical ways in WordPress.
1. WP-CLI (fastest for production)
If you have SSH access, this is the cleanest approach:
wp db query "EXPLAIN SELECT * FROM wp_options WHERE autoload = 'yes';"
That's it. WP-CLI reads your wp-config.php credentials, connects to the database, and runs the query. No phpMyAdmin, no extra plugins.
For a more complex query, wrap the whole statement in double quotes; the single quotes inside the SQL then don't need escaping:
wp db query "EXPLAIN SELECT p.ID, p.post_title FROM wp_posts p JOIN wp_postmeta pm ON p.ID = pm.post_id WHERE pm.meta_key = '_price' AND p.post_type = 'product' AND p.post_status = 'publish';"
2. Query Monitor (best for discovering which queries to EXPLAIN)
Query Monitor doesn't run EXPLAIN for you, but it shows you every query on a page load, sorted by execution time. That's where you find the slow ones.
Open the Queries panel, sort by time, and grab the SQL from the slowest query. Then run it through WP-CLI or phpMyAdmin with EXPLAIN prepended.
The useful thing about Query Monitor is context - it tells you which plugin or theme function generated each query. So you're not just finding slow queries, you're finding who's responsible.
3. phpMyAdmin or MySQL CLI
If you're on shared hosting without SSH, phpMyAdmin works fine. Go to the SQL tab, paste your query with EXPLAIN in front, and hit Go. The output is the same table.
The EXPLAIN Output - Column by Column
When you run EXPLAIN, MySQL returns a table with up to 12 columns. You don't need all of them. Here are the ones that actually matter for WordPress debugging, in order of importance.
type - The Most Important Column
This tells you how MySQL accesses each table in the query. Think of it as a grade - from A+ to F.
| Type | What It Means | WordPress Context | Verdict |
|---|---|---|---|
| `system` | Table has one row | `wp_options` with a single match | Perfect |
| `const` | Single row, primary key lookup | `WHERE ID = 42` | Perfect |
| `eq_ref` | One row per join via unique key | Joining `wp_posts` to `wp_users` by ID | Excellent |
| `ref` | Multiple rows via non-unique index | `WHERE post_type = 'post'` using the `type_status_date` index | Good |
| `fulltext` | Full-text index search | Rarely seen in WordPress core | Good |
| `range` | Index range scan | `WHERE post_date > '2025-01-01'` | Acceptable |
| `index` | Full index scan (reads every index entry) | Scanning all of `type_status_date` | Slow |
| `ALL` | Full table scan | Reading every row in `wp_postmeta` | Problem |
The ones you need to worry about are index and ALL.
ALL means MySQL is reading every single row in the table. On a wp_postmeta table with 500,000 rows, that's catastrophic. On a wp_options table with 300 rows, it's fine. Context matters - always check the rows column alongside type.
index is better than ALL but still not great. It means MySQL is reading every entry in an index rather than the table itself. Faster because indexes are smaller than tables, but still a full scan.
Everything from ref upward is generally fine for WordPress workloads.
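You can see several of these grades side by side against the stock schema. A quick sketch - the exact `type` values depend on table size and index statistics, and on very small tables MySQL may choose `ALL` regardless:

```sql
-- const: primary key lookup, at most one row
EXPLAIN SELECT * FROM wp_posts WHERE ID = 42;

-- ref: non-unique index (type_status_date leads with post_type)
EXPLAIN SELECT ID FROM wp_posts WHERE post_type = 'post';

-- range: index range scan on the primary key
EXPLAIN SELECT ID FROM wp_posts WHERE ID BETWEEN 100 AND 200;

-- ALL: no index covers post_content, so MySQL reads every row
EXPLAIN SELECT ID FROM wp_posts WHERE post_content LIKE '%shortcode%';
```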
possible_keys vs key - What MySQL Could Use vs What It Actually Uses
possible_keys lists every index that could theoretically help with this query. key shows which one MySQL actually chose.
The interesting case is when possible_keys shows options but key is NULL. That means MySQL looked at the available indexes, did the math, and decided none of them were worth using. This usually happens when:
- The index isn't selective enough (e.g., an index on `post_status` when 95% of posts are 'publish')
- The query needs columns not covered by any index, so MySQL would have to look up each row anyway
- The table is small enough that a full scan is actually faster than index lookups
The other case - possible_keys is NULL - means there's no index at all that could help. That's your signal to create one.
rows - How Many Rows MySQL Plans to Examine
This is an estimate, not an exact count. MySQL uses index statistics to guess. But it's usually in the right ballpark, and the ratio is what matters.
If your query returns 20 rows but rows says 450,000, MySQL is doing 22,500x more work than necessary. That's the kind of ratio that turns a 10ms query into a 10-second query.
In WordPress, the usual suspects for high row counts are:
- wp_postmeta: Almost always the biggest table, often millions of rows
- wp_posts: Can get large on content-heavy sites (50k+ posts)
- wp_options: Usually small, but autoloaded rows can cause issues at scale
- wp_term_relationships: Gets big on sites with complex taxonomies
Extra - The Column That Tells You What's Actually Happening
This is where MySQL explains the work it's doing beyond simple row reading. Some values are fine, some are red flags.
Good values:
- `Using index` - MySQL can answer the query entirely from the index without touching the table data. This is called a "covering index" and it's the best-case scenario.
- `Using where` - MySQL is filtering rows with a WHERE clause. Normal and expected. Only a problem when combined with `type: ALL`.
Warning values:
- `Using filesort` - MySQL is sorting results without an index. On small result sets (under a few hundred rows), it's fine. On thousands of rows, it's a performance killer. Common with ORDER BY on unindexed columns.
- `Using temporary` - MySQL created a temporary table to process the query. Seen with GROUP BY, DISTINCT, and some UNION operations. Expensive on large datasets.
- `Using join buffer` - MySQL couldn't use an index for the join and is buffering rows in memory instead. Almost always means a missing index on the join column.
Red flag combination:
When you see Using where; Using filesort; Using temporary together on a table with type: ALL - that's your query doing a full table scan, creating a temp table, and sorting without an index. Three performance problems in one line.
Real WordPress Queries Under the Microscope
Theory is useful, but let's look at actual queries that slow down real WordPress sites. These are the ones I see most often.
1. The WooCommerce Product Price Query
This query shows up on every WooCommerce shop page, category page, and anywhere products are listed with price sorting:
EXPLAIN SELECT p.ID, p.post_title, pm.meta_value AS price
FROM wp_posts p
JOIN wp_postmeta pm ON p.ID = pm.post_id
WHERE p.post_type = 'product'
AND p.post_status = 'publish'
AND pm.meta_key = '_price'
ORDER BY pm.meta_value + 0 ASC
LIMIT 20;
Typical EXPLAIN output on a store with 5,000 products:
+----+-------+------+--------------------------+------------------+---------+------+-----------------------------+
| id | table | type | possible_keys            | key              | key_len | rows | Extra                       |
+----+-------+------+--------------------------+------------------+---------+------+-----------------------------+
|  1 | p     | ref  | PRIMARY,type_status_date | type_status_date | 164     | 4800 | Using where                 |
|  1 | pm    | ref  | post_id,meta_key         | post_id          | 8       | 11   | Using where; Using filesort |
+----+-------+------+--------------------------+------------------+---------+------+-----------------------------+
What's happening here:
- MySQL finds ~4,800 published products using the `type_status_date` index. That part is fine.
- For each of those 4,800 products, it looks up postmeta rows using `post_id`. It estimates ~11 postmeta rows per product, so it examines roughly 4,800 x 11 = 52,800 rows.
- The `Using filesort` on the postmeta table happens because `ORDER BY pm.meta_value + 0` can't use any index - doing arithmetic on the value forces MySQL to sort all results in memory.
The real problem is the + 0 trick. WooCommerce stores prices as strings in meta_value, so it forces a numeric cast with + 0. That kills any chance of index-based sorting.
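If you control the query, one escape hatch is WooCommerce's flat lookup table, `wp_wc_product_meta_lookup`, shipped since WooCommerce 3.6. A hedged sketch - verify the table exists on your install first, since it's populated by WooCommerce itself:

```sql
-- Same product listing via the lookup table (WooCommerce 3.6+).
-- min_price is a real DECIMAL column, so no '+ 0' cast is needed
-- and MySQL can sort it without arithmetic on a LONGTEXT value.
SELECT p.ID, p.post_title, l.min_price
FROM wp_posts p
JOIN wp_wc_product_meta_lookup l ON p.ID = l.product_id
WHERE p.post_type = 'product'
  AND p.post_status = 'publish'
ORDER BY l.min_price ASC
LIMIT 20;
```

Whether the filesort disappears entirely depends on which index MySQL picks, but you've at least removed the cast that made index-based sorting impossible.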
2. The wp_postmeta Multi-Meta Query
This is the classic "filter by multiple custom fields" query. ACF-heavy sites and WooCommerce product filtering generate these constantly:
EXPLAIN SELECT p.ID
FROM wp_posts p
JOIN wp_postmeta pm1 ON p.ID = pm1.post_id
JOIN wp_postmeta pm2 ON p.ID = pm2.post_id
WHERE p.post_type = 'product'
AND p.post_status = 'publish'
AND pm1.meta_key = '_stock_status'
AND pm1.meta_value = 'instock'
AND pm2.meta_key = '_price'
AND CAST(pm2.meta_value AS DECIMAL(10,2)) BETWEEN 10 AND 50;
Every additional meta filter adds another JOIN to wp_postmeta. I've seen plugins generate queries with 8 JOINs to the same table. Each JOIN multiplies the work.
The problem is structural. WordPress's Entity-Attribute-Value (EAV) pattern in wp_postmeta means that what should be a simple WHERE price BETWEEN 10 AND 50 AND stock = 'instock' becomes two separate JOINs because each attribute is a separate row.
The EXPLAIN for this typically shows type: ref on both postmeta joins using the meta_key index, but the rows estimate is still high because the meta_key index alone isn't selective enough when you have 40+ different meta keys all sharing the same index.
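One rewrite worth testing is replacing the stacked JOINs with EXISTS subqueries. This is a sketch, not a guaranteed win - the optimizer sometimes produces the same plan - but each subquery can stop at the first matching row per post instead of multiplying join combinations:

```sql
-- Same filter as above, expressed with EXISTS instead of JOINs.
-- Results are identical; benchmark both shapes on your data.
SELECT p.ID
FROM wp_posts p
WHERE p.post_type = 'product'
  AND p.post_status = 'publish'
  AND EXISTS (
    SELECT 1 FROM wp_postmeta pm
    WHERE pm.post_id = p.ID
      AND pm.meta_key = '_stock_status'
      AND pm.meta_value = 'instock'
  )
  AND EXISTS (
    SELECT 1 FROM wp_postmeta pm
    WHERE pm.post_id = p.ID
      AND pm.meta_key = '_price'
      AND CAST(pm.meta_value AS DECIMAL(10,2)) BETWEEN 10 AND 50
  );
```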
3. The wp_options Autoload Query
WordPress runs this on every single page load:
EXPLAIN SELECT option_name, option_value
FROM wp_options
WHERE autoload = 'yes';
Before WordPress 6.1, the autoload column had no index. On older installations (or those that never got the schema update), this is a full table scan:
+----+------------+------+---------------+------+---------+------+-------------+
| id | table      | type | possible_keys | key  | key_len | rows | Extra       |
+----+------------+------+---------------+------+---------+------+-------------+
|  1 | wp_options | ALL  | NULL          | NULL | NULL    | 932  | Using where |
+----+------------+------+---------------+------+---------+------+-------------+
type: ALL, key: NULL. Full table scan, no index. On a table with 300-500 rows, the scan itself isn't that slow. But the real cost is what it loads - if you have 3MB of autoloaded option data, that's 3MB of memory consumed on every request before WordPress even starts rendering your page.
This ties directly into the autoload bloat trap. Even after WordPress 6.1 adds the index, the query is fast but the data payload might still be enormous.
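The index question and the payload question are separate, and you can measure the payload directly. A diagnostic sketch - note that newer WordPress versions may use additional autoload values such as 'on' or 'auto', so adjust the WHERE clause if the counts look suspiciously low:

```sql
-- Total autoloaded payload per request
SELECT COUNT(*) AS autoloaded_rows,
       ROUND(SUM(LENGTH(option_value)) / 1024 / 1024, 2) AS autoload_mb
FROM wp_options
WHERE autoload = 'yes';

-- The ten biggest offenders - often orphaned plugin settings
SELECT option_name, LENGTH(option_value) AS bytes
FROM wp_options
WHERE autoload = 'yes'
ORDER BY bytes DESC
LIMIT 10;
```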
4. The WooCommerce Order Lookup
Before WooCommerce switched to High Performance Order Storage (HPOS/custom tables), every order was a post in wp_posts with order details in wp_postmeta. Stores with 100k+ orders had queries like:
EXPLAIN SELECT p.ID, pm1.meta_value AS billing_email
FROM wp_posts p
JOIN wp_postmeta pm1 ON p.ID = pm1.post_id
WHERE p.post_type = 'shop_order'
AND p.post_status IN ('wc-processing', 'wc-on-hold')
AND pm1.meta_key = '_billing_email'
AND pm1.meta_value = 'customer@example.com';
The EXPLAIN for this often shows the meta_value lookup as a problem because - and this is the critical thing - meta_value is not indexed by default in WordPress.
The wp_postmeta Indexing Problem
This deserves its own section because it's the single most common cause of slow queries I see on WordPress sites. And it's a design choice baked into WordPress core.
Here's what the default wp_postmeta table looks like:
CREATE TABLE wp_postmeta (
meta_id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
post_id bigint(20) unsigned NOT NULL DEFAULT '0',
meta_key varchar(255) DEFAULT NULL,
meta_value longtext,
PRIMARY KEY (meta_id),
KEY post_id (post_id),
KEY meta_key (meta_key(191))
);
Three things to notice:
1. The `meta_key` index is truncated to 191 characters. The column is VARCHAR(255) but the index only covers the first 191 characters. This is a legacy constraint from older MySQL versions (5.6 and earlier). MySQL 5.7+ supports longer index prefixes, but WordPress keeps 191 for backward compatibility with older servers. For WordPress meta keys this rarely matters (most are short), but it's worth knowing.
2. There is no index on `meta_value`. This is intentional - `meta_value` is a LONGTEXT column, and indexing the full thing would be impractical. But it means any query that filters by meta_value (`WHERE meta_value = 'something'`) does a full scan of matching meta_key rows.
3. There is no composite index on `(meta_key, post_id)` or `(meta_key, meta_value)`. The `post_id` and `meta_key` indexes are separate. When a query filters on both (which is almost every meta query), MySQL can only use one of them and has to filter the rest manually.
This is why a query like WHERE meta_key = '_price' AND meta_value > 100 is slow even on a table with "proper" indexes. MySQL uses the meta_key index to find all _price entries (which could be 50,000 rows on a big WooCommerce store), then scans every single one checking the meta_value condition.
The Fix: A Composite Index
CREATE INDEX idx_postmeta_lookup
ON wp_postmeta (meta_key(191), post_id, meta_value(191));
This composite index lets MySQL:
- Jump directly to `_price` entries via `meta_key`
- Further narrow by `post_id` (supplied by the join)
- Filter or range-scan on `meta_value` (first 191 characters)
After adding this index, the same price filter query typically goes from examining 50,000+ rows to a few hundred.
But there's a catch. This index adds write overhead. Every time WordPress inserts, updates, or deletes a postmeta row, MySQL has to update this index too. On write-heavy sites (WooCommerce stores processing many orders, sites with constant meta updates), you're trading read speed for write speed.
For most WordPress sites, the tradeoff is worth it. Reads outnumber writes 100:1 on a typical site. But if you're running a high-volume WooCommerce store, benchmark both before and after.
Why WordPress Core Doesn't Add This Index
This has been discussed on WordPress Trac for years - there are multiple tickets about postmeta performance going back to 2013. The core team's position is reasonable: they can't predict which composite index will help the most for every site. A WooCommerce store needs different optimization than a news site with ACF fields.
The Index WP MySQL For Speed plugin takes a pragmatic approach - it analyzes your actual query patterns and adds indexes that match. Worth looking at if you don't want to manage indexes manually.
How to Spot Problems: A Decision Flowchart
Here's the mental model I use when reading EXPLAIN output. In order of severity:
Step 1: Check type column
- Is it `ALL`? That's a full table scan. Check the `rows` column - if it's over a few thousand, this is likely your bottleneck.
- Is it `index`? Full index scan. Better than `ALL` but still reading everything.
Step 2: Check rows column
- Is the ratio of rows examined to rows returned over 100:1? The query is doing too much work.
- Multiply `rows` across all tables in the query. If table A shows 5,000 rows and table B shows 20 rows, MySQL might examine up to 100,000 row combinations.
Step 3: Check key column
- Is it NULL? MySQL isn't using any index. Either create one or check why it's ignoring existing indexes.
- Does it match what you expected? Sometimes MySQL picks a suboptimal index.
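When you suspect MySQL chose the wrong index, you can test the alternative with an index hint. Useful as a diagnostic; hard-coding hints into plugin or theme code is rarely a good idea, since the right choice changes as data grows:

```sql
-- Compare the plan MySQL chose on its own...
EXPLAIN SELECT ID FROM wp_posts
WHERE post_type = 'post' AND post_status = 'publish';

-- ...with the plan using the index you expected
EXPLAIN SELECT ID FROM wp_posts FORCE INDEX (type_status_date)
WHERE post_type = 'post' AND post_status = 'publish';
```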
Step 4: Check Extra column
- `Using filesort` on a large result set? Consider adding an index that covers the ORDER BY column.
- `Using temporary`? The GROUP BY or DISTINCT is creating a temp table. Sometimes unavoidable, but check if a composite index could help.
- `Using join buffer`? The join column is missing an index.
Writing CREATE INDEX for WordPress Safely
Adding an index to a production WordPress database isn't something to take lightly. I've seen sites go down because someone added an index to a 2-million-row wp_postmeta table during peak traffic.
Rule 1: Always Back Up First
wp db export backup-before-index.sql
No exceptions. If the index creation locks the table for 30 seconds and your site goes down, you need a way back.
Rule 2: Check Table Size First
wp db query "SELECT table_name, table_rows, ROUND(data_length/1024/1024, 2) AS data_mb FROM information_schema.TABLES WHERE table_schema = DATABASE() AND table_name LIKE 'wp_%' ORDER BY table_rows DESC;"
Tables under 100,000 rows? Index creation is near-instant. Over 1 million? It could take minutes and lock the table.
Rule 3: Do It During Low Traffic
On MySQL 5.7+ and MariaDB 10.x+, CREATE INDEX uses an online DDL algorithm that doesn't fully lock the table. But it still causes increased I/O and can slow down concurrent queries. Schedule it for your quietest hours.
Rule 4: Verify After Creating
Run the same EXPLAIN that showed the problem before. The key column should now show your new index, type should improve, and rows should drop dramatically.
wp db query "EXPLAIN SELECT p.ID FROM wp_posts p JOIN wp_postmeta pm ON p.ID = pm.post_id WHERE pm.meta_key = '_price' AND pm.meta_value > 100 AND p.post_type = 'product';"
If key still doesn't show your index, MySQL decided it's not helpful for this query. That can happen if the table statistics are stale:
ANALYZE TABLE wp_postmeta;
This updates the statistics MySQL uses to make index decisions. Run it after creating indexes.
Rule 5: Don't Over-Index
Every index slows down writes. WordPress does a lot of writes - saving posts, updating options, processing orders, logging things. I've seen sites where someone added 8 indexes to wp_postmeta and INSERT performance tanked.
A good rule: 2-3 targeted indexes on wp_postmeta is fine. More than that, and you should question whether you need a different data architecture (like custom tables).
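On MySQL 5.7+ with the bundled sys schema, you can also check whether the indexes you added are actually earning their keep - a hedged sketch, since the view only counts index usage since the last server restart:

```sql
-- Indexes with zero reads since the server started
SELECT object_name, index_name
FROM sys.schema_unused_indexes
WHERE object_schema = DATABASE();

-- Dropping one you added earlier is a single statement:
-- DROP INDEX idx_postmeta_lookup ON wp_postmeta;
```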
Common WordPress Indexes Worth Adding
Based on hundreds of slow query analyses I've done, these are the indexes that help most often:
For WooCommerce Stores
-- Product meta lookups (price, stock, SKU)
CREATE INDEX idx_postmeta_key_value
ON wp_postmeta (meta_key(191), meta_value(191));
-- Order queries by status and date
CREATE INDEX idx_posts_type_status_date
ON wp_posts (post_type, post_status, post_date);
Note: WordPress already has an index called type_status_date on wp_posts. But on some older installations, the column order might not match what your queries need. Check with SHOW INDEX FROM wp_posts; first.
For Sites With Autoload Issues (pre-6.1)
CREATE INDEX idx_options_autoload
ON wp_options (autoload);
WordPress 6.1+ adds this automatically. But I still see sites running WordPress 6.3 or older without it. Check first:
wp db query "SHOW INDEX FROM wp_options WHERE Key_name = 'autoload';"
If it returns nothing, add it.
Using EXPLAIN with Query Monitor - The Full Workflow
Here's the workflow I actually use when debugging a slow WordPress site. Not the theoretical version - the one that works.
Step 1: Install Query Monitor
wp plugin install query-monitor --activate
Step 2: Load the slow page and open Query Monitor
Click the Query Monitor info in the admin bar. Go to the Queries panel. Sort by time (descending). The slow queries float to the top.
Step 3: Copy the slow query
Query Monitor shows the full SQL with actual values filled in (not placeholders). Copy it.
Step 4: Run EXPLAIN via WP-CLI
wp db query "EXPLAIN <paste the query here>;"
Step 5: Read the output using the framework above
Check type, rows, key, Extra. Identify the bottleneck.
Step 6: Decide on a fix
Your options, in order of preference:
- Add an index - if the query is fine but missing index support
- Rewrite the query - if it's doing unnecessary work (e.g., `SELECT *` when you only need ID)
- Cache the result - if the query is expensive but the result doesn't change often
- Use a different data structure - if the problem is structural (too many postmeta JOINs)
Step 7: Verify the fix
Run EXPLAIN again. Compare the rows count before and after. Automated tools can do this comparison for you, but the manual process works fine if you want to understand what's happening under the hood.
What EXPLAIN Won't Tell You
EXPLAIN is powerful but it has blind spots:
- It doesn't show actual execution time. It shows the plan, not the result. A query with a good EXPLAIN can still be slow if it returns huge amounts of data.
- Row estimates can be wrong. Especially on tables with uneven data distribution. The estimate might say 500 rows when the actual count is 50,000.
- It doesn't account for caching or server load. A query that's fine on an idle server with a warm buffer pool might crawl under real traffic.
For actual execution time, you need the slow query log or a monitoring tool that tracks query duration over time.
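If you're on MySQL 8.0.18 or newer, there's a middle ground: EXPLAIN ANALYZE actually runs the query and annotates the plan with measured timings and real row counts, so the estimates-vs-reality gap disappears. MariaDB offers a similar ANALYZE statement:

```sql
-- MySQL 8.0.18+: the plan plus actual timings and row counts
-- (note: this executes the query, so don't use it on slow writes)
EXPLAIN ANALYZE
SELECT option_name FROM wp_options WHERE autoload = 'yes';

-- MariaDB equivalent (also executes the query):
-- ANALYZE SELECT option_name FROM wp_options WHERE autoload = 'yes';
```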
The Bigger Picture
EXPLAIN is a diagnostic tool - it tells you what's wrong, not what to do about it. The fix might be an index, a query rewrite, an architectural change (moving from postmeta to custom tables), or sometimes just accepting that a particular query is expensive and caching the result.
The WordPress database schema was designed in 2003 for blogs with a few hundred posts. It works remarkably well for what it is, but it wasn't built for WooCommerce stores with 100,000 orders or ACF-powered sites with 50 custom fields per post. Understanding EXPLAIN helps you work within those constraints - or make informed decisions about when to break out of them.
If you want to skip the manual process and get automated EXPLAIN analysis for every slow query on your site, WP Multitool's scanner does exactly that. But honestly, learning to read EXPLAIN yourself is worth the time. It's a skill that transfers to any MySQL database, not just WordPress.




