MySQL's slow query log is one of the most practical tools you have for tracking down performance problems. It records queries that take longer than a threshold you define, so instead of guessing where your database is slow, you get a list of actual offenders.
This guide walks through enabling the log, reading it, and acting on what you find. Six concrete steps, no hand-waving.
## What the slow query log actually does
The slow query log captures queries that exceed a configured execution time threshold. It writes them to a file (or a table) along with execution time, rows examined, rows sent, and timestamp.
It does not run continuously in the background eating CPU. The overhead is minimal — mostly the cost of writing to disk when a slow query is detected. Most production databases run with it enabled permanently.
Two settings control the most important behavior:
| Setting | What it controls |
|---|---|
| `slow_query_log` | Enables or disables the log |
| `long_query_time` | Minimum execution time (in seconds) to log a query |
Setting `long_query_time` to 0 logs everything, which is useful for short-term profiling but not something you'd leave on in production.
## Step 1: Enable the slow query log
You can enable it without restarting MySQL by running these commands:
```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
```
The `slow_query_log` setting takes effect immediately. `long_query_time` also has session scope, so existing connections keep their old value until they reconnect.
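To confirm the new values are active, you can query them back (a quick check, matching the settings used above):

```sql
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';
SHOW GLOBAL VARIABLES LIKE 'long_query_time';
```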
To make it persistent across restarts, add the following to your `my.cnf` or `my.ini`:
```ini
[mysqld]
slow_query_log = 1
long_query_time = 1
slow_query_log_file = /var/log/mysql/slow.log
log_queries_not_using_indexes = 1
```
The `log_queries_not_using_indexes` option is worth enabling. It catches queries that skip indexes entirely, even if they finish quickly on small tables — those same queries will become a problem as the table grows.
After changing the config file, restart MySQL for the settings to take effect.
## Step 2: Set the right threshold
The default `long_query_time` is 10 seconds, which is too high for most applications. A query that takes 10 seconds is already causing visible problems.
Start lower. For a web application, 1 second is a reasonable starting point. For APIs or dashboards where users expect sub-second responses, drop it to 0.5 or even 0.2.
There is no single right answer — it depends on what your application is doing. A reporting query joining multiple large tables might reasonably take 3–5 seconds. A login check taking 800ms is a problem.
If you are debugging a specific issue and want to catch everything, temporarily set it to 0:
```sql
SET GLOBAL long_query_time = 0;
```
Run your test, look at the log, then set it back to something reasonable.
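A simple way to verify that logging works end to end is to force an artificially slow query with `SLEEP` and then check the log file:

```sql
-- Runs for about 2 seconds, which exceeds a 1-second threshold
SELECT SLEEP(2);
```

The statement should appear in the log file a moment later.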
## Step 3: Read the log with mysqldumpslow
Raw slow query log files are hard to scan manually. `mysqldumpslow` is a tool that ships with MySQL and summarizes the log by grouping similar queries together.
Basic usage:
```shell
mysqldumpslow -s t -t 10 /var/log/mysql/slow.log
```
The `-s t` flag sorts by total query time, and `-t 10` shows the top 10. This gives you the queries that are costing the most time overall, not just the ones that were slowest on a single execution.
Other useful sort options:
- `-s c` — sort by count (most frequently executed slow queries)
- `-s l` — sort by lock time
- `-s r` — sort by rows sent
Look at rows examined versus rows sent. A query that examines 500,000 rows to return 10 is doing a lot of unnecessary work, and is a strong signal that an index is missing or being bypassed.
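The same ratio check is easy to script. A minimal sketch using `awk`, assuming the header-line format the file-based log uses (the sample values below are made up):

```shell
# Compute the examined-to-sent ratio from a slow-log header line.
line='# Query_time: 2.501 Lock_time: 0.000 Rows_sent: 10 Rows_examined: 500000'
echo "$line" | awk '{
  for (i = 1; i <= NF; i++) {
    if ($i == "Rows_sent:")     sent = $(i + 1)
    if ($i == "Rows_examined:") examined = $(i + 1)
  }
  print examined / sent   # 50000: a strong hint that an index is missing
}'
```

Against a real log you would also want to guard against entries with `Rows_sent: 0` before dividing.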
A third-party tool worth knowing is `pt-query-digest` from Percona Toolkit. It produces more detailed output than `mysqldumpslow` and handles edge cases better. If you find yourself spending a lot of time analyzing slow query logs, it is worth installing.
## Step 4: Run EXPLAIN on the slow queries
Once you have a list of problem queries, the next step is understanding why they are slow. `EXPLAIN` shows you how MySQL plans to execute a query.
```sql
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';
```
The output shows the execution plan — which tables are accessed, in what order, what indexes are used and which are not.
| Column | What to look for |
|---|---|
| `type` | `ALL` means full table scan — usually bad. `ref`, `eq_ref`, and `range` are better |
| `key` | Which index MySQL chose. `NULL` means no index was used |
| `rows` | Estimated rows MySQL expects to examine |
| `Extra` | Watch for `Using filesort` and `Using temporary` — both indicate expensive operations |
`EXPLAIN ANALYZE` (available since MySQL 8.0.18) actually runs the query and shows real execution times, not just estimates. It is most useful when the estimates are misleading.
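A quick sketch against the same hypothetical `orders` table used above:

```sql
-- EXPLAIN ANALYZE actually executes the statement, so use it on reads
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';
```

The output is a tree of iterators annotated with the actual time and row count at each step.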
A full table scan on a small table is not necessarily a problem. A full table scan on a table with millions of rows is.
## Step 5: Add or fix indexes
Most slow queries come down to missing indexes. Once you know which column a query is filtering or joining on, adding an index is often all it takes.
```sql
CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);
```
A few things to keep in mind:
- Column order in composite indexes matters. Put the most selective column first, and match the order to how the query filters
- Indexes speed up reads but slow down writes. A table that is written to constantly probably should not have 10 indexes on it
- Unused indexes waste space and add overhead. Check `information_schema.INDEX_STATISTICS` (available in Percona Server and MariaDB with user statistics enabled) or use `pt-index-usage` to find indexes that are never used
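On stock MySQL (5.7 and later), the `sys` schema also ships a ready-made view for finding unused indexes:

```sql
-- Lists indexes that have seen no reads since the server last started
SELECT * FROM sys.schema_unused_indexes;
```

The underlying counters reset on restart, so only trust the result after the server has run long enough to see representative traffic.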
After adding an index, run `EXPLAIN` again and confirm MySQL is actually using it. Sometimes MySQL will still choose a full scan if its statistics are stale. Run `ANALYZE TABLE orders;` to refresh them.
## Step 6: Rewrite the query when indexing is not enough
Sometimes the query itself is the problem. Indexes help MySQL find rows faster, but a poorly written query can still be slow even with good indexes.
Common patterns worth looking for:
- `SELECT *` when you only need a few columns. Retrieving unnecessary columns increases I/O and network transfer
- Functions applied to indexed columns in WHERE clauses. `WHERE YEAR(created_at) = 2024` cannot use an index on `created_at`. Rewrite it as `WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01'`
- `OR` conditions that prevent index use. Sometimes splitting into two queries with `UNION ALL` is faster
- Subqueries that run once per row instead of once total. Often rewritable as a JOIN
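The `OR` rewrite can look like this, using a hypothetical `customers` table with single-column indexes on `customer_id` and `email`:

```sql
-- Original: WHERE customer_id = 42 OR email = 'a@example.com'
-- Each branch can now use its own index
SELECT * FROM customers WHERE customer_id = 42
UNION ALL
SELECT * FROM customers WHERE email = 'a@example.com' AND customer_id <> 42;
```

The `customer_id <> 42` guard in the second branch keeps rows matching both conditions from appearing twice.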
Rewriting queries is worth doing before reaching for schema changes. A query rewrite has no migration risk and takes effect immediately.
## Keeping an eye on things over time
The slow query log tells you what is slow right now, but it does not help you catch regressions before they become user-visible problems. For that, you need monitoring.
Query performance tends to degrade gradually as tables grow. A query that was fine at 10,000 rows starts showing up in the slow query log at 1,000,000. Checking the log regularly — or setting up alerts — helps you catch these before users start complaining.
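Even a minimal scheduled digest beats checking by hand. A sketch of a crontab entry, assuming the log path from earlier and a working local mail setup (the address is a placeholder):

```shell
# Every morning at 07:00, mail the 10 queries with the highest total time
0 7 * * * mysqldumpslow -s t -t 10 /var/log/mysql/slow.log | mail -s "MySQL slow query digest" dba@example.com
```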
If you are running MySQL in production and care about the data, a solid MySQL backup setup is worth having alongside your performance monitoring. Databasus is one tool for MySQL backups, suitable for both individual projects and larger teams.
## Summary
The slow query log is not a complicated tool. Enable it, set a sensible threshold, and you immediately have a feed of actual slow queries from your production database.
From there, the process is straightforward: use `mysqldumpslow` or `pt-query-digest` to find the worst offenders, run `EXPLAIN` to understand the execution plan, add indexes where they are missing, and rewrite queries where the logic itself is inefficient.
Most performance problems in MySQL are not mysterious. They show up in the slow query log, and they have a clear cause. The hard part is making time to look.
