dbDeveloper

MySQL Analyze Query Execution Plans: How to Reach Top Query Speed

Hey there, MySQL mavens! Ever felt like you’re on a quest for the holy grail of lightning-fast queries? Well, fear not. Diving into the intriguing world of MySQL analyze query execution plans might be the adventure you’re looking for!

Picture this: your MySQL queries are running fine for now. But as the data grows day by day, your queries start to crawl. What happened? Is there a way to fix this?

What if I told you there’s a hidden map that can lead you to query utopia? Yup, that’s where understanding execution plans comes in. And yes, it can help you make your queries run at top speed.

Now, before you start thinking this is about to get as dry as a boring science project or as complex as a mad scientist’s experiment, hold on! I promise we’re taking the scenic route, not the sleep-inducing one. It’s more like a detective story, uncovering clues to boost your queries’ speed – making them zoom rather than crawl.

This article will walk you through the secrets behind MySQL analyze query execution plans. Get ready to turn the tides on sluggish queries. And unlock the hidden potential of your MySQL database. Let’s dive in and make query optimization as fun and rewarding as a treasure hunt!

Ready?

Let’s start with why you should learn this stuff.

Why Learn to Analyze MySQL Execution Plans

Okay, so you’ve mastered the art of crafting MySQL queries, and they’re doing okay. But here’s the thing: ever dreamed of taking your queries from good to outstanding? That’s where understanding MySQL execution plans comes in. It’s the secret weapon that separates the good from the exceptional.

Why bother, you ask? Well, think about it this way: when you analyze these execution plans, you’re essentially peeking under the hood of your database engine. It’s like becoming a detective solving a mystery, but the prize is faster, more efficient queries.

By unraveling the execution plans, you’ll uncover precious insights. You can spot the bottlenecks. Understand why some queries take ages to return results. And pinpoint where to make optimizations.

Translation?

Your queries start running faster. Your applications become more responsive. And you, my friend, become the hero who saves the day for your users.

Learning to analyze MySQL query execution plans bridges the gap between good and exceptional queries. It’s the key to not just understanding but mastering the power of MySQL, giving you the edge to boost performance and win the efficiency game.

So, let’s roll up our sleeves and explore the treasure trove of MySQL query optimization!

Using EXPLAIN to Analyze MySQL Query Execution

Get ready to dive deep into the heart of MySQL query optimization! In this section, we’ll unravel the power of EXPLAIN, the secret decoder ring for understanding MySQL query execution plans.

We can’t analyze MySQL queries without data, so this article uses a sample sales database that you can download here. I also used MySQL 8.0.34 running on Ubuntu 22.04.

What is EXPLAIN: Decoding the Query’s Blueprint

Ever wished for a crystal ball to predict how your MySQL queries perform? Well, EXPLAIN is as close as it gets! Think of it as an x-ray vision tool for your queries, offering insights into the behind-the-scenes magic of execution plans.

When you put EXPLAIN before your SELECT, INSERT, UPDATE, or DELETE query, MySQL doesn’t run the query. Instead, it reveals a detailed roadmap showing how it intends to process the statement, including the order of processing.

The following is the syntax for EXPLAIN:

EXPLAIN [ANALYZE] [FORMAT = TRADITIONAL | JSON | TREE]
SELECT | INSERT | UPDATE | DELETE statement

This syntax is simple to remember. Here’s some explanation of each keyword in EXPLAIN:

  • FORMAT is optional. If you do not provide a format, it’s the same as using FORMAT=TRADITIONAL. You will see a sample of each format later.

  • ANALYZE shows output like FORMAT=TREE but with additional information, such as the actual time and the number of loops. Note that EXPLAIN ANALYZE actually executes the statement, so its results come back slower than a plain EXPLAIN: the actual time, cost, and loops also need to be measured.

  • The statement is a SELECT, INSERT, UPDATE, or DELETE statement. This is the statement you want to analyze for query optimization.
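To make these variations concrete, here’s a sketch of each form applied to a table from the sample sales database (the WHERE condition is illustrative):

```sql
-- Default (TRADITIONAL): tabular output
EXPLAIN
SELECT sales_header_id, sales_date FROM sales_header WHERE customer_id = 100;

-- The same plan rendered as a tree
EXPLAIN FORMAT=TREE
SELECT sales_header_id, sales_date FROM sales_header WHERE customer_id = 100;

-- The same plan as hierarchical JSON with cost details
EXPLAIN FORMAT=JSON
SELECT sales_header_id, sales_date FROM sales_header WHERE customer_id = 100;

-- Actually executes the query and adds actual times and loop counts
EXPLAIN ANALYZE
SELECT sales_header_id, sales_date FROM sales_header WHERE customer_id = 100;
```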

EXPLAIN Formats

The best way to see what it looks like is to show you examples. That way, you will also see the differences in the format and the gems it offers.

TRADITIONAL Format

We’ll start with TRADITIONAL. It shows tabular information about the execution plan. Here’s an example using the CLI:

Using EXPLAIN to analyze MySQL query execution. The format shown is the default or TRADITIONAL.
The above shows EXPLAIN on a SELECT statement with joined tables. The purpose of the query is to get the total sales figures per store per product in 2021. The most common columns examined are table, type, rows, filtered, key, and ref columns. You can also examine the Extra column if MySQL puts something there. Later, you will see samples of these.

TREE Format

MySQL reveals more query execution plan information when using the TREE format. Below is an example:

Using EXPLAIN to analyze MySQL query execution. The format shown is the TREE format.
The EXPLAIN above analyzed the same SELECT statement and provided more details, for example, a table scan on a temporary table, the join used, the filter condition, and more. I won’t reveal yet what it means or whether the execution plan is good or bad, because right now our focus is on the EXPLAIN format.

JSON Format

Finally, the JSON format. This format uses the JavaScript Object Notation (JSON) with a hierarchical view of the execution plan. Here’s a portion of the output to EXPLAIN the same SELECT statement:

Using EXPLAIN to analyze MySQL query execution. The format shown is the JSON format.

It reveals more important points, like the columns used and detailed query cost information.

EXPLAIN ANALYZE

Another way to use EXPLAIN is to add the ANALYZE keyword.

Let’s take the previous query as an example:

EXPLAIN ANALYZE
SELECT
d.store_name
,c.product_name
,SUM(total_price) AS product_sales
FROM sales_header a
INNER JOIN sales_detail b ON a.sales_header_id = b.sales_header_id
INNER JOIN product c ON b.product_id = c.product_id
INNER JOIN store d ON a.store_id = d.store_id
WHERE year(a.sales_date) = 2021
GROUP BY d.store_name, c.product_name;

And here’s the output from MySQL CLI:
The output of EXPLAIN ANALYZE.

The final count of 240 rows is the actual row count of the result set. You may opt to use this in the CLI for actual stats rather than FORMAT=TREE, which gives estimates only.

Now that you know the syntax and the output format, let’s take a short detour on where to look for queries to analyze.

Where to Look for Slow Queries: Hunting Down the Culprits

Alright, hero! So, you’re all geared up with EXPLAIN, ready to save the day by making queries faster than a speeding bullet. But first, we need to identify the ‘villains’ – the slow queries – so cue the hero music, here’s how to find them.

Think of it as tuning in to police radio for clues. Two trusty sources will help you uncover these sluggish criminals lurking in your database:

  1. Slow Query Log: Activate this feature by adjusting some MySQL variable settings – long_query_time, slow_query_log, slow_query_log_file, and log_output. Once done, MySQL starts keeping a diary of queries that take longer than the specified time. It’s like a detective’s notebook filled with slow-query offenders. For more details about the slow query log, check the official documentation.

  2. Performance Schema: It’s like setting up surveillance cameras to catch the bad guys. Configure the performance_schema database to capture detailed information about query execution times and bottlenecks. Avoid the wild guesses on which query is slow. For more details on setting this up, see the official documentation.
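Here’s a minimal sketch of both sources; the threshold values are illustrative, and the digest table ships with the Performance Schema in MySQL 8.0:

```sql
-- 1. Log statements slower than 1 second to the slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL log_output = 'FILE';

-- 2. Ask the Performance Schema for the slowest statement digests
SELECT DIGEST_TEXT,
       COUNT_STAR            AS executions,
       AVG_TIMER_WAIT / 1e12 AS avg_seconds  -- timers are in picoseconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY AVG_TIMER_WAIT DESC
LIMIT 10;
```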

This is your ticket to becoming the Sherlock Holmes of MySQL queries: identifying the culprits and bringing swift justice by optimizing their performance. We kept this short and sweet because analyzing MySQL query execution plans is our focal point.

Now that you know how to capture the culprits, you need evidence to bring things to query execution justice. The next section will show you what evidence to look for.

Some Problem Areas to Watch Out in EXPLAIN: Unveiling the Crime Scene

Alright, CSI: MySQL Edition! We’re now stepping into the shoes of forensic scientists, diving into the crime scenes of slow queries. Our goal? To uncover the evidence that reveals why these queries are taking their sweet time.

Here they are:

Table Scan

Picture this like searching for a lost item in an entire mansion without any hints. A table scan means MySQL is checking every single row in a table, which can slow things down considerably.

A table scan is very bad on large tables. So, let’s look for some hints of table scan from the previous query.

Here’s the query:

SELECT
d.store_name
,c.product_name
,SUM(total_price) AS product_sales
FROM sales_header a
INNER JOIN sales_detail b ON a.sales_header_id = b.sales_header_id
INNER JOIN product c ON b.product_id = c.product_id
INNER JOIN store d ON a.store_id = d.store_id
WHERE year(a.sales_date) = 2021
GROUP BY d.store_name, c.product_name;

When using the TRADITIONAL format, look for the type ALL. See the EXPLAIN results of the query below. The table scan happened on the first table:

The type = ALL means a table scan, which is considered bad when analyzing a MySQL query.
Or, use the TREE format. The words “Table Scan” appear there. Here’s a sample using the same query:

Table scan is shown when the TREE format is used to analyze MySQL query execution.

Meanwhile, the JSON format displays ALL, the same as the TRADITIONAL format.

Number of Rows Scanned Is More Than the Result Set

It’s like sifting through a truckload of groceries to find one specific item. If MySQL’s scanning more rows than necessary, it’s like overkill and could be why your queries are sluggish.

The previous example offers more evidence, and this time it’s the number of rows scanned. While the result set contains only 200+ rows, the number of rows scanned is far bigger. See below:

The box in the image highlights the number of rows scanned.

Note that the number of rows for InnoDB tables is an estimate. But you can use EXPLAIN ANALYZE for the actual time and row counts.

Sorting Problems

Imagine sorting a huge pile of papers without organizing them first – messy and time-consuming. If MySQL is doing extensive sorting, it might be slowing down your queries. If your query is fast without an ORDER BY but very slow with it, that is a sorting problem. One of the fixes we will discuss later will stop this. For now, please read on.

Index Not Used

It’s like searching for a word in a book without an index – you’re flipping through page after page. If MySQL isn’t using available indexes, it’s missing a shortcut to find data efficiently.

It’s safe to conclude that any table in the EXPLAIN results with type ALL (table scan) did not use an index. But the events statements tables in performance_schema provide a direct clue: a value of 1 in the NO_INDEX_USED or NO_GOOD_INDEX_USED column. See a sample below using dbForge Studio for MySQL:


dbForge Studio for MySQL has a Data Viewer that you can use for viewing long text better. The long text in our case is the query that didn’t use an index.
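You can also query that clue directly, without a GUI. A sketch against the statement history table (the ordering and limit are illustrative):

```sql
-- Recent statements that ran without using an index
SELECT SQL_TEXT, ROWS_EXAMINED, NO_INDEX_USED, NO_GOOD_INDEX_USED
FROM performance_schema.events_statements_history
WHERE NO_INDEX_USED = 1
   OR NO_GOOD_INDEX_USED = 1
ORDER BY TIMER_WAIT DESC
LIMIT 10;
```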

Internal Temporary Tables

These are like setting up a temporary workspace to sort things out. But if it’s happening frequently, it might hinder query performance.

Again, using the query earlier, we can find clues on the use of internal temporary tables. You can see this under the Extra column in the TRADITIONAL format. Or directly mentioned using the TREE format. See below:

The same MySQL query execution but this time, we analyzed the use of temporary tables.
The use of derived tables, common table expressions, or subqueries may trigger the creation of internal temporary tables. So can GROUP BY and ORDER BY when no index can satisfy them, which is what happened with the grouped query above.

By investigating these problem areas, we’re gathering evidence to optimize and fine-tune our queries for peak performance. But there’s no mention of any fix to these problem areas yet. Keep reading because we will get there real soon. But first, let’s look at a GUI alternative to analyze MySQL query executions.

Using a GUI Tool to Analyze Query Performance

You’ve already seen dbForge Studio for MySQL a while back. While the CLI is good enough, the presentation of the output may not suit everyone. It’s hard to read when keywords and data wrap automatically to fit the window. Of course, you can always adjust the CLI terminal window. But a GUI tool offers a better presentation of the data, with some niceties to play along with it.

dbForge Studio for MySQL (and MariaDB) has a feature called the Query Profiler, and EXPLAIN results are part of it. But what’s the big deal? You can compare previous and current execution plans to see whether performance has improved (or not). Each time you adjust something, you only need to re-run the query and let the Query Profiler do its job.

More About the Query Profiler in dbForge Studio for MySQL

Below is a sample of the EXPLAIN results from the Query Profiler. It’s the same EXPLAIN results from the CLI earlier but here, it’s GUI.

EXPLAIN results in dbForge Studio for MySQL.
The EXPLAIN results in the Query Profiler use the TRADITIONAL format only. But you can run EXPLAIN using the other formats in the SQL window of dbForge Studio. Then, the Data Viewer will display the TREE or JSON output. See a sample below:

Using EXPLAIN FORMAT=TREE in dbForge Studio.
Aside from EXPLAIN results, you can also compare Session Statistics, execution time duration, and many more. See a sample of the Session Statistics for the same query earlier:

A sample of Session Statistics in dbForge Studio for MySQL.
Session Statistics also reveal table scans and the number of logical reads, which are gems for investigating slow queries. Do you really need these details? Sometimes, you won’t find the reason a query is slow with just the EXPLAIN results, no matter what format. You will see an example of this later.

To start using the Query Profiler, enable the Query Profiling Mode from the toolbar as seen below:


Then, run the query you want to analyze. After it finishes running, the Query Profiler window will appear.

So, if you wish to try dbForge Studio for MySQL, you can download a copy from here.

Getting the evidence you need to pinpoint problems in slow queries is one thing. But fixing the problem is another. That’s the focus of the next section.

MySQL Query Performance Tuning Techniques

This section will provide common techniques to fix your slow-running queries.

Database Table Design: Crafting the Foundation

Creating a powerful query starts with a solid foundation – your database table design. Think of it like building a house; choose the right materials, and it stands strong. Opt for appropriate data types, sizes, and row formats to ensure your queries have a robust structure.

MySQL has various data types for strings, numbers, and dates, and choosing among them is sometimes confusing. But the secret is knowing the storage requirements of each data type. For example, suppose you need a number column that will store values no higher than 100,000. Instead of BIGINT, use MEDIUMINT: BIGINT consumes 8 bytes, while MEDIUMINT is only 3 bytes.

But what’s the big deal about this?

Bigger storage requirements will consume more disk space and RAM. The same thing applies to strings and dates. So, the rule of thumb is to use the smallest possible data type for a column.

Also, consider how MySQL stores your data. MySQL arranges tables in tablespaces. Each tablespace contains pages, and each page has the same size. The InnoDB default is 16KB, though you can change this with innodb_page_size. If a row doesn’t fit in a page, variable-length columns are relocated to overflow pages. Reading from and writing to overflow pages consumes extra I/O and, in the end, slows your queries.

So, again, make sure your data types and sizes are just right. And know your data type storage requirements.
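As a small sketch of that rule of thumb, here’s a hypothetical table (not part of the sample database) sized with the smallest types that fit:

```sql
-- Pick the smallest type that fits the data
CREATE TABLE stock_level (
  item_id   MEDIUMINT UNSIGNED NOT NULL,  -- 3 bytes, up to 16,777,215: plenty vs BIGINT's 8 bytes
  warehouse TINYINT UNSIGNED   NOT NULL,  -- 1 byte, up to 255 warehouses
  qty       SMALLINT UNSIGNED  NOT NULL,  -- 2 bytes, up to 65,535 units
  PRIMARY KEY (item_id, warehouse)
);
```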

Indexing: Your Queries’ Jet Packs

Indexing makes queries faster. Without it, MySQL will search every row to give you the results you want. Like a book with an index, it’s easier to look for topics and keywords. You don’t have to scan each page to look for them.

But you have to use indexes wisely. Adding irrelevant or duplicate indexes makes your INSERT, UPDATE, and DELETE statements slower. So, strike a balance when adding indexes.

Adding the right index will fix a table scan and reduce the rows scanned.
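To hunt for duplicates, the sys schema bundled with MySQL 8.0 can help. A sketch (the schema name sales is an assumption based on the sample database):

```sql
-- List redundant indexes and the index that makes each one redundant
SELECT table_name,
       redundant_index_name,
       dominant_index_name
FROM sys.schema_redundant_indexes
WHERE table_schema = 'sales';
```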

Statistics: Illuminating the Path

Statistics are the streetlights of query optimization, illuminating the path to better performance. They help the query optimizer decide on the best execution plan. Keep them up to date so MySQL has accurate information for making informed decisions on query execution. It’s like ensuring well-lit streets for smooth and swift query navigation.

You can update table statistics using ANALYZE TABLE for the table and columns you need.
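A minimal sketch for the sample tables used in this article (the schema name sales is an assumption):

```sql
-- Refresh optimizer statistics for the tables in our queries
ANALYZE TABLE sales_header, sales_detail, product, store;

-- Check when InnoDB last updated its persistent statistics
SELECT table_name, n_rows, last_update
FROM mysql.innodb_table_stats
WHERE database_name = 'sales';
```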

InnoDB Optimization: Fine-Tuning the Engine

InnoDB is the powerhouse engine of MySQL, and like any engine, it performs best with proper tuning. Adjust settings to match your workload, optimizing the storage, buffer pool, and log files. It’s akin to giving your car engine the right fuel and tuning for peak performance.

For more details about optimizing InnoDB for high performance, check out the official documentation.

Consider Memory Storage Engine: Turbocharging Performance

InnoDB is not the only storage engine MySQL has. For data with less frequent updates, consider the MEMORY storage engine. It stores your data in RAM for faster retrieval. But note that the data disappears when you reboot your server or when the MySQL service crashes or restarts.

So, use it for temporary storage or caches only. It is a big NO for transactional tables.

When you decide to use this storage engine, test if query runtimes improve. If not, or the improvement is negligible, go back to InnoDB.
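A sketch of an appropriate use, with a hypothetical lookup table that is not part of the sample database:

```sql
-- A RAM-only cache table: fast reads, contents lost on restart
CREATE TABLE country_cache (
  country_code CHAR(2)     NOT NULL PRIMARY KEY,
  country_name VARCHAR(64) NOT NULL
) ENGINE = MEMORY;
```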

5 Examples of MySQL Analyze Query Execution Plan (with Tips)

The lessons about EXPLAIN earlier provide the foundation for analyzing MySQL query execution plans. We will use them here in our examples.

1. SELECT * vs. SELECT column_list

Below is our SELECT * sample:

SELECT *
FROM sales_detail sd
INNER JOIN sales_header sh ON sd.sales_header_id = sh.sales_header_id
WHERE sh.sales_date BETWEEN '2021-12-01 00:00:00' AND '2021-12-31 23:59:59';

And below is our SELECT column_list sample:

SELECT
sh.sales_header_id
,sh.sales_date
,sh.customer_id
,sd.product_id
,sd.total_price
FROM sales_detail sd
INNER JOIN sales_header sh ON sd.sales_header_id = sh.sales_header_id
WHERE sh.sales_date BETWEEN '2021-12-01 00:00:00' AND '2021-12-31 23:59:59';

Both queries have the same purpose. But the last one chooses only the needed columns.

Using dbForge Studio, let’s analyze both queries. Here’s the Query Profiler showing that SELECT * is slower based on runtime duration:

The Query Profiler showing SELECT * usage and that it is slower than specifying column names.
And here’s the Session Statistics with a side-by-side comparison. SELECT * uses more logical reads, and the more logical reads a query performs, the slower it is.

Comparing Session Stats on SELECT * vs. SELECT column_list.
How about the EXPLAIN results? Both queries have the same EXPLAIN results, showing a table scan on the sales_header table. That’s a good enough reason to say there’s a problem. But it won’t reveal that SELECT * is slower than SELECT column_list. The JSON format only tells you that SELECT * touches more columns. That’s it.

However, EXPLAIN provides clues on how to fix the problem as you will see next.

Below are the EXPLAIN results in the Query Profiler. It uses the results for SELECT column_list. But it’s the same with SELECT *.

Using SELECT * may cause table scan as shown in the Query Profiler of dbForge Studio.
So, why did the optimizer decide on a table scan? And why are so many rows scanned?

Because the WHERE clause filters on a column (sales_date) that is not indexed. See the key and ref columns in the EXPLAIN results above: they are blank, meaning no index was used. To fix that, create an index on that column.

Fixing the Slow Query by Removing the Table Scan

Here’s the new index for the sales_header table:

CREATE INDEX ix_sales_header_sales_date ON sales_header(sales_date);

Now, check the new EXPLAIN results. They’re still the same for both queries.

To fix the table scan, create an index to the column(s) used in the WHERE clause.
Instead of a table scan, it is now a range scan. An index key is also used, and the number of rows decreased.

But how about the query execution time? Check it out below: SELECT * is still slower. Using SELECT column_list is faster by more than 50%. That’s the power of an index.

After indexing, SELECT * is still slower than providing a column list for the SELECT.
Lessons:

  • AVOID SELECT *.
  • You cannot find all clues by only using EXPLAIN.
  • Index the columns used in the WHERE clause to avoid a table scan.

2. Functions in WHERE Clause

Should you use functions in the WHERE clause? Doing some MySQL analyze query execution plans with the Query Profiler will let you know shortly.

Here’s the query in question:

SELECT
sales_header_id, sales_date, customer_id
FROM sales_header
WHERE YEAR(sales_date) = 2021;

It uses the YEAR() function to filter sales records to 2021 sales only.

An alternative WHERE clause that returns the same rows uses BETWEEN. Here it is:

SELECT
sales_header_id, sales_date, customer_id
FROM sales_header
WHERE sales_date BETWEEN '2021-01-01 00:00:00' AND '2021-12-31 23:59:59';

It uses a date range to filter the sales to 2021 only. Let’s start with the runtime duration. See it below:

Analyze a MySQL query with a function in the WHERE clause is slower as shown in the Query Profiler of dbForge Studio.
Using a function like YEAR() is shorter to type. But the query is around nine times slower than using a range of dates with BETWEEN.

But why? EXPLAIN results will tell us. Here are the EXPLAIN results for the query that uses BETWEEN:

Using BETWEEN removes the table scan.
It uses a range type using the index we created earlier. Now, check out the EXPLAIN results using the YEAR() function in the WHERE clause:

After indexing, MySQL still used table scan when functions are used in the WHERE clause.
It used a table scan (type=ALL), and it scanned a lot of rows. MySQL didn’t use the index because it has to evaluate YEAR(sales_date) for every row to check whether the year is 2021. BETWEEN, in contrast, maps straight onto an index range, so no per-row evaluation is needed.

Lessons:

  • Using a function on a column in the WHERE clause typically causes a table scan.
  • MySQL cannot use an existing index on a column when the WHERE clause wraps that column in a function.
  • Replace the function with BETWEEN or a similar operator to avoid the table scan.
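If rewriting the predicate is not an option, MySQL 8.0.13 and later also support functional indexes, which index the expression itself. A sketch (the index name is an assumption):

```sql
-- Index the expression YEAR(sales_date) directly; note the double parentheses
CREATE INDEX ix_sales_header_sales_year ON sales_header ((YEAR(sales_date)));

-- After this, WHERE YEAR(sales_date) = 2021 can use the index
```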

3. Analyzing Usage of LIKE and Wildcards

Using LIKE with wildcards in the WHERE clause is sometimes unavoidable. For example, your app searches for a customer name using a few characters of user input. In this example, you will see the effects of using them, with the help of EXPLAIN.

First, let’s analyze using LIKE without leading wildcards. Here’s the code:

SELECT * FROM customer
WHERE customer_name LIKE 'sh%';

The EXPLAIN results in the Query Profiler show a range scan that uses the index on the customer name. See below:

Using LIKE without a leading wildcard shows a range scan.
This will change if you use a leading wildcard as seen below:

Using leading wildcards in the WHERE clause causes a table scan. The index was not used.
The leading wildcard ‘%’ caused a table scan, as indicated by the ALL type.

Lessons:

  • Search for exact matches whenever possible. LIKE with wildcards will use either a table or an index scan.
  • If LIKE is the preferred solution, avoid leading wildcards as much as possible.
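When leading-wildcard searches are frequent, a FULLTEXT index is often a better fit than LIKE with wildcards. A sketch against the sample customer table (the index name and search term are assumptions):

```sql
-- Full-text search sidesteps the leading-wildcard table scan
ALTER TABLE customer ADD FULLTEXT INDEX ft_customer_name (customer_name);

SELECT customer_id, customer_name
FROM customer
WHERE MATCH(customer_name) AGAINST('sherlock' IN NATURAL LANGUAGE MODE);
```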

4. Analyzing WHERE Clause with Implicit Conversion

MySQL forgives you if you enclose an INT value in quotes, or if you forget to put quotes around a VARCHAR value that contains a valid number. MySQL will convert the values for you. That’s nice! This is known as implicit conversion.

But there’s a catch!

Performance degrades if you make a mistake. Let’s take this table structure as an example:

CREATE TABLE sales_header (
sales_header_id int NOT NULL,
or_number varchar(8) DEFAULT NULL,
sales_date datetime NOT NULL,
customer_id int NOT NULL,
store_id int NOT NULL,
created timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (sales_header_id)
)
ENGINE = INNODB,
AVG_ROW_LENGTH = 52,
CHARACTER SET utf8mb4,
COLLATE utf8mb4_0900_ai_ci;

ALTER TABLE sales_header
ADD INDEX ix_sales_header_sales_date (sales_date);

ALTER TABLE sales_header
ADD INDEX ix_sales_header_or_number (or_number);

Note that the or_number column is VARCHAR(8). Let’s say you made the mistake of NOT putting quotes in a WHERE clause, like this:

SELECT
sales_date
, customer_id
, store_id
FROM sales_header
WHERE (or_number BETWEEN 500 AND 550);

You will still have an output because MySQL will forgive you for that mistake. But check out the EXPLAIN results:

Analyzing a MySQL query that used implicit conversion in the WHERE clause.
It’s a table scan, checking more than 79,000 rows, even though there’s an index on the or_number column.

What happened?

MySQL converted each or_number value from VARCHAR to a number to satisfy the WHERE clause. But notice what happens when we correct the statement to this:

SELECT
sales_date
, customer_id
, store_id
FROM sales_header
WHERE (or_number BETWEEN '500' AND '550');

We used quotes because the column is VARCHAR. Notice the dramatic change to the EXPLAIN results:

Adding quotes to a VARCHAR number results in a range type and uses the index.
The table scan is gone, and the number of rows is much lower. This version is faster according to the runtime duration of each. See the side-by-side comparison of the two query profiles:

Without implicit conversion, MySQL queries are faster.
The second query (with quotes) is more than 3 times faster than the first (no quotes).

Lessons:

  • Comparing different data types in the WHERE clause can cause implicit conversion. Double-check your WHERE clause for such scenarios.
  • Implicit conversion may cause a table scan.

5. Analyzing LIMIT and OFFSET for Paging a Large List

Do you need to display a large list? Retrieving it all at once can be very slow. Let’s take a sample query that displays customer names with their sales records. Check out the code below:

SELECT
c.customer_name, c.mobile_number, sh.sales_date, sh.or_number
FROM sales_header sh
INNER JOIN customer c ON sh.customer_id = c.customer_id
ORDER BY c.customer_name;

The EXPLAIN results below show a table scan of 2,000 customer records. The result set is 80,000 rows (not shown). That’s not only too much for the eyes; retrieving all 80,000 rows also carries a performance penalty. Check it out below:

EXPLAIN results without using LIMIT and OFFSET. It resulted in a table scan.
Meanwhile, adding paging of, say, 10 rows at a time takes away the table scan. Check out the modified query below.

SELECT
c.customer_name, c.mobile_number, sh.sales_date, sh.or_number
FROM sales_header sh
INNER JOIN customer c ON sh.customer_id = c.customer_id
ORDER BY c.customer_name
LIMIT 10 OFFSET 0;

LIMIT tells MySQL how many rows to return. And OFFSET is the starting point or the rows to skip. You can create a stored procedure with parameters for both LIMIT and OFFSET. Then, use it to display rows with paging.
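As a sketch of that idea (the procedure name is an assumption):

```sql
DELIMITER //

-- Page through customer sales, p_limit rows at a time
CREATE PROCEDURE customer_sales_page(IN p_limit INT, IN p_offset INT)
BEGIN
  SELECT c.customer_name, c.mobile_number, sh.sales_date, sh.or_number
  FROM sales_header sh
  INNER JOIN customer c ON sh.customer_id = c.customer_id
  ORDER BY c.customer_name
  LIMIT p_limit OFFSET p_offset;  -- routine parameters are allowed in LIMIT
END //

DELIMITER ;

-- Usage: first page of 10 rows
CALL customer_sales_page(10, 0);
```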

The EXPLAIN results of the query with LIMIT and OFFSET removes the table scan. See below:

The EXPLAIN results after analyzing the MySQL query that uses LIMIT and OFFSET.
With LIMIT and OFFSET, MySQL utilized the index, resulting in only 1 row scanned for the customer table.

Lessons:

  • Use LIMIT and OFFSET to page the rows displayed on an app or web page. This is faster than displaying all rows.
  • Without LIMIT and OFFSET, this query resulted in a table scan.
  • Use ORDER BY to make paging consistent and ordered.
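One caveat: a large OFFSET still makes MySQL read and discard all the skipped rows, so deep pages get slower. Keyset (seek) pagination avoids that by remembering the last value shown. A sketch, assuming the app tracks the last customer name displayed:

```sql
-- Next page: everything after the last customer shown on the previous page
SELECT c.customer_name, c.mobile_number, sh.sales_date, sh.or_number
FROM sales_header sh
INNER JOIN customer c ON sh.customer_id = c.customer_id
WHERE c.customer_name > 'Moriarty'  -- hypothetical last value from the previous page
ORDER BY c.customer_name
LIMIT 10;
```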

Takeaways on MySQL Analyze Query Execution Plans

That’s it!

Learning to analyze MySQL query execution plans can help your queries reach top speed. EXPLAIN gives us x-ray vision into what’s happening behind the scenes of query execution. And with the help of the Query Profiler in dbForge Studio, we also analyzed Session Statistics and runtime durations.

Query speed does not depend on using one particular method. It’s not that stored procedures are always the fastest, or that an INNER JOIN is always faster than a LEFT JOIN. It depends on many factors: your table design, indexes, and more. So, analyze your queries using the techniques you learned here.

When you turn your query speeds from good to exceptional, you get a hero feeling. You save the day for your users by getting them the data they want, faster.

But there’s no magic pill or hidden moves to master this skill. It takes practice and a dash of determination. So, experiment, tweak, and embrace the power you now hold.

Happy optimizing!
