We've already introduced you to Saby Explain, an open service designed to help you read and analyze PostgreSQL query plans, and shared insights on improving poorly performing queries based on its recommendations.
In this post, we'll highlight the key features that we believe will greatly improve your experience with our solution. To try them on your own plan, click DEMO on the Saby page.
All plan formats supported
Get query plans analyzed using only log data
In the developer console, just copy the data as it is, starting from the Query Text line and including leading spaces:
Query Text: INSERT INTO dicquery_20200604 VALUES ($1.*) ON CONFLICT (query)
    DO NOTHING;
Insert on dicquery_20200604 (cost=0.00..0.05 rows=1 width=52) (actual time=40.376..40.376 rows=0 loops=1)
  Conflict Resolution: NOTHING
  Conflict Arbiter Indexes: dicquery_20200604_pkey
  Tuples Inserted: 1
  Conflicting Tuples: 0
  Buffers: shared hit=9 read=1 dirtied=1
  ->  Result (cost=0.00..0.05 rows=1 width=52) (actual time=0.001..0.001 rows=1 loops=1)
Then paste it all into the query analysis field; no cleanup is needed:
Finally, along with the analyzed query plan, you'll find all the query execution details in the Context tab:
JSON and YAML files
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT * FROM pg_class;
"[
{
"Plan": {
"Node Type": "Seq Scan",
"Parallel Aware": false,
"Relation Name": "pg_class",
"Alias": "pg_class",
"Startup Cost": 0.00,
"Total Cost": 1336.20,
"Plan Rows": 13804,
"Plan Width": 539,
"Actual Startup Time": 0.006,
"Actual Total Time": 1.838,
"Actual Rows": 10266,
"Actual Loops": 1,
"Shared Hit Blocks": 646,
"Shared Read Blocks": 0,
"Shared Dirtied Blocks": 0,
"Shared Written Blocks": 0,
"Local Hit Blocks": 0,
"Local Read Blocks": 0,
"Local Dirtied Blocks": 0,
"Local Written Blocks": 0,
"Temp Read Blocks": 0,
"Temp Written Blocks": 0
},
"Planning Time": 5.135,
"Triggers": [
],
"Execution Time": 2.389
}
]"
It doesn't matter whether the data includes the outer quotes, the way pgAdmin adds them when copying, or not: paste your file contents into the same field for analysis, and the result won't take long:
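A YAML plan can be pasted in exactly the same way. To produce one, just switch the format option (a minimal example; any query will do):
EXPLAIN (ANALYZE, BUFFERS, FORMAT YAML)
SELECT * FROM pg_class;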
Advanced plan visualization
Planning time vs Execution time
Easily track what's taking extra time when you're running the query:
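This comparison relies on the two timing lines that EXPLAIN ANALYZE appends to every plan, for example (values borrowed from the planning-buffers sample later in this post):
Planning Time: 0.782 ms
Execution Time: 0.219 ms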
I/O timing stats
Sometimes, even though a plan reads or writes few resources, it can still take ages to execute.
We could blame an overloaded server disk for that, but is it the real root cause? If you want to be sure, all you need is track_io_timing. The documentation describes this configuration parameter as follows:
Enables timing of database I/O calls. This parameter is off by default because it will repeatedly query the operating system for the current time, which may cause significant overhead on some platforms. You can use the pg_test_timing tool to measure the overhead of timing on your system. I/O timing information is displayed in pg_stat_database, in the output of EXPLAIN when the BUFFERS option is used, and by pg_stat_statements.
If you need to, you can also enable this parameter in a local session using the following command:
SET track_io_timing = TRUE;
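To confirm that the parameter is now active in your session, you can read it back:
SHOW track_io_timing;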
The great thing about our solution is that it can read and visualize I/O timing information:
So, of the total execution time (0.790 ms), 0.718 ms was spent reading one page of data and 0.044 ms writing it, which leaves just 0.028 ms for everything else.
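In the raw EXPLAIN (ANALYZE, BUFFERS) output, these numbers come from the I/O Timings line. A sketch built from the values above (the node and the buffer counts are hypothetical):
Insert on plans (actual time=0.790..0.790 rows=0 loops=1)
  Buffers: shared hit=9 read=1 dirtied=1 written=1
  I/O Timings: read=0.718 write=0.044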
Keeping up with PostgreSQL updates
Over time, we've worked hard to keep pace with PostgreSQL development. Here are the key changes we've made to keep our solution up to date and effective.
Buffer usage during planning
There's another way to monitor the resources used during query planning, one that has nothing to do with pg_stat_statements. Starting with PostgreSQL 13, you can see how many buffers were used at the planning stage by running the EXPLAIN statement with BUFFERS enabled:
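The output below could have come from a statement along these lines (the original query isn't shown, so this is an illustrative guess; COSTS OFF and TIMING OFF match the compact per-node output):
EXPLAIN (ANALYZE, BUFFERS, COSTS OFF, TIMING OFF)
SELECT * FROM pg_class;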
Seq Scan on pg_class (actual rows=386 loops=1)
  Buffers: shared hit=9 read=4
Planning Time: 0.782 ms
  Buffers: shared hit=103 read=11
Execution Time: 0.219 ms
Incremental sorting
Starting with PostgreSQL 13, in cases where you need to sort by multiple keys (k1, k2, k3, ...), the optimizer can take advantage of the fact that the data is already sorted by some of the leading keys (for example, k1 and k2). Instead of sorting all the data from scratch, it sequentially groups the rows by their k1 and k2 values and "incrementally sorts" each group by the next key, k3.
This approach breaks one big sort into several smaller ones, which uses less memory. It also lets you see the first results before the whole sort is complete.
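The plan below shows this in action. It could come from a query like the following, run against the tickets table of the PostgreSQL demo database (the exact statement isn't shown here, so this is an assumption reconstructed from the plan's sort keys):
EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF)
SELECT * FROM tickets
ORDER BY ticket_no, passenger_id;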
Incremental Sort (actual rows=2949857 loops=1)
  Sort Key: ticket_no, passenger_id
  Presorted Key: ticket_no
  Full-sort Groups: 92184  Sort Method: quicksort  Memory: avg=31kB peak=31kB
  ->  Index Scan using tickets_pkey on tickets (actual rows=2949857 loops=1)
Planning Time: 2.137 ms
Execution Time: 2230.019 ms
UI/UX improvements
Screenshots everywhere!
You can quickly take a screenshot of each tab and copy it to the clipboard. The screenshot will capture everything that's on the tab. Just click the corresponding button in the upper-right corner:
This is exactly how we got most of the images for this post.
Even more suggestions for query improvement
We've not only added more recommendations but also gone into more detail about them. Just follow the link to find out more about your issue.
Deleting plans from the archive
Some users asked if there's a way to permanently delete plans from the archive, even if they're not published. We've heard you and made this possible:
If you have any comments or suggestions, please feel free to contact us.