When investigating PostgreSQL performance, the usual starting point is pg_stat_statements. From there, many teams sort queries by mean_exec_time or total_exec_time and start optimizing the first rows in the list.
That approach is simple, but it often leads to the wrong priorities.
A query that takes five seconds but runs twice a day is not necessarily more important than a query that takes five milliseconds and runs millions of times. Likewise, a query with a high total execution time may simply be a normal core workload query, not necessarily the best optimization target.
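A quick back-of-the-envelope calculation makes the point. The numbers below are hypothetical, but the arithmetic is the same one pg_stat_statements implies: the total database time a query consumes is its mean execution time multiplied by its call count.

```python
# Hypothetical numbers illustrating why per-execution latency alone is a
# poor ranking key: total time consumed = mean execution time * call count.
queries = [
    # (label, mean_exec_time in ms, calls per day)
    ("nightly_report", 5_000.0, 2),          # 5 s, runs twice a day
    ("session_lookup", 5.0, 2_000_000),      # 5 ms, runs two million times a day
]

for label, mean_ms, calls in queries:
    total_s = mean_ms * calls / 1000  # total seconds of database time per day
    print(f"{label}: {total_s:,.0f} s/day")
```

The "slow" report consumes 10 seconds of database time per day; the "fast" lookup consumes 10,000. Sorting by mean_exec_time alone would put them in exactly the wrong order.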
The real question is not:
Which query is the slowest?
It is:
Which query has the highest operational impact and the clearest optimization potential?
This is the principle behind the query-ranking algorithm implemented in pgAssistant.
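To make the principle concrete, here is a minimal sketch of an impact-based ranking. This is an illustration of the idea only, not pgAssistant's actual formula: it weights total time consumed (operational impact) by the shared-buffer miss ratio as a rough proxy for optimization potential. The input field names mirror real pg_stat_statements columns; the weighting itself is an assumption made for this example.

```python
# Sketch of impact-based query ranking (an illustration, NOT pgAssistant's
# actual algorithm). Input rows mimic pg_stat_statements columns:
# mean_exec_time (ms), calls, shared_blks_read, shared_blks_hit.
def rank_queries(rows):
    """Return query texts sorted from highest to lowest score."""
    scored = []
    for r in rows:
        # Operational impact: total milliseconds of database time consumed.
        impact = r["mean_exec_time"] * r["calls"]
        # Optimization potential (crude proxy): fraction of blocks read
        # from disk rather than found in shared buffers.
        blks = r["shared_blks_read"] + r["shared_blks_hit"]
        miss_ratio = r["shared_blks_read"] / blks if blks else 0.0
        scored.append((impact * (1.0 + miss_ratio), r["query"]))
    return [query for _, query in sorted(scored, reverse=True)]


rows = [
    {"query": "SELECT ... nightly report", "mean_exec_time": 5_000.0,
     "calls": 2, "shared_blks_read": 0, "shared_blks_hit": 100},
    {"query": "SELECT ... session lookup", "mean_exec_time": 5.0,
     "calls": 2_000_000, "shared_blks_read": 50, "shared_blks_hit": 50},
]
print(rank_queries(rows)[0])  # the frequent lookup ranks first
```

The design choice to multiply impact by a potential factor, rather than rank on either alone, is what separates "this query is expensive" from "this query is worth optimizing".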