Discussion on: In Pursuit of PostgreSQL Performance

Tomas Vondra

I'm not sure what exactly the goal of multiplying all the cost variables by the same number is, TBH. In principle, that should not change the planning decisions, because what matters is how the costs assigned to the different possible plans compare - and if you multiply everything by the same number, I'd expect that comparison to come out about the same. There may be some non-linear parts in the costing formula, so perhaps it's possible that (cost(P1) < cost(P2)) with the initial cost values and (cost(P1) > cost(P2)) with the multiplied ones. But it's strange that it'd cause consistently worse planning decisions; without seeing the before/after plans it's hard to say what's happening.
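As a rough illustration (using the stock defaults from the documentation as the baseline, purely as a hypothetical - substitute your actual values):

    -- Stock defaults:
    SET seq_page_cost = 1.0;
    SET random_page_cost = 4.0;
    SET cpu_tuple_cost = 0.01;
    SET cpu_index_tuple_cost = 0.005;
    SET cpu_operator_cost = 0.0025;

    -- The same values multiplied by 2x: every estimated plan cost roughly
    -- doubles, but the ratios between plans stay the same, so the cheapest
    -- plan should normally remain the cheapest.
    SET seq_page_cost = 2.0;
    SET random_page_cost = 8.0;
    SET cpu_tuple_cost = 0.02;
    SET cpu_index_tuple_cost = 0.01;
    SET cpu_operator_cost = 0.005;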

Regarding the "focus on the differences and the most expensive part" - well, it's hard to tune any system without at least some basic understanding of how it works. What I meant is that you can look at the EXPLAIN ANALYZE of the query, see which part of the query takes most of the time, and then try doing something about it. Is the most expensive part a sequential scan? Well, let's try convincing the planner to use an index scan in some way - for simple queries you might try disabling seqscans entirely and see what happens. Or maybe just lower random_page_cost a bit. It's also possible that the cardinality estimates are off for some reason, at which point the cost estimation is entirely unreliable. That's also visible in the EXPLAIN ANALYZE output (as "rows").
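A minimal sketch of that workflow - orders and customer_id are placeholder names, substitute your own query:

    -- See where the time goes, and compare estimated vs. actual row counts:
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;

    -- Experiment 1: does the planner pick an index scan once seqscans are penalized?
    SET enable_seqscan = off;
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
    RESET enable_seqscan;

    -- Experiment 2: make random I/O look cheaper (often reasonable on SSDs):
    SET random_page_cost = 1.1;
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
    RESET random_page_cost;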

Ken Whitesell

"I'm not sure what exactly is the goal of multiplying all the cost variables by the same number, TBH."

It was what you mentioned in your earlier reply:
""It's a good / recommended practice to keep seq_page_cost = 1.0, and treat everything else as a multiple of that.""

So it was my attempt to "upscale" those values to base everything off of the 1.0 value for seq_page_cost. (What I might do this weekend is rerun my tests with seq_page_cost = 1.0 and run various combinations of other settings to see if I can get the same run times for those queries. At this point it's more academic than anything else, but I am curious about this.)
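Just to make the arithmetic concrete (the starting values here are made up, not the ones from my tests): if the tuned settings had been seq_page_cost = 0.25, random_page_cost = 0.5 and cpu_tuple_cost = 0.0025, dividing everything by 0.25 pins seq_page_cost back to 1.0 while keeping the ratios:

    SET seq_page_cost = 1.0;       -- 0.25   / 0.25
    SET random_page_cost = 2.0;    -- 0.5    / 0.25
    SET cpu_tuple_cost = 0.01;     -- 0.0025 / 0.25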

"Or maybe just lower random_page_cost a bit."
That's where I started with this - but it didn't go far enough. It was after doing that and digging into the output from EXPLAIN ANALYZE that I saw JIT was taking 30 seconds on the longest-running query - assuming I was interpreting the output correctly - so my second change was disabling JIT.
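For anyone who wants to check the same thing: the JIT overhead shows up as a separate "JIT:" section at the end of the EXPLAIN ANALYZE output, and it can be turned off per session or discouraged by raising its cost threshold (the threshold below is just an illustration, not a recommendation):

    -- Turn JIT off entirely for the session:
    SET jit = off;

    -- Or keep it, but only for genuinely expensive plans
    -- (the default jit_above_cost is 100000):
    SET jit_above_cost = 1000000;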
Everything just kinda followed from that, as I realized just how little I knew and began exploring. (In the interests of trying to keep this post to a reasonable length, I skipped the details of all the blind alleys that I dove into regarding other settings changes. I was running spot checks on every variable that I thought might have been even remotely related to performance.)

"It's also possible that the cardinality estimates are off for some reason, at which point the cost estimation is entirely unreliable." - That was the reason for doing the analyze before running the queries. My understanding is that that was going update the statistics to provide for the most accurate estimates. And since the data was static for these tests, the numbers were not going to change.

Tomas Vondra

JIT taking 30 seconds for a single query is quite unexpected / extreme. I'm sure we'd be interested in more details, so if you could send some details (PostgreSQL version, OS, llvm versions, query and explain plans) on pgsql-performance, that'd be helpful.
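Roughly the kind of detail that's easy to collect from psql - the table and query below are placeholders, and the SETTINGS option to EXPLAIN is only available in PostgreSQL 12 and later:

    -- Server version and JIT-related settings:
    SELECT version();
    SHOW jit;
    SHOW jit_above_cost;

    -- Plan with timing, buffer usage and any non-default settings included:
    EXPLAIN (ANALYZE, BUFFERS, VERBOSE, SETTINGS)
    SELECT * FROM orders WHERE customer_id = 42;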

As for the estimates - doing ANALYZE is good, but it's possible the estimates are still wrong even with fresh statistics. For example, there might be expressions that are difficult to estimate well, and joins are notoriously difficult to estimate too. Sometimes the ORM generates queries that are difficult to estimate, etc. So it's possible the estimates were off, confusing the planner enough to pick a poor plan, and setting the cost parameters simply confused it "in the opposite direction" enough to pick a better plan. It's impossible to say without looking at the EXPLAIN ANALYZE. The planner gets improved between versions, so maybe that's why you saw different behavior on different PostgreSQL versions.
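When the misestimate comes from correlated columns, one concrete remedy is extended statistics (available since PostgreSQL 10) - the table and column names below are placeholders:

    -- Tell the planner that city and zip_code are correlated, so it stops
    -- multiplying their selectivities as if they were independent:
    CREATE STATISTICS orders_city_zip (dependencies, ndistinct)
        ON city, zip_code FROM orders;
    ANALYZE orders;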

What I'd suggest is picking one or two queries that perform particularly poorly, doing EXPLAIN ANALYZE on them (both with the default and the modified cost values, and maybe even on different PostgreSQL versions) and sending that to the pgsql-performance mailing list. If you ping me, I'll take a look.
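One convenient way to capture the plans as separate files for attaching, sketched with psql meta-commands (the file names, query and modified setting are just examples):

    \o plan_default_costs.txt
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
    \o

    SET random_page_cost = 1.1;    -- or whatever the modified values were
    \o plan_modified_costs.txt
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
    \o
    RESET random_page_cost;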

Ken Whitesell

Quick question on the logistics for this - the query is about 25K, and the explains are about 150K. Do you want these included as in-line text within the email, as attachments, zip attached, or ??? (I don't want to spam the list or create problems for anyone, and I don't see anything in either wiki.postgresql.org/wiki/Guide_to_... or wiki.postgresql.org/wiki/Slow_Quer... that seems to address issues with this much information to supply - my apologies if it's there and I've just missed it.)
I do see the reference to explain.depesz.com, I can see where that will help...

Tomas Vondra

Definitely as attachments, not in-line - mail clients sometimes wrap / reflow the lines, making them impossible to read (particularly for such long queries / explains). Compressing them won't hurt, of course, although I'm pretty sure they'll be compressed during transfer anyway.

explain.depesz.com is an option too, and I've recommended it in the past, but it only really addresses the explain plan, not the queries. So just send everything as attachments, and the readers can always copy it into explain.depesz.com if they wish.