I've been using PostgreSQL for a long time and it honestly hasn't disappointed me.
Usually when I had slow queries the problem was the query itself (including missing indexes on the relevant columns) or the server load. To be honest, I rarely found bugs that affected the outcome of a query or made its execution time skyrocket.
Are you having problems with the output of EXPLAIN ANALYZE?
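For reference, this is the kind of thing I mean; a minimal sketch against a hypothetical table (the `orders` table and `customer_id` column are made up for illustration):

```sql
-- EXPLAIN ANALYZE actually executes the query and shows the planner's
-- row estimates side by side with the real row counts and timings.
EXPLAIN ANALYZE
SELECT *
FROM orders              -- hypothetical table
WHERE customer_id = 42;  -- hypothetical column
```

Large gaps between the estimated and actual row counts in that output are usually the first hint that something is off with the plan.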
Also: since the introduction of JSONB in PostgreSQL, there's a lot you can get away with without having to introduce a second database into your stack.
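For instance, a minimal sketch of using JSONB where you might otherwise reach for a document store (the `events` table and its columns are hypothetical):

```sql
-- JSONB stores parsed JSON, so it can be indexed and queried directly.
CREATE TABLE events (
    id      bigserial PRIMARY KEY,
    payload jsonb NOT NULL
);

-- A GIN index makes containment queries on the JSON payload fast.
CREATE INDEX events_payload_idx ON events USING gin (payload);

-- Find events whose payload contains {"type": "signup"}.
SELECT *
FROM events
WHERE payload @> '{"type": "signup"}';
```

The `@>` containment operator combined with a GIN index covers a lot of the lookups you'd normally justify a separate document database for.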
PostgreSQL is probably my favorite database too, and I totally agree, it's almost always my crappy queries that are to blame!
The case I talked about in the post was really interesting though, because the query planner was producing radically different plans, all depending on how "fresh" its table stats were.
I learned that table stats are automatically refreshed on a commit, but my slow query was running inside the same transaction, after a huge insert. So the insert skewed the table data in a way that the stats didn't reflect, therefore producing a bad plan.
The ANALYZE command (which is different from EXPLAIN ANALYZE), when executed against a table, tells PG to sample the table's contents and refresh its statistics.
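Putting the pieces together, a sketch of the scenario described above; the `measurements` table and its columns are made up for illustration:

```sql
BEGIN;

-- Huge insert that skews the data distribution mid-transaction.
INSERT INTO measurements (sensor_id, value)
SELECT i % 10, random()
FROM generate_series(1, 1000000) AS i;

-- The planner's stats still describe the pre-insert table. Running
-- ANALYZE here re-samples it inside the same transaction, so the
-- next query is planned against fresh statistics.
ANALYZE measurements;

SELECT count(*)
FROM measurements
WHERE sensor_id = 3;  -- the formerly slow query

COMMIT;
```

The key point is that ANALYZE can run inside the transaction, between the big insert and the query that depends on accurate stats.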