NightBird07

The Part of PostgreSQL we Don’t know the most

Hello again. We have been running a series of talks about PostgreSQL, and as I mentioned in the first article, we are taking a top-down approach, going deep into the design of features that were recently added in PostgreSQL 15. But I was reading an article that went viral over the last few days, "The Part of PostgreSQL we hate the most", and though I agree with all the points it makes and with its stated purpose, I am going to walk through some aspects of the system design that explain why the designers chose to do things this way.

Why use O2N over N2O

While the reasons ultimately rest with the designers, we can at least talk about the trade-off. As the article states, ordering version chains oldest-to-newest (O2N) is a big disadvantage for heavy reads, because a query must walk the chain from the oldest version before it reaches the newest one. But what about heavy writes? One can think of O2N as a very efficient way to store new versions without having to update the indexes, whereas a newest-to-oldest (N2O) design would require repointing the indexes at the new chain head every time a tuple is modified. Imagine a table with millions of rows and a whole set of indexes over it: you update just one row, the new version copies all of that row's data, and if every index then had to be redirected to the new version, the extra load on the backend process would be massive.
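To make the chain direction concrete, here is a minimal sketch using the pageinspect extension that ships with PostgreSQL (it normally needs superuser rights, and the table t here is just a throwaway example, not anything from the article):

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

CREATE TABLE t (id int PRIMARY KEY, note text);
INSERT INTO t VALUES (1, 'first version');
UPDATE t SET note = 'second version' WHERE id = 1;

-- Each heap tuple's t_ctid points FORWARD to its successor, so the
-- version chain is walked oldest-to-newest (O2N).
SELECT lp, t_xmin, t_xmax, t_ctid
FROM heap_page_items(get_raw_page('t', 0));
-- Expected: the old version at lp = 1 has t_ctid = (0,2), pointing at the
-- new version. Because no indexed column changed here, this was a HOT
-- update and the primary-key index still references the old line pointer.
```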

Why MVCC is singled out as the worst part

In PostgreSQL's MVCC (Multi-Version Concurrency Control) with its append-only storage scheme, when a query updates a tuple, a new version of the entire tuple is created, regardless of whether the update modifies one column or all of them. This results in significant data duplication, leading to increased storage requirements and memory usage. As a consequence, PostgreSQL may need more disk and memory to store the same database than other DBMSs, which can mean slower queries and higher cloud costs. At the same time, there is a kind of safety in this scheme: you never change a single bit of the older data in place; instead, the whole row is written out again in full, often onto another page, and readers keep seeing a consistent version.
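A quick way to see this whole-tuple copy is through the ctid, xmin and xmax system columns. The following is only a sketch; the customers table and its columns are made up for illustration:

```sql
-- Sketch: even a one-column UPDATE produces a brand-new physical row version.
CREATE TABLE customers (
    id        int PRIMARY KEY,
    name      text,
    address   text,
    biography text   -- stands in for many more columns that never change
);
INSERT INTO customers VALUES (1, 'Ada', '12 Example St', 'a long biography');

-- Before: note the physical location (ctid) and inserting transaction (xmin).
SELECT ctid, xmin, xmax FROM customers WHERE id = 1;

UPDATE customers SET address = '34 Example Ave' WHERE id = 1;

-- After: ctid has moved and xmin is a new transaction id, because the
-- whole row, including the untouched columns, was copied as a new version.
SELECT ctid, xmin, xmax FROM customers WHERE id = 1;
```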

Why use append-only storage

It was very effective until database sizes grew to the point where the storage requirements and memory usage became a significant concern. Nowadays we have millions of rows that need to be stored, retrieved and even modified. With the good intention of fixing what is still a problem, PostgreSQL offers various mechanisms, but to be honest they are not always practical from a speed point of view.

Auto Vacuum
If you have been using pointers in C++ or another language without garbage collection, you will have noticed that memory allocated on the heap with the keyword "new" must be released manually with "delete" before that space can be reused. Java, in contrast, does not let you delete allocated space by hand: it automatically reclaims an object once nothing points to it, and the delay before that space actually becomes available again can be noticeable. We can think of autovacuum in the same way. So why is it still seen as unreasonable? For the same reason: you need the space back. When dealing with a database we have to weigh memory and processing time as carefully as we can, since they are the key to spotting the drawbacks, and the larger the database gets, the more work autovacuum has to do and the more complex its job becomes.
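To see what autovacuum is up against, here is a sketch using standard PostgreSQL statistics views and settings; the customers table is the same hypothetical one from the earlier example:

```sql
-- Dead tuples pile up until (auto)vacuum reclaims the space.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- Autovacuum wakes up for a table roughly once dead tuples exceed
-- autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples,
-- so on a huge table that can already mean millions of dead rows.
SHOW autovacuum_vacuum_threshold;
SHOW autovacuum_vacuum_scale_factor;

-- Reclaiming space for one table by hand, with a report of the work done:
VACUUM (VERBOSE) customers;
```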

Why the design is still valuable despite all of this

Open-source: PostgreSQL is an open-source database, which means that it is free to use and can be modified and distributed by anyone. This makes it an attractive option for organizations that want to avoid vendor lock-in and have more control over their database.

Reliability and durability: PostgreSQL is known for its reliability and durability, with features such as transactional support, data integrity, and crash recovery. This makes it a popular choice for applications that require high levels of data consistency and availability.
