
mayur

Autovacuum Tuning in the Cloud Era


How often do you hear DBAs vent, "Autovacuum used to fly on bare-metal, but in the cloud, it feels like it's dragging forever!" Or they might say, "I've doubled autovacuum_max_workers, yet dead tuples just keep stacking up," or "Autovacuum isn't giving us the throughput our cloud provider promised!" And then there's the developer chiming in, "Every time Autovacuum kicks in on that big table, our application queries start timing out!" These frustrations are all too common in cloud environments, where expectations meet the harsh reality of "you get what you pay for."

In the cloud era, PostgreSQL's Autovacuum tuning can feel like a tightrope walk - juggling application performance, costs, and autovacuum efficiency due to cloud-imposed limitations. The cloud puts strict caps on I/O and throughput. Push Autovacuum too hard, and it'll devour your application's resources, tanking performance. But dial it down too much, and bloat creeps in, eventually strangling the application's efficiency. It's all about finding that sweet spot.

DBAs often increase autovacuum_max_workers and maintenance_work_mem, or adjust autovacuum_vacuum_scale_factor, yet still wonder why Autovacuum fails to clean the database effectively. This inefficiency arises because Autovacuum throughput is constrained by autovacuum_vacuum_cost_limit and further capped by host-level restrictions in the cloud. Before we delve into tuning Autovacuum, we need some metrics at hand. Autovacuum is known for being I/O-intensive, but just how much strain does it place on your system? Let's explore.

# First, let's see how much I/O to expect.
# At most, Autovacuum can do as much I/O as shown below (best case: every page it touches is a buffer hit).
Max Autovacuum throughput = (1000 / autovacuum_vacuum_cost_delay) * (autovacuum_vacuum_cost_limit / vacuum_cost_page_hit) * 8 KB (default block_size)

# For PG17 default settings (cost delay = 2 ms, cost limit = 200, page hit cost = 1):
Max Autovacuum throughput = (1000 / 2) * (200 / 1) * 8 KB = 800 MB per second
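
Managed services sometimes override these defaults, so it is worth computing the ceiling from the values actually in effect on your instance. The query below is a minimal sketch of the same best-case formula against pg_settings (it assumes a non-zero cost delay; a delay of 0 disables cost-based throttling entirely):

-- Best-case Autovacuum throughput estimate from the live settings.
-- Assumes every page is a buffer hit; autovacuum_vacuum_cost_limit = -1
-- falls back to vacuum_cost_limit.
WITH p AS (
  SELECT name, setting::numeric AS val
  FROM pg_settings
  WHERE name IN ('autovacuum_vacuum_cost_delay',   -- ms to sleep between cost rounds
                 'autovacuum_vacuum_cost_limit',
                 'vacuum_cost_limit',
                 'vacuum_cost_page_hit',
                 'block_size')                      -- bytes per page
)
SELECT round(
         (1000 / (SELECT val FROM p WHERE name = 'autovacuum_vacuum_cost_delay'))
       * ( coalesce(nullif((SELECT val FROM p WHERE name = 'autovacuum_vacuum_cost_limit'), -1),
                    (SELECT val FROM p WHERE name = 'vacuum_cost_limit'))
           / (SELECT val FROM p WHERE name = 'vacuum_cost_page_hit') )
       * (SELECT val FROM p WHERE name = 'block_size') / 1024 / 1024
       , 1) AS max_autovacuum_mib_per_sec;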

Now let's see what the I/O throughput limits are on different clouds, and at what cost.

Starting with AWS (since it's the cloud most familiar to me).
The answer below is old, but I wanted to show that the throughput limit in AWS depends on both the instance class and the storage class.

[Screenshot: forum answer on RDS throughput limits by instance class and storage class]

Burstable instance class: Its throughput is too low and it is generally not suitable for high-performance production workloads, so I am ignoring it.

Standard instance class: General-purpose standard instances of the latest Graviton series offer better throughput, up to 2.5 GB/s.
The Gbps/Mbps figures here are in bits, so divide by 8 to get GB/s or MB/s; for example, a 5 Gbps baseline works out to 5 / 8 = 0.625 GB/s = 625 MB/s, the figure used later for the m7g.4xlarge.

[Screenshot: AWS instance class throughput specifications]

"Up to" is a very vague term. I found another piece of the puzzle in AWS document.

[Screenshot: AWS documentation on baseline vs. maximum EBS performance]

Since maximum performance is guaranteed for only 30 minutes per day, we will focus only on baseline values.

Next, storage classes and limits:
We will only consider SSDs and the latest storage classes available.
GP3: GP3 offers 15 MB/s for tiny databases (below 400 GB) and a customizable range of 62.5 MB/s to 500 MB/s for larger databases.

[Screenshot: GP3 storage throughput limits]

io2:

[Screenshot: io2 storage throughput limits]

For io2, we can reach up to 500 MB/s as we crank up the IOPS knob.

[Screenshot: io2 throughput scaling with provisioned IOPS]

Cost: Addressing the Elephant in the Room


Let's use the AWS pricing calculator to see how much we would need to spend to reach 500 MB/s of throughput capacity on AWS (in the cheapest region) for a 1 TB RDS PostgreSQL database. To simplify the calculations and comparisons we will consider single-AZ RDS; you should always use Multi-AZ in production.
Instance class: m7g.4xlarge (baseline throughput is 625 MB/s)
Storage: io2 with 256,000 IOPS

[Screenshot: AWS pricing calculator estimate for m7g.4xlarge with io2 storage]

Holy IOPS!! Around $26K per month for 1 TB - most of it is the 256,000 provisioned IOPS themselves.


Don't worry, GP3 saves the day.

[Screenshot: AWS pricing calculator estimate for m7g.4xlarge with GP3 storage]

That seems like a reasonable cost if you have a high throughput requirement on RDS.

Next, we will look at the pricing and capacity needed to achieve the same 500 MB/s throughput on Google Cloud SQL for PostgreSQL and Azure Database for PostgreSQL — Flexible Server. These two are the closest matching counterparts of Amazon RDS for PostgreSQL.

Google:

[Screenshot: Google Cloud persistent disk throughput limits]

So we will need a zonal extreme persistent disk and an N2 VM with 64 vCPUs to get 500 MB/s on GCP.

[Screenshot: Google Cloud pricing estimate]

The Google pricing calculator is quite limited; it does not allow me to select a zonal extreme PD and only offers a generalized SSD cost.

Azure Database for PostgreSQL — Flexible Server

We will need a configuration of at least 16 vCores (for example, the D16ds_v5 instance) as it can provide up to 600 MB/s throughput.

[Screenshot: Azure Database for PostgreSQL Flexible Server throughput limits]

SSD v2 is still in preview as of this writing.

So the estimated monthly cost would be around $1,400.00, slightly lower than AWS.

Why did I dive into the cost of compute and storage? Well, it’s not just about convincing the cloud FinOps team that we need a buffer for throughput — though that’s part of it. As a DBA, you might find yourself in a production incident where a small capacity bump could cost your company only a few hundred dollars. But in doing so, you’d be preventing a potential disaster worth thousands! Sometimes, the difference between smooth sailing and costly downtime is just a matter of proactively increasing capacity before things go south. 🚀

Juggling Cost, Autovacuum I/O, and Application Performance


For Autovacuum tuning, consider the following two scenarios:

  1. Your “Pet” Database:
    If you only have a single beloved database to care for, with ample throughput capacity, you can fine-tune autovacuum_vacuum_cost_limit and autovacuum_max_workers to keep the vacuum backlog low. Regularly monitor dead tuples using pg_stat_all_tables to identify any potential issues. For very large tables that require special attention, adjust the table-level Autovacuum parameters while maintaining conservative settings for the rest (see the sketch after this list).

  2. Microservices Architecture:
    In a company operating a microservices architecture with hundreds of independent RDS PostgreSQL databases, cloud FinOps experts aim to optimize costs by selecting RDS instance classes and storage that align with the application’s throughput and latency requirements — a practice known as right-sizing. However, this approach often overlooks the Autovacuum component. Being on an undersized instance means that merely increasing autovacuum_vacuum_cost_limit, reducing autovacuum_vacuum_cost_delay, or adding more workers won’t suffice.
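
For the "pet" database scenario, a minimal sketch might look like the following; the table name big_orders and the thresholds are purely illustrative, not recommendations for any particular workload:

-- Tables with the most dead tuples, plus when Autovacuum last touched them.
SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_all_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC
LIMIT 20;

-- Give one large, hot table (hypothetical name) more aggressive settings,
-- while leaving the instance-wide defaults conservative.
ALTER TABLE big_orders SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- trigger after ~1% of rows are dead
    autovacuum_vacuum_cost_limit   = 1000   -- allow more vacuum I/O for this table only
);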

As a DBA, you must think creatively:

  1. Consider partitioning problematic tables, enabling Autovacuum to operate on smaller chunks. Since indexes are local to partitions, they are smaller too, which helps because Autovacuum has to process every index of a table on each run.

  2. Detect and drop unused indexes; the fewer indexes there are, the quicker Autovacuum can finish (see the first query after this list).

  3. Minimize long-running transactions to prevent them from holding back the xmin horizon, which renders some Autovacuum runs ineffective (see the second query after this list).

  4. If Autovacuum on large tables still impacts application performance by hogging throughput, you may need to dial back the cost limit and delay.

  5. However, don't go too conservative on the cost limit and delay; this risks TXID wraparound, which can bring down your entire database. Always implement an Early Warning System for TXID wraparound (AWS offers a detailed guide on this); a simple age check is sketched after this list.
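
For points 2 and 3, the standard statistics views are enough. These queries are generic sketches: the 15-minute threshold is just an example, and an index with idx_scan = 0 still deserves a manual check (it may back a constraint, or the stats may have been reset recently) before you drop it.

-- Indexes that have never been scanned since statistics were last reset.
SELECT schemaname, relname, indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

-- Transactions open longer than 15 minutes that may be holding back xmin.
SELECT pid, usename, state, xact_start, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '15 minutes'
ORDER BY xact_start;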
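
For point 5, the usual starting point for a wraparound early-warning system is a per-database age check; the 2-billion figure below is only a rough gauge of the hard limit, and you should alert long before autovacuum_freeze_max_age (200 million by default) is exceeded.

-- Age of the oldest unfrozen transaction ID per database.
-- Autovacuum forces anti-wraparound vacuums at autovacuum_freeze_max_age;
-- real trouble starts as the age approaches ~2 billion.
SELECT datname,
       age(datfrozenxid) AS xid_age,
       round(100.0 * age(datfrozenxid) / 2000000000, 2) AS pct_towards_wraparound
FROM pg_database
ORDER BY age(datfrozenxid) DESC;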

Additionally, log these warnings and observe dead tuple growth trends in your most critical databases. Share these findings with FinOps, advocating funding for more throughput capacity at both the instance and storage level.

A picture is worth a thousand words

[Image: summary diagram]
