If you manage PostgreSQL in production, you probably felt a chill when the news hit Hacker News: pgbackrest appears to no longer be actively maintained. For a tool that's been the backbone of PostgreSQL backup strategies for years, that's a big deal.
Let's talk about what this means practically, how to assess your exposure, and how to migrate to an alternative without losing sleep.
Why This Hurts
pgbackrest has been the gold standard for PostgreSQL backup and restore for a long time. It handles incremental backups, parallel backup/restore, encryption, and repository management in a way that pg_dump simply can't match. A lot of production setups — including several I've worked on — depend on it heavily.
When a critical infrastructure tool loses active maintenance, you're looking at a few real problems:
- Security patches stop. Any CVEs discovered going forward won't get fixed upstream.
- New PostgreSQL versions may break compatibility. PostgreSQL 17 and beyond might introduce changes pgbackrest can't handle.
- Bug fixes are on you. That edge case in differential backup you've been meaning to report? It's staying.
This doesn't mean you need to panic and rip it out tomorrow. But you do need a plan.
Step 1: Audit Your Current pgbackrest Setup
Before migrating anything, document what you're actually using. Not every pgbackrest feature has a 1:1 replacement in every alternative tool.
# Check your pgbackrest config
cat /etc/pgbackrest/pgbackrest.conf
# List your current backup stanzas and their status
pgbackrest --stanza=your_db info
# Check your repo type (local filesystem, S3, Azure, GCS?)
grep 'repo1-type' /etc/pgbackrest/pgbackrest.conf
Write down the answers to these questions:
- Are you using incremental or differential backups?
- What's your retention policy (how many full backups do you keep)?
- Are you backing up to object storage (S3, GCS, Azure) or local disk?
- Do you use encryption at rest?
- Are you using pgbackrest for PITR (point-in-time recovery)?
- Do you rely on parallel backup/restore?
This list becomes your migration checklist.
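A quick way to pull several of those answers straight out of the config file. The key names are standard pgbackrest options; the default path and this particular selection of keys are assumptions, so adjust for your layout:

```shell
#!/usr/bin/env bash
# Summarize the pgbackrest settings that matter for migration.
# Pass a config path as $1, or fall back to the standard location.
CONF=${1:-/etc/pgbackrest/pgbackrest.conf}

summarize() {
  # Print the options that determine your migration requirements, if set:
  # repo type, full-backup retention, encryption, and parallelism
  grep -E '^(repo1-type|repo1-retention-full|repo1-cipher-type|process-max)' "$1" \
    || echo "none of the checklist keys found in $1"
}

summarize "$CONF"
```

Paste the output into your migration notes alongside the checklist answers above.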
Step 2: Evaluate Your Alternatives
There are three serious contenders worth evaluating. Each has tradeoffs.
Barman (by EnterpriseDB)
Barman is the most feature-complete alternative. It's actively maintained by EnterpriseDB (which acquired 2ndQuadrant, Barman's original developer) and covers most of what pgbackrest does.
Strengths: PITR, incremental backups, parallel jobs, S3/Azure/GCS support, solid documentation, active development.
Weaknesses: Python-based (adds a runtime dependency), and the configuration model differs enough from pgbackrest's to require real migration effort.
WAL-G
WAL-G is a Go-based tool originally developed at Citus Data. It's focused on cloud-native backup workflows and is quite fast.
Strengths: Written in Go (single binary, no runtime deps), excellent cloud storage support, delta backups, good performance with large databases.
Weaknesses: Less mature PITR tooling compared to Barman, fewer configuration knobs for complex setups.
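If the environment-driven style appeals to you, here's a minimal sketch of how WAL-G is typically wired up. The bucket name, region, and host are placeholders:

```shell
# Environment-driven WAL-G configuration (all values are placeholders)
export WALG_S3_PREFIX="s3://my-backup-bucket/main-db"
export AWS_REGION="us-east-1"
export PGHOST="db-server"

# In postgresql.conf, WAL archiving goes through wal-push:
#   archive_command = 'wal-g wal-push %p'
#
# And a base backup is a single command against the data directory:
#   wal-g backup-push /var/lib/postgresql/16/main
```

The lack of a config file is a feature or a bug depending on your taste; it makes WAL-G easy to drive from systemd units or containers, but harder to audit at a glance.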
pg_basebackup + Manual WAL Archiving
The built-in option. PostgreSQL ships with pg_basebackup and WAL archiving out of the box. No third-party tool required.
Strengths: Zero additional dependencies, guaranteed compatibility with your PostgreSQL version, well-documented in core PostgreSQL docs.
Weaknesses: No incremental backups, no built-in retention management, no parallel restore, you're writing your own wrapper scripts.
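If you take the built-in route, a minimal wrapper looks something like this. It's a sketch: the host, user, and backup directory are placeholders, and it prints the command instead of running it so you can review it before wiring it into cron:

```shell
#!/usr/bin/env bash
# Sketch of a nightly base-backup wrapper for the built-in option.
# DB_HOST, REPL_USER, and BACKUP_ROOT are placeholders for your environment.
set -euo pipefail

DB_HOST=db-server
REPL_USER=replicator
BACKUP_ROOT=/var/backups/postgres
STAMP=$(date +%Y%m%dT%H%M%S)

# -Ft tar format, -z gzip compression, -Xs stream WAL alongside the backup,
# -P progress reporting
CMD="pg_basebackup -h $DB_HOST -U $REPL_USER -D $BACKUP_ROOT/$STAMP -Ft -z -Xs -P"

# Dry run: print the command; swap the echo for the command itself once reviewed
echo "$CMD"
```

Note what's missing: retention pruning and restore orchestration are still on you, which is exactly the gap the dedicated tools fill.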
Step 3: Migration Path (pgbackrest → Barman Example)
Here's a concrete walkthrough for migrating to Barman, since it covers the most common pgbackrest use cases.
# Install Barman
sudo apt-get install barman # Debian/Ubuntu
# or
sudo yum install barman # RHEL/CentOS
# Create a Barman configuration for your server
# (tee, not 'cat >', so the redirect itself runs with root privileges)
sudo tee /etc/barman.d/main-db.conf > /dev/null << 'EOF'
[main-db]
description = "Main Production Database"
ssh_command = ssh postgres@db-server
conninfo = host=db-server user=barman dbname=postgres
backup_method = postgres
# Use streaming for WAL archiving (replaces pgbackrest archive-push)
streaming_archiver = on
slot_name = barman
streaming_conninfo = host=db-server user=streaming_barman
# Retention: a 14-day recovery window (map this to your pgbackrest
# repo1-retention-full policy)
retention_policy = RECOVERY WINDOW OF 14 DAYS
EOF
Then set up the replication slot and test:
# On the PostgreSQL server, create the replication slot
psql -c "SELECT pg_create_physical_replication_slot('barman');"
# Back on the Barman server, verify the connection
barman check main-db
# Take your first backup
barman backup main-db
# Verify it worked
barman list-backup main-db
The Critical Overlap Period
Don't cut over immediately. Run both tools in parallel for at least two full backup cycles. This means:
- Keep pgbackrest running on its existing schedule
- Run Barman alongside it
- Test a restore from Barman to a staging environment
- Only after a successful test restore, disable pgbackrest
I cannot stress point 3 enough. A backup you haven't tested restoring is not a backup. It's a hope.
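The restore test itself is worth scripting. Here's a sketch of the Barman invocation; the staging host and data directory are placeholders, and it echoes the command rather than running it so you can review it first:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the staging restore test.
# STAGING_HOST and DATA_DIR are placeholders for your environment.
set -euo pipefail

STAGING_HOST=staging-db
DATA_DIR=/var/lib/postgresql/16/main

# "latest" is a valid backup ID alias in Barman
RECOVER_CMD="barman recover main-db latest $DATA_DIR --remote-ssh-command 'ssh postgres@$STAGING_HOST'"

# Review, then run the command directly on the Barman server
echo "$RECOVER_CMD"
```

After the recover finishes, start PostgreSQL on the staging host and run a sanity query against real tables before you call the test a pass.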
Step 4: Update Your archive_command
If you're using pgbackrest's archive-push in your postgresql.conf, you'll need to update this. With Barman using streaming replication, you might not need archive_command at all, but if you want belt-and-suspenders:
# postgresql.conf — old pgbackrest config
# archive_command = 'pgbackrest --stanza=main-db archive-push %p'
# Option A: Switch to barman-wal-archive
archive_command = 'barman-wal-archive barman-server main-db %p'
# Option B: If using streaming replication with Barman, a local copy
# can serve as a fallback (the 'test' guard avoids overwriting existing
# files, as recommended in the PostgreSQL docs)
archive_command = 'test ! -f /var/lib/postgresql/wal_archive/%f && cp %p /var/lib/postgresql/wal_archive/%f'
Reload PostgreSQL after changing this:
sudo systemctl reload postgresql
Prevention: Making Your Backup Strategy More Resilient
This situation is a good reminder that depending on a single tool for critical infrastructure is risky. Here's what I'm doing going forward:
- Layer your backups. Use a tool like Barman or WAL-G for your primary backup pipeline, but also run periodic pg_dump exports as a secondary safety net. They're slower and larger, but they're format-independent.
- Test restores regularly. Set up a cron job or CI pipeline that restores your latest backup to a throwaway instance at least weekly. If you're not testing restores, you don't have backups.
- Monitor backup health. Whatever tool you use, set up alerts for failed backups, growing backup age, and WAL archive lag. The worst time to discover your backups aren't working is during a recovery.
- Document your recovery procedure. Write a runbook. Actually write it down. Include the exact commands, the expected timelines, and who has access to what. Future-you at 3 AM during an incident will be grateful.
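For the "growing backup age" alert, the check can be as small as a date comparison on Barman's backup IDs. A sketch, with GNU date assumed and the server name and one-day threshold as placeholders:

```shell
#!/usr/bin/env bash
# Sketch of a backup-age alert for cron. Barman backup IDs begin with the
# date (e.g. 20240101T030000), so we can compare that part to today.

backup_age_days() {
  # $1: a backup ID like 20240101T030000; $2: a reference date as YYYYMMDD
  local backup_day=${1%%T*}
  echo $(( ($(date -d "$2" +%s) - $(date -d "$backup_day" +%s)) / 86400 ))
}

# On the Barman server (requires Barman; head -n 1 assumes newest-first output):
#   latest=$(barman list-backup main-db --minimal | head -n 1)
#   age=$(backup_age_days "$latest" "$(date +%Y%m%d)")
#   [ "$age" -le 1 ] || echo "ALERT: newest main-db backup is $age days old"
```

Pipe that alert line into whatever already wakes you up — mail, PagerDuty, a Slack webhook — rather than inventing a new channel nobody watches.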
Don't Panic, But Don't Wait
pgbackrest losing active maintenance doesn't mean your existing backups are suddenly invalid. The tool still works. Your existing backups are still there. But the clock is ticking on compatibility with future PostgreSQL releases and security patches.
Start your evaluation now, pick the alternative that matches your feature needs, and run a parallel migration over the next few weeks. Your future self — and your on-call rotation — will thank you.