TL;DR: The thing that finally broke me wasn't a failed restore. It was a successful restore that silently put the database in the wrong state.
📖 Reading time: ~27 min
What's in this article
- The Script Graveyard Problem (Why I Was Even Looking)
- What Portabase Actually Is (Without the Marketing Speak)
- Installing Portabase v1.13
- Running Your First Backup
- Restoring a Backup (The Part That Actually Matters)
- The 3 Things That Surprised Me After Daily Use
- When NOT to Use Portabase
- Portabase vs. Rolling Your Own pg_dump Scripts
- Practical CI/CD Integration Example (GitHub Actions)
The Script Graveyard Problem (Why I Was Even Looking)
The thing that finally broke me wasn't a failed restore. It was a successful restore that silently put the database in the wrong state. I restored a three-month-old Postgres dump onto a staging box, ran the app, and everything looked fine — until a teammate noticed we were missing two columns that a migration had added in week six. The dump was fine. The migration history wasn't captured anywhere near it. Those two artifacts lived in completely separate places, managed by completely separate processes, with no enforcement that they'd ever travel together.
If you've shipped more than two or three projects, you know the folder I'm talking about. It's usually called scripts/ or db/ or, honestly, just dumped in the repo root. Mine had:
- dump.sh — pg_dump wrapped in three lines of bash, no compression flag, no timestamp in the filename
- restore-prod.sh — had a hardcoded connection string that stopped working after a password rotation six months ago
- migrate-staging-FINAL2.sh — I genuinely cannot tell you what FINAL1 did differently
- backup-before-release.sh — ran once, never touched again, still references a database host that doesn't exist
The core problem with this approach isn't laziness — it's that backups and migrations are conceptually treated as separate concerns when they're actually coupled. A backup without its corresponding migration state is a partial artifact. You don't know which migrations had already run when that dump was taken. So when you restore it three months later, you're guessing. You might run all pending migrations on top of it, or none, and both options can be wrong depending on what state the dump captured. I've seen both mistakes cause data loss in staging environments, and once — painfully — in a production incident.
What I actually needed was a tool that bakes the migration state into the backup artifact itself, so that restoring a backup automatically knows which migrations to run or skip. Not a convention I maintain manually. Not a README I promise to keep updated. A hard guarantee at the tooling level. That's what Portabase v1.13 does, and it's the reason I switched. Quick scope note: this article covers v1.13 specifically. If you're on v1.12, the --migration-source flag was renamed and the snapshot format changed — a straight upgrade without reading the changelog will break existing restore scripts.
What Portabase Actually Is (Without the Marketing Speak)
The thing that surprised me most about Portabase is what it doesn't do. It doesn't try to be another migration runner. It doesn't compete with Flyway or Liquibase. It reads the migration state those tools already wrote into your database and snapshots it alongside your SQL dump into a single archive. That's a genuinely useful distinction — and one the README buries under three paragraphs of preamble.
At its core, Portabase ships as a single Go binary with no runtime baggage. You drop it on a server, point it at your database connection string, and it works. The only real dependency is having your DB client libraries reachable — libpq for PostgreSQL or the MySQL connector for MariaDB. There's no Node runtime to wrangle, no JVM version conflicts, no Python virtualenv. I've dropped the 12MB binary into a Docker image's final stage and called it done in about four lines.
The output format is a .pbx archive — a tar-compressed bundle containing three things:
- The SQL dump — a plain pg_dump or mysqldump output, nothing proprietary
- A migration manifest — a JSON snapshot of the schema history table (compatible with Flyway's flyway_schema_history, Liquibase's DATABASECHANGELOG, or Alembic's alembic_version)
- A checksum file — SHA-256 hashes of both the above, so you can verify archive integrity before a restore without unpacking everything
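Because the container is plain tar plus compression, standard tooling can look inside an archive without a restore. A quick sketch; note that the inner file names here are my guesses at the layout, not documented names, and the tar flags assume gzip compression (swap in --zstd if your archives use zstd):
# List the contents of a .pbx — inner file names below are illustrative
tar -tzf backups/myapp-20250115.pbx
# Typical layout (hypothetical):
#   dump.sql
#   migration-manifest.json
#   checksums.sha256
# Extract only the manifest to stdout to see which migrations the backup captured
tar -xzf backups/myapp-20250115.pbx -O migration-manifest.json | head -20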
A minimal backup invocation looks like this:
# PostgreSQL, with Flyway-tracked migrations
portabase backup \
  --driver postgres \
  --dsn "postgres://app_user:secret@localhost:5432/myapp" \
  --migration-tracker flyway \
  --output ./backups/myapp-$(date +%Y%m%d).pbx
# Output:
# [✓] SQL dump: 142MB
# [✓] Migration manifest: 47 entries (flyway_schema_history)
# [✓] Checksum written: sha256:a3f1...
# Archive written: ./backups/myapp-20250115.pbx (38MB compressed)
The optional daemon mode is where the operationally interesting stuff lives. Run portabase daemon --config /etc/portabase/config.yaml and it'll handle scheduled backups, retention pruning, and — critically — post-restore migration state reconciliation, where it replays the manifest into a fresh schema history table so Flyway or Liquibase doesn't think the target database needs migrations it already has. Without that, a naive SQL restore followed by a deploy will often trigger a cascade of "already applied" errors or, worse, duplicate migration attempts.
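If you run the daemon outside of Docker, the obvious way to keep it alive is a systemd unit. A minimal sketch, assuming a dedicated portabase user and the config path from above; none of this comes from the Portabase docs:
# /etc/systemd/system/portabase.service — a minimal sketch, not from the docs
sudo tee /etc/systemd/system/portabase.service > /dev/null <<'EOF'
[Unit]
Description=Portabase backup daemon
After=network-online.target
Wants=network-online.target

[Service]
User=portabase
ExecStart=/usr/local/bin/portabase daemon --config /etc/portabase/config.yaml
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now portabase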
Portabase v1.13 specifically added native Alembic support (Python/SQLAlchemy shops were previously shimming it with the --custom-manifest flag) and fixed a long-standing bug where restoring onto a database with a non-default schema search path would silently write migration state to the wrong schema. Both of those were real production pain points — the Alembic gap in particular meant the tool was largely ignored by the FastAPI/Django crowd until now.
Installing Portabase v1.13
The first thing that tripped me up was the Homebrew tap lagging. I ran brew install portabase on my Mac, saw it pull v1.12.1, and spent twenty minutes wondering why a v1.13 flag wasn't recognized. Always verify what you actually got:
brew install portabase
portabase version
# portabase v1.12.1 -- oops, tap hasn't caught up yet
If the tap is behind, just fall back to the install script and pin explicitly. The Homebrew route is convenient for local tinkering but I wouldn't trust it in any automated context where the version actually matters.
For everything repeatable — local dev setups, CI pipelines, teammate onboarding scripts — pin the version hard:
curl -sSL https://get.portabase.dev | sh -s -- v1.13.0
The v1.13.0 argument after the -- separator is what pins the specific release. Drop that suffix and you get whatever "latest" resolves to on that day, which will silently break your migration detection logic the moment v1.14 ships with a different adapter interface. Non-negotiable in CI.
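A cheap belt-and-braces step I'd add right after any install: assert that the binary is the version the pipeline expects. This parses the portabase version output shown above, so adjust the awk field if your output format differs:
# Fail fast if the installed binary isn't the pinned version
EXPECTED="v1.13.0"
INSTALLED="$(portabase version | awk '{print $2}')"   # "portabase v1.13.0" -> "v1.13.0"
if [ "$INSTALLED" != "$EXPECTED" ]; then
  echo "portabase version mismatch: wanted $EXPECTED, got $INSTALLED" >&2
  exit 1
fi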
What I actually use in pipelines is the Docker image. It's cleaner than fighting with system dependencies and the digest is immutable:
# In your docker-compose or CI service definition
image: ghcr.io/portabase/portabase:1.13.0
# Or pull and inspect locally
docker pull ghcr.io/portabase/portabase:1.13.0
docker run --rm ghcr.io/portabase/portabase:1.13.0 portabase version
The gotcha that will burn you on a fresh Ubuntu 22.04 GitHub Actions runner: the installer script calls out to libpq at runtime for Postgres connectivity, but ubuntu-22.04 runners don't ship with libpq-dev. The error message isn't obvious — you'll see something like symbol lookup error: libpq.so.5 rather than a clean "missing dependency" message. Fix it before you run the installer:
- name: Install system deps
  run: sudo apt-get update && sudo apt-get install -y libpq-dev
- name: Install Portabase
  run: curl -sSL https://get.portabase.dev | sh -s -- v1.13.0
Once it's installed, run portabase doctor before you do anything else — this is a v1.13 addition and it's genuinely useful. It checks three things in sequence: whether it can reach your target database with the credentials in your config, whether the backup destination path exists and is writable by the current process, and whether it can detect a migration adapter (Flyway, Liquibase, or raw SQL dir). A healthy output looks like this:
portabase doctor --config ./portabase.yml
✔ DB connectivity postgres@localhost:5432/myapp (23ms)
✔ Backup path /var/backups/portabase (writable)
✔ Migration adapter flyway detected at db/migration
All checks passed.
If the migration adapter check fails, it usually means your migration_dir key in portabase.yml is pointing at the wrong relative path. The doctor output tells you exactly what path it tried, which saves the usual "but it works on my machine" back-and-forth.
First-Time Config: portabase.yml
The thing that will burn you immediately: migrations.adapter defaults to none, and Portabase won't warn you. Your first backup completes successfully, you get a green checkmark, and your migration state is quietly absent from the archive. I only caught this when I tried a test restore and noticed the flyway_schema_history table was missing. Set migrations.adapter explicitly on day one — don't assume the default is safe.
Here's a minimal config that actually works for Postgres with Flyway tracking. I'm running this against Postgres 16 with Flyway 10.x:
# portabase.yml
db:
  driver: postgres
  dsn: "${DATABASE_URL}"   # env var interpolation works here; no extra quoting needed
backup:
  destination: "s3://my-backups-bucket/portabase"
  compression: zstd        # new in v1.13 — use this, not gzip
  retention_days: 30
migrations:
  adapter: flyway
  schema_history_table: flyway_schema_history   # must match your Flyway config exactly
The db.dsn field handles ${VAR_NAME} interpolation at runtime, so you can keep secrets out of the file. I've tested this with both a raw connection string and a URL-format DSN — both work. For backup.destination, S3 worked first try with standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars. GCS was more annoying: the tool doesn't pick up Application Default Credentials automatically. You need GOOGLE_APPLICATION_CREDENTIALS pointing at a service account JSON file set explicitly in your environment, otherwise you'll get a vague auth error with no path to debug it.
The backup.compression: zstd option is new in v1.13 and worth switching to immediately if you're on a text-heavy schema. I tested against a 4GB dump that was mostly JSONB columns and got roughly 40% smaller archives compared to gzip at its default level. The config previously only accepted gzip — if you have older automation that hardcodes that value, it still works, but zstd is strictly better here unless you need the archive readable by tools that don't support it.
The four supported destination schemes are local:// (or just a plain path), s3://, gs://, and sftp://. Local paths are useful for CI environments where you're piping the backup elsewhere yourself. SFTP support is there but the docs don't cover key-based auth clearly — you need to set PORTABASE_SFTP_KEY_PATH in your environment; password auth is the assumed default and that's not obvious from the config file alone.
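For reference, here's what a working key-based SFTP setup looked like on my end. The exact destination URL shape is my reading of the scheme list above rather than anything from the docs, so verify it against your version:
# Key-based SFTP: the env var carries the key path, password auth stays unused
export PORTABASE_SFTP_KEY_PATH="$HOME/.ssh/backup_ed25519"   # path is yours to choose
# portabase.yml destination (illustrative): "sftp://backup@archive.internal/var/backups"
portabase backup --config portabase.yml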
One structural opinion: keep your portabase.yml in version control and never put credentials directly in it. The DSN interpolation exists specifically for this. If you're deploying this in Kubernetes, a straightforward setup is mounting the config as a ConfigMap and injecting the DSN via a secret environment variable — the ${DATABASE_URL} syntax handles that pattern without any extra tooling.
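If it helps, that Kubernetes pattern is two kubectl commands plus a couple of lines in the pod spec. The resource names here are mine, not anything Portabase prescribes:
# ConfigMap for the non-secret config, Secret for the DSN — names are illustrative
kubectl create configmap portabase-config --from-file=portabase.yml
kubectl create secret generic portabase-dsn \
  --from-literal=DATABASE_URL='postgres://app_user:REDACTED@db:5432/myapp'
# In the pod spec: mount portabase-config as a volume for the config file, and
# set DATABASE_URL from the secret (env.valueFrom.secretKeyRef). Portabase's
# ${DATABASE_URL} interpolation then resolves inside the container at runtime.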
Running Your First Backup
The thing that catches most people off guard isn't the backup itself — it's how much information Portabase surfaces during a run. Before your first backup finishes you'll have seen a connection check, a per-table row count, the current migration state, and a final checksum. That's not noise; it's the baseline you'll reference when something breaks at 2am.
Here's the actual command you want to run before any deploy. Tag it with the commit hash so the backup is queryable later:
# --config points to your portabase.yml; --tag is free-form but commit hashes are gold
portabase backup --config portabase.yml --tag pre-deploy-$(git rev-parse --short HEAD)
A healthy run looks like this in stdout:
[portabase 1.13] checking connection to postgres://prod-db:5432... ok
[portabase 1.13] row snapshot: users=84201, orders=312894, sessions=1203, ...
[portabase 1.13] migration state: latest=20240318_add_payment_index (applied)
[portabase 1.13] compressing: 2.1 GB → 430 MB (gzip level 6)
[portabase 1.13] uploading to s3://your-bucket/backups/pre-deploy-a3f91c2.pbx
[portabase 1.13] checksum SHA256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
The checksum line matters more than it looks. Portabase writes that hash to its internal manifest, so when you run portabase restore later it can verify the file wasn't corrupted in transit or at rest. The row-count snapshot is similarly useful — if a restore lands you back at 84k users when you expected 85k, the snapshot tells you whether that delta was already there before the backup ran, not after.
Use the --tag flag from your very first backup. The discipline pays off fast: once you have 200 backups in the store, portabase list --tag pre-deploy filters the list down to exactly the backups you made before each deploy, sorted newest first. Without tags you're scanning by timestamp and doing mental arithmetic about which deploy happened when. The tag value is just a string, so anything meaningful to your team works — pre-deploy, weekly, post-migration-v4, whatever.
For scheduling you have two real options. The built-in portabase daemon mode runs as a long-lived process and exposes a Prometheus metrics endpoint on :9110 — backup duration, last success timestamp, upload bytes, the works. If you're already running Grafana or any Prometheus-compatible scraper, spinning up the daemon and pointing a scraper at it takes about five minutes and gives you alerting on missed backups for free. The cron path is simpler if you're already leaning on cron for other ops tasks:
# /etc/cron.d/portabase — daily at 03:00, output goes to syslog
0 3 * * * deploy /usr/local/bin/portabase backup --config /etc/portabase.yml --tag daily 2>&1 | logger -t portabase
I'd pick daemon if you have observability infrastructure already set up. I'd pick cron if you don't want another process to babysit. Don't run both — they'll step on each other's lock file.
One real wall I hit: we have a database with around 60 schemas (multi-tenant setup, one schema per account) and v1.13 dumps them sequentially. A backup that should take four minutes was taking closer to eighteen. Parallel schema dumping is on the Portabase roadmap but hasn't shipped yet. The workaround I landed on was setting schema_filter in the config to back up a subset of schemas per run on a rotation, which isn't ideal but keeps the window tight:
# portabase.yml — rotate schema groups if you have 50+ schemas
backup:
  schema_filter:
    - "public"
    - "tenant_001_*"   # glob supported in v1.13
  compression: gzip
  level: 6
If your database is single-schema or under ~20 schemas you won't hit this at all. It's specifically the wide multi-tenant case that bites, and knowing it's a v1.13 limitation (not your config) saves you the hour of chasing the wrong thing.
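For what it's worth, my rotation is a tiny wrapper that templates the schema_filter group into the config before each nightly run. The group globs and the template file are specific to my setup, so treat this as a sketch rather than the way to do it:
#!/usr/bin/env bash
# Rotate tenant schema groups across nights. Assumes portabase.tpl.yml contains
# a __GROUP__ placeholder inside schema_filter; the globs below are examples.
set -euo pipefail
groups=("tenant_00*" "tenant_01*" "tenant_02*")
idx=$(( 10#$(date +%j) % ${#groups[@]} ))   # 10# avoids octal parsing of day-of-year
sed "s/__GROUP__/${groups[$idx]}/" portabase.tpl.yml > /tmp/portabase-rotated.yml
portabase backup --config /tmp/portabase-rotated.yml --tag "rotation-$idx"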
Restoring a Backup (The Part That Actually Matters)
Most backup tools have decent write paths and terrible read paths. The restore command is where you find out your backup was actually corrupted, or that you're missing 3 migrations, or that your target database is in some ambiguous half-applied state. Portabase v1.13 put obvious effort into making restore a first-class operation rather than an afterthought.
Start by finding your backup ID. portabase list pulls from wherever your config points — S3, local filesystem, GCS — and gives you a timestamped list with sizes and checksum status:
# portabase.yml points to your storage backend
portabase list --config portabase.yml
# Output looks like:
# ID CREATED SIZE CHECKSUM
# pbx-20240318-143022 2024-03-18 14:30:22 412 MB OK
# pbx-20240317-143018 2024-03-17 14:30:18 409 MB OK
# pbx-20240316-090001 2024-03-16 09:00:01 408 MB WARN
That WARN on the third entry means the checksum doesn't match what was recorded at backup time — don't restore that one without investigating. Once you pick your ID, the restore sequence unpacks the .pbx archive, validates the checksum again at restore time, drops and recreates the target database, restores the SQL dump, and then replays migration state into your history table. The drop-and-recreate step requires the --confirm-drop flag explicitly. This is the right call — I've seen too many "oops, production" moments with tools that just ask for a y/n prompt:
portabase restore \
  --config portabase.yml \
  --backup-id pbx-20240318-143022 \
  --confirm-drop
The migration replay is what separates Portabase from a glorified pg_dump wrapper. After restore, your migration tool — Flyway, Liquibase, golang-migrate, whatever — reads the history table and sees exactly the migrations that were applied when that backup was taken. No phantom conflicts, no "migration V12 already exists but with different checksum" errors, no manual surgery on the schema history table at 2am. The backup captured migration state as a first-class artifact inside the .pbx, and restore writes it back faithfully. I switched from a manual dump-and-restore script to Portabase specifically because of this — reconciling migration state after a restore was always the brittle part.
Before touching any real database, run the dry-run mode. This is something I've made mandatory in our staging PR process:
portabase restore \
  --config portabase.yml \
  --backup-id pbx-20240318-143022 \
  --confirm-drop \
  --dry-run
# Prints step-by-step plan:
# [DRY-RUN] Would download pbx-20240318-143022 from s3://my-bucket/backups/
# [DRY-RUN] Would validate checksum: sha256:a3f1...
# [DRY-RUN] Would DROP DATABASE staging_app
# [DRY-RUN] Would CREATE DATABASE staging_app
# [DRY-RUN] Would apply SQL dump (412 MB)
# [DRY-RUN] Would replay 47 migration records into schema_history
Now for the v1.13 gotcha that burned me once and will burn you if you're not careful: if the target database has a different collation than the source — say your production Postgres 16 instance is en_US.UTF-8 and your restore target is C.UTF-8 — the restore succeeds. It doesn't fail, it doesn't abort. It logs a WARNING line in the middle of several hundred lines of output and keeps going. In testing this is invisible. In a CI log you'll never see it. The fix is simple but you have to remember to do it:
portabase restore \
  --config portabase.yml \
  --backup-id pbx-20240318-143022 \
  --confirm-drop \
  2>&1 | tee restore.log
# Then immediately:
grep "WARN" restore.log
Pipe through tee so you get both live output and a file to grep. Collation mismatches cause subtle ordering bugs in queries that use string comparison — the kind that only show up in edge cases weeks later. The Portabase team knows about this and the GitHub issue tracker has a discussion about making it a hard failure by default in v1.14, but for now treat the WARN check as non-optional.
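In CI I'd stop relying on remembering the grep and make it a hard gate. A small wrapper, assuming the WARN prefix stays stable across Portabase versions:
#!/usr/bin/env bash
# restore-with-guard.sh — fail the job if the restore logged any warnings
set -euo pipefail
portabase restore \
  --config portabase.yml \
  --backup-id "$1" \
  --confirm-drop \
  2>&1 | tee restore.log
if grep -q "WARN" restore.log; then
  echo "restore finished but logged warnings — inspect restore.log" >&2
  exit 2
fi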
The 3 Things That Surprised Me After Daily Use
I expected a backup tool to be boring infrastructure — set it, forget it, maybe think about it when something breaks. Portabase v1.13 kept proving me wrong in small ways that actually changed how I work day-to-day. None of these are headline features in the README, which is part of why they caught me off guard.
The audit log is the one I've gotten the most mileage from. Every backup and restore operation writes a structured record to a local SQLite file at ~/.portabase/audit.db by default — timestamp, operation type, database name, migration state hash, and exit status. You can point it at a remote Postgres table instead if you want the logs centralized across machines. I've used this twice in code reviews to prove exactly when a schema migration landed in production. Instead of combing through CI logs or asking who deployed what, I ran:
# Query the local audit log directly
sqlite3 ~/.portabase/audit.db \
  "SELECT ts, operation, migration_hash, db_name FROM audit_log
   WHERE db_name = 'prod-main'
   ORDER BY ts DESC LIMIT 20;"
Timestamps don't lie, and neither does the migration hash. That hash is reproducible — same schema state always produces the same hash — so you can match it against your migration history without guessing.
The portabase diff command was the bigger surprise. I assumed it would dump two SQL schemas and leave me to eyeball the differences. Instead, running portabase diff --backup-id a3f1c --backup-id 9d72b gives you a structured side-by-side that separates schema changes (added/removed columns, index changes) from migration-state changes (which migrations are applied in each snapshot). The output looks like this:
Schema diff (a3f1c → 9d72b):
  + users.last_login_at TIMESTAMPTZ
  ~ orders.status VARCHAR(32) → VARCHAR(64)
Migration state diff:
  + 20240311_add_last_login [applied in 9d72b, missing in a3f1c]
That alignment between schema changes and migration state is what makes it readable. You can see that the column appeared at the exact same point the migration was applied, which is reassuring, or alarming if they don't line up. I've started using this before any production deploy that touches an existing table — treating it as a last sanity check instead of staring at the migration file alone.
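To make that sanity check routine, I pull the two newest pre-deploy backups straight out of portabase list and diff them. This leans on the list output format shown in the restore section (one header row, newest first), so adjust the awk if yours differs:
# Diff the two most recent pre-deploy backups before shipping
ids=($(portabase list --config portabase.yml --tag pre-deploy | awk 'NR==2 || NR==3 {print $1}'))
portabase diff --backup-id "${ids[0]}" --backup-id "${ids[1]}"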
The --target-dsn flag on restore deserves more attention than it gets. Before this, spinning up a production clone for debugging a gnarly data issue meant: dump prod, create a new database, restore the dump, update connection strings, pray you didn't skip a step. Now it's one command:
portabase restore \
  --backup-id 9d72b \
  --target-dsn "postgres://user:pass@localhost:5432/prod_clone_debug" \
  --create-if-missing   # creates the target DB if it doesn't exist
The --create-if-missing flag handles database creation so you don't have to pre-create it with psql first. The restored clone also has its migration state intact, so if your app checks migrations on startup it won't complain or try to re-run anything. The honest trade-off: this only works cleanly if your backup includes the migration state metadata, which requires you to have been using Portabase for backups consistently — not a one-off restore from a raw pg_dump file.
When NOT to Use Portabase
The thing that'll bite you hardest is discovering a tool doesn't fit your stack after you've already wired it into your CI pipeline. So here's the honest breakdown of where Portabase v1.13 is the wrong call.
Your database isn't PostgreSQL, MySQL 8+, or MariaDB 10.6+
If you're running MongoDB, Redis, CockroachDB, or anything outside that three-database list, Portabase simply won't connect. There's no plugin interface in v1.13 for custom drivers. I've seen teams try to work around this by dumping MongoDB to a staging MySQL instance first — that's two failure points instead of one and completely defeats the operational simplicity that makes Portabase worth considering. For MongoDB you're looking at mongodump piped to your own archiver, or Atlas's native backup UI. For Redis, redis-cli --rdb with a proper rotation script is 20 lines of bash and doesn't need a framework around it.
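For scale, the Redis case really is that small. Here's roughly the whole thing, assuming a local redis-cli and a 14-snapshot retention window; paths and counts are mine:
#!/usr/bin/env bash
# Minimal Redis snapshot rotation — the ~20 lines of bash the text refers to
set -euo pipefail
dest=/var/backups/redis
mkdir -p "$dest"
redis-cli --rdb "$dest/dump-$(date +%Y%m%d-%H%M%S).rdb"
# keep the 14 newest snapshots, delete the rest
ls -1t "$dest"/dump-*.rdb | tail -n +15 | xargs -r rm --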
Your migration tool isn't on the supported list
Portabase's migration adapter system knows about four tools: Flyway, Liquibase, Alembic, and golang-migrate. If your team wrote its own migration runner — which is more common than it should be — or you're using something like DBmate, Sqitch, or a Rails-style ActiveRecord runner, Portabase falls back to adapter: none in the config. That means your backup still works, but migration state bundling is completely disabled. You get a plain dump with no schema version metadata attached. At that point you're carrying all the operational weight of Portabase without the feature that differentiates it from just running pg_dump yourself.
# What you'll see in portabase.yml when your tool isn't recognized
migrations:
  adapter: none        # falls back silently — check your logs for the warning
  state_bundle: false  # migration history won't be included in the archive
The fallback is silent by default in v1.13 unless you set log_level: debug. A lot of people miss this and assume migration state is being captured when it isn't.
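The defensive habit I'd suggest if you're anywhere near the adapter boundary: grep the backup output for the manifest line (the same line shown in the sample output earlier) and fail loudly when it's missing:
# Catch the silent adapter fallback: a healthy backup prints a manifest line
portabase backup --config portabase.yml 2>&1 | tee backup.log
if ! grep -q "Migration manifest" backup.log; then
  echo "no migration manifest captured — check migrations.adapter" >&2
  exit 1
fi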
Databases in the hundreds-of-gigabytes range
Portabase uses a single-process sequential dump under the hood. For a 50GB PostgreSQL database that's fine. Push into the 300GB–500GB territory and you'll feel the wall. pg_dump with parallel jobs via -j and a custom archiver format can saturate your I/O in a way Portabase's compression pipeline simply won't match right now. Here's the comparison that matters in practice:
# Portabase on a 400GB Postgres DB — single process, ~90 min on an 8-core machine
portabase backup --db production --compress zstd
# pg_dump with 8 parallel workers — same DB, ~22 min
pg_dump -Fd -j 8 -f /backups/production_dir production_db
That 4x speed gap compounds badly when you're running nightly backups with a tight window. Portabase's team has parallel compression on the roadmap, but v1.13 doesn't have it. Use the right tool for scale.
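For completeness, the read path that pairs with that directory-format dump is also parallel, which matters just as much during an incident. This is stock PostgreSQL tooling, nothing Portabase-specific:
# Parallel restore of the directory-format dump
pg_restore -j 8 --clean --if-exists -d production_db /backups/production_dir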
You need point-in-time recovery
Portabase is entirely snapshot-based. You get a backup at a moment in time, full stop. If your recovery requirements include "restore to the state at 14:37:22 before the bad deploy at 14:40" — which is a real and reasonable requirement for production financial data — you need WAL streaming with Barman or pgBackRest. Those tools maintain a continuous WAL archive you can replay forward or backward to any transaction. Portabase cannot do this and isn't trying to. If PITR is a compliance or business requirement, Portabase isn't even in the conversation for your primary backup strategy.
Your team already has Barman or AWS RDS automated backups working well
This is the subtlest case. If you have a mature Barman setup with tested restore runbooks, or you're on RDS with automated backups and snapshot export configured, the only thing Portabase adds is migration state bundling — the ability to know which Flyway or Alembic version was applied at backup time. Ask yourself honestly: is that worth adding another binary to your deployment, another config file to maintain, and another thing to break during an incident? For a lot of teams the answer is no. The migration-state-bundling feature shines most when you're starting fresh or when your current backup process has no schema versioning context at all. Bolting it onto an already solid RDS setup is often organizational overhead in exchange for a modest documentation improvement.
Portabase vs. Rolling Your Own pg_dump Scripts
The thing that actually convinced me to look at Portabase wasn't the feature list — it was watching a teammate spend two hours debugging why a restore was "successful" but the app still crashed. The pg_dump script had worked fine for months. Nobody had tested that the migration table came back in the right state. This is the exact gap Portabase targets, and it's worth being honest about where that gap is real versus where it's marketing.
Here's the head-to-head across the things that actually matter in production:
- Migration state capture: Portabase bundles your migration history table (Flyway, Alembic, golang-migrate — it detects the history table by convention) into the backup artifact as a separate manifest. A hand-rolled pg_dump technically captures that table, but you have to explicitly include it in partial dumps and then verify it on restore. Most scripts don't. Portabase does this by default.
- Restore idempotency: Portabase wraps restores in a transaction with a pre-check — if the target DB already has the schema at that migration version, it aborts cleanly. Raw psql -f dump.sql will happily try to re-create tables that exist and throw a cascade of errors you have to manually triage.
- Checksum validation: SHA-256 over the full artifact before and after transfer, verified automatically on restore. Replicating this in bash is three lines — but I've seen those three lines get deleted "temporarily" during an incident and never come back.
- Remote storage support: Portabase v1.13 ships with S3-compatible and GCS backends via a single config key. DIY scripts need you to wire in aws s3 cp or gsutil separately, handle retry logic, and manage credentials yourself.
- Restore dry-run: portabase restore --dry-run validates the artifact, checks migration compatibility, and reports what would change without touching the DB. Genuinely hard to replicate without essentially building a second restore path.
- Audit log: a structured record of every backup/restore operation with operator identity, artifact hash, and outcome (local SQLite by default, remote Postgres if you centralize it). Your cron script logs to stdout at best.
- Setup time: Portabase takes maybe 20 minutes to configure. A production-grade custom script — one with checksums, idempotent restore, remote upload, and error handling — takes a day to write and another day to review properly.
# portabase.yml — this is the whole thing for a basic setup
db:
  driver: postgres
  dsn: "postgres://user:pass@localhost:5432/mydb"
backup:
  destination: "s3://myapp-backups/prod"
  checksum: true       # on by default, listed here for visibility
  retention_days: 30
migrations:
  adapter: flyway      # auto-detects if omitted, but explicit is better
The honest take: a well-maintained custom script absolutely can cover most of this ground. I've seen bash scripts with proper checksums, clean error handling, and even migration table verification — they exist. The real question is what happens when the person who wrote it leaves. Portabase's behavior is documented, versioned, and auditable by anyone on the team. Your predecessor's backup.sh is a 200-line file with two TODO comments and a variable named FINAL2.
Where DIY still wins is legitimate. If you're running parallel dump jobs across large tables with pg_dump -j 8, Portabase v1.13 doesn't expose that flag — you're capped at its single-stream backup. If you need custom --exclude-table patterns or you're dumping into a format that feeds directly into another pipeline, the flexibility isn't there yet. And some teams genuinely can't take a dependency on an external binary they don't compile themselves — security posture, air-gapped environments, whatever the reason. That's a real constraint and Portabase doesn't solve it.
The two places where Portabase wins so cleanly that I wouldn't try to replicate them in bash: the zero-config checksum-on-restore (because the validation is cryptographically tied to the artifact metadata, not a sidecar file that can drift) and the migration state bundling. Getting migration state capture right — handling partial migrations, dirty states, out-of-order applies — requires understanding the internals of each migration framework. Portabase has already done that work per framework. Writing it yourself means you're either doing it shallowly or spending a week on it.
Practical CI/CD Integration Example (GitHub Actions)
The thing that surprised me most when wiring Portabase into GitHub Actions was how clean the Docker-based approach is compared to installing it as a binary step. You get reproducible behavior across environments, and v1.13's image is pinned to a specific tag so you're not chasing down "works on my machine" issues mid-deploy. Here's the full pre-deploy backup job I'm running in production:
jobs:
  pre-deploy-backup:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/portabase/portabase:1.13.0
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4
      - name: Run pre-deploy backup
        env:
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          BACKUP_STORAGE_KEY: ${{ secrets.BACKUP_STORAGE_KEY }}
        run: |
          portabase backup \
            --config ./portabase.yml \
            --tag "pre-deploy-${{ github.sha }}" \
            --fail-on-warn \
            --output-id BACKUP_ID
          echo "BACKUP_ID=$(cat .portabase-output/backup-id)" >> $GITHUB_ENV
I commit a portabase.yml directly to the repo rather than shoving the whole config into a single opaque secret. The sanitized file uses env var references for anything sensitive, so the structure and intent are visible in code review while credentials stay out of git. This has saved me twice when a teammate changed the backup target bucket — they could see the config, open a PR, and we caught a misconfiguration before it ever hit CI. The file looks like this:
# portabase.yml — committed to repo, credentials via environment
db:
  driver: postgres
  dsn: "postgres://app_user:${DB_PASSWORD}@${DB_HOST}:5432/myapp_production"
backup:
  destination: "s3://myapp-backups-prod/portabase"
  access_key: ${BACKUP_STORAGE_KEY}   # injected by CI, never committed
  compression: zstd
  retention_days: 30
migrations:
  adapter: flyway
The --fail-on-warn flag added in v1.13 is the one I'd have killed for six months ago. Before it existed, Portabase would silently log a warning when it detected a collation mismatch between source and target — the backup would succeed, and the warning would scroll past in CI logs nobody reads. With --fail-on-warn, that's now a hard exit code 2, which fails the step and blocks the deploy. If you're running Postgres 15 or 16 with custom collations or non-UTF8 encoding on legacy tables, you will hit this. Better to find out before the deploy than after a restore attempt at 2am.
The post-deploy verification step is lightweight enough that there's no excuse not to run it. It validates the archive integrity — checksums, index completeness, metadata — without pulling the actual data back down. I attach it to the deploy job as a final step:
post-deploy-verify:
  runs-on: ubuntu-latest
  needs: [pre-deploy-backup, deploy]
  container:
    image: ghcr.io/portabase/portabase:1.13.0
  steps:
    - name: Verify backup integrity
      env:
        BACKUP_STORAGE_KEY: ${{ secrets.BACKUP_STORAGE_KEY }}
      run: |
        portabase verify \
          --backup-id ${{ env.BACKUP_ID }} \
          --config ./portabase.yml
        # Non-zero exit here means the archive is corrupt or incomplete.
        # The deploy already happened, but now you know your rollback is unreliable.
One gotcha: the BACKUP_ID env var doesn't automatically propagate between jobs in GitHub Actions — $GITHUB_ENV is job-scoped. You need to write it to a job output explicitly and consume it in the dependent job. Either use outputs: on the backup job, or write the ID to an artifact and pull it in the verify step. I went with job outputs because artifacts add latency and I didn't want the verify step to be skippable if the artifact upload failed:
pre-deploy-backup:
  outputs:
    backup_id: ${{ steps.backup.outputs.backup_id }}
  steps:
    - name: Run pre-deploy backup
      id: backup
      run: |
        portabase backup --config ./portabase.yml --tag "pre-deploy-${{ github.sha }}" --fail-on-warn
        echo "backup_id=$(cat .portabase-output/backup-id)" >> $GITHUB_OUTPUT
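And the consuming side, since that's the half most examples leave out: the verify job reads the ID through the needs context instead of $GITHUB_ENV. This is stock GitHub Actions syntax, nothing Portabase-specific:
post-deploy-verify:
  runs-on: ubuntu-latest
  needs: [pre-deploy-backup, deploy]
  container:
    image: ghcr.io/portabase/portabase:1.13.0
  steps:
    - name: Verify backup integrity
      env:
        BACKUP_STORAGE_KEY: ${{ secrets.BACKUP_STORAGE_KEY }}
      run: |
        portabase verify \
          --backup-id "${{ needs.pre-deploy-backup.outputs.backup_id }}" \
          --config ./portabase.yml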