Databasus now guarantees full backup portability. Any backup file it creates can be decrypted, decompressed and restored using only standard open-source tools, with no Databasus installation needed. The backup files you store on S3, Google Drive, Azure Blob Storage, a local server or any other destination remain fully accessible and fully yours, regardless of what happens to your Databasus instance.
What "backup portability" actually means
Vendor lock-in is usually associated with proprietary SaaS platforms. But even open-source backup tools can create a hidden dependency: if the tool stops running on your server, changes its format, or simply isn't available at the moment you need it, your backup files might become unreadable without a working installation.
Databasus made a deliberate choice to avoid this from the start. Every backup follows a documented, standard pipeline using well-known tools at each step. The result is that your ability to restore data doesn't depend on Databasus being available at all.
This is particularly relevant in emergencies. When your server is down or your Databasus instance is corrupted, recovery needs to be as dependency-free as possible. You go to your storage, download the backup file, and restore it with tools you already have.
The portability guarantee covers:
- Decrypting backup files without Databasus or its internal database
- Decompressing and restoring the raw dump to your database
- Working with any storage destination where the backup file lives
How backup files are structured
Understanding recovery starts with knowing what's actually inside a backup file.
For PostgreSQL, Databasus uses pg_dump's custom format — not plain SQL. This format is binary and compact, and it restores significantly faster than text dumps. The dump is then compressed with zstd at level 5 and encrypted with AES-256-GCM using a key derived from your secret.key file.
| Layer | Tool / standard | Why this choice |
|---|---|---|
| Database dump | pg_dump custom format | Standard PostgreSQL utility, widely supported |
| Compression | zstd level 5 | Up to 20x smaller than raw SQL, fast decompression |
| Encryption | AES-256-GCM | Industry-standard cipher, no proprietary dependencies |
The same pipeline applies across all supported databases:
- PostgreSQL: pg_dump custom format
- MySQL and MariaDB: mysqldump
- MongoDB: mongodump
The decryption and decompression steps are identical across all database types. Only the final restore command differs.
What you need to recover from storage
To restore a backup manually, you don't need anything unusual. The full list of requirements:
- The backup file downloaded from your storage (S3, Google Drive, Azure Blob Storage, Dropbox, SFTP or local)
- The secret.key file from your Databasus data directory (/opt/databasus/databasus-data/secret.key)
- Standard CLI tools: openssl for decryption, zstd for decompression, and your database restore utility (pg_restore, mysql or mongorestore)
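Before an emergency happens, it's worth confirming these tools are actually installed. A minimal sketch that checks the two generic tools on PATH (the restore client varies by database, so it isn't checked here):

```shell
# Sanity check: are the generic recovery tools available?
missing=""
for tool in openssl zstd; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all generic recovery tools present"
else
  echo "install before recovering:$missing"
fi
```

On most Linux distributions both come from the stock package repositories (openssl and zstd packages).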
The secret.key file is the only piece of data unique to your Databasus instance. It's what Databasus uses to derive the encryption key for every backup it creates. Without it, decryption isn't possible. With it, you're completely independent — even from Databasus.
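To make the idea concrete, here is an illustration-only sketch of deriving key material from a key file. The actual derivation scheme is defined by Databasus and documented there; the SHA-256 hash below is an assumption chosen purely to show that key derivation is a deterministic, tool-independent operation, and the file path and contents are made up:

```shell
# ASSUMPTION: this is NOT the real Databasus derivation scheme, just a
# demonstration that a key file plus a documented hash yields the same
# key material on any machine.
printf 'example-secret-material' > /tmp/secret.key
key=$(openssl dgst -sha256 -r /tmp/secret.key | cut -d' ' -f1)
echo "derived key (hex): $key"
```

The point stands regardless of the scheme: anyone holding secret.key and the documented derivation can reproduce the encryption key without Databasus.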
This is why Databasus documentation explicitly recommends storing secret.key separately from your Databasus installation. One copy in a safe place is all you need to keep the recovery path open.
Detailed step-by-step instructions are available in the manual recovery guide.
How the recovery process works
The steps are straightforward regardless of which storage you use.
Step 1: Download the backup file
Go to your storage — S3 bucket, Google Drive folder, Azure container or wherever Databasus was sending backups — and download the file you want to restore. The filename includes the timestamp so you can pick the right one.
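Because the timestamp is embedded in the name, picking the newest backup can be done with plain shell sorting. A sketch using hypothetical filenames (the exact naming pattern depends on your setup):

```shell
# Hypothetical backup files whose names embed a zero-padded timestamp.
mkdir -p /tmp/backups
touch /tmp/backups/mydb-2024-05-01T0300.dump.zst \
      /tmp/backups/mydb-2024-06-01T0300.dump.zst
# Zero-padded timestamps sort lexicographically, so the last entry is
# the most recent backup.
latest=$(ls /tmp/backups/mydb-*.dump.zst | sort | tail -n 1)
echo "restoring from: $latest"
```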
Step 2: Decrypt the file
Use openssl with your secret.key to reverse the AES-256-GCM encryption. Databasus documentation provides the exact command. The output is a compressed dump file.
Step 3: Decompress
Run zstd -d on the decrypted file to get the raw dump.
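A round-trip sketch of this step, compressing a sample file at level 5 (the level the article says Databasus uses) and then decompressing it the way you would during recovery; the file paths are examples:

```shell
# Create a stand-in for a decrypted-but-compressed dump, then decompress.
echo "-- sample dump contents --" > /tmp/sample.dump
zstd -5 -f -q /tmp/sample.dump -o /tmp/sample.dump.zst   # stand-in for the backup
zstd -d -f -q /tmp/sample.dump.zst -o /tmp/restored.dump # the actual recovery step
cat /tmp/restored.dump
```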
Step 4: Restore to your database
For PostgreSQL, run pg_restore against your target database. For MySQL or MariaDB, use the mysql CLI. For MongoDB, use mongorestore. These are all standard tools included in official database distributions.
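The restore commands themselves need a running database, so they are shown as comments here; the runnable part just reports which restore clients this machine has (DBNAME and the dump paths are placeholders):

```shell
# Illustrative restore commands (placeholders, not executed here):
#   pg_restore --dbname=DBNAME restored.dump     # PostgreSQL custom format
#   mysql DBNAME < restored.sql                  # MySQL / MariaDB
#   mongorestore --db DBNAME restored-dump-dir/  # MongoDB
# Report which restore clients are installed:
for tool in pg_restore mysql mongorestore; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not installed"
  fi
done | tee /tmp/restore-tools.txt
```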
| Step | Tool | Works without Databasus? |
|---|---|---|
| Download from storage | Any S3/drive client, browser | Yes |
| Decrypt | openssl | Yes |
| Decompress | zstd | Yes |
| Restore to database | pg_restore, mysql, mongorestore | Yes |
Every step uses tools that exist independently. None of them require Databasus to be installed or running.
Why this design decision matters
Most backup tools optimize for metrics: stars, downloads, benchmark numbers. Databasus optimizes for something different — your ability to get your data back, under any circumstances, with or without the tool itself.
Using standard formats at every layer means the recovery path is documented, well-tested and doesn't depend on any single tool. You can verify backups manually. You can write your own restore scripts. You can recover your data from any storage even if you never touch Databasus again. That's by design.
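"Verify backups manually" can be as simple as recording a checksum when the backup lands in storage and comparing it before restoring. A sketch with sha256sum from GNU coreutils (the file paths are examples):

```shell
# Stand-in for a downloaded backup file.
echo "pretend this is an encrypted backup" > /tmp/backup.bin
# Record the checksum at backup time...
sha256sum /tmp/backup.bin > /tmp/backup.bin.sha256
# ...and verify it before attempting a restore.
sha256sum -c /tmp/backup.bin.sha256
```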
Databasus takes PostgreSQL backup seriously enough to document its own escape hatch. The goal is user independence, not dependency. A backup tool that holds your data hostage, even accidentally, isn't doing its job.
Databasus is a PostgreSQL backup tool used by individual developers and by engineering teams managing production databases at scale. Backup portability isn't a feature checkbox; it's a reflection of what the project actually values.
