Deploynix

Originally published at deploynix.io

The 3-2-1 Backup Rule for Laravel Apps on Deploynix

It takes months to build an application and seconds to lose everything. A corrupted database migration, an accidental DELETE without a WHERE clause, a compromised server, a cloud provider outage, or even a well-meaning team member who runs the wrong command. Data loss scenarios are varied, but the recovery strategy has been consistent for decades: the 3-2-1 backup rule.

This rule is simple enough to remember and comprehensive enough to protect you from virtually any failure scenario. This guide explains the rule, shows you how to implement it for your Laravel application using Deploynix, and covers the practices that separate teams who recover from disaster from those who don't.

What Is the 3-2-1 Rule?

The 3-2-1 backup rule was originally formulated by photographer Peter Krogh and has since been adopted across every field that values data. It states:

  • 3 copies of your data (one primary and two backups).
  • 2 different types of storage media.
  • 1 copy stored offsite (geographically separate from your primary data).

Each element of the rule addresses a specific failure mode.

Three copies protect against media failure. If one copy is corrupted or lost, you have two more. The probability of all three failing simultaneously is negligibly small.

Two media types protect against media-specific failures. If all your backups are on the same type of storage, a systematic failure (like a firmware bug affecting a specific SSD model, or a cloud provider's storage subsystem failure) could take out all copies simultaneously.

One offsite copy protects against physical disasters. A fire, flood, or break-in at your data center could destroy every server in the room. An offsite copy ensures you can recover even in the worst-case scenario.

Mapping the 3-2-1 Rule to Laravel Applications

For a Laravel application, your critical data includes:

  • Database contents: Your MySQL, MariaDB, or PostgreSQL database containing user data, application state, and business logic.
  • Uploaded files: User-generated content stored on disk or in object storage.
  • Environment configuration: Your .env file and any server-specific configuration.
  • Application code: Your source code (already backed up in Git, but worth including in your strategy).

Let's map the 3-2-1 rule to these components.

Copy 1: The Primary Data (Your Live Server)

Your first copy is the live data on your Deploynix server. This is your primary copy, the one serving your users. It lives on the server's disk, whether that's a DigitalOcean droplet, a Vultr instance, a Hetzner server, a Linode machine, or an AWS EC2 instance.

This copy is the most accessible but also the most vulnerable. It's subject to accidental deletion, application bugs, server compromise, and hardware failure.

Copy 2: Automated Server Backup

Your second copy should be an automated backup that runs on a schedule and is stored separately from your application data.

Database backups with Deploynix:

Deploynix runs automated daily database backups. You can also trigger a manual backup at any time through the dashboard for extra coverage before risky operations like migrations.

These backups capture a consistent snapshot of your database using mysqldump for MySQL/MariaDB or pg_dump for PostgreSQL. They're compressed and stored according to your configuration.
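As a rough sketch of what those dump invocations look like, here's a small helper that assembles the command line per engine. The helper itself is illustrative, not part of Deploynix; the flags are standard mysqldump/pg_dump options:

```python
# Sketch: build the dump command for a given database engine.
# --single-transaction gives a consistent InnoDB snapshot without
# locking tables; -Fc is pg_dump's compressed custom format,
# restorable with pg_restore.

def dump_command(engine: str, database: str, outfile: str) -> list[str]:
    if engine in ("mysql", "mariadb"):
        return ["mysqldump", "--single-transaction", "--quick",
                "--routines", database, f"--result-file={outfile}"]
    if engine == "pgsql":
        return ["pg_dump", "-Fc", "-f", outfile, database]
    raise ValueError(f"unsupported engine: {engine}")

print(dump_command("mysql", "app", "/tmp/app.sql"))
```

Passing the resulting list to a process runner (rather than interpolating into a shell string) also avoids quoting problems with unusual database names.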

File backups:

For user-uploaded files stored on the server's filesystem, you need a separate backup strategy. If you're using Laravel's public or local disk, these files live on the server and need to be backed up alongside your database.

Consider moving file storage to an object storage service (like DigitalOcean Spaces or AWS S3) using Laravel's s3 filesystem driver. This separates your file storage from your server, providing implicit redundancy since object storage services handle their own replication.

Copy 3: Offsite Backup to Object Storage

Your third copy should be stored in a completely different location, ideally with a different provider. This is where Deploynix's backup storage integration comes in.

Configure offsite backup storage:

Deploynix supports sending backups to:

  • AWS S3: Store backups in any AWS region worldwide.
  • DigitalOcean Spaces: Object storage integrated with DigitalOcean.
  • Wasabi: Cost-effective S3-compatible storage.
  • Custom S3-compatible: Any provider that supports the S3 API.

Choosing an offsite location:

The offsite copy should be geographically distant from your primary server. If your server is in a DigitalOcean data center in New York, store your offsite backup in an AWS S3 bucket in Frankfurt or a Wasabi bucket in Amsterdam. This protects against regional disasters, provider-specific outages, and data center failures.

Implementing Two Media Types

In the context of cloud hosting, "two media types" requires creative interpretation. You're not going to burn backup DVDs. Instead, think of "media types" as "storage systems with different failure modes."

Option 1: Server disk + Object storage

Your primary data is on the server's block storage (SSD). Your backup is in object storage (S3-compatible). These systems have fundamentally different architectures and failure modes.

Option 2: Object storage + Different cloud provider

If your primary data is already on one cloud provider, use a different provider for backups. If you host on DigitalOcean, back up to AWS S3. If you host on Hetzner, back up to Wasabi. This diversifies your risk across providers.

Option 3: Cloud + Local/on-premise

For the most critical data, periodically download backups to a local NAS or on-premise server. This provides the most diverse media type separation. It's overkill for most applications but important for businesses with strict data retention requirements.
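If you do pull periodic copies down to local storage, verify each file against a checksum recorded at upload time so silent corruption in transit doesn't go unnoticed. A minimal sketch (the idea of a recorded digest is an assumption about your workflow, not a Deploynix feature):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MB chunks so large backups don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, expected: str) -> bool:
    return sha256_of(path) == expected

# Demo: write a stand-in file, record its digest, then verify the copy.
backup = Path("/tmp/demo-backup.sql.gz")
backup.write_bytes(b"-- pretend this is a compressed dump\n")
digest = sha256_of(backup)
print(verify(backup, digest))  # True when the copy is intact
```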

A Practical Implementation Plan

Here's a concrete plan for implementing 3-2-1 backups for a Laravel application on Deploynix:

Step 1: Configure Automated Database Backups

In the Deploynix dashboard, navigate to your server's backup settings:

  1. Select your database.
  2. Configure your offsite storage provider (AWS S3, DigitalOcean Spaces, Wasabi, or custom).
  3. Deploynix runs automated daily backups on a fixed schedule. For additional protection before risky operations, trigger a manual backup through the dashboard.
  4. Backups are retained according to a grandfather-father-son policy (daily, weekly, and monthly retention periods).

Step 2: Configure File Backups

If your application stores user uploads on the local filesystem:

Option A (Recommended): Move file storage to object storage.

Update your Laravel filesystem configuration to use an S3-compatible disk:

```php
// config/filesystems.php
'disks' => [
    'uploads' => [
        'driver' => 's3',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => env('AWS_BUCKET'),
    ],
],
```

Files stored in object storage are automatically replicated within the provider's infrastructure, giving you inherent redundancy.

Option B: Include file directories in your backup.

If files must remain on the server, configure a cron job or deployment hook that archives the upload directory and pushes it to your offsite storage:

```shell
# Archive today's uploads and push the archive offsite.
# Compute the timestamp once so the filename can't change mid-run.
STAMP=$(date +%Y%m%d)
tar -czf "/tmp/uploads-${STAMP}.tar.gz" /home/deploynix/your-site/storage/app/public
aws s3 cp "/tmp/uploads-${STAMP}.tar.gz" s3://your-backup-bucket/files/
rm "/tmp/uploads-${STAMP}.tar.gz"
```

Step 3: Back Up Environment Configuration

Your .env file and server configuration are small but critical. Losing them means you'll need to reconstruct every API key, database credential, and service configuration from memory.

Practical approaches:

  • Store a copy of production environment variables in a password manager (1Password, Bitwarden) accessible to authorized team members.
  • Use Laravel's encrypted environment files (php artisan env:encrypt) and commit the encrypted file to your repository.
  • Deploynix stores your environment variables in its platform, providing an additional copy outside your server.

Step 4: Verify Your Code Is in Version Control

This should go without saying, but ensure your application code is committed to a Git repository hosted on GitHub, GitLab, Bitbucket, or another provider. Your code repository is effectively an offsite backup of your application logic. Deploynix integrates with GitHub, GitLab, Bitbucket, and custom Git providers for deployments.

Testing Your Backups

A backup that hasn't been tested is a wish, not a plan. Regular restore testing is the most important and most neglected part of any backup strategy.

Monthly Restore Drill

Schedule a monthly exercise where you:

  1. Download the latest backup from your offsite storage.
  2. Restore it to a staging or test environment.
  3. Verify data integrity (record counts, recent records, key data points).
  4. Document any issues or improvements needed in the process.

What to Verify After Restoration

  • Record counts: Compare the number of records in key tables (users, orders, etc.) against production.
  • Recent data: Verify that the most recent records are present and correct.
  • Relationships: Check that foreign key relationships are intact.
  • Files: If you back up uploaded files, verify they're accessible and uncorrupted.
  • Application functionality: Boot the Laravel application against the restored database and verify core features work.

Automation

Consider automating restore testing with a script that:

  1. Downloads the latest backup.
  2. Restores it to a test database.
  3. Runs a set of verification queries.
  4. Reports the results to your team.
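The verification step can be a handful of sanity queries run against the restored database. Here's a sketch using sqlite3 as a stand-in so it's self-contained; in practice you'd point the same checks at the restored MySQL or PostgreSQL instance, and the table names are hypothetical:

```python
import sqlite3

CHECKS = [
    # (description, query, predicate on the scalar result)
    ("users table is non-empty", "SELECT COUNT(*) FROM users", lambda n: n > 0),
    ("orders reference valid users",
     "SELECT COUNT(*) FROM orders o LEFT JOIN users u ON o.user_id = u.id "
     "WHERE u.id IS NULL",
     lambda n: n == 0),
]

def verify_restore(conn: sqlite3.Connection) -> list[str]:
    """Run each check; return descriptions of any that failed."""
    failures = []
    for desc, query, ok in CHECKS:
        (value,) = conn.execute(query).fetchone()
        if not ok(value):
            failures.append(desc)
    return failures

# Demo against an in-memory database standing in for the restored copy.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users (id) VALUES (1), (2);
    INSERT INTO orders (id, user_id) VALUES (10, 1), (11, 2);
""")
print(verify_restore(conn))  # [] means every check passed
```

Wiring the failure list into a Slack webhook or email gives you the team report from step 4.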

Retention Policies

How long should you keep backups? The answer depends on your data recovery needs and regulatory requirements.

Minimum recommended retention:

  • Daily backups: Keep for 30 days.
  • Weekly backups: Keep for 12 weeks.
  • Monthly backups: Keep for 12 months.

This "grandfather-father-son" approach gives you fine-grained recovery for recent issues (daily backups from the last month) and broader coverage for older problems (monthly backups from the last year).
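The pruning decision behind grandfather-father-son retention is easy to express in code. This sketch uses a common simplification: keep everything from the last 30 days, keep Monday backups as the weekly representative for 12 weeks, and keep first-of-month backups for a year (the cutoffs mirror the numbers above; adapt them to your policy):

```python
from datetime import date

def keep(backup_date: date, today: date) -> bool:
    """Return True if a backup taken on backup_date should be retained."""
    age = (today - backup_date).days
    if age < 0:
        return False  # ignore timestamps from the future
    if age <= 30:
        return True                                   # daily tier
    if age <= 12 * 7 and backup_date.weekday() == 0:
        return True                                   # weekly tier (Mondays)
    if age <= 365 and backup_date.day == 1:
        return True                                   # monthly tier
    return False

today = date(2025, 6, 15)
print(keep(date(2025, 6, 1), today))   # True  (within 30 days)
print(keep(date(2025, 3, 1), today))   # True  (monthly tier)
print(keep(date(2024, 3, 1), today))   # False (past retention)
```

A cleanup job would list the backup objects, parse their dates, and delete any for which `keep()` returns False.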

Compliance considerations:

Some industries require specific retention periods:

  • GDPR doesn't mandate retention periods but requires you to justify how long you keep personal data.
  • SOC 2 typically requires audit logs and backups for at least a year.
  • HIPAA requires backups for at least six years.
  • PCI DSS requires at least a year of audit trail history.

Lifecycle policies:

If you use AWS S3, configure lifecycle policies to automatically:

  • Transition daily backups older than 30 days to S3 Infrequent Access.
  • Transition monthly backups older than 6 months to S3 Glacier.
  • Delete backups that exceed your retention policy.

This reduces storage costs while maintaining access to historical backups.
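Those lifecycle rules translate into an S3 lifecycle configuration. Below is a sketch of the rule document as the structure boto3's `put_bucket_lifecycle_configuration` expects; the `daily/` and `monthly/` key prefixes and the expiration days are assumptions about your bucket layout and policy, not Deploynix defaults:

```python
# Lifecycle rules mirroring the bullets above. Prefixes and expiration
# windows are placeholders -- match them to your own bucket layout.
LIFECYCLE = {
    "Rules": [
        {
            "ID": "daily-to-ia",
            "Filter": {"Prefix": "daily/"},
            "Status": "Enabled",
            # After 30 days, move daily backups to Infrequent Access...
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            # ...then drop them once past the retention window.
            "Expiration": {"Days": 60},
        },
        {
            "ID": "monthly-to-glacier",
            "Filter": {"Prefix": "monthly/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        },
    ]
}

# Applying it would look like (requires boto3 and AWS credentials):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="your-backup-bucket", LifecycleConfiguration=LIFECYCLE)
print([r["ID"] for r in LIFECYCLE["Rules"]])
```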

Disaster Recovery Scenarios

Let's walk through common disaster scenarios and how the 3-2-1 rule protects you.

Scenario 1: Accidental data deletion.

A developer runs a destructive query in production. Your most recent automated daily backup (copy 2) lets you restore the affected data, and a manual backup taken just before the risky operation narrows the loss window further.

Scenario 2: Server compromise.

An attacker gains access and encrypts your data (ransomware). Your offsite backup (copy 3) is stored with different credentials, so the attacker can't access it. Provision a new server through Deploynix, restore from your offsite backup, rotate all credentials, and investigate the breach.

Scenario 3: Cloud provider outage.

Your hosting provider experiences a prolonged regional outage. Your offsite backup on a different provider lets you provision a new server elsewhere and restore your application. With Deploynix supporting DigitalOcean, Vultr, Hetzner, Linode, AWS, and custom servers, you have options.

Scenario 4: Corrupted deployment.

A migration goes wrong and corrupts data. Your daily backup lets you roll back to the pre-deployment state. This is where Deploynix's rollback feature for deployments and your database backups work together.

Conclusion

The 3-2-1 backup rule has endured for decades because it's simple, effective, and addresses the fundamental risks to data availability. For your Laravel application on Deploynix, implementing it means:

  1. Your live server holds the primary copy.
  2. Automated Deploynix backups create a second copy.
  3. Offsite storage on AWS S3, DigitalOcean Spaces, Wasabi, or another S3-compatible provider creates the third copy on different infrastructure.

The rule is your foundation, but the practices around it matter just as much. Test your restores monthly. Document your recovery procedures. Review your retention policies quarterly. Train your team on the restoration process.

Data loss is always a matter of "when," not "if." The 3-2-1 rule ensures that "when" it happens, it's an inconvenience rather than a catastrophe.
