
How do you back up your data?

Stefan Su (he/him) ・1 min read

How do you back up your data for preventive measures in case of any sort of hardware/software/OS failure?

And by data I mean your files (pictures, documents, etc.), as well as the state of your machine.

Do you use any special tools? Which files do you decide to back up? How regularly do you back up? And where do you keep your backups?

Thank you and have a great day! 🎈

Discussion
 

I recently started backing up my laptop to a WD My Cloud daily. I also back up to an external hard drive once a week, and I plan to tar my backups and send them to Google Drive as well.

Basically, I mount the device using sshfs (though any FUSE should work) and incrementally backup /home, /etc, and /var using Borg. Borg has been amazing, simple, and secure. I previously used duplicity but it was lacking in features. It does have support for a lot of backends though.
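A minimal sketch of that routine might look like this. The mount point, remote path, and repo location are hypothetical, and it assumes the repository was created once with `borg init`:

```shell
#!/bin/sh
# Mount the NAS over sshfs, take an incremental Borg archive of
# /home, /etc, and /var, prune old archives, then unmount.
MOUNT=/mnt/nas
REPO="$MOUNT/borg-repo"
ARCHIVE="$(hostname)-$(date +%Y-%m-%d)"   # one archive per day

sshfs backup@nas.local:/backups "$MOUNT"  # any FUSE backend works here
borg create --stats --compression lz4 \
    "$REPO::$ARCHIVE" /home /etc /var
borg prune --keep-daily 7 --keep-weekly 4 "$REPO"
fusermount -u "$MOUNT"
```

Borg deduplicates across archives, so only changed chunks are stored each day even though every archive looks like a full backup.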

A note about Western Digital: I would not recommend buying from them if you are a Linux user. I have not been able to connect to the device using the native client (no Linux support), the AFP connection has caused data corruption twice, and I cannot for the life of me figure out how to connect via NFS. Samba worked, but I found sshfs easier to script. Their SSH setup mandates connecting as root and authenticating with a password, which is frankly barbaric.

Why do I need daily encrypted snapshots of my system? Because I am a distro hopper, I tend to fubar my system at least twice a year, and I do not trust my wifi router/WD to protect my NAS alone.

If you don't have a lot of data, and want an easy solution, rsync.net has an intriguing deal.

 

Just a quick point on Western Digital... I haven't used their pre-built cloud-in-a-box solutions; however, I do use and swear by their WD Red line of NAS hard drives (I use them in my own cloud server, which runs CentOS). They are top quality at a good price; some large cloud providers agree and use them too. I've had very bad experiences with both Seagate and Hitachi drives in the past.

 

That's very fair. I should have specified their My Cloud product as I haven't used any others.

Do you have any experience with Synology?

I built my own NAS and haven't used the Synology boxes, but I have consistently read good things about them. They just seem a little pricey to me.

 

Similar experience here! All my Seagate drives failed (some lasted longer than others, but still), but I have a WD drive that outlasted 3 or so other Seagate drives.

 

I use snebu as a systemd unit, backing up ~/records, ~/pictures, a couple of other directories, and /etc/packages.txt. That last is a list of all packages I have installed, generated daily by another systemd unit that runs /bin/sh -c '/usr/bin/pacman -Qqe > /etc/packages.txt'. If I ever need to recover the system, I can get my personal files and the package listing out of snebu, feed the latter into pacman, pull my dotfiles down from GitHub, and I'm theoretically back in business.
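The daily package-list job could be wired up as a systemd service and timer pair, roughly like this (unit file names are made up; the snebu units themselves are omitted):

```ini
# /etc/systemd/system/package-list.service (hypothetical name)
[Unit]
Description=Dump explicitly installed pacman packages

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/usr/bin/pacman -Qqe > /etc/packages.txt'

# /etc/systemd/system/package-list.timer
[Unit]
Description=Run package-list.service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now package-list.timer`; `Persistent=true` catches up on runs missed while the machine was off.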

Backups go on a non-boot hard drive in the same machine. I've thought about pushing them to a NAS for extra safety but I only have so much time to juggle this stuff.

 

I use Time Machine, Carbon Copy Cloner and Backblaze. TM uses the default settings, CCC updates a bootable clone of my drive once a day and Backblaze continuously saves changes.
TM and CCC back up to two hard drives in a RAID (JBOD config) connected to my Mac. Backblaze is like my 'backup backup' in case all my hardware fails at the same time.

 

All my devices run ZFS; filesystems are snapshotted daily (and before major events like upgrades) and replicated to other ZFS nodes. Important data is even replicated to offsite ZFS storage.

All snapshot and replication management is done by a self-written open-source tool.
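The snapshot-and-replicate cycle can be sketched with plain zfs commands. Pool, dataset, and host names here are hypothetical, and `date -d` is GNU-specific:

```shell
#!/bin/sh
# Take today's snapshot, then send only the delta since yesterday's
# snapshot to a second ZFS node over SSH.
TODAY="daily-$(date +%Y-%m-%d)"
YESTERDAY="daily-$(date -d yesterday +%Y-%m-%d)"

zfs snapshot "tank/home@$TODAY"
zfs send -i "tank/home@$YESTERDAY" "tank/home@$TODAY" \
    | ssh backup-node zfs receive backup/home
```

Incremental sends (`-i`) keep replication cheap; a tool on top mainly handles naming, retention, and failure recovery.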

 

I use Borg with Rclone and an unholy combination of Dropbox and Mega.
I run Borg with systemd and a simple backup.sh script similar to this one: (blog.andrewkeech.com/posts/170718_...).
I use the pass Unix password manager to encrypt everything and Rclone to synchronize.
My dotfiles and config folders go to Dropbox, and all Borg backups go to Mega.
Installed packages are backed up with dpkg and apt-key.

dpkg --get-selections > $BACK_DATA_DIR/package.list
sudo cp -r /etc/apt/sources.list* $BACK_DATA_DIR/
sudo apt-key exportall > $BACK_DATA_DIR/repo.keys
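For completeness, the matching restore on a fresh machine might look roughly like this, assuming $BACK_DATA_DIR still holds the files saved above (an untested sketch):

```shell
#!/bin/sh
# Restore apt sources, keys, and package selections saved earlier.
sudo cp -r "$BACK_DATA_DIR"/sources.list* /etc/apt/
sudo apt-key add "$BACK_DATA_DIR/repo.keys"
sudo apt-get update
sudo dpkg --set-selections < "$BACK_DATA_DIR/package.list"
sudo apt-get dselect-upgrade   # install everything marked "install"
```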

I used to have my own HD solution, but I just don't trust
hardware and me maintaining it any more.
No more time or patience :p

 

I use Dropbox for storage and synchronization.

I link non-private things (like source code) directly to a Dropbox folder.

Private things, like my documents and home directory, I use duplicity to encrypt and store in a Dropbox folder. I run this command on a regular basis.
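The invocation might look something like this (the GPG key ID and paths are placeholders; after the first full run, subsequent duplicity runs are incremental by default):

```shell
#!/bin/sh
# Encrypt ~/Documents with GPG and store the backup volumes inside
# the local Dropbox folder, which Dropbox then syncs to the cloud.
TARGET="file://$HOME/Dropbox/backups/documents"
duplicity --encrypt-key ABCD1234 "$HOME/Documents" "$TARGET"
```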

For photos I mount an encrypted directory using encfs.

I additionally have a script that mounts an encrypted USB device and uses rsync to copy directories to it. This includes most private things stored on Dropbox, but not pictures, and not things which are publicly stored elsewhere. (Though as USB sizes increase I get more lenient in my filtering.)

 

I wrote a Python script (a tool, in fact) that can do full/incremental backups on a schedule specified in its config file and store them locally and/or in an AWS S3 bucket.
The original intention was just to brush up on my Python (I don't get to use it often, unfortunately), but it has become a daily tool on both my work and home laptops.
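The core loop of such a tool, archive, timestamp, upload, can be sketched in shell with the AWS CLI (this is not the author's Python tool; the bucket name and source path are placeholders, and `aws` is assumed to be configured):

```shell
#!/bin/sh
# Create a timestamped archive of a directory and push it to S3.
STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="/tmp/backup-$STAMP.tar.gz"
tar czf "$ARCHIVE" -C "$HOME" Documents
aws s3 cp "$ARCHIVE" "s3://my-backup-bucket/full/"
```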

 

First, I have to say that not all backups are equal.

Some will survive a hardware failure, others won't. Some backups may be daily, others monthly. RAID 10/5/6 can survive a drive failure, but it is useless against a user deleting something. RAID is not a backup solution. Hopefully nobody here is using RAID as a backup :)

Before

I would simply rsync my personal data (pictures, documents, videos...) once in a while to an external HDD or thumb drive. I often take the HDD with me in my bag, just in case something happens to my house...

3 months ago

It just happened. I messed up an rsync command (see [1] below for the details) while backing up my parents' data, and it deleted their home directory in the two seconds it took me to press Ctrl+C. Fortunately, I was able to recover everything from backups and extundelete. All in all, my parents weren't too bothered: the data they care about was fine. As far as I was concerned, however, I realized I had to prevent this from happening again.

Automatic snapshots

Thus, to protect against me deleting my own data, snapshots are taken automatically and regularly, and saved on the same drive. Note: this is useless in case of hardware failure! So I still need to back up manually once in a while.

Copy On Write ftw

For my laptop, running Debian GNU/Linux, I take BTRFS (a filesystem similar to ZFS) snapshots with btrbk:

  • daily snapshots of my home subvolume (/home)
  • monthly snapshots of my system subvolume (/)

I have a subvolume named Scratch for unimportant big files, which is not snapshotted.
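A btrbk.conf implementing roughly that schedule might look like this; the paths and retention values are made up, so check the btrbk documentation for the exact syntax:

```ini
# Hypothetical /etc/btrbk/btrbk.conf -- frequent home snapshots,
# sparser root snapshots; Scratch is simply not listed.
timestamp_format        long
snapshot_dir            btrbk_snapshots

volume /mnt/btr_pool
  subvolume home
    snapshot_preserve_min 2d
    snapshot_preserve     14d 8w
  subvolume rootfs
    snapshot_preserve_min 1m
    snapshot_preserve     6m
```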
Rsync <3

For my family computer, running Ubuntu with an EXT4 partition, I chose backintime for a couple of reasons:

  • takes file-based snapshots: I can read & recover the files without backintime, with a simple file manager.
  • uses rsync (I still like rsync ;).
  • takes read-only snapshots to survive rm -r /. The native backup tool for Ubuntu does not.
  • hard-links snapshots to save space.
  • can save to a remote through SSH.
  • can be run as root easily.
  • supports a GUI (for my parents!).
  • supports several profiles.

I have two profiles:

  • weekly snapshots of /home
  • every other month snapshots of /etc and /var. I might snapshot the whole system soon since I have enough disk space :)

I exclude computer-generated files that aren't necessary, such as caches; backintime also excludes a bunch by itself.
For files that change but that I only want to save once, not in their many states (such as a Windows XP VM), I added them to the first snapshot and excluded them from the subsequent ones.

And now? Automatic remote backup!

I plan to build my own NAS/backup machine to automatically back up my computers.
I'm looking at Rockstor for now: an open-source, BTRFS- and Linux-powered advanced NAS server.

Peace and backup your data. It deserves it <3

[1] Tutorial: How to delete your data

For the record, Valve has already done something similar here.

  1. Use variables for the SOURCE and DESTINATION directories:

SRC=SOURCE
DEST=DESTINATION

  2. Use rsync with the --delete option, and don't forget the very, very nasty trailing /:

rsync -auvP --delete "${SRC}/" "${DEST}"

  3. Now, run tmux:

tmux

  4. Run the same rsync command (actually, it isn't the same).

Pwned! The new tmux session spawns a fresh shell that doesn't inherit the (unexported) variables, so SRC and DEST are now empty strings "". The rsync command becomes:

rsync -auvP --delete "/" ""   # "" means the current directory

Basically, it copies / into the current directory and, thanks to --delete, removes everything there that doesn't exist in /. Let's say the current directory is /home/$USER. Cool!

Note: To keep the behavior of --delete but have deletions happen only after the transfer, I recommend --delete-delay.

 

macOS Time Machine, plus an irregularly updated clone of the laptop's hard drive made with SuperDuper. Time Machine has saved me many times :-)

Evernote for the notes synced between computer and phone.

Dropbox for lots of unorganised media files: mainly everything I share between my computer and my phone. Every piece of media the (Android) phone captures (or receives) also gets automatically backed up to Dropbox, so one day I can build a slideshow with music and bling to show some poor relative my last 15 thousand years of life, because I never check those folders anyway.

 

I use Dropbox and Evernote along with an external HD. Pictures and videos automatically go to Dropbox from my phone.

 
 

I built a personal cloud using Seafile, which runs on a RAID10 machine on my home network. It's accessible remotely, so all of my machines (work, home, mobile) mount and share the files stored on the cloud. This is for non-OS files. Seafile is open-source; I've been using it for years and really love it. In addition to storing backups, it keeps a complete history of all files, so I can roll back a file to any point in time with just a few clicks.

The Seafile cloud server is backed up nightly to a smaller RAID1 box on a different IP subnet via rsync.

For OS-level files, incremental backups are done through Time Machine (I run Macs everywhere), either to a dedicated drive on the desktop machines, or to a Time Capsule for laptops or machines with only a single drive. I do a monthly snapshot on a few of the desktops with Carbon Copy Cloner, primarily on the Hackintoshes on the network (for ease of updating the OS).

Truly critical data (particularly encryption keys) are stored on multiple thumb drives which are kept in multiple physical locations.

I have zero trust and confidence in commercial cloud solutions such as Box or Dropbox, so the only option was to build my own. The cloud servers themselves are Linux machines that I built myself.

As an aside, I also don't trust commercial VPN providers, and so one of the servers on the rack is a private VPN server; that machine also hosts a private non-published Tor entry point (since I don't trust Tor, either).

Apparently I have a few trust issues...

 

Since I moved from Windows to a MacBook, I pretty much use iCloud (Desktop/Documents sync) for all documents and GitHub for projects. That's all. I don't really have any sensitive projects or files, but if there is something I really like (e.g. an app that I created), I zip it, store it on iCloud, and keep a copy on Dropbox.

 

GitHub for my dotfiles (public repo)
GitHub for source code (public and private repos)

Time Machine locally to a RAID-enabled Mac mini server
Backblaze for remote backup
