Max Pixel

Moving a Docker Volume to a different Windows computer

Today, I needed to transfer my PostgreSQL instance from my laptop to my always-on machine. The database and accompanying application are in the early stages of development, so I haven't set up any sort of replication or backup yet. In fact, after discovering how undesirably complex and inflexible replication is for PostgreSQL, I intend to switch to Cassandra. I'm not going to port my API to Cassandra in just a few hours, though; I need the database today, and transferring it seems a small enough task that I should just get it out of the way now.

The codebase is already replicated to all of my machines through Plastic SCM; the actual data of the database, however, lives only on my laptop. I have PostgreSQL's persistent data set up to be stored in a named volume. According to Docker's documentation, the correct way to back up and restore named and anonymous volumes is to copy the files from the named volume to a host bind mount, using a temporary container that mounts both and runs a cp, tar, or similar command to copy the data from mount to mount.
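
For reference, the example in Docker's documentation looks roughly like this (paraphrased from the docs; dbstore is their placeholder container name and /dbdata its data directory):

docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata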

The example given in Docker's documentation, however, assumes that Linux containers are being used. In my case, I'm running PostgreSQL in a Windows container. These are the commands I ended up running (in PowerShell) to get it to work:

mkdir backup
docker run --rm --volumes-from masterbrain_postgresql_1 -v $PWD/backup:C:/backup mcr.microsoft.com/powershell:nanoserver pwsh -Command cp pgsql/data/* backup -Recurse

A few caveats required unintuitive elements in that command:

  • The local backup folder must exist before I can run the docker command - if it doesn't exist, Docker will return an error instead of assuming that I would like it to mkdir for me.
  • Docker for Windows does not accept relative paths in bind-mount specifications - prefixing the relative path with $PWD satisfies this requirement.
  • Docker for Windows also requires the use of drive letters. Even though / is sufficient in pwsh, scp, and many other tools, docker requires C:/.
  • The official powershell image does not set pwsh as the entrypoint - when specifying a command on the docker run line instead of entering an interactive session, the command must be prefixed with pwsh -Command for it to actually run in PowerShell (the whole point of using this image). A quick way to confirm this is shown below.
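
To confirm that last caveat, docker image inspect can print an image's entrypoint and default command (this uses Docker's standard --format Go templating; an empty entrypoint means the command passed to docker run is executed directly, rather than as arguments to pwsh):

docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' mcr.microsoft.com/powershell:nanoserver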

Once I had everything copied into the backup folder, I needed to transfer those files to the other computer. I already have all of my machines set up to ssh into each other (I find this a bit more convenient than setting up Active Directory for a company of two), so I chose scp. Any other method would work, though, such as SMB or a flash drive.

ssh max@alwayson mkdir /temp/pg
scp -r backup/* max@alwayson:/temp/pg

Just like with the backup, I'm required to create the destination folder first. If I don't, scp fails with the shockingly uninformative and misleading error message, "lost connection". rsync could accomplish the same thing in just one command, but the Windows version of rsync is still awful.
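
For what it's worth, the one-command rsync equivalent would look something like this (untested on my machines, for the reason above; rsync creates the final destination directory itself, and rsync 3.2.3+ also has a --mkpath flag for missing parent directories):

rsync -r backup/ max@alwayson:/temp/pg/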

Now that the database files exist on the target machine, the last step is mostly the same as the first step with the directories reversed:

docker-compose up # followed by a ctrl+c to close it as soon as it's up and running
docker run --rm --volumes-from masterbrain_postgresql_1 -v C:/temp/pg:C:/backup mcr.microsoft.com/powershell:nanoserver pwsh -Command 'rm pgsql/data/* -Recurse; cp backup/* pgsql/data -Recurse'

This time, I didn't need to create any folders beforehand, but I did need to create the named volume. I accomplished this by spinning up my composition, which also lets me use the same --volumes-from approach. I also had to add an rm to make sure that the resulting named volume didn't contain anything that wasn't in the backup folder. Note that in order to run both commands (rm and cp) inside the container, they need to be enclosed in quotes - otherwise, the host's PowerShell treats the semicolon as a command separator and runs the second command on the host instead.
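
To sanity-check the restore, you can bring the composition back up and list the databases from inside the container - this assumes the default postgres superuser and the same Compose-generated container name as above:

docker-compose up -d
docker exec masterbrain_postgresql_1 psql -U postgres -l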

In retrospect, there appears to be a slightly more straightforward solution: an scp-equipped container on my laptop could copy the files straight from its named volume into a named volume on the target machine, provided the target machine runs a container that is bound to that volume and accepts SSH connections from the container on my laptop. That said, while this approach would reduce the number of "hops" the data goes through, it's much easier to remember and bash out the commands above than it is to get two containers talking to each other over SSH.
