<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nik</title>
    <description>The latest articles on DEV Community by Nik (@nikvdp).</description>
    <link>https://dev.to/nikvdp</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F190638%2F85c96a5f-4fe3-456e-936c-24a7b96d15b2.jpeg</url>
      <title>DEV Community: Nik</title>
      <link>https://dev.to/nikvdp</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nikvdp"/>
    <language>en</language>
    <item>
      <title>How to easily clone a Linux installation to another machine</title>
      <dc:creator>Nik</dc:creator>
      <pubDate>Wed, 21 Aug 2019 00:34:49 +0000</pubDate>
      <link>https://dev.to/nikvdp/how-to-easily-clone-a-linux-installation-to-another-machine-2pbl</link>
      <guid>https://dev.to/nikvdp/how-to-easily-clone-a-linux-installation-to-another-machine-2pbl</guid>
      <description>&lt;p&gt;Moving a Linux installation from one machine to another is actually relatively easy to do, but there aren’t many articles online that walk through the whole process. Unlike some other operating systems (I’m looking at you Windows) Linux is by default fairly uncoupled from the hardware it is running on. That said, there are still a few gotchas that need to be watched out for, especially when it comes time to configure the bootloader. This post takes you through the whole process and assumes minimal Linux experience: if you’re comfortable with basic shell commands you should be able to follow along.&lt;/p&gt;

&lt;p&gt;Since there are a lot of different reasons to want to clone a system we’ll be focusing on actually understanding what each step is doing so that you can adapt what I’ve described to your situation. While I’m assuming you’re using physical machines here, this procedure works just as well with VMs, whether run locally via something like VirtualBox or provided by a cloud provider like Amazon AWS. If you find yourself needing to move from one cloud provider to another, you can adapt the steps in this guide to make that happen. Just keep in mind that on a cloud VM it may be difficult to boot into a livecd, so you will probably need to instead attach two hard drives to the VM: one with a fresh Ubuntu install that can act as your “livecd”, and an empty one that will be used as the restore target.&lt;/p&gt;

&lt;p&gt;I’ve listed out the commands to clone a system with minimal explanation as a reference below. If you know your way around Linux you may be able to just run through these commands, adapting as needed to fit your situation. If you’d like more detail, keep reading and we’ll go over exactly what each command is doing (and why it’s needed) below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Bind-mount the source drive to a new location so that we don’t end up in an infinite copy loop while copying &lt;code&gt;/dev/zero&lt;/code&gt;, etc.:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mount --bind / /mnt/src
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;tar&lt;/code&gt; up the source filesystem:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -C /mnt/src -c . &amp;gt; source-fs.tar
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and copy the resulting &lt;code&gt;source-fs.tar&lt;/code&gt; onto a USB drive or network share that you can access from the destination machine.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the dest machine boot from a live-cd (I used the Ubuntu install disc)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Partition the drive on the destination machine. The easiest way to do this is to use &lt;code&gt;gparted&lt;/code&gt; (included on the Ubuntu live-cd). How you partition will differ depending on whether you want to use MBR or EFI mode:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MBR mode&lt;/strong&gt;: just create one big ext4 partition on your target drive, and use &lt;code&gt;gparted&lt;/code&gt;’s ‘Manage Flags’ right-click menu to add the &lt;code&gt;boot&lt;/code&gt; flag&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EFI mode&lt;/strong&gt;: create one 200-500MB vfat/fat32 partition (use &lt;code&gt;gparted&lt;/code&gt;’s ‘Manage Flags’ right-click menu to add the &lt;code&gt;boot&lt;/code&gt; and &lt;code&gt;esp&lt;/code&gt; flags), and create one ext4 partition in the remaining space.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once booted into the live-cd, mount your destination filesystem. I’m mounting mine at &lt;code&gt;~/dest&lt;/code&gt;.&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mount /dev/&amp;lt;some-disk&amp;gt; ~/dest
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;code&gt;tar&lt;/code&gt; to extract your image onto the destination filesystem, (using &lt;code&gt;pv&lt;/code&gt; to provide a progress meter since this can take a while):&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pv &amp;lt; [image-file] | tar -C ~/dest -x
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;chroot&lt;/code&gt; into the newly extracted filesystem&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/dest
for i in /dev /dev/pts /proc /sys /run; do sudo mount --bind $i .$i; done
mkdir -p ./boot/efi  # skip if using MBR mode
sudo mount /dev/&amp;lt;your-efi-partition&amp;gt; ./boot/efi  # skip if using MBR mode
sudo chroot .
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run &lt;code&gt;grub-install&lt;/code&gt; from inside the chroot:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt install grub-efi-amd64-bin       # skip if using MBR mode
grub-install /dev/&amp;lt;your-boot-drive&amp;gt;  # use the whole drive (e.g. sda, not sda1)
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;Step 1: Bind-mount the root filesystem&lt;/h1&gt;

&lt;p&gt;The first command we run is &lt;code&gt;mount --bind / /mnt/src&lt;/code&gt;. In Linux-land filesystems are accessed by mounting them to a path (usually under &lt;code&gt;/media&lt;/code&gt; or &lt;code&gt;/mnt&lt;/code&gt;). Here we’re using something called a bind mount, which allows you to “bind” a mount point to another mount point. In other words, you can access the same folder at two locations. In this instance, we are telling the system to make the &lt;code&gt;/&lt;/code&gt; folder available at &lt;code&gt;/mnt/src&lt;/code&gt; as well. If you write a file to &lt;code&gt;/test-file&lt;/code&gt;, you’ll see that it’s also available at &lt;code&gt;/mnt/src/test-file&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Why is this needed you ask? Well, when a Linux system boots it creates some virtual filesystems that many Linux programs rely on. One of the more commonly used ones is the &lt;code&gt;/dev&lt;/code&gt; folder, which is how Linux addresses the physical hardware installed in your system. The files in the &lt;code&gt;/dev&lt;/code&gt; folder aren’t real files though, so it doesn’t make sense to copy them to another system–that system will have its own &lt;code&gt;/dev&lt;/code&gt; that reflects its own hardware. More importantly for our current purposes, &lt;code&gt;/dev&lt;/code&gt; also contains some special “files” such as &lt;code&gt;/dev/zero&lt;/code&gt;, which returns an infinite amount of zeros, and it’ll take more time than any of us have to copy an infinite amount of zeros.&lt;/p&gt;

&lt;p&gt;Bind mounting &lt;code&gt;/&lt;/code&gt; to &lt;code&gt;/mnt/src&lt;/code&gt; allows us to sidestep this issue: this system’s &lt;code&gt;/dev&lt;/code&gt; will still exist at &lt;code&gt;/dev&lt;/code&gt;, but you won’t find a corresponding &lt;code&gt;/mnt/src/dev/zero&lt;/code&gt; file, so copying from &lt;code&gt;/mnt/src&lt;/code&gt; avoids starting an infinitely long copy process.&lt;/p&gt;
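
&lt;p&gt;As an aside, if your version of GNU &lt;code&gt;tar&lt;/code&gt; supports the &lt;code&gt;--one-file-system&lt;/code&gt; flag you can sidestep the bind mount entirely: on most systems &lt;code&gt;/dev&lt;/code&gt;, &lt;code&gt;/proc&lt;/code&gt;, &lt;code&gt;/sys&lt;/code&gt;, and &lt;code&gt;/run&lt;/code&gt; are separate virtual filesystems, so telling &lt;code&gt;tar&lt;/code&gt; not to cross filesystem boundaries skips them automatically. (Note it also skips any real filesystems mounted elsewhere, like a separate &lt;code&gt;/home&lt;/code&gt;, so the bind-mount approach is still the more explicit one.) A sketch:&lt;/p&gt;

```shell
# Hedged alternative to the bind mount: --one-file-system makes GNU tar
# stay on the filesystem of its starting directory, so virtual mounts
# like /dev and /proc are skipped automatically:
#
#   tar --one-file-system -C / -cf source-fs.tar .
#
# Quick sanity check of the flag against a throwaway directory:
work=$(mktemp -d)
mkdir -p "$work/src/sub"
echo hello > "$work/src/sub/file.txt"
tar --one-file-system -C "$work/src" -cf "$work/fs.tar" .
mkdir "$work/out"
tar -C "$work/out" -xf "$work/fs.tar"
cat "$work/out/sub/file.txt"   # prints "hello"
```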

&lt;h1&gt;Step 2: &lt;code&gt;tar&lt;/code&gt; up the source filesystem&lt;/h1&gt;

&lt;p&gt;Now that we’ve got the filesystem bind-mounted we can start preparing our image. All we really need to do here is save the contents of the root filesystem (excluding special filesystems such as &lt;code&gt;/dev&lt;/code&gt;) into a tar archive:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-C&lt;/span&gt; /mnt/src &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; source-fs.tar
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-C&lt;/code&gt; flag tells &lt;code&gt;tar&lt;/code&gt; to change directories to &lt;code&gt;/mnt/src&lt;/code&gt;, &lt;code&gt;-c&lt;/code&gt; tells &lt;code&gt;tar&lt;/code&gt; to use ‘create’ mode (as in, create a tar archive, not extract one), and the &lt;code&gt;.&lt;/code&gt; tells it to archive the contents of the current directory (which is now &lt;code&gt;/mnt/src&lt;/code&gt; thanks to our &lt;code&gt;-C&lt;/code&gt; flag). We then use shell redirection via the &lt;code&gt;&amp;gt;&lt;/code&gt; sign to write the output to the file &lt;code&gt;source-fs.tar&lt;/code&gt;. &lt;strong&gt;Make sure &lt;code&gt;source-fs.tar&lt;/code&gt; is not on the same drive you are copying from&lt;/strong&gt; or you may kick off another infinite loop!&lt;/p&gt;
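
&lt;p&gt;You can see what &lt;code&gt;-C&lt;/code&gt; and the &lt;code&gt;.&lt;/code&gt; buy you on any throwaway directory: entries end up stored relative to the &lt;code&gt;-C&lt;/code&gt; directory, which is what lets them extract cleanly anywhere later:&lt;/p&gt;

```shell
# Demo of -C plus '.': archive members are stored relative to the
# directory given to -C, not as absolute paths.
work=$(mktemp -d)
mkdir -p "$work/src"
echo data > "$work/src/a.txt"
tar -C "$work/src" -cf "$work/img.tar" .
tar -tf "$work/img.tar"   # lists ./ and ./a.txt, not the absolute path
```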

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; In this example I’m just writing the image to a file, but if you wanted you could also stream the filesystem directly to another machine over the network. The most common way to do this is to use &lt;code&gt;ssh&lt;/code&gt; and a shell pipe like so:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -C /mnt/src -c . | \
  ssh &amp;lt;some-other-machine&amp;gt; 'tar -C &amp;lt;some-folder-on-the-other-machine&amp;gt; -x'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This uses a shell pipe to send the output of &lt;code&gt;tar&lt;/code&gt; into the ssh command, which takes care of setting up an encrypted connection to the other machine, and then runs &lt;code&gt;tar -C &amp;lt;some-folder-on-the-other-machine&amp;gt; -x&lt;/code&gt; on the other machine, connecting the stdin of &lt;code&gt;tar&lt;/code&gt; on the remote machine to the stdout of &lt;code&gt;tar&lt;/code&gt; on the sending machine.&lt;/p&gt;
&lt;/blockquote&gt;
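
&lt;p&gt;If network bandwidth is the bottleneck, &lt;code&gt;tar&lt;/code&gt;’s &lt;code&gt;-z&lt;/code&gt; flag gzips the stream on the way out and ungzips it on the other end. Here’s the same pipe pattern run locally (ssh omitted) so you can see it work end to end:&lt;/p&gt;

```shell
# Same create-pipe-extract pattern as the ssh example, with gzip
# compression via -z on both sides; run locally for illustration.
work=$(mktemp -d)
mkdir -p "$work/src" "$work/dst"
echo payload > "$work/src/file.txt"
tar -C "$work/src" -czf - . | tar -C "$work/dst" -xzf -
cat "$work/dst/file.txt"   # prints "payload"
```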

&lt;h1&gt;Step 3: On the dest machine, boot from a live-cd&lt;/h1&gt;

&lt;p&gt;On the destination machine (the machine we want to clone our system &lt;em&gt;to&lt;/em&gt;), we need to boot into an operating system that is not running off of the system’s primary hard drive, so that we can write our cloned image onto the new drive. I usually just grab the latest Ubuntu live-cd from &lt;a href="https://ubuntu.com/download/desktop"&gt;Ubuntu’s website&lt;/a&gt; and write it to a USB via &lt;a href="https://www.balena.io/etcher/"&gt;Etcher&lt;/a&gt; or the &lt;code&gt;dd&lt;/code&gt; command. Ubuntu provides directions on how to prepare an Ubuntu LiveUSB &lt;a href="https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-ubuntu"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you don’t like Ubuntu, any Linux livecd should work fine; just make sure it has a partitioning tool like &lt;code&gt;gparted&lt;/code&gt; (gui) or &lt;code&gt;fdisk&lt;/code&gt; (cli).&lt;/p&gt;

&lt;h1&gt;Step 4: Partition the drive on the destination machine&lt;/h1&gt;

&lt;p&gt;Here is where things start to get a little trickier. &lt;strong&gt;There are two common ways to boot a Linux system, MBR (an older method) or EFI (a newer method), and each has different partitioning requirements.&lt;/strong&gt; If possible you’ll want to use EFI, but if you have an older machine that doesn’t support EFI mode you may need to use MBR. The easiest way to check if a machine supports EFI mode is to boot into the Ubuntu livecd and check if a directory called &lt;code&gt;/sys/firmware/efi&lt;/code&gt; exists:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ls /sys/firmware
acpi  devicetree  dmi  efi  memmap
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If there’s no &lt;code&gt;efi&lt;/code&gt; folder in &lt;code&gt;/sys/firmware&lt;/code&gt; then you’re on an MBR machine. If there is an &lt;code&gt;efi&lt;/code&gt; folder present, then you’re on an EFI machine and we’ll need to create an EFI partition as well as a root partition.&lt;/p&gt;
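
&lt;p&gt;The check is easy to script as a one-liner:&lt;/p&gt;

```shell
# Prints EFI on an EFI-booted system, MBR otherwise:
if [ -d /sys/firmware/efi ]; then echo EFI; else echo MBR; fi
```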

&lt;p&gt;From the Ubuntu livecd, open a terminal and let’s fire up &lt;a href="https://gparted.org/"&gt;gparted&lt;/a&gt; on the drive we’re going to partition:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo gparted
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Using the selector in the upper left, choose the drive you’re going to be restoring to. On my system this is &lt;code&gt;/dev/nvme0n1&lt;/code&gt;, but depending on the hardware in your machine you may have a different designation such as &lt;code&gt;/dev/sda&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once you have your drive selected, choose &lt;code&gt;Device -&amp;gt; Create Partition Table&lt;/code&gt; from the Device menu. You’ll be greeted with a scary looking screen like the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C9Fcwr7r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yovh4hv1ddxw9tbmyppg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C9Fcwr7r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/yovh4hv1ddxw9tbmyppg.png" alt="gparted"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure you have the right drive selected here, because, as the window above indicates, as soon as you hit apply &lt;code&gt;gparted&lt;/code&gt; will proceed to erase everything on that drive.&lt;/p&gt;

&lt;p&gt;Because the MBR approach is how MS-DOS historically loaded itself, some tools (including gparted) refer to MBR partition layouts as &lt;code&gt;msdos&lt;/code&gt;. If your system is an MBR system, leave that unchanged; otherwise select &lt;code&gt;gpt&lt;/code&gt; from the list, since GPT is the partition-table format that works with EFI. For the rest of this step we will proceed with an EFI-based install. If you’re doing an MBR install you can skip the EFI-partition creation portion.&lt;/p&gt;

&lt;p&gt;In the next screen we’ll need to create two partitions: one ~500MB EFI partition (this can be smaller if you need to save space, but things may break if you make it less than 200MB) and a second partition filling up the remainder of the drive. This second partition is the partition we will restore our clone into.&lt;/p&gt;

&lt;p&gt;Let’s start by creating the EFI partition. Use the menus to choose &lt;code&gt;Partition -&amp;gt; New&lt;/code&gt;, and in the screen that follows set the size to 500MB and set the file system to &lt;code&gt;fat32&lt;/code&gt;, which is the filesystem type EFI requires. Repeat the process for the second partition, but this time do not enter a size and choose ext4 for the filesystem type.&lt;/p&gt;

&lt;p&gt;When you’re finished your partition layout should look similar to the below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qMMVuwRl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/s487gwrc0gqgp0rj44cv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qMMVuwRl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/s487gwrc0gqgp0rj44cv.png" alt="partitions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go ahead and use the &lt;code&gt;Edit -&amp;gt; Apply all Operations&lt;/code&gt; menu to write the new partition table. Once that’s completed we have to set some partition flags to make the drive properly bootable. To do this, right-click on the first fat32 partition and choose ‘Manage Flags’. Click the checkmark next to &lt;code&gt;boot&lt;/code&gt; (which may also automatically check the ‘esp’ flag) and hit Close.&lt;/p&gt;

&lt;p&gt;Keep track of the device names (they will show in the Partition column with names that start with &lt;code&gt;/dev/&lt;/code&gt;) as you will need them for the next step.&lt;/p&gt;

&lt;h1&gt;Step 5: Mount the destination filesystem&lt;/h1&gt;

&lt;p&gt;At this point our target system is prepared and we are ready to restore the image onto this machine. Before we can do anything with the new hard drive layout we need to mount it.&lt;/p&gt;

&lt;p&gt;Boot back into the Ubuntu livecd if you’re not already in it, and open up a terminal window. We’ll first create mount points (empty directories) where we’ll mount the two partitions. I’m using &lt;code&gt;~/efi&lt;/code&gt; and &lt;code&gt;~/dest&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir ~/efi
mkdir ~/dest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;And then mount the drives to them. On my system the drive I was partitioning was &lt;code&gt;/dev/sdb&lt;/code&gt;, so my EFI and data partitions are &lt;code&gt;/dev/sdb1&lt;/code&gt; and &lt;code&gt;/dev/sdb2&lt;/code&gt; respectively. Your system may assign different identifiers; make sure to use the names shown by &lt;code&gt;gparted&lt;/code&gt; in the &lt;code&gt;Partition&lt;/code&gt; column:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mount /dev/sdb1 ~/efi
mount /dev/sdb2 ~/dest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h1&gt;Step 6: Use &lt;code&gt;tar&lt;/code&gt; to extract your image to the destination filesystem&lt;/h1&gt;

&lt;p&gt;Now that we have all our mount points set up, we can do the reverse of the image creation process from step 2 to duplicate our source machine’s filesystem onto the new machine. Since this can take a while I like to use a tool called &lt;code&gt;pv&lt;/code&gt; (pv stands for pipe viewer) to provide a progress meter. You can install &lt;code&gt;pv&lt;/code&gt; by doing &lt;code&gt;sudo apt update &amp;amp;&amp;amp; sudo apt install pv&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once &lt;code&gt;pv&lt;/code&gt; is installed, we can start the restore process. First, find a way to get the Ubuntu livecd access to the source image we created in Step 2. Most likely this means plugging a USB drive into the machine. Once you have access to the image file run the following command, replacing &lt;code&gt;[image-file]&lt;/code&gt; with the path to your source tar file:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pv &amp;lt; [image-file] | tar -C ~/dest -x
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The above command is saying to take the contents of &lt;code&gt;[image-file]&lt;/code&gt; and send it to &lt;code&gt;pv&lt;/code&gt; over stdin. &lt;code&gt;pv&lt;/code&gt; reads the data from the file, prints out a nice progress meter, and then sends the data it’s reading to &lt;code&gt;tar&lt;/code&gt; via a shell pipe (the &lt;code&gt;|&lt;/code&gt; symbol). &lt;code&gt;-C&lt;/code&gt; then tells &lt;code&gt;tar&lt;/code&gt; to first change directories to &lt;code&gt;~/dest&lt;/code&gt; (where we mounted our destination partition in the previous step), and the &lt;code&gt;-x&lt;/code&gt; tells &lt;code&gt;tar&lt;/code&gt; to run in extract mode.&lt;/p&gt;

&lt;p&gt;This may take a while, but when the process completes you will have completely restored all the files that originally lived on the source machine onto the new machine. Getting the files there is only half the battle, however: we still need to tell Linux how to boot into this filesystem, which we’ll do in the next step.&lt;/p&gt;

&lt;h1&gt;Step 7: &lt;code&gt;chroot&lt;/code&gt; into the newly extracted filesystem to install a bootloader&lt;/h1&gt;

&lt;p&gt;At this point we have all the files we need on the new system, but we need to make the new system bootable. The easiest way to do this is to piggyback off of the Ubuntu livecd’s kernel, and use the &lt;code&gt;chroot&lt;/code&gt; command to make our current Linux installation (the Ubuntu livecd) pretend like it’s the installation we just copied over to the new machine.&lt;/p&gt;

&lt;p&gt;For this to work we have to use our helpful friend &lt;code&gt;mount --bind&lt;/code&gt; again to do the reverse of what we did in step 1. This time, rather than avoiding copying these special filesystems, we want to give the &lt;code&gt;chroot&lt;/code&gt;-ed installation temporary access to the special filesystems the Ubuntu livecd created so that it can act as a functional Linux installation.&lt;/p&gt;

&lt;p&gt;First, change directories to where the new installation is mounted (&lt;code&gt;~/dest&lt;/code&gt; if you followed the example above):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/dest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Then we’ll use &lt;code&gt;mount --bind&lt;/code&gt; to give the chroot access to the Linux special directories:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in /dev /dev/pts /proc /sys /run; do sudo mount --bind $i .$i; done
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; We use a &lt;code&gt;for&lt;/code&gt; loop here to save ourselves some typing, but the above line is just telling the system to run the command &lt;code&gt;sudo mount --bind &amp;lt;input-dir&amp;gt; ./&amp;lt;input-dir&amp;gt;&lt;/code&gt; for each of the special directories listed between the &lt;code&gt;in&lt;/code&gt; and the &lt;code&gt;;&lt;/code&gt;. In other words, the single line above is equivalent to running the following:&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mount --bind /dev ./dev
sudo mount --bind /dev/pts ./dev/pts
sudo mount --bind /proc ./proc
sudo mount --bind /sys ./sys
sudo mount --bind /run ./run
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;If installing in EFI mode we also need to give our chroot access to the EFI partition we mounted earlier. &lt;code&gt;mount --bind&lt;/code&gt; comes to the rescue again here: we simply bind mount the livecd mount point into the &lt;code&gt;/boot/efi&lt;/code&gt; directory inside the chroot (&lt;code&gt;/boot/efi&lt;/code&gt; is where grub expects to find the EFI partition).&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/dest
mkdir -p boot/efi 
mount --bind ~/efi boot/efi
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Now that we have access to the Linux special folders (and the EFI partition), we can use the &lt;code&gt;chroot&lt;/code&gt; command to actually use our source installation:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo chroot ~/dest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;At this point you should have a shell inside the same Linux environment you originally copied. Try running some programs or looking at some files that came from your old machine. GUI programs may not work properly, but other than that you should have a fully functional copy of your old installation. Booting into an Ubuntu livecd and running the above &lt;code&gt;chroot&lt;/code&gt; commands every time you want to use this machine is not very practical though, so in the next step we’ll install the grub bootloader to make it into a full-fledged bootable Linux installation.&lt;/p&gt;

&lt;h1&gt;Step 8: Run &lt;code&gt;grub-install&lt;/code&gt; from inside the chroot&lt;/h1&gt;

&lt;p&gt;Grub is the most common Linux bootloader and is what we’ll use here. Grub has an MBR flavor and an EFI flavor. If the machine you cloned from was running Ubuntu it most likely already has grub installed, but may not have the EFI version of grub installed. Run the following to install the EFI version (feel free to skip if you’re doing an MBR clone):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt install grub-efi-amd64-bin  # skip if using MBR mode
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If your source distro is not Ubuntu based make sure to fully install grub via your distro’s package manager first.&lt;/p&gt;

&lt;p&gt;Once you have grub fully installed you just need to run &lt;code&gt;grub-install&lt;/code&gt; against the drive you installed to. In my case that’s &lt;code&gt;/dev/sdb&lt;/code&gt;, but this may be different on your machine. If unsure, fire up &lt;code&gt;gparted&lt;/code&gt; as we did in Step 4 and check the names listed in the partition column there.&lt;/p&gt;
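
&lt;p&gt;If you’d rather stay in the terminal, &lt;code&gt;lsblk&lt;/code&gt; shows the drive-vs-partition hierarchy directly:&lt;/p&gt;

```shell
# Rows with TYPE "disk" are whole drives (valid grub-install targets);
# rows with TYPE "part" are partitions (what you mount).
lsblk -o NAME,TYPE,SIZE
```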

&lt;p&gt;Next we install grub to our drive, thereby making it bootable. Be careful to install grub to a &lt;strong&gt;drive&lt;/strong&gt; and not to a partition. Partitions will usually have a number on the end while a drive will usually end with a letter (e.g. &lt;code&gt;/dev/sdb&lt;/code&gt;, not &lt;code&gt;/dev/sdb1&lt;/code&gt;).&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grub-install /dev/sdb
update-grub
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;If all went well you will see messages saying grub was successfully installed. When you see this feel free to reboot and check out your freshly cloned installation.&lt;/p&gt;

&lt;p&gt;If you got error messages and are installing in EFI mode, it’s possible grub tried to use MBR mode. It may be worth running &lt;code&gt;grub-install&lt;/code&gt; this way to force EFI mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grub-install --target=x86_64-efi 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h1&gt;Wrapping up&lt;/h1&gt;

&lt;p&gt;That’s it, at this point you should have a fully operational clone of your original system, and hopefully also have a solid understanding of each step in the clone process and why it’s needed. Once you realize that a Linux installation is really just a filesystem and a mechanism for booting it, tools like Docker start to make a bit more sense: a docker image is basically just a fancy version of the &lt;code&gt;tar&lt;/code&gt; image we created here, with some changes to handle docker layers and make the image files easier to distribute.&lt;/p&gt;

&lt;p&gt;In fact, just as we were able to “run” the system we installed via &lt;code&gt;chroot&lt;/code&gt; before we actually made it bootable, you can convert the tar image we created into a docker container quite easily:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker import [image-file]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;99% of the time you’re better off just using a &lt;code&gt;Dockerfile&lt;/code&gt; and docker’s own tooling to build your images, but if you need a quick and dirty way to “dockerize” an existing server you could do this without even having to shut down the existing server!&lt;/p&gt;

&lt;p&gt;Similarly, the &lt;code&gt;docker export&lt;/code&gt; command can export a tarball like the one we created for any docker image. Once you extract it you could use the same &lt;code&gt;mount --bind&lt;/code&gt; and &lt;code&gt;chroot&lt;/code&gt; dance we did above to get a shell inside the “container.” If you wanted to get a bit crazy, you could even use the steps from this guide to restore a tarball exported from a docker image onto a physical machine and run it on bare metal. In practice this won’t work with many docker images, though, because (to save space) many images strip out some of the files needed to support physical booting, so you may be asking for trouble if you try it.&lt;/p&gt;

</description>
      <category>ubuntu</category>
      <category>linux</category>
      <category>bash</category>
    </item>
    <item>
      <title>Zero-assumptions ZFS: how to actually understand ZFS</title>
      <dc:creator>Nik</dc:creator>
      <pubDate>Sun, 28 Jul 2019 05:26:09 +0000</pubDate>
      <link>https://dev.to/nikvdp/https-nikvdp-com-post-zfs-part1-31fj</link>
      <guid>https://dev.to/nikvdp/https-nikvdp-com-post-zfs-part1-31fj</guid>
      <description>&lt;p&gt;This is the first in a series of articles about ZFS, and is part of what I hope becomes an ongoing series here: the zero-assumptions write up. This article will be written assuming you know &lt;em&gt;nothing&lt;/em&gt; about ZFS.&lt;/p&gt;

&lt;p&gt;I’ve been interested in ZFS for a while now, but didn’t have a good reason to use it for anything. Last time I looked at it, ZFS on Linux was still a bit immature, but in the past few years Linux support for ZFS has really stepped up, so I decided to give it another go. ZFS is a bit of a world unto itself though, with most resources either walking you through some quick commands without explaining the concepts underlying ZFS, or assuming the user is very familiar with traditional RAID terminology.&lt;/p&gt;

&lt;h1&gt;Background&lt;/h1&gt;

&lt;p&gt;I keep one desktop machine in my house (running &lt;a href="https://bedrocklinux.org/"&gt;bedrock linux&lt;/a&gt; with an Ubuntu base and an Arch Linux strata on top) that acts, among other things, as a storage/media server. I keep photos and other digital detritus I’ve collected over the years there, and would be very sad if they were to disappear. I back everything up nightly via the excellent &lt;a href="https://github.com/restic/restic"&gt;restic&lt;/a&gt; to the also excellent &lt;a href="https://www.backblaze.com/b2/cloud-storage.html"&gt;Backblaze B2&lt;/a&gt;, but since I have terabytes of data stored there I haven’t followed the cardinal rule of backing up: make sure that you can actually restore from your backups. Since testing that on my internet connection would take months, and I’m afraid of accidentally deleting data or drive failure, I decided to add a bit more redundancy.&lt;/p&gt;

&lt;p&gt;My server has 3 hard drives in it right now: one 4TB spinning disk, one 2TB spinning disk, and one 500GB SSD that holds the root filesystem. The majority of the data I want to keep is on the 2TB drive, and the 4TB drive is mostly empty. After doing some research (read: browsing posts on &lt;a href="https://www.reddit.com/r/DataHoarder/"&gt;/r/datahoarder&lt;/a&gt;), it seems the two most common tools people use to add transparent redundancy are a &lt;a href="https://www.snapraid.it"&gt;snapraid&lt;/a&gt; + &lt;a href="https://github.com/trapexit/mergerfs"&gt;mergerfs&lt;/a&gt; combo, or the old standby, &lt;a href="https://www.freebsd.org/doc/handbook/zfs.html"&gt;ZFS&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;Installing ZFS on Linux&lt;/h1&gt;

&lt;p&gt;Getting ZFS installed on Linux (assuming you don’t try to use it as the root filesystem) is almost comically easy these days. On Ubuntu 16.04+ (and probably recent Debian releases too), this should be as straightforward as:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install zfs-dkms zfs-fuse zfs-initramfs zfsutils-linux
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For simplicity, the above command installs more than is strictly needed: &lt;code&gt;zfs-dkms&lt;/code&gt; and &lt;code&gt;zfs-fuse&lt;/code&gt; are different implementations of ZFS for Linux, and either should be enough to use ZFS on its own. The reason there are multiple implementations comes down to licensing: ZFS’s CDDL license keeps it out of the mainline Linux kernel, so it has to be added on separately. &lt;code&gt;zfs-dkms&lt;/code&gt; uses a technology (unsurprisingly) called &lt;a href="https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support"&gt;DKMS&lt;/a&gt;, while &lt;code&gt;zfs-fuse&lt;/code&gt; uses (even less surprisingly) a technology called &lt;a href="https://github.com/libfuse/libfuse"&gt;FUSE&lt;/a&gt;. FUSE makes it easier for developers to implement filesystems at the cost of a bit of performance. DKMS stands for Dynamic Kernel Module Support, and is a means by which you can install the source code for a module and let the Linux distro itself take care of compiling that source to match the running kernel.&lt;/p&gt;
&lt;/blockquote&gt;
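
&lt;p&gt;Once the packages are installed, it’s worth a quick sanity check that the kernel module loaded and the userland tools can talk to it (the exact version output will vary, and on older releases the &lt;code&gt;zfs version&lt;/code&gt; subcommand may not exist yet):&lt;/p&gt;

```shell
# Load the module if the installer didn't already, then poke the tools
sudo modprobe zfs
zfs version            # on older releases try: dmesg | grep -i zfs
sudo zpool status      # "no pools available" is the expected answer on a fresh install
```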

&lt;p&gt;For Arch Linux you’ll need to install &lt;a href="https://aur.archlinux.org/packages/zfs-linux/"&gt;&lt;code&gt;zfs-linux&lt;/code&gt;&lt;/a&gt; from the &lt;a href="https://aur.archlinux.org/"&gt;AUR&lt;/a&gt;. Plain &lt;code&gt;pacman&lt;/code&gt; can’t install AUR packages, so you’ll need to either build it with &lt;code&gt;makepkg&lt;/code&gt; or use an AUR helper. Check the &lt;a href="https://wiki.archlinux.org/index.php/ZFS"&gt;Arch wiki’s&lt;/a&gt; article on ZFS for more detailed instructions, but for most systems this (using the &lt;code&gt;yay&lt;/code&gt; AUR helper) should suffice:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yay -S zfs-linux
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h1&gt;
  
  
  Planning your drives
&lt;/h1&gt;

&lt;p&gt;The first step to getting started with ZFS was figuring out how I wanted to use my drives. Most people who use ZFS for these purposes seem to go out and buy multiple big hard drives and then use ZFS to mirror them. Since I just wanted more redundancy for the data I already had, I decided to partition my existing drives instead.&lt;/p&gt;

&lt;p&gt;Since I have one 2TB drive that I want backed up, I first split my 4TB drive into two 2TB partitions using &lt;a href="https://gparted.org/"&gt;gparted&lt;/a&gt;, then created an ext4 filesystem on the second of the new partitions.&lt;/p&gt;

&lt;p&gt;Then I used &lt;code&gt;blkid&lt;/code&gt; and &lt;code&gt;lsblk&lt;/code&gt; to check my handiwork. These two tools print lists of all the “block devices” (read: hard disks) in my system and show different ways to refer to them in Linux:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ blkid
/dev/sda1: UUID="7600-739F" TYPE="vfat" PARTUUID="ded30b23-f318-433c-bfb2-15738d42cc01"
/dev/sda2: LABEL="500gb-ssd-root" UUID="906bd064-2156-4a88-8d88-8940af7c5a34" TYPE="ext4" PARTLABEL="500gb-ssd-root" PARTUUID="cc6695ed-1a2b-4cb1-b302-37614cf07bf7"
/dev/sdc1: LABEL="zstore" UUID="5303013864921755800" UUID_SUB="17834655468516818280" TYPE="ext4" PARTUUID="072d0dd9-a1bf-4c67-b9b3-046f37c48846"
/dev/sdc2: LABEL="longterm" UUID="7765758551585446647" UUID_SUB="266677788785228698" TYPE="ext4" PARTLABEL="extra2tb" PARTUUID="1f9e7fd1-1da6-4dbd-9302-95f6ea62fff0"
/dev/sdb1: LABEL="longterm" UUID="7765758551585446647" UUID_SUB="89185545293388421" TYPE="zfs_member" PARTUUID="5626d9ea-01"
/dev/sde1: UUID="acd97a41-df27-4b69-924c-9290470b735d" TYPE="ext4" PARTLABEL="wd2tb" PARTUUID="6ca94069-5fc8-4466-bba2-e5b6237a19b7"

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdb      8:16   0   1.8T  0 disk
└─sdb1   8:17   0   1.8T  0 part
sde      8:64   0   1.8T  0 disk
└─sde1   8:65   0   1.8T  0 part
sdc      8:32   0   3.7T  0 disk
├─sdc2   8:34   0   1.8T  0 part
└─sdc1   8:33   0   1.8T  0 part
sda      8:0    0   477G  0 disk
├─sda2   8:2    0 476.4G  0 part
└─sda1   8:1    0   512M  0 part
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’re not familiar with how Linux handles hard disks: Linux refers to them as “block devices.” It provides access to physical hardware through a virtual filesystem mounted at &lt;code&gt;/dev&lt;/code&gt;, and hard disks will generally show up there as &lt;code&gt;/dev/sdX&lt;/code&gt;, where the &lt;code&gt;X&lt;/code&gt; is a letter from a–z that Linux assigns to the drive. Partitions on each disk are then assigned a number, so in the &lt;code&gt;lsblk&lt;/code&gt; output above you can see that disk &lt;code&gt;sdc&lt;/code&gt; has two partitions, which show up in the output as &lt;code&gt;sdc1&lt;/code&gt; and &lt;code&gt;sdc2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;blkid&lt;/code&gt; command shows the traditional &lt;code&gt;/dev/sdX&lt;/code&gt; names, but also adds &lt;code&gt;UUID&lt;/code&gt;s, which you can think of as random IDs that permanently refer to a particular filesystem or partition. These exist because &lt;code&gt;/dev/sdX&lt;/code&gt; names aren’t stable: if you were to unplug the &lt;code&gt;/dev/sdc&lt;/code&gt; drive and plug it into a different port, Linux might give it a different name such as &lt;code&gt;/dev/sda&lt;/code&gt;, but it would keep the same &lt;code&gt;UUID&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I wanted to convert my 2TB drive to ZFS, but since my precious data was all currently located on that 2TB drive (&lt;code&gt;/dev/sdb1&lt;/code&gt; above), I decided to pull a swaparoo: first copy everything onto the second partition of my 4TB drive (&lt;code&gt;/dev/sdc2&lt;/code&gt; above), then let ZFS take over the original partition (&lt;code&gt;/dev/sdb1&lt;/code&gt;) and copy the data back onto that drive.&lt;/p&gt;
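
&lt;p&gt;In ZFS terms (the commands themselves are covered in more detail below), the swaparoo could look roughly like this. This is only a sketch: the device names come from the &lt;code&gt;blkid&lt;/code&gt; output above, and &lt;code&gt;zpool create&lt;/code&gt; destroys whatever is on the devices it’s given, so the data must be safely copied off first.&lt;/p&gt;

```shell
# Sketch only -- zpool create WIPES the devices you hand it, so the data
# must already be copied from /dev/sdb1 onto the ext4 copy on /dev/sdc2
sudo zpool create longterm /dev/sdb1           # new single-disk pool on the old drive
# ...copy the data from the ext4 copy on /dev/sdc2 into /longterm...
sudo zpool attach longterm /dev/sdb1 /dev/sdc2 # then turn the pool into a mirror
sudo zpool create zstore /dev/sdc1             # and the second, standalone pool
```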

&lt;p&gt;The end result I’m looking for is to have a layout with two “pools” (zfs-speak for sets of drives, more on this later). One pool should consist of my original 2TB drive, replicated to one of the 2TB partitions on my 4TB drive. The extra 2TB partition available on the 4TB drive will act as a second pool, which gives me nice ZFS benefits like checksumming and the ability to take snapshots of the drive, as well as the option to add another 2TB drive/partition later and mirror the data.&lt;/p&gt;

&lt;p&gt;If you’re already familiar with &lt;code&gt;zpool&lt;/code&gt;, this is what the finished setup looks like:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo zpool status
pool: longterm
state: ONLINE
config:

       NAME        STATE     READ WRITE CKSUM
       longterm    ONLINE       0     0     0
         mirror-0  ONLINE       0     0     0
           sdc2    ONLINE       0     0     0
           sdb1    ONLINE       0     0     0

pool: zstore
state: ONLINE
config:

       NAME        STATE     READ WRITE CKSUM
       zstore      ONLINE       0     0     0
         sdc1      ONLINE       0     0     0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h1&gt;
  
  
  ZFS terminology and concepts: mirrors, stripes, pools, vdevs and parity
&lt;/h1&gt;

&lt;p&gt;ZFS introduces a fair amount of new concepts and terminology which can take some getting used to. The first bit to understand is what ZFS actually does. ZFS usually works with pools of drives (hence the name of the &lt;code&gt;zpool&lt;/code&gt; command), and allows you to do things like mirroring or striping the drives.&lt;/p&gt;

&lt;p&gt;And what does it mean to mirror or stripe a drive, you ask? When two drives are mirrored they do everything in unison: any data written to one drive is written to the other at the same time. This way, if one of your drives were to fail, your data would still be safe and sound on the other drive. And when you install a new hard drive to replace the failed one, ZFS automatically takes care of syncing all your data back onto it, through a process it calls “resilvering.”&lt;/p&gt;

&lt;p&gt;Striping is a different beast. Mirroring drives is great for redundancy, but has the obvious drawback that you only get to use half the disk space you have available. Sometimes the situation calls for the opposite trade-off: if you bought two 2TB drives and you wanted to be able to use all 4TB of available storage, striping would let you do that. In striped setups ZFS writes “stripes” of data to each drive. This means that if you write a single file ZFS may actually store part of the file on one drive and part of the file on another.&lt;/p&gt;

&lt;p&gt;This has many advantages: it speeds up your reads and writes by making them concurrent. Since it’s storing pieces of one file on each drive, both drives can be writing at the same time, so your write speed could theoretically double. Read speed also gets a boost since you can also read from both drives at the same time. The downside to all this speed and space is that your data is less safe. Since your data is split between two drives, if one of the hard drives dies you will probably lose all your data – no one file will be complete because while your good drive might have half the file on it, the other half is gone with your dead hard disk. So in effect you’re trading close to double the speed and space for close to double the risk of losing all your data. Depending what you’re doing that might be a good choice to make, but I wouldn’t put any data I didn’t want to lose into a striped setup.&lt;/p&gt;
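
&lt;p&gt;To put a rough number on “close to double the risk”: a two-drive stripe loses data whenever &lt;em&gt;either&lt;/em&gt; drive fails. If each drive independently has, say, a 5% chance of failing in a given year (a figure chosen purely for illustration), a quick back-of-the-envelope check looks like this:&lt;/p&gt;

```shell
# P(data loss) for a 2-drive stripe = 1 - P(both drives survive)
awk 'BEGIN { p = 0.05; printf "%.4f\n", 1 - (1 - p)^2 }'
# 0.0975 -- just under double the 5% single-drive risk
```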

&lt;p&gt;There’s a third type, a compromise solution, which is to use parity. This type of setup is frequently referred to as RAIDZ (or RAIDZ2 or RAIDZ3) and sits somewhere between a full-on striped setup and a mirrored setup. In this approach, the equivalent of one disk’s worth of space is dedicated to parity data (spread across the drives, rather than sitting on a single dedicated parity disk). The parity is backed by some clever math that I don’t pretend to understand, but the take-home message is that it provides a way to reconstruct your data if a drive fails. So if you have three 2TB drives, you can stripe them with single parity: you’d have 4TB of usable storage, but if any one drive were to fail you wouldn’t lose any data (although performance would probably be pretty horrible until you replaced the failed disk). Think of it as a kind of half backup. You can tweak the ratio as well: dedicating more space to parity lets you survive more simultaneous drive failures without losing data, and that’s what the 2 and 3 in RAIDZ2 and RAIDZ3 mean (two and three drives’ worth of parity, respectively).&lt;/p&gt;
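
&lt;p&gt;The usable-capacity arithmetic is easy to sketch in shell (this ignores filesystem overhead, and the helper name is just for illustration):&lt;/p&gt;

```shell
# Usable capacity of a RAIDZ vdev: (drives - parity) * drive size
raidz_usable() {
  local drives=$1 parity=$2 size_tb=$3
  echo "$(( (drives - parity) * size_tb ))TB"
}

raidz_usable 3 1 2   # RAIDZ1, three 2TB drives -> 4TB
raidz_usable 6 2 2   # RAIDZ2, six 2TB drives   -> 8TB
```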

&lt;blockquote&gt;
&lt;p&gt;More info on the different RAID levels you can use with ZFS &lt;a href="http://www.zfsbuild.com/2010/05/26/zfs-raid-levels/"&gt;here&lt;/a&gt; and &lt;a href="https://calomel.org/zfs_raid_speed_capacity.html"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that we’ve gone over the high-level concepts of drive arrays and RAID, we can dive into the more ZFS-specific aspects. The first item to go over is the concept of a vdev, or “virtual device.” A pool is built out of one or more vdevs, and &lt;code&gt;zpool&lt;/code&gt; always stripes data across the vdevs in a pool. What makes vdevs useful is that a single vdev can itself contain more than one physical drive (or partition).&lt;/p&gt;

&lt;p&gt;While the pool level always stripes across its vdevs, an individual vdev can be a single drive, a mirror of several drives, or a RAIDZ group. This is part of what makes ZFS so flexible. For example, you could get the speed benefits of a striped setup with the redundancy benefits of a mirrored setup by creating two mirror vdevs, each of which mirrors data across two physical drives. Putting both vdevs into one pool stripes data across them, giving you the fast reads and writes that striping allows without the risk of losing your data if a single drive were to fail (this is actually a fairly popular setup and is known as RAID10 outside of ZFS-land).&lt;/p&gt;

&lt;p&gt;This can get quite complicated quite quickly, but &lt;a href="https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/"&gt;this article&lt;/a&gt; (backup link &lt;a href="https://webcache.googleusercontent.com/search?q=cache:xwgzVPNhZ9kJ:https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/+&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=us"&gt;here&lt;/a&gt; since original was down at the time of writing) does a nice job walking through the various permutations of vdevs and zpools that are possible.&lt;/p&gt;
&lt;h1&gt;
  
  
  Experimenting
&lt;/h1&gt;

&lt;p&gt;ZFS can also be used on loopback devices, which is a nice way to play with ZFS without having to invest in lots of hard drives. Let’s run through a few of the possibilities with some loopback devices so you can get a feeling for how ZFS works.&lt;/p&gt;

&lt;p&gt;When ZFS uses files on another filesystem instead of accessing devices directly it requires that the files be allocated first. We can do that with a shell &lt;code&gt;for&lt;/code&gt; loop by using the &lt;code&gt;dd&lt;/code&gt; command to copy 1GB of zeros into each file (you should make sure you have at least 4GB of available disk space before running this command):&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in &lt;/span&gt;1 2 3 4&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;dd &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/dev/zero &lt;span class="nv"&gt;of&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;zfs&lt;span class="nv"&gt;$i&lt;/span&gt; &lt;span class="nv"&gt;bs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1024M &lt;span class="nv"&gt;count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
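
&lt;p&gt;If you’d rather not wait for &lt;code&gt;dd&lt;/code&gt; to write 4GB of zeros, &lt;code&gt;truncate&lt;/code&gt; can create sparse files of the same size instantly. ZFS is generally happy to use sparse files as test vdevs; they only consume real disk space as data gets written to the pool:&lt;/p&gt;

```shell
# Same four 1GB backing files, created as sparse files in an instant
for i in 1 2 3 4; do truncate -s 1G zfs$i; done
du -h --apparent-size zfs1   # reports 1.0G even though no blocks are allocated yet
```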



&lt;p&gt;Now that we have our empty files we can put them into a ZFS pool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;zpool create testpool mirror &lt;span class="nv"&gt;$PWD&lt;/span&gt;/zfs1 &lt;span class="nv"&gt;$PWD&lt;/span&gt;/zfs2 mirror &lt;span class="nv"&gt;$PWD&lt;/span&gt;/zfs3 &lt;span class="nv"&gt;$PWD&lt;/span&gt;/zfs4
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE:&lt;/strong&gt; The &lt;code&gt;$PWD&lt;/code&gt; above is important: ZFS requires absolute paths when using files as vdevs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You should now have a new zpool mounted at &lt;code&gt;/testpool&lt;/code&gt;. Check on it with &lt;code&gt;zpool status&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ zpool status
  pool: testpool
 state: ONLINE
  scan: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        testpool            ONLINE       0     0     0
          mirror-0          ONLINE       0     0     0
            /home/nik/zfs1  ONLINE       0     0     0
            /home/nik/zfs2  ONLINE       0     0     0
          mirror-1          ONLINE       0     0     0
            /home/nik/zfs3  ONLINE       0     0     0
            /home/nik/zfs4  ONLINE       0     0     0

errors: No known data errors
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Your new ZFS filesystem is now live, and you can &lt;code&gt;cd&lt;/code&gt; to &lt;code&gt;/testpool&lt;/code&gt; and copy some files into your new ZFS filesystem.&lt;/p&gt;
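
&lt;p&gt;When you’re done experimenting, you can tear the whole thing down again. &lt;code&gt;zpool destroy&lt;/code&gt; removes the pool and everything in it (so double-check the pool name before running it), after which the backing files are ordinary files you can delete:&lt;/p&gt;

```shell
sudo zpool destroy testpool   # destroys the pool and ALL data in it
rm zfs1 zfs2 zfs3 zfs4        # remove the backing files
```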

&lt;h1&gt;
  
  
  Next steps
&lt;/h1&gt;

&lt;p&gt;We’ve gone over the basics of ZFS. In the next post we’ll move on to some of the more powerful and advanced features ZFS offers, like compression, snapshots, the &lt;code&gt;zfs send&lt;/code&gt; and &lt;code&gt;zfs receive&lt;/code&gt; commands, and the secret &lt;code&gt;.zfs&lt;/code&gt; dir.&lt;/p&gt;




</description>
      <category>ubuntu</category>
      <category>zfs</category>
      <category>linux</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
