Dulanjana Ranasinge
Reset Existing RAID 0 & Create RAID 10 Array with mdadm on RHEL9

Introduction

The mdadm utility creates and manages storage arrays using Linux's software RAID features. It gives administrators many options for organising individual storage devices into logical storage devices with improved performance or redundancy.

This tutorial covers several RAID configurations that can be set up on a RHEL9 server:

RAID 0 - Minimum of 2 storage devices; striping for read/write performance and full combined capacity.
RAID 1 - Minimum of 2 storage devices; mirroring for redundancy between devices.
RAID 5 - Minimum of 3 storage devices; redundancy with more usable capacity (single parity).
RAID 6 - Minimum of 4 storage devices; double redundancy (dual parity).
RAID 10 - Minimum of 4 storage devices; both performance and redundancy.
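As a quick sanity check on those minimums, the usable capacity of each level can be estimated with simple arithmetic. The sketch below assumes four 5 GiB member disks (matching the examples later in this guide) and mdadm's default two-copy RAID 10:

```shell
# Rough usable-capacity estimates for four 5 GiB member disks.
disk_gib=5; n=4
echo "RAID 0:  $(( disk_gib * n )) GiB"         # striping: all capacity is usable
echo "RAID 1:  $(( disk_gib )) GiB"             # mirroring: one member's worth
echo "RAID 5:  $(( disk_gib * (n - 1) )) GiB"   # one member lost to parity
echo "RAID 6:  $(( disk_gib * (n - 2) )) GiB"   # two members lost to parity
echo "RAID 10: $(( disk_gib * n / 2 )) GiB"     # two copies of every block
```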

Prerequisites

  1. SSH access to the RHEL9 server with a root or sudo-privileged user account.
  2. Two or four storage devices, depending on the type of array.

Resetting Existing RAID Devices

If you haven't set up any arrays yet, you can skip this section for now. This guide introduces several RAID levels, and if you want to work through all of them, you will probably want to reuse the same storage devices after each section. Before testing a new RAID level, return here to reset your component storage devices.

  • Find the active arrays in the /proc/mdstat file:
cat /proc/mdstat

Output

Personalities : [raid0]
md0 : active raid0 sdc[1] sdb[0]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>
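If several arrays are active and you want to script the reset, the array names can be pulled out of /proc/mdstat. A minimal sketch (fed a sample here-document so the pipeline is visible; on a live system read /proc/mdstat directly):

```shell
# Print the name of each active md array; the here-doc stands in for
# a live /proc/mdstat.
awk '/^md[0-9]+ :/ {print $1}' <<'EOF'
Personalities : [raid0]
md0 : active raid0 sdc[1] sdb[0]
      10475520 blocks super 1.2 512k chunks

unused devices: <none>
EOF
```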
  • Unmount the array's filesystem:
umount /dev/md0
  • Stop and remove the array:
mdadm --stop /dev/md0
  • Find the storage devices that were used to build the array (sdb and sdc in this case):
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

Output

NAME               SIZE FSTYPE            TYPE MOUNTPOINT
sda                 50G                   disk
├─sda1               1G xfs               part /boot
├─sda2             600M vfat              part /boot/efi
└─sda3            48.4G LVM2_member       part
  └─rootVG-rootLV    4G xfs               lvm  /
sdb                  5G linux_raid_member disk
sdc                  5G linux_raid_member disk
  • After identifying the devices used to create the array, zero their superblocks. This removes the RAID metadata and resets them to normal devices:
mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
  • Remove or comment out any array-related entries in the following files (on RHEL9 the mdadm configuration lives at /etc/mdadm.conf):
/etc/fstab
/etc/mdadm.conf
  • Rebuild the initramfs for the current kernel so the boot process does not bring the array online:
dracut -f
  • At this stage the storage devices are ready to be rebuilt as a different array. If it is safe to do so, reboot the server and start the new build.

Create RAID 10 Array

  • Find the storage devices that will be used to build the array (sdb, sdc, sdd and sde in this case):
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT

Output

NAME               SIZE FSTYPE      TYPE MOUNTPOINT
sda                 50G             disk
├─sda1               1G xfs         part /boot
├─sda2             600M vfat        part /boot/efi
└─sda3            48.4G LVM2_member part
  └─rootVG-rootLV    4G xfs         lvm  /
sdb                  5G             disk
sdc                  5G             disk
sdd                  5G             disk
sde                  5G             disk
  • A RAID 10 array is essentially a combination of RAID 0 and RAID 1, giving both high redundancy and high performance. It supports three different layouts, called near, far and offset. Detailed information about each layout is in the man page:
man 4 md
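If a layout is given, the flag encodes both the style and the copy count, e.g. --layout=n2 (near, 2 copies), --layout=f2 (far) or --layout=o3 (offset, 3 copies); the specific values here are illustrative. Whatever the style, usable capacity is the total member capacity divided by the number of copies. A quick arithmetic check, assuming the 5237760 K members used in this guide:

```shell
# Usable size of a raid10 array = total member capacity / number of copies.
member_kib=5237760; members=4
for copies in 2 3; do
  echo "$copies copies: $(( member_kib * members / copies )) KiB usable"
done
```

The 2-copy figure (10475520 KiB) matches the RAID 0 example earlier, since two copies across four members stripe like two members; the 3-copy figure (6983680 KiB) matches the block count reported by /proc/mdstat further below.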
  • Run the create command with the device name (md0), the RAID level, and the number of devices, as follows. Note: if no layout or number of copies is specified, mdadm defaults to the near layout with two copies.
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

Output

To optimize recovery speed, it is recommended to enable write-intent bitmap, do you want to enable it now? [y/N]? y
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
  • Verify RAID status:
cat /proc/mdstat

Output

Personalities : [raid4] [raid5] [raid6] [raid10]
md0 : active raid10 sde[3] sdd[2] sdc[1] sdb[0]
      6983680 blocks super 1.2 512K chunks 3 offset-copies [4/4] [UUUU]
      [===>.................]  resync = 18.2% (1271808/6983680) finish=0.8min speed=115618K/sec
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
  • Create a filesystem on the array:
mkfs.xfs /dev/md0
  • Create a mount point to attach the new filesystem:
mkdir -p /mnt/md0
  • Mount the filesystem with the following:
mount /dev/md0 /mnt/md0
  • Check the mount point and space:
df -h

Output

Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   4.0M     0  4.0M   0% /dev
tmpfs                      3.2G     0  3.2G   0% /dev/shm
tmpfs                      1.3G  8.6M  1.3G   1% /run
/dev/mapper/rootVG-rootLV  4.0G  1.6G  2.4G  41% /
/dev/sda1                  960M  247M  714M  26% /boot
/dev/sda2                  599M   12K  599M   1% /boot/efi
tmpfs                      653M     0  653M   0% /run/user/0
/dev/md0                   6.6G   80M  6.6G   2% /mnt/md0
  • Execute the following so the array is automatically reassembled during boot (on RHEL9 the configuration file is /etc/mdadm.conf):
mdadm --detail --scan | tee -a /etc/mdadm.conf
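The appended line typically looks something like the following; the name and UUID fields here are placeholders, not values to copy:

```text
ARRAY /dev/md0 metadata=1.2 name=localhost:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

Since the initramfs assembles arrays during early boot on RHEL9, it is worth re-running dracut -f after editing this file so the new configuration is picked up.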
  • Add the mount point to /etc/fstab to mount the filesystem automatically at boot:
echo '/dev/md0 /mnt/md0 xfs defaults,nofail,discard 0 0' | tee -a /etc/fstab
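A slightly more robust variant is to mount by filesystem UUID (shown by blkid /dev/md0), since md device names are not guaranteed to stay stable across reboots. The UUID below is a placeholder, not a real value:

```text
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/md0 xfs defaults,nofail 0 0
```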
  • The RAID 10 array build is complete, and it will mount automatically after the server reboots. Reboot the server and verify the status.
