Using RAID in Linux

Introduction

RAID (redundant array of independent disks) is a way of storing the same data in different places on multiple hard disks or solid-state drives (SSDs) so that the data survives a drive failure.

✨ There are different RAID Levels:

👉 RAID Level 0 (Stripe Volume)

Combines free space from two or more disks (up to a maximum of 32) into a single volume
When data is written to the striped volume, it is distributed evenly across all disks in 64 KB blocks
Provides improved performance, but no fault tolerance
Performance improves with more disks

👉 RAID Level 1 (Mirror Volume)

Requires at least two disks (an even number)
Writes an identical copy of the data to each disk (mirroring)
Provides fault tolerance
Available disk capacity is half the total disk capacity

👉 RAID Level 5 (Stripe with Parity)

Requires a minimum of three disks
Tolerates the failure of a single disk at the cost of one disk's worth of capacity
Uses parity data for error checking and reconstruction
Available disk capacity is the total disk capacity minus one disk's capacity

👉 RAID Level 6 (Dual Parity)

Requires a minimum of four disks
Can recover from the failure of up to two disks at once, addressing a weakness of RAID 5
Uses two sets of parity data for error checking
Available disk capacity is the total disk capacity minus two disks' capacity

👉 RAID Level 1+0

Requires a minimum of four disks
Mirrors disks in pairs (RAID 1) and then stripes data across the mirrored pairs (RAID 0)
Provides excellent reliability and performance but is less space-efficient
Available disk capacity is half the total disk capacity
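
To make the capacity rules above concrete, here is a small shell sketch of the usable-capacity arithmetic (illustrative only; it assumes N identical disks of S GB each):

N=4; S=1   # example: four 1 GB disks
echo "RAID 0 : $(( N * S )) GB usable"        # striping, no redundancy
echo "RAID 1 : $(( N * S / 2 )) GB usable"    # mirrored copies
echo "RAID 5 : $(( (N - 1) * S )) GB usable"  # one disk's worth of parity
echo "RAID 6 : $(( (N - 2) * S )) GB usable"  # two disks' worth of parity
echo "RAID 10: $(( N * S / 2 )) GB usable"    # mirrored pairs, striped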


A simple hands-on test of every level

I will be using CentOS 7 installed in a VM.

We can first check whether the mdadm package is installed:

[root@Linux-1 ~]# rpm -qa | grep mdadm

As nothing is returned, we will install the package using yum:

[root@Linux-1 ~]# yum -y install mdadm

Now,

[root@Linux-1 ~]# rpm -qa | grep mdadm
mdadm-4.1-9.el7_9.x86_64

RAID 0 Configuration

👉 To test RAID 0, add two new HDDs to the VM.

*Since we already have two attached, we can start right away!

We will first create RAID system partitions on /dev/sdb and /dev/sdc:

[root@Linux-1 ~]# fdisk /dev/sdb

Command (m for help): n

Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): 
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): t
Selected partition 1

Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

# Checking with p

Command (m for help): p

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2097151     1047552   fd  Linux raid

# Same for /dev/sdc

[root@Linux-1 ~]# fdisk /dev/sdc

Command (m for help): n

Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): 
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): t
Selected partition 1

Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

# Checking with p

Command (m for help): p

Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048     2097151     1047552   fd  Linux raid

We will save and exit using ‘w’.

👉 In Linux, a device file is normally needed to control a device, but when building a RAID array there is not yet a device file for the md device, so we create one manually. The command used for this is mknod, and its basic form is mknod [device file name] [device file type] [major number] [minor number].
The device file type is b, (c, u), or p, where b means a block device, p a FIFO, and c/u a character special file.
The major and minor numbers carry no special meaning on their own; device files that play a similar role share the same major number, and the minor number distinguishes the individual devices.
By convention, md devices use major number 9.

mknod /dev/md0 b 9 0
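
We can optionally verify the node we just created; the output should show a block device with major number 9 and minor number 0 (timestamp omitted here):

[root@Linux-1 ~]# ls -l /dev/md0
brw-r--r--. 1 root root 9, 0 ... /dev/md0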

Now we will use the mdadm command to create the RAID device:

[root@Linux-1 ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Fail to create md0 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

This registers the RAID 0 information on the device file we created above.

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=Linux-1:0 UUID=d35b06b8:3ba0c441:8ba52bb1:02fa155d

Querying the array afterwards shows the device's UUID and other details based on the information we entered:

[root@Linux-1 ~]# mdadm --query --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Jan 25 12:41:04 2023
        Raid Level : raid0
        Array Size : 2091008 (2042.00 MiB 2141.19 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 12:41:04 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : Linux-1:0  (local to host Linux-1)
              UUID : d35b06b8:3ba0c441:8ba52bb1:02fa155d
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
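
The same information is summarised by the md driver in /proc/mdstat, which is handy for a quick glance (the block counts will match your own disk sizes):

[root@Linux-1 ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
      2091008 blocks super 1.2 512k chunks

unused devices: <none>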

Formatting the RAID device with the XFS file system:

[root@Linux-1 ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=522752, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Finally, we make the /raid0 directory and mount /dev/md0 on it:

[root@Linux-1 ~]# mkdir /raid0
[root@Linux-1 ~]# mount /dev/md0 /raid0
[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G  10% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md0                          2.0G   33M  2.0G   2% /raid0

We can save the md information to /etc/mdadm.conf, as the device number can change when the system reboots:

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf
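
With the array recorded in /etc/mdadm.conf, it can also be reassembled from that file if it is ever stopped manually (in normal operation the array is assembled automatically at boot):

[root@Linux-1 ~]# mdadm --assemble --scan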

Now, let’s configure auto mount,

[root@Linux-1 ~]# vi /etc/fstab

# /etc/fstab
# Created by anaconda on Tue Jan 10 10:45:44 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_linux--1-root /                       xfs     defaults        0 0
UUID=2d2f3276-dc8a-403c-bb04-53e472b9184c /boot                   xfs     defaults        0 0
/dev/mapper/centos_linux--1-swap swap                    swap    defaults        0 0

/dev/md0        /raid0          xfs     defaults        0 0

Now, we can reboot the system to check if the auto mount works.
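
If you would rather not reboot just to test the fstab entry, you can unmount the array and then mount everything listed in /etc/fstab, which checks the same thing:

[root@Linux-1 ~]# umount /raid0
[root@Linux-1 ~]# mount -a
[root@Linux-1 ~]# df -h /raid0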


RAID 1 Configuration

👉 Revert the VM snapshot to its initial state first!

We will first add three 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, and sdd with fdisk, as we did above:

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd

We will use mknod command here,

[root@Linux-1 ~]# mknod /dev/md1 b 9 1

Now we will use mdadm command.

[root@Linux-1 ~]# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? Y
mdadm: Fail to create md1 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Because RAID 1 stores duplicate data, /boot should not normally be placed on the array; in other words, boot-related data is not a good fit for a RAID 1 device (as the mdadm warning above notes, the boot loader would need to understand the md metadata).

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md1 metadata=1.2 name=Linux-1:1 UUID=91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan 25 14:10:18 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 14:10:23 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Linux-1:1  (local to host Linux-1)
              UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Formatting using the xfs file system,

[root@Linux-1 ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=261632, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Finally, we will mount the /dev/md1 partition,

[root@Linux-1 ~]# mkdir /raid1
[root@Linux-1 ~]# mount /dev/md1 /raid1

We can confirm,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md1                         1019M   33M  987M   4% /raid1

To save the md information to the .conf file:

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Configuring auto mount,

[root@Linux-1 ~]# vi /etc/fstab

# /etc/fstab
# Created by anaconda on Tue Jan 10 10:45:44 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_linux--1-root /                       xfs     defaults        0 0
UUID=2d2f3276-dc8a-403c-bb04-53e472b9184c /boot                   xfs     defaults        0 0
/dev/mapper/centos_linux--1-swap swap                    swap    defaults        0 0

/dev/md1        /raid1          xfs     defaults        0 0

Now, we can reboot the system to check if the auto mount works.

We can now test whether our RAID 1 is working. We will shut down the Linux system and remove one hard disk from the VM (one of the RAID 1 member disks).

This degrades the RAID 1 array, so we will use a new 1 GB HDD to restore it.
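
If you prefer not to detach the virtual disk at all, mdadm can mark a member as faulty to simulate the same failure (a sketch; device names follow this walkthrough):

[root@Linux-1 ~]# mdadm /dev/md1 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1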

[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan 25 14:10:18 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 14:29:00 2023
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Linux-1:1  (local to host Linux-1)
              UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
            Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1

💡 One thing to watch out for: after a physical HDD is removed, checking with fdisk -l /dev/sd* shows that the drive letters have changed. We went from three HDDs to two; they used to appear as /dev/sdb, /dev/sdc, and /dev/sdd, but now that the /dev/sdc disk has been deleted they show up as /dev/sdb and /dev/sdc rather than /dev/sdb and /dev/sdd. When the system boots it does not leave the removed disk's name unused; the disks after the removed one are simply renamed in sequence.

🚫 Because we physically removed the HDD, the member is shown as removed.
In real work, however, a disk is rarely removed outright; far more often the disk develops a fault, and the device is then reported as failed.
When a device is in the failed state, the failed disk must first be removed from the md device before the recovery work can proceed.

Steps to remove a failed device and recover:

  1. umount /dev/md1 (release the mount)
  2. mdadm /dev/md1 -r /dev/sdc1 (remove the failed device from the md device)
  3. Proceed with the recovery:
[root@Linux-1 ~]# mdadm /dev/md1 --add /dev/sdc1
mdadm: added /dev/sdc1
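
While the mirror resynchronises onto the newly added partition, the progress can be followed in /proc/mdstat (press Ctrl+C to stop watching):

[root@Linux-1 ~]# watch -n 2 cat /proc/mdstat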

Now if we check,

[root@Linux-1 ~]# mdadm --query --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Wed Jan 25 14:10:18 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Jan 25 14:48:40 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : Linux-1:1  (local to host Linux-1)
              UUID : 91c13c6f:95cda8a6:28b59cb1:4c2b4cf0
            Events : 38

    Number   Major   Minor   RaidDevice State
       2       8       33        0      active sync   /dev/sdc1
       1       8       17        1      active sync   /dev/sdb1

For the final step,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

RAID 5 Configuration

👉 Revert the VM snapshot to its initial state first!

We will first add five 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, sdd, sde, and sdf with fdisk, as before:

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
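
Repeating the same fdisk dialogue for every disk gets tedious; one shortcut is to script the answers (a sketch only — it assumes each disk is blank and should receive a single 'fd'-type partition spanning the whole device):

for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    # n = new, p = primary, 1 = partition number, two blank lines accept the
    # default sectors, t = change type, fd = Linux raid autodetect, w = write
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$d"
done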

Using the mknod command,

[root@Linux-1 ~]# mknod /dev/md5 b 9 5

Using mdadm command,

[root@Linux-1 ~]# mdadm --create /dev/md5 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Fail to create md5 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

Confirming the details,

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md5 metadata=1.2 name=Linux-1:5 UUID=00f0e81a:fd3cf4e3:29b61bf1:9fd35847

Confirming query details,

[root@Linux-1 ~]# mdadm --query --detail /dev/md5
.
.
.

Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1

Formatting this partition,

[root@Linux-1 ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=8, agsize=98048 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=784128, imaxpct=25
         =                       sunit=128    swidth=384 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Making an empty directory and mounting this partition,

[root@Linux-1 ~]# mkdir /raid5
[root@Linux-1 ~]# mount /dev/md5 /raid5

Confirming the mount,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md5                          3.0G   33M  3.0G   2% /raid5

Saving md details to the .conf file,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Making the auto mount,

[root@Linux-1 ~]# vi /etc/fstab
/dev/md5        /raid5          xfs     defaults        0 0

We can confirm after the reboot if the auto mount is working correctly.

RAID 5 Recovery

halt

👉 This operation will cause a problem in the existing RAID 5 array, and we will carry out the recovery using a new 1 GB HDD.

[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Jun 23 12:45:34 2017
     Raid Level : raid5
     Array Size : 3139584 (2.99 GiB 3.21 GB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 12:59:29 2017
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:5  (local to host RAID)
           UUID : eb497ba9:59a635f0:e4a4acc1:4876bb0c
         Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       -       0        0        3      removed


👉 Because we removed the HDD, the member is shown as removed. In real work, however, a disk is rarely removed outright; far more often the disk develops a fault, and the device is then reported as failed.

When a device is in the failed state, the failed disk must first be removed from the md device before the recovery work can proceed.

Steps to remove a failed device and recover:

  1. umount /dev/md5 (release the mount)
  2. mdadm /dev/md5 -r /dev/sdb1 (remove the failed device from the md device)
  3. Proceed with the recovery:
[root@Linux-1 ~]# mdadm /dev/md5 --add /dev/sde1
mdadm: added /dev/sde1

The replacement disk prepared above is added to the md5 device as a recovery partition.
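
A quick one-liner to follow the rebuild without reading the full detail output (the Rebuild Status line disappears once the resync finishes):

[root@Linux-1 ~]# mdadm --detail /dev/md5 | grep -iE 'state|rebuild'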

[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Jun 23 13:51:00 2017
     Raid Level : raid5
     Array Size : 3139584 (2.99 GiB 3.21 GB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 13:55:40 2017
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 95% complete

           Name : RAID:5  (local to host RAID)
           UUID : 5b78e0c0:648d86dd:9fa5f44d:fea935de
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      spare rebuilding   /dev/sde1
Once the rebuild finishes, querying the array again shows a clean state:
[root@Linux-1 ~]# mdadm --query --detail /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Fri Jun 23 12:45:34 2017
     Raid Level : raid5
     Array Size : 3139584 (2.99 GiB 3.21 GB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 13:02:36 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:5  (local to host RAID)
           UUID : eb497ba9:59a635f0:e4a4acc1:4876bb0c
         Events : 41

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       4       8       65        3      active sync   /dev/sde1
Finally, save the updated array information to /etc/mdadm.conf:
[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

RAID 6 Configuration

👉 Revert the VM snapshot to its initial state first!

We will first add six 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, sdd, sde, sdf, and sdg:

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
[root@Linux-1 ~]# fdisk /dev/sdg

Using the mknod command,

[root@Linux-1 ~]# mknod /dev/md6 b 9 6

Using mdadm command,

[root@Linux-1 ~]# mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Fail to create md6 when using /sys/module/md_mod/parameters/new_array, fallback to creation via node
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.

Confirming the details,

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md6 metadata=1.2 name=Linux-1:6 UUID=00f0e81a:fd3cf4e3:29b61bf1:9fd35847

Confirming query details,

[root@Linux-1 ~]# mdadm --query --detail /dev/md6
.
.
.

Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

Formatting this partition,

[root@Linux-1 ~]# mkfs.xfs /dev/md6
meta-data=/dev/md6               isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=523264, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Making an empty directory and mounting this partition,

[root@Linux-1 ~]# mkdir /raid6
[root@Linux-1 ~]# mount /dev/md6 /raid6

Confirming the mount,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md6                          2.0G   33M  2.0G   2% /raid6

Saving md details to the .conf file,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Making the auto mount,

[root@Linux-1 ~]# vi /etc/fstab
/dev/md6        /raid6          xfs     defaults        0 0

We can confirm after the reboot if the auto mount is working correctly.

RAID 6 Recovery

halt

👉 This operation will cause a problem in the existing RAID 6 array, and we will carry out the recovery using two new 1 GB HDDs.

[root@Linux-1 ~]# mdadm --query --detail /dev/md6
/dev/md6:
        Version : 1.2
  Creation Time : Fri Jun 23 14:02:07 2017
     Raid Level : raid6
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 14:09:38 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:6  (local to host RAID)
           UUID : d7dfa1f7:3cfbb984:2c40ff2f:d38404f5
         Events : 21

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       -       0        0        2      removed
       -       0        0        3      removed

👉 Because we removed the HDDs, the members are shown as removed.
In real work, however, disks are rarely removed outright; far more often the disk develops a fault, and the device is then reported as failed. When a device is in the failed state, the failed disk must first be removed from the md device before the recovery work can proceed.

Steps to remove a failed device and recover:

  1. umount /dev/md6 (release the mount)
  2. mdadm /dev/md6 -r /dev/sdb1 (remove the failed device from the md device)
  3. Proceed with the recovery:
[root@Linux-1 ~]# mdadm /dev/md6 --add /dev/sdd1
mdadm: added /dev/sdd1

[root@Linux-1 ~]# mdadm /dev/md6 --add /dev/sde1
mdadm: added /dev/sde1

The recovery then proceeds using the spare HDDs:

[root@Linux-1 ~]# mdadm --query --detail /dev/md6
/dev/md6:
        Version : 1.2
  Creation Time : Fri Jun 23 14:02:07 2017
     Raid Level : raid6
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 14:14:09 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : RAID:6  (local to host RAID)
           UUID : d7dfa1f7:3cfbb984:2c40ff2f:d38404f5
         Events : 58

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1
       5       8       65        3      active sync   /dev/sde1

After confirming the state once the recovery is complete, save the configuration:

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

RAID 1+0 Configuration

👉 Revert the VM snapshot to its initial state first!

We will first add six 1 GB hard disks.

Then we will create RAID system partitions on sdb, sdc, sdd, sde, sdf, and sdg:

[root@Linux-1 ~]# fdisk /dev/sdb
[root@Linux-1 ~]# fdisk /dev/sdc
[root@Linux-1 ~]# fdisk /dev/sdd
[root@Linux-1 ~]# fdisk /dev/sde
[root@Linux-1 ~]# fdisk /dev/sdf
[root@Linux-1 ~]# fdisk /dev/sdg

Using the mknod command,

[root@Linux-1 ~]# mknod /dev/md10 b 9 10

Using mdadm command,

[root@Linux-1 ~]# mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

Confirming the details,

[root@Linux-1 ~]# mdadm --detail --scan
ARRAY /dev/md10 metadata=1.2 name=RAID:10 UUID=3d4080a1:2669cb55:1411317c:dcdf8fbd

Confirming query details,

[root@Linux-1 ~]# mdadm --query --detail /dev/md10
.
.
.

Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

Formatting this partition,

[root@Linux-1 ~]# mkfs.xfs /dev/md10
meta-data=/dev/md10              isize=512    agcount=8, agsize=65408 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=523264, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Making an empty directory and mounting this partition,

[root@Linux-1 ~]# mkdir /raid10
[root@Linux-1 ~]# mount /dev/md10 /raid10

Confirming the mount,

[root@Linux-1 ~]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
devtmpfs                          475M     0  475M   0% /dev
tmpfs                             487M     0  487M   0% /dev/shm
tmpfs                             487M  7.6M  479M   2% /run
tmpfs                             487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos_linux--1-root   17G  1.6G   16G   9% /
/dev/sda1                        1014M  168M  847M  17% /boot
tmpfs                              98M     0   98M   0% /run/user/0
/dev/md10                          2.0G   33M  2.0G   2% /raid10

Saving md details to the .conf file,

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Making the auto mount,

[root@Linux-1 ~]# vi /etc/fstab
/dev/md10        /raid10          xfs     defaults        0 0

We can confirm after the reboot if the auto mount is working correctly.

RAID 1+0 Recovery

halt

👉 With RAID 1+0, forcibly deleting a virtual disk makes the md device disappear, so instead we force members into the failed state and then carry out the recovery.
Be careful, though: never fail both HDDs of the same mirror set at once; fail only one disk per set (the check below shows which disk belongs to which set).

( Failing /dev/sdb and /dev/sde works. )

💡 With the recoverable levels RAID 1, RAID 5, and RAID 6 the system can still boot after an HDD is removed, but with RAID 0 and RAID 1+0 it cannot.
To see this, build RAID 1, 5, and 6, mount them, and create data on each device; even after deleting one or two HDDs and rebooting, that data can be recovered and is displayed normally.
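
Before failing anything, it is worth confirming which mirror set each member belongs to, so that both halves of the same set are never failed together (based on the member list shown when the array was created):

[root@Linux-1 ~]# mdadm --detail /dev/md10 | grep 'set-'
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1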

[root@Linux-1 ~]# umount /dev/md10
[root@Linux-1 ~]# mdadm /dev/md10 -f /dev/sdb1 /dev/sde1
Checking the array now shows the two members marked as faulty:
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Fri Jun 23 16:03:46 2017
     Raid Level : raid10
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 16:08:31 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : RAID:10  (local to host RAID)
           UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
         Events : 19

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       -       0        0        3      removed

       0       8       17        -      faulty   /dev/sdb1
       3       8       65        -      faulty   /dev/sde1 
We hot-remove the faulty members from the array:
[root@Linux-1 ~]# mdadm /dev/md10 -r /dev/sdb1 /dev/sde1 
mdadm: hot removed /dev/sdb1 from /dev/md10
mdadm: hot removed /dev/sde1 from /dev/md10
Then we reboot the system:
[root@Linux-1 ~]# reboot
After the reboot, the array comes back degraded, with the failed members gone:
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Fri Jun 23 16:03:46 2017
     Raid Level : raid10
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 16:13:57 2017
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : RAID:10  (local to host RAID)
           UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
         Events : 21

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       -       0        0        3      removed
Now we add the two spare disks:
[root@Linux-1 ~]# mdadm /dev/md10 --add /dev/sdf1 /dev/sdg1
mdadm: added /dev/sdf1
mdadm: added /dev/sdg1
Checking again shows the array recovering onto the new members:
[root@Linux-1 ~]# mdadm --query --detail /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Fri Jun 23 16:03:46 2017
     Raid Level : raid10
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Fri Jun 23 16:29:22 2017
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 2

         Layout : near=2
     Chunk Size : 512K

           Name : RAID:10  (local to host RAID)
           UUID : 0eb90845:5d0cbec1:69c9a33d:0371708c
         Events : 48

    Number   Major   Minor   RaidDevice State
       5       8       97        0      active sync set-A   /dev/sdg1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       4       8       81        3      active sync set-B   /dev/sdf1

After confirming the state once the recovery is complete, save the configuration:

[root@Linux-1 ~]# mdadm --detail --scan > /etc/mdadm.conf

Deleting the RAID array

👉 After this hands-on, I needed to delete the RAID setup that we configured (a quick verification follows the steps below).

  1. Delete the mount information

    [root@Linux-1 ~]# umount /dev/md10
    [root@Linux-1 ~]# vi /etc/fstab
    
  2. Stop the md device

    [root@Linux-1 ~]# mdadm -S /dev/md10
    mdadm: stopped /dev/md10
    
  3. Zero the md superblocks on the partitions the device used

    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sdb1
    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sdc1
    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sdd1
    [root@Linux-1 ~]# mdadm --zero-superblock /dev/sde1
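
After these three steps, a quick check confirms that the md device is gone and that no md superblock remains on the former member partitions (adjust device names to your layout; output trimmed):

[root@Linux-1 ~]# cat /proc/mdstat
unused devices: <none>
[root@Linux-1 ~]# mdadm --examine /dev/sdb1
mdadm: No md superblock detected on /dev/sdb1.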
    

In this post, I discussed what RAID is and how to set up the various levels. I also covered how to remove a failed device and rebuild an array onto a spare disk.
