Chapter 13. Linux Software RAID

Table of Contents

RAID1

RAID, short for Redundant Array of Inexpensive Disks, is a method whereby information is spread across several disks, using techniques such as disk striping (RAID Level 0) and disk mirroring (RAID Level 1) to achieve redundancy, lower latency and/or higher bandwidth for reading and/or writing, and recoverability from hard-disk crashes. More than six different RAID configurations (levels) have been defined.
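
The capacity trade-off between striping and mirroring can be illustrated with a small shell sketch. The helper function raid_capacity below is hypothetical (not part of NST or mdadm); it simply encodes the arithmetic: RAID0 sums the disks, RAID1 stores one copy of the data on every disk.

```shell
#!/bin/sh
# Usable capacity (in MB) of an array of N equal-sized disks.
# RAID0 stripes data across all disks: capacity = N * size.
# RAID1 mirrors the same data onto every disk: capacity = size.
raid_capacity() {
    level=$1; ndisks=$2; disk_mb=$3
    case "$level" in
        0) echo $((ndisks * disk_mb)) ;;
        1) echo "$disk_mb" ;;
        *) echo "unsupported level" >&2; return 1 ;;
    esac
}

raid_capacity 0 2 1000   # two 1000 MB disks, striped
raid_capacity 1 2 1000   # two 1000 MB disks, mirrored
```

So a two-disk RAID1 mirror, as built in this chapter, always "costs" half of the raw capacity in exchange for redundancy.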

The current RAID drivers in Linux support several RAID levels; definitions of each level can be found in "The Software-RAID HOWTO" for Linux.

The following section will show how to set up a RAID1 configuration using an NST probe. We will only show an example of how to mirror two disk partitions. Performing a RAID1 configuration for the root file system will not be shown.

RAID1

We will demonstrate how to set up a RAID1 device: /dev/md0 with NST using IDE device components: /dev/hdc2 and /dev/hdd4. The RAID management tool: mdadm will be used throughout this example for creating and managing the RAID1 array. Figure 13.1, “Linux Software RAID1” below pictorially represents the logical RAID1 mirror and its corresponding device components.

Figure 13.1. Linux Software RAID1


Note:

In this example it is very important to note that original data initially exists on device: /dev/hdc2. One must take special care when creating the RAID1 mirror device: /dev/md0 by including device components in the proper order so that original data is preserved.

First we create the initial RAID1 device: /dev/md0.

[root@probe root]# /sbin/mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc2 "missing"
mdadm: /dev/hdc2 appears to contain an ext2fs file system
    size=1004028K  mtime=Sat Apr 23 16:17:30 2005
mdadm: /dev/hdc2 appears to be part of a raid array:
    level=1 devices=2 ctime=Fri Apr 22 00:36:51 2005
Continue creating array? y
mdadm: array /dev/md0 started.
[root@probe root]#
      

Notice that the device component: /dev/hdc2, which contains the original data, is added first, followed by the placeholder device entry "missing". Once the RAID1 device: /dev/md0 is created, the second device component: /dev/hdd4 will be added and take the place of the "missing" entry.
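Because getting the argument order wrong here would destroy the original data, it can be worth generating the create command from a small helper. The function below is a hypothetical sketch (the name build_raid1_create is our own invention): it always places the data-bearing partition first and the "missing" placeholder second, producing exactly the command used above.

```shell
#!/bin/sh
# Hypothetical helper: print the mdadm command that creates a degraded
# two-component RAID1 array. The data-bearing partition ($2) goes first,
# followed by the "missing" placeholder, so its contents are preserved.
build_raid1_create() {
    md_dev=$1; data_dev=$2
    echo "/sbin/mdadm --create $md_dev --level=1 --raid-devices=2 $data_dev missing"
}

build_raid1_create /dev/md0 /dev/hdc2
```

The printed command could then be reviewed and executed by hand, as shown in the session above.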

Let's take an initial look at the current state of the RAID1 array. Notice that it is in a "degraded" state.

[root@probe root]# /sbin/mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Mon May  2 20:35:37 2005
     Raid Level : raid1
     Array Size : 1003904 (980.38 MiB 1027.100 MB)
    Device Size : 1003904 (980.38 MiB 1027.100 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May  2 20:35:39 2005
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : f06414c0:39e569bb:a4e94613:1aa6b923
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0      22        6        0      active sync   /dev/hdc2
       1       0        0        -      removed
[root@probe root]#
      

We will now add the second device component: /dev/hdd4 to the RAID1 array: /dev/md0.

[root@probe root]# /sbin/mdadm /dev/md0 -a /dev/hdd4
mdadm: hot added /dev/hdd4
      

Note:

When adding device components to a RAID1 mirror array, the component should be of equal or greater size.
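A simple guard for this size requirement can be scripted. The sketch below is hypothetical (check_component_size is our own name, and in practice the sizes would come from a tool such as /sbin/blockdev or from mdadm --detail); it refuses a component smaller than the array's device size, which mdadm itself would also reject.

```shell
#!/bin/sh
# Hypothetical guard: a component added to an existing RAID1 mirror must
# be at least as large as the array's "Device Size" (here in KB blocks).
check_component_size() {
    array_kb=$1; component_kb=$2
    if [ "$component_kb" -lt "$array_kb" ]; then
        echo "too small"
        return 1
    fi
    echo "ok"
}

check_component_size 1003904 1048576                      # larger partition: accepted
check_component_size 1003904 512000 || echo "add refused" # smaller partition: rejected
```

Any extra space on a larger component is simply unused by the mirror; the array size stays that of the smallest member.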

At this point the RAID1 array will start the synchronization process so that a replica of the data exists on each device component. The status of the RAID1 array during the synchronization phase is shown below. We also cat the kernel's file of known RAID devices: /proc/mdstat.

[root@probe root]# /sbin/mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Mon May  2 20:35:37 2005
     Raid Level : raid1
     Array Size : 1003904 (980.38 MiB 1027.100 MB)
    Device Size : 1003904 (980.38 MiB 1027.100 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May  2 20:39:51 2005
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 10% complete

           UUID : f06414c0:39e569bb:a4e94613:1aa6b923
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0      22        6        0      active sync   /dev/hdc2
       1       0        0        -      removed

       2      22       69        1      spare rebuilding   /dev/hdd4
[root@probe root]#
[root@probe root]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdd4[2] hdc2[0]
      1003904 blocks [2/1] [U_]
      [===>.................]  recovery = 19.4% (196224/1003904) finish=1.8min speed=7267K/sec
unused devices: <none>
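
A script that needs to wait for synchronization to finish can watch for the "recovery =" line in /proc/mdstat. The following sketch is hypothetical (resync_progress is our own helper name); it extracts the percentage from mdstat-style text fed on standard input, so a real script could loop, sleeping until the function prints nothing.

```shell
#!/bin/sh
# Hypothetical sketch: extract the rebuild percentage from /proc/mdstat
# style output. Prints e.g. "19.4" while a recovery is in progress and
# nothing once the "recovery =" line has disappeared.
resync_progress() {
    sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p'
}

sample='      [===>.................]  recovery = 19.4% (196224/1003904) finish=1.8min speed=7267K/sec'
echo "$sample" | resync_progress
```

In a real loop one would run something like `resync_progress < /proc/mdstat` every few seconds until it produces no output.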
      

Once array synchronization is complete, the array: /dev/md0 is ready to be used.

[root@probe root]# /sbin/mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Mon May  2 20:35:37 2005
     Raid Level : raid1
     Array Size : 1003904 (980.38 MiB 1027.100 MB)
    Device Size : 1003904 (980.38 MiB 1027.100 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May  2 20:42:13 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f06414c0:39e569bb:a4e94613:1aa6b923
         Events : 0.3

    Number   Major   Minor   RaidDevice State
       0      22        6        0      active sync   /dev/hdc2
       1      22       69        1      active sync   /dev/hdd4
[root@probe root]#
[root@probe root]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdd4[1] hdc2[0]
      1003904 blocks [2/2] [UU]

unused devices: <none>
[root@probe root]#
      

We can now mount the array and demonstrate its use. Note that the device component: /dev/hdc2 already had an "ext2" file system on it (as reported by mdadm during array creation).

[root@probe root]# /bin/mount /dev/md0 /mnt/ext3
[root@probe root]# /bin/mount
/dev/ram0 on / type ext2 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda on /mnt/cdrom type iso9660 (ro,nosuid,nodev)
/dev/md0 on /mnt/ext3 type ext2 (rw)
[root@probe root]#
[root@probe root]# ls -al /mnt/ext3
total 37
drwxr-xr-x   5 root root  4096 Apr 23 16:13 .
drwxr-xr-x  30 root root  1024 Apr 29 20:56 ..
drwxr-xr-x  51 root root  4096 Apr 21 22:46 etc
drwx------   2 root root 16384 Mar  8 07:55 lost+found
-rw-r--r--   1 root root    36 Apr 23 16:13 rwh.txt
drwxr-xr-x  24 root root  4096 Apr 23 09:59 var
-rw-r--r--   1 root root    36 Apr 23 16:02 xxx.txt
[root@probe root]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/ram0                63461     29476     33985  47% /
none                    126964         0    126964   0% /dev/shm
/dev/hda                379188    379188         0 100% /mnt/cdrom
/dev/md0                988212      4676    933336   1% /mnt/ext3
[root@probe root]#
      

One can populate the mdadm configuration file: /etc/mdadm.conf with information obtained from the array using the following command.

[root@probe root]# /sbin/mdadm --misc --detail --brief /dev/md0
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923
   devices=/dev/hdc2,/dev/hdd4
[root@probe root]#
      

From the above output, two entries need to be added to the mdadm configuration file: /etc/mdadm.conf.

[root@probe root]# echo "DEVICE /dev/hdc2 /dev/hdd4" >> /etc/mdadm.conf
[root@probe root]# echo "ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923" >> /etc/mdadm.conf
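
The echo commands above append unconditionally, so re-running the setup would duplicate entries. A slightly safer, hypothetical variant (add_array_entry is our own helper name) keys on the array UUID and only appends the ARRAY line when it is not already present:

```shell
#!/bin/sh
# Hypothetical sketch: append an ARRAY line to an mdadm-style config file
# only if no entry with the same UUID already exists, so the setup can be
# re-run without duplicating entries.
add_array_entry() {
    conf=$1; line=$2
    uuid=${line##*UUID=}
    grep -q "$uuid" "$conf" 2>/dev/null || echo "$line" >> "$conf"
}

conf=$(mktemp)
add_array_entry "$conf" "ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923"
add_array_entry "$conf" "ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923"
grep -c 'UUID=' "$conf"   # the ARRAY line appears only once
rm -f "$conf"
```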
      

Once the mdadm configuration file: /etc/mdadm.conf is configured for the array, one can use the following command to obtain information about the md superblock on the device components.

[root@probe root]# /sbin/mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923
   devices=/dev/hdc2,/dev/hdd4
      

The following commands are used to unmount, stop, and restart the array.

[root@probe root]# /bin/umount /dev/md0
[root@probe root]# /sbin/mdadm --misc --verbose --stop /dev/md0
[root@probe root]# /sbin/mdadm --assemble --verbose --run /dev/md0
mdadm: looking for devices for /dev/md0
mdadm: /dev/hdc2 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/hdd4 is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/hdd4 to /dev/md0 as 1
mdadm: added /dev/hdc2 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 2 drives.
[root@probe root]#
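
Stopping an array that is still mounted will fail (mdadm reports the device as busy), so a wrapper script might check first. The sketch below is hypothetical (safe_to_stop is our own name); to keep it testable without real devices, the mount table is passed in as text in /proc/mounts format rather than read from the system.

```shell
#!/bin/sh
# Hypothetical guard: refuse to stop an md array while it is mounted.
# $2 is mount-table text in /proc/mounts format; a real script would
# instead read /proc/mounts directly.
safe_to_stop() {
    dev=$1; mounts=$2
    if printf '%s\n' "$mounts" | grep -q "^$dev "; then
        echo "still mounted"
        return 1
    fi
    echo "safe"
}

safe_to_stop /dev/md0 "/dev/md0 /mnt/ext3 ext2 rw 0 0" || echo "run umount first"
safe_to_stop /dev/md0 "/dev/ram0 / ext2 rw 0 0"
```

Only after the guard reports "safe" would the script go on to run `mdadm --misc --stop /dev/md0` as shown above.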