RAID, short for Redundant Array of Inexpensive Disks, is a method whereby information is spread across several disks, using techniques such as disk striping (RAID Level 0) and disk mirroring (RAID Level 1) to achieve redundancy, lower latency and/or higher bandwidth for reading and/or writing, and recoverability from hard-disk crashes. More than six different types of RAID configurations have been defined.
The current RAID drivers in Linux support the following levels (these definitions were mostly taken from "The Software-RAID HOWTO" for Linux):
Linear Mode
Two or more disks are combined into one physical device. The disks are "appended" to each other, so writing linearly to the RAID device will fill up disk 0 first, then disk 1, and so on. The disks do not have to be of the same size. In fact, size doesn't matter at all here.
There is no redundancy in this level. If one disk crashes you will most probably lose all your data. You may, however, be lucky and recover some data, since the filesystem will just be missing one large consecutive chunk of data.
The read and write performance will not increase for single reads/writes. But if several users use the device, you may be lucky enough that one user effectively uses only the first disk while another user accesses files which happen to reside on the second disk. If that happens, you will see a performance gain.
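As a minimal sketch of what creating such an array might look like with mdadm (the partition names and the md device number here are placeholders, not taken from the example later in this section):

# append two partitions of possibly different sizes into one linear md device
/sbin/mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/hda3 /dev/hdb3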
RAID-0
Also called "stripe" mode. The devices should (but need not) have the same size. Operations on the array will be split on the devices; for example, a large write could be split up as 4 kB to disk 0, 4 kB to disk 1, 4 kB to disk 2, then 4 kB to disk 0 again, and so on. If one device is much larger than the other devices, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone, during writes in the high end of your RAID device. This of course hurts performance.
Like linear, there is no redundancy in this level either. Unlike linear mode, you will not be able to rescue any data if a drive fails. If you remove a drive from a RAID-0 set, the RAID device will not just miss one consecutive block of data, it will be filled with small holes all over the device. e2fsck or other filesystem recovery tools will probably not be able to recover much from such a device.
The read and write performance will increase, because reads and writes are done in parallel on the devices. This is usually the main reason for running RAID-0. If the busses to the disks are fast enough, you can get very close to N*P MB/sec where N = number of active disks in the array (not counting spare-disks) and P = performance of one disk in the array, in MB/s.
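For illustration only, a three-disk stripe with a 4 kB chunk size (matching the split described above) could be created and given a rough sequential-read check as follows; the partition names are placeholders and hdparm is assumed to be installed:

# create a 3-disk stripe; --chunk is given in kB, mirroring the 4 kB example above
/sbin/mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=4 /dev/hda3 /dev/hdb3 /dev/hdc3

# rough sequential read timing; ideally this approaches N*P MB/sec
/sbin/hdparm -t /dev/md0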
RAID-1
Also called "mirror" mode. This is the first mode which actually has redundancy. RAID-1 can be used on two or more disks with zero or more spare-disks. This mode maintains an exact mirror of the information on one disk on the other disk(s). Of Course, the disks must be of equal size. If one disk is larger than another, your RAID device will be the size of the smallest disk.
If up to N-1 disks are removed (or crashes), all data are still intact. If there are spare disks available, and if the system (eg. SCSI drivers or IDE chipset etc.) survived the crash, reconstruction of the mirror will immediately begin on one of the spare disks, after detection of the drive fault.
Write performance is often worse than on a single device, because identical copies of the data written must be sent to every disk in the array. With large RAID-1 arrays this can be a real problem, as you may saturate the PCI bus with these extra copies. This is in fact one of the very few places where hardware RAID solutions can have an edge over software solutions: if you use a hardware RAID card, the extra write copies of the data will not have to go over the PCI bus, since it is the RAID controller that generates the extra copy.
Read performance is good, especially if you have multiple readers or seek-intensive workloads. The RAID code employs a rather good read-balancing algorithm that will simply let the disk whose heads are closest to the wanted disk position perform the read operation. Since seek operations are relatively expensive on modern disks (a seek time of 6 ms equals a read of 123 kB at 20 MB/sec), picking the disk that will have the shortest seek time does actually give a noticeable performance improvement.
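As a hedged sketch (placeholder partition names again), a two-disk mirror with one hot spare, so that reconstruction can start automatically after a drive fault, could be created like this:

# two-way mirror plus one hot spare that takes over automatically on failure
/sbin/mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/hda3 /dev/hdb3 /dev/hdc3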
RAID-4
This RAID level is not used very often. It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive, and writes data to the other disks in a RAID-0 like way. Because one disk is reserved for parity information, the size of the array will be (N-1)*S, where S is the size of the smallest drive in the array. As in RAID-1, the disks should either be of equal size, or you will just have to accept that the S in the (N-1)*S formula above will be the size of the smallest drive in the array.
If one drive fails, the parity information can be used to reconstruct all data. If two drives fail, all data is lost.
The reason this level is not more frequently used is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to. Thus, the parity disk will become a bottleneck if it is not a lot faster than the other disks. However, if you just happen to have a lot of slow disks and a very fast one, this RAID level can be very useful.
RAID-5
This is perhaps the most useful RAID mode when one wishes to combine a larger number of physical disks and still maintain some redundancy. RAID-5 can be used on three or more disks, with zero or more spare-disks. The resulting RAID-5 device size will be (N-1)*S, just like RAID-4. The big difference between RAID-5 and RAID-4 is that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem in RAID-4.
If one of the disks fails, all data are still intact, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, all data are lost. RAID-5 can survive one disk failure, but not two or more.
Both read and write performance usually increase, but it can be hard to predict by how much. Reads are similar to RAID-0 reads; writes can be either rather expensive (requiring a read-in prior to the write, in order to be able to calculate the correct parity information) or similar to RAID-1 writes. The write efficiency depends heavily on the amount of memory in the machine and the usage pattern of the array. Heavily scattered writes are bound to be more expensive.
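A sketch under the same placeholder-name assumption: create a four-disk RAID-5 array with one spare, then confirm with mdadm --detail that the reported array size comes out to roughly (N-1)*S:

# 4 active disks plus 1 spare; usable size should be about (4-1)*S
/sbin/mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
    /dev/hda3 /dev/hdb3 /dev/hdc3 /dev/hdd3 /dev/hde3

# check the reported "Array Size" field against the (N-1)*S formula
/sbin/mdadm --detail /dev/md0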
The following section will show how to set up a RAID1 configuration using an NST probe. We will only show an example of how to mirror 2 disk partitions. Performing a RAID1 configuration for the root file system will not be shown.
We will demonstrate how to set up a RAID1 device: /dev/md0 with NST using IDE device components: /dev/hdc2 and /dev/hdd4. The RAID management tool: mdadm will be used throughout this example for creating and managing the RAID1 array. Figure 13.1, "Linux Software RAID1" below pictorially represents the logical RAID1 mirror and the corresponding device components.
In this example it is very important to note that original data initially exists on device: /dev/hdc2. One must take special care when creating the RAID1 mirror device: /dev/md0 by including the device components in the proper order so that the original data is preserved.
First we create the initial RAID1 device: /dev/md0.
[root@probe root]#
/sbin/mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc2 "missing"
mdadm: /dev/hdc2 appears to contain an ext2fs file system
    size=1004028K  mtime=Sat Apr 23 16:17:30 2005
mdadm: /dev/hdc2 appears to be part of a raid array:
    level=1 devices=2 ctime=Fri Apr 22 00:36:51 2005
Continue creating array? y
mdadm: array /dev/md0 started.
[root@probe root]#
Notice that the device component: /dev/hdc2 which contains the original data is added first, followed by the placeholder device entry "missing". Once the RAID1 device: /dev/md0 is created, the second device component: /dev/hdd4 will be added and take the place of the "missing" entry.
Let's take an initial look at the current state of the RAID1 array. Notice that it is in a "degraded" state.
[root@probe root]#
/sbin/mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Mon May 2 20:35:37 2005
     Raid Level : raid1
     Array Size : 1003904 (980.38 MiB 1027.100 MB)
    Device Size : 1003904 (980.38 MiB 1027.100 MB)
   Raid Devices : 2
  Total Devices : 1
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May 2 20:35:39 2005
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           UUID : f06414c0:39e569bb:a4e94613:1aa6b923
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0      22        6        0      active sync   /dev/hdc2
       1       0        0        -      removed
[root@probe root]#
We will now add the second device component: /dev/hdd4 to the RAID1 array: /dev/md0.
[root@probe root]#
/sbin/mdadm /dev/md0 -a /dev/hdd4
mdadm: hot added /dev/hdd4
When adding device components to a RAID1 mirror array, the component should be of equal or greater size.
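One way to check this before the hot-add (shown here as a sketch; sfdisk reports partition sizes in 1 kB blocks) is to compare the two components:

# /dev/hdd4 should report at least as many blocks as /dev/hdc2
/sbin/sfdisk -s /dev/hdc2
/sbin/sfdisk -s /dev/hdd4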
At this point the RAID1 array will start the synchronization process so that a replica copy of the data exists on each device component. The status of the RAID1 array is shown below during the synchronization phase. We also cat the kernel representation file of known RAID devices: /proc/mdstat.
[root@probe root]#
/sbin/mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Mon May 2 20:35:37 2005
     Raid Level : raid1
     Array Size : 1003904 (980.38 MiB 1027.100 MB)
    Device Size : 1003904 (980.38 MiB 1027.100 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May 2 20:39:51 2005
          State : clean, degraded, recovering
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 10% complete

           UUID : f06414c0:39e569bb:a4e94613:1aa6b923
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0      22        6        0      active sync   /dev/hdc2
       1       0        0        -      removed
       2      22       69        1      spare rebuilding   /dev/hdd4
[root@probe root]#
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdd4[2] hdc2[0]
      1003904 blocks [2/1] [U_]
      [===>.................]  recovery = 19.4% (196224/1003904) finish=1.8min speed=7267K/sec
unused devices: <none>
Once array synchronization is complete, the array: /dev/md0 is ready to be used.
[root@probe root]#
/sbin/mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Mon May 2 20:35:37 2005
     Raid Level : raid1
     Array Size : 1003904 (980.38 MiB 1027.100 MB)
    Device Size : 1003904 (980.38 MiB 1027.100 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon May 2 20:42:13 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : f06414c0:39e569bb:a4e94613:1aa6b923
         Events : 0.3

    Number   Major   Minor   RaidDevice State
       0      22        6        0      active sync   /dev/hdc2
       1      22       69        1      active sync   /dev/hdd4
[root@probe root]#
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdd4[1] hdc2[0]
      1003904 blocks [2/2] [UU]
unused devices: <none>
[root@probe root]#
We can now mount the array and demonstrate its use. Note that the device component: /dev/hdc2 already had an "ext3" file system on it.
[root@probe root]#
/bin/mount /dev/md0 /mnt/ext3
[root@probe root]#
/bin/mount
/dev/ram0 on / type ext2 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda on /mnt/cdrom type iso9660 (ro,nosuid,nodev)
/dev/md0 on /mnt/ext3 type ext2 (rw)
[root@probe root]#
ls -al /mnt/ext3
total 37
drwxr-xr-x    5 root     root         4096 Apr 23 16:13 .
drwxr-xr-x   30 root     root         1024 Apr 29 20:56 ..
drwxr-xr-x   51 root     root         4096 Apr 21 22:46 etc
drwx------    2 root     root        16384 Mar  8 07:55 lost+found
-rw-r--r--    1 root     root           36 Apr 23 16:13 rwh.txt
drwxr-xr-x   24 root     root         4096 Apr 23 09:59 var
-rw-r--r--    1 root     root           36 Apr 23 16:02 xxx.txt
[root@probe root]#
df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/ram0                63461     29476     33985  47% /
none                    126964         0    126964   0% /dev/shm
/dev/hda                379188    379188         0 100% /mnt/cdrom
/dev/md0                988212      4676    933336   1% /mnt/ext3
[root@probe root]#
One can populate the mdadm configuration file: /etc/mdadm.conf by obtaining information from the array with the following command.
[root@probe root]#
/sbin/mdadm --misc --detail --brief /dev/md0
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923
   devices=/dev/hdc2,/dev/hdd4
[root@probe root]#
From the above output, 2 entries will need to be added to the mdadm configuration file: /etc/mdadm.conf.
[root@probe root]#
echo "DEVICE /dev/hdc2 /dev/hdd4" >> /etc/mdadm.conf
[root@probe root]#
echo "ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923" >> /etc/mdadm.conf
Once the mdadm configuration file: /etc/mdadm.conf is configured for the array, one can use the following command to obtain information about the md superblock on the device components.
[root@probe root]#
/sbin/mdadm --examine --scan
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f06414c0:39e569bb:a4e94613:1aa6b923
   devices=/dev/hdc2,/dev/hdd4
The following commands are used to unmount, stop and restart the array.
[root@probe root]#
/bin/umount /dev/md0
[root@probe root]#
/sbin/mdadm --misc --verbose --stop /dev/md0
[root@probe root]#
/sbin/mdadm --assemble --verbose --run /dev/md0
mdadm: looking for devices for /dev/md0
mdadm: /dev/hdc2 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/hdd4 is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/hdd4 to /dev/md0 as 1
mdadm: added /dev/hdc2 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 2 drives.
[root@probe root]#
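As a final, optional check after the restart, /proc/mdstat should again report both members active, similar to the earlier listing:

# expect "[2/2] [UU]" for md0 once the array is running again
cat /proc/mdstat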