Mdadm derives its name from the "md" (multiple devices) device nodes it manages.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sde
To add RAID device md0 to /etc/mdadm.conf so that it is recognized the next time you boot:
mdadm -Es | grep md0 >> /etc/mdadm.conf
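For reference, `mdadm -Es` (examine and scan) prints one ARRAY line per detected array, so the entry appended above has roughly this shape (the metadata version and UUID below are placeholders, not real values; the exact fields vary by mdadm version):

```
ARRAY /dev/md0 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

At boot, mdadm uses this entry to identify the member devices by UUID and reassemble the array.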
View the status of a multi-disk array:
mdadm --detail /dev/md0
View the status of all multi-disk arrays:
cat /proc/mdstat
mdmpd - daemon to monitor MD multipath devices
Enterprise storage requirements often include more than one way to talk to a single disk drive, so that if the system fails to reach the drive through one controller, it can automatically switch to another and keep going. This is called multipath disk access. The Linux kernel implements multipath disk access via the software RAID stack known as the md (Multiple Devices) driver.

The kernel portion of the md multipath driver only handles routing I/O requests to the proper device and handling failures on the active path. It does not try to find out whether a path that previously failed might be working again; that is what this daemon does.

Upon startup, the daemon forks and places itself in the background. It then reads the current state of the md raid arrays, saves that state, and waits for the kernel to tell it that something interesting has happened. It then wakes up, checks whether any paths on a multipath device have failed, and if any have, it polls each failed path once every 15 seconds until it starts working again. Once the path is working again, the daemon adds it back into the multipath md device it was originally part of as a new spare path.
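The recovery cycle described above can be sketched in shell. This is a hedged illustration, not mdmpd itself (which is a C daemon): the array name, device names, mdstat line, and the read-probe via dd are illustrative assumptions; only the `(F)` failure marker in /proc/mdstat, the 15-second interval, and `mdadm --add` come from the text.

```shell
#!/bin/sh
# Sketch of mdmpd's recovery logic (illustrative; mdmpd is a C daemon).

# Extract device names that an mdstat device line marks as failed with
# "(F)", e.g. "md0 : active raid1 sde[1] sdc[0](F)" -> "sdc".
failed_devices() {
  echo "$1" | tr ' ' '\n' | sed -n 's/^\([a-z0-9]*\)\[[0-9]*\](F)$/\1/p'
}

# Poll a failed path every 15 seconds; once it is readable again,
# re-add it to the array, where it comes back as a new spare path.
repoll_path() {
  array=$1 dev=$2
  while ! dd if="/dev/$dev" of=/dev/null bs=512 count=1 2>/dev/null; do
    sleep 15
  done
  mdadm "/dev/$array" --add "/dev/$dev"
}
```

For example, `failed_devices "md0 : active raid1 sde[1] sdc[0](F)"` prints `sdc`, which a monitoring loop could then hand to `repoll_path md0 sdc`.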
If the /proc filesystem is mounted, /proc/mdstat lists all active md devices along with information about them. Mdmpd relies on it both to find arrays whose paths it should monitor and to receive notification of interesting events.