author    | Neil Brown <neilb@suse.de> | 2002-04-05 22:00:28 +0000
committer | Neil Brown <neilb@suse.de> | 2002-04-05 22:00:28 +0000
commit    | c913b90e6dba02613fe24d3e7f0b3f251a01bc50 (patch)
tree      | 6cae4520d50adc36d528410baedc3ad6a9993069 /md.man
parent    | e0d1903663dac9307a37646c26abf7991b0a9593 (diff)
download  | mdadm-c913b90e6dba02613fe24d3e7f0b3f251a01bc50.tar.gz
          | mdadm-c913b90e6dba02613fe24d3e7f0b3f251a01bc50.tar.xz
          | mdadm-c913b90e6dba02613fe24d3e7f0b3f251a01bc50.zip
mdadm-0.8.1
Diffstat (limited to 'md.man')
-rw-r--r-- | md.man | 240
1 file changed, 0 insertions, 240 deletions
@@ -1,240 +0,0 @@
MD(4)                                                                    MD(4)

NAME
       md - Multiple Device driver aka Linux Software RAID

SYNOPSIS
       /dev/mdn
       /dev/md/n

DESCRIPTION
       The md driver provides virtual devices that are created from one or
       more independent underlying devices.  This array of devices often
       contains redundancy, and hence the acronym RAID, which stands for a
       Redundant Array of Independent Devices.

       md supports RAID levels 1 (mirroring), 4 (striped array with parity
       device) and 5 (striped array with distributed parity information).
       If a single underlying device fails while using one of these levels,
       the array will continue to function.

       md also supports a number of pseudo-RAID (non-redundant)
       configurations, including RAID0 (striped array), LINEAR (catenated
       array) and MULTIPATH (a set of different interfaces to the same
       device).

   MD SUPER BLOCK
       With the exception of Legacy Arrays described below, each device that
       is incorporated into an MD array has a super block written towards
       the end of the device.  This superblock records information about the
       structure and state of the array so that the array can be reliably
       re-assembled after a shutdown.

       The superblock is 4K long and is written into a 64K-aligned block
       that starts at least 64K and less than 128K from the end of the
       device (i.e. to get the address of the superblock, round the size of
       the device down to a multiple of 64K and then subtract 64K).  The
       available size of each device is the amount of space before the
       super block, so between 64K and 128K is lost when a device is
       incorporated into an MD array.

       The superblock contains, among other things:

       LEVEL  The manner in which the devices are arranged into the array
              (linear, raid0, raid1, raid4, raid5, multipath).

       UUID   a 128 bit Universally Unique Identifier that identifies the
              array that this device is part of.

   LEGACY ARRAYS
       Early versions of the md driver only supported Linear and Raid0
       configurations and so did not use an MD superblock (as there is no
       state that needs to be recorded).  While it is strongly recommended
       that all newly created arrays utilise a superblock to help ensure
       that they are assembled properly, the md driver still supports legacy
       linear and raid0 md arrays that do not have a superblock.

   LINEAR
       A linear array simply catenates the available space on each drive
       together to form one large virtual drive.

       One advantage of this arrangement over the more common RAID0
       arrangement is that the array may be reconfigured at a later time
       with an extra drive, and so the array is made bigger without
       disturbing the data that is on the array.  However this cannot be
       done on a live array.

   RAID0
       A RAID0 array (which has zero redundancy) is also known as a striped
       array.  A RAID0 array is configured at creation with a Chunk Size
       which must be a multiple of 4 kibibytes.

       The RAID0 driver places the first chunk of the array on the first
       device, the second chunk on the second device, and so on until all
       drives have been assigned one chunk.  This collection of chunks forms
       a stripe.  Further chunks are gathered into stripes in the same way
       and are assigned to the remaining space on the drives.

       If the devices in the array are not all the same size, then once the
       smallest device has been exhausted, the RAID0 driver starts
       collecting chunks into smaller stripes that only span the drives
       which still have remaining space.
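       The superblock placement rule and the RAID0 chunk layout described
       above are both simple arithmetic.  The following Python sketch is
       illustrative only (the function names and example sizes are not
       taken from the driver): it computes where the superblock would start
       on a device and which device a given logical chunk would land on,
       assuming equally sized devices and the round-robin layout described
       above.

       def superblock_offset(device_size):
           """Superblock start address: round the device size down to a
           multiple of 64K, then subtract 64K, so the superblock lies at
           least 64K and less than 128K from the end of the device."""
           K64 = 64 * 1024
           return (device_size // K64) * K64 - K64

       def raid0_chunk_location(chunk_index, n_devices, chunk_size):
           """Map a logical chunk number to (device number, byte offset on
           that device) for the round-robin striping described above."""
           stripe = chunk_index // n_devices   # which stripe the chunk falls in
           device = chunk_index % n_devices    # chunks rotate across the devices
           return device, stripe * chunk_size

       # Example: a 1 GiB device, and chunk 5 of a 3-device RAID0 with
       # 64K chunks (lands on device 2, offset 64K).
       print(superblock_offset(1024 * 1024 * 1024))   # 1073676288
       print(raid0_chunk_location(5, 3, 64 * 1024))   # (2, 65536)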
   RAID1
       A RAID1 array is also known as a mirrored set (though mirrors tend to
       provide reflected images, which RAID1 does not) or a plex.

       Once initialised, each device in a RAID1 array contains exactly the
       same data.  Changes are written to all devices in parallel.  Data is
       read from any one device.  The driver attempts to distribute read
       requests across all devices to maximise performance.

       All devices in a RAID1 array should be the same size.  If they are
       not, then only the amount of space available on the smallest device
       is used.  Any extra space on other devices is wasted.

   RAID4
       A RAID4 array is like a RAID0 array with an extra device for storing
       parity.  Unlike RAID0, RAID4 also requires that all stripes span all
       drives, so extra space on devices that are larger than the smallest
       is wasted.

       When any block in a RAID4 array is modified, the parity block for
       that stripe (i.e. the block in the parity device at the same device
       offset as the stripe) is also modified so that the parity block
       always contains the "parity" for the whole stripe.  That is, its
       contents are equivalent to the result of performing an exclusive-or
       operation between all the data blocks in the stripe.

       This allows the array to continue to function if one device fails.
       The data that was on that device can be calculated as needed from
       the parity block and the other data blocks.

   RAID5
       RAID5 is very similar to RAID4.  The difference is that the parity
       blocks for each stripe, instead of being on a single device, are
       distributed across all devices.  This allows more parallelism when
       writing, as two different block updates will quite possibly affect
       parity blocks on different devices, so there is less contention.

       This also allows more parallelism when reading, as read requests are
       distributed over all the devices in the array instead of all but one.
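       The parity relationship described for RAID4 and RAID5 is a plain
       exclusive-or across the data blocks of a stripe, which is what
       allows a missing block to be rebuilt.  A minimal Python sketch of
       that relationship (the block contents are made-up example data, not
       anything read from a real array):

       def xor_blocks(blocks):
           """Exclusive-or a list of equal-sized blocks byte by byte."""
           result = bytearray(len(blocks[0]))
           for block in blocks:
               for i, b in enumerate(block):
                   result[i] ^= b
           return bytes(result)

       # A stripe of three data blocks plus the parity block computed
       # from them.
       data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
       parity = xor_blocks(data)

       # If one data block is lost, XOR-ing the parity block with the
       # surviving data blocks reconstructs it.
       lost = data[1]
       rebuilt = xor_blocks([parity, data[0], data[2]])
       assert rebuilt == lost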
   MULTIPATH
       MULTIPATH is not really a RAID at all, as there is only one real
       device in a MULTIPATH md array.  However there are multiple access
       points (paths) to this device, and one of these paths might fail, so
       there are some similarities.

       A MULTIPATH array is composed of a number of different devices,
       often fibre channel interfaces, that all refer to the same real
       device.  If one of these interfaces fails (e.g. due to cable
       problems), the multipath driver will attempt to redirect requests to
       another interface.

   UNCLEAN SHUTDOWN
       When changes are made to a RAID1, RAID4, or RAID5 array there is a
       possibility of inconsistency for short periods of time, as each
       update requires at least two blocks to be written to different
       devices, and these writes probably won't happen at exactly the same
       time.  If a system with one of these arrays is shut down in the
       middle of a write operation (e.g. due to power failure), the array
       may not be consistent.

       To handle this situation, the md driver marks an array as "dirty"
       before writing any data to it, and marks it as "clean" when the
       array is being disabled, e.g. at shutdown.  If the md driver finds
       an array to be dirty at startup, it proceeds to correct any possible
       inconsistency.  For RAID1, this involves copying the contents of the
       first drive onto all other drives.  For RAID4 or RAID5 this involves
       recalculating the parity for each stripe and making sure that the
       parity block has the correct data.

       If a RAID4 or RAID5 array is degraded (missing one drive) when it is
       restarted after an unclean shutdown, it cannot recalculate parity,
       and so it is possible that data might be undetectably corrupted.
       The md driver currently does not alert the operator to this
       condition.  It should probably fail to start an array in this
       condition without manual intervention.

   RECOVERY
       If the md driver detects any error on a device in a RAID1, RAID4, or
       RAID5 array, it immediately disables that device (marking it as
       faulty) and continues operation on the remaining devices.  If there
       is a spare drive, the driver will start recreating the data that was
       on the failed drive on one of the spare drives, either by copying
       from a working drive in a RAID1 configuration, or by doing
       calculations with the parity block on RAID4 and RAID5.

       While this recovery process is happening, the md driver will monitor
       accesses to the array and will slow down the rate of recovery if
       other activity is happening, so that normal access to the array will
       not be unduly affected.  When no other activity is happening, the
       recovery process proceeds at full speed.  The actual speed targets
       for the two different situations can be controlled by the
       speed_limit_min and speed_limit_max control files mentioned below.

FILES
       /proc/mdstat
              Contains information about the status of currently running
              arrays.

       /proc/sys/dev/raid/speed_limit_min
              A readable and writable file that reflects the current goal
              rebuild speed for times when non-rebuild activity is current
              on an array.  The speed is in Kibibytes per second, and is a
              per-device rate, not a per-array rate (which means that an
              array with more discs will shuffle more data for a given
              speed).  The default is 100.

       /proc/sys/dev/raid/speed_limit_max
              A readable and writable file that reflects the current goal
              rebuild speed for times when no non-rebuild activity is
              current on an array.  The default is 100,000.

SEE ALSO
       mdadm(8), mkraid(8).

                                                                         MD(4)
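As a usage illustration of the control files listed under FILES above, the
Python sketch below reads the current array status and rebuild speed
targets, then lowers the maximum rebuild rate.  It assumes a Linux system
with the md driver loaded, the write requires root privileges, and the
value written is an arbitrary example rather than a recommended setting.

    # Illustrative only: needs a Linux host with the md driver loaded;
    # writing the speed limit requires root privileges.
    def read_file(path):
        with open(path) as f:
            return f.read().strip()

    print(read_file("/proc/mdstat"))                        # current array status
    print(read_file("/proc/sys/dev/raid/speed_limit_min"))  # per-device KiB/s floor
    print(read_file("/proc/sys/dev/raid/speed_limit_max"))  # per-device KiB/s ceiling

    # Example: cap recovery at 10000 KiB/s per device while the system
    # is busy (the number is arbitrary, chosen for illustration).
    with open("/proc/sys/dev/raid/speed_limit_max", "w") as f:
        f.write("10000\n")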