Sep 04, 2016

Based on Changing a RAID-10 into a RAID-5

It is usually assumed that the best HDD organization for a backup server is RAID5, since it provides a fairly good price/volume ratio. Unfortunately, as the disk count grows, RAID5's disadvantages become more pronounced, in particular in reliability and recovery speed. For example, if the server has a RAID5 of 6 (six) SATA disk drives (even reliable ones, such as WD Re), recovery after a single disk replacement takes about 10 hours (in my case). During recovery there is no redundancy and the load on the disks increases, which raises the probability that one of the remaining disks fails; and if all the disks are from the same series, the probability that another disk deteriorates during the recovery is even higher.
For this reason, it was decided to convert the existing disk array from RAID5 to RAID10 with the addition of two drives, which in theory should improve both server performance and reliability.

The conversion process: shrink the file system on the current array, shrink the current array, convert it to RAID0, convert the resulting RAID0 to RAID10, then grow the file system back to its full size.
WARNING: if this is done remotely, run all operations inside screen or something similar, because each operation can take several days and disconnections are highly undesirable (up to and including data loss).

For conversion, follow these steps:

  • remove all references to the disk array: unmount the partition and take any other steps to reduce load on the server. In particular, you can stop the backup service, cron, etc.
    umount /data
    /etc/init.d/cron stop
  • WARNING: before shrinking the array, you must first shrink the file system:
    • Perform a file system check
      e2fsck -f /dev/md1
    • Decrease the file system size (in my case, down to the current amount of data, so that it fits on three 2TB drives)
      resize2fs -p /dev/md1 5G

      Execution time may be several days, depending on the number of files and their fragmentation

    • Decrease array size
      mdadm --grow /dev/md1 --array-size 5G
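    Before shrinking, it is worth sanity-checking that the used data really fits the chosen target size. A minimal sketch (the helper name and the numbers below are mine, for illustration only); in practice the used figure comes from `df -k /data`:

```shell
#!/bin/sh
# fits_in_target USED_KB TARGET_KB -> prints "yes" if the data fits
# (hypothetical helper, not part of any tool; both values in KiB)
fits_in_target() {
    if [ "$1" -lt "$2" ]; then echo yes; else echo no; fi
}

# Made-up example: ~2.9 GB used against a 5 GB target
fits_in_target 2900000 5242880
```

    If this prints "no", shrinking the file system to that target would truncate data.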
  • Convert array from RAID5 to RAID0
    mdadm --grow /dev/md1 --level=0 --raid-devices=3 --backup-file=md1.backup

    After running the command, wait about 10-20 minutes, then check the execution status:

    cat /proc/mdstat

    If the status of the array is something like:

    reshape = 0% ... speed=0K/sec

    the process probably has not started and further waiting makes no sense. It is recommended to re-run the same command with --continue; the operation will then continue in the console.

    mdadm --grow /dev/md1 --level=0 --raid-devices=3 --backup-file=md1.backup --continue

    Check the operation status from another console; you should see something like:

    root@backup:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md1 : active raid5 sdg4[0] sdd4[4] sde4[7] sdf4[6] sdc4[1]
          5848233984 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUUU]
          [==>..................] reshape = 14.5% (284427780/1949411328) finish=988.7min speed=28066K/sec

    In my case, execution time was about 20 hours.
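    For long reshapes, `watch -n 60 cat /proc/mdstat` saves retyping. Below is a small sketch (the helper is mine, not part of mdadm) that pulls just the progress figure out of an mdstat-style line, demonstrated on the sample output above:

```shell
#!/bin/sh
# reshape_pct: extract the "reshape = N%" fragment from mdstat text
# (illustrative helper; reads stdin)
reshape_pct() {
    grep -o 'reshape = *[0-9.]*%' | head -n 1
}

# Sample line taken from the /proc/mdstat output shown above
line='[==>..................] reshape = 14.5% (284427780/1949411328) finish=988.7min speed=28066K/sec'
printf '%s\n' "$line" | reshape_pct
```

    In real use the input would be `cat /proc/mdstat | reshape_pct`.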

  • After that, check that the operation completed:
    mdadm -D /dev/md1

    If the array details still show:

    Raid Level : raid5

    Then run the command again:

    mdadm --grow /dev/md1 --level=0 --raid-devices=3

    The array type will change immediately:

    mdadm: level of /dev/md1 changed to raid0
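    The same check can be scripted rather than read by eye. A hedged sketch (the helper name and the sample report text are mine) that can be fed the output of `mdadm -D /dev/md1`:

```shell
#!/bin/sh
# expect_level LEVEL: read an `mdadm -D` report on stdin and print "ok"
# if it shows the expected RAID level (illustrative helper)
expect_level() {
    if grep -q "Raid Level : $1\$"; then echo ok; else echo MISMATCH; fi
}

# Minimal made-up report fragment; real use: mdadm -D /dev/md1 | expect_level raid0
printf '     Raid Level : raid0\n' | expect_level raid0
```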
  • After the previous operation you should have a RAID0 array of 3 disks. Now convert it to RAID10, specifying the free drives:
    mdadm --grow /dev/md1 --level=10 --raid-devices=6 --add /dev/sdb4 --add /dev/sdf4 --add /dev/sdg4

    In my case, execution time was about 20 hours.

  • Increase file system size:
    resize2fs /dev/md1
