I had a software RAID5 array made from 3 disks of 450 GB each. As I began to run out of disk space, I decided to add another disk to the array.
There are many guides on how to do this; here is one of them:
http://scotgate.org/2006/07/03/growing-a-raid5-array-mdadm/
but none of them say how long the reshape and resize steps actually take.
So, after installing the disk, we prepared the partitions. /dev/sdd3 was the partition that needed to be added to the array. For additional safety we unmounted the filesystem. Then we added the new partition to the array with the following command:
[~]# mdadm --add /dev/md0 /dev/sdd3
The command returns to the prompt immediately, but the drive is added only as a spare:
[~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Oct 18 22:24:33 2010
     Raid Level : raid5
     Array Size : 891398400 (850.10 GiB 912.79 GB)
  Used Dev Size : 445699200 (425.05 GiB 456.40 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Mar  8 19:53:02 2011
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : e49b6850:a534ff16:8c2fd8a1:d51fee00
         Events : 0.6400

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3

       3       8       51        -      spare   /dev/sdd3
Note the last row: spare /dev/sdd3.
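The spare also shows up in /proc/mdstat, where spare devices are marked with an (S) suffix after the device name, so a quick way to spot it is:

[~]# grep -A 1 '^md0' /proc/mdstat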
In order to turn the new disk into a "normal" member of the array, we need to reshape the array to use 4 disks:
[~]# mdadm --grow /dev/md0 --raid-devices=4
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed.
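mdadm can also be told to keep that critical-section backup in a file, which makes it possible to resume the reshape if the machine crashes while it is in progress. A hedged variant of the same command (the backup path here is arbitrary and should not live on the array being grown):

[~]# mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup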
The command returns the prompt in about 20-30 seconds. The array is now reshaping, and we can mount the filesystem again. To monitor what happens, we can use the following command:
[~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdd2[2] sdc2[1] sdb2[0]
      72565376 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]
      891398400 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  1.5% (6841472/445699200) finish=769.6min speed=9500K/sec

unused devices: <none>
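To follow the progress without re-running the command by hand, the output can be refreshed periodically, for example every 60 seconds:

[~]# watch -n 60 cat /proc/mdstat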
To speed up the reshape, we raised the md sync speed limits:
[~]# echo 500000 > /proc/sys/dev/raid/speed_limit_max
[~]# echo 2500 > /proc/sys/dev/raid/speed_limit_min
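The same knobs are reachable through sysctl as dev.raid.speed_limit_min and dev.raid.speed_limit_max; note that values set this way are not persistent and revert to the kernel defaults after a reboot. To inspect the current values:

[~]# sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max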
The reshape took 9 hours and 30 minutes. Here is how the array looked during the reshape (mdadm --detail) and after it finished (/proc/mdstat):
[root@amadeus ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.91
  Creation Time : Mon Oct 18 22:24:33 2010
     Raid Level : raid5
     Array Size : 891398400 (850.10 GiB 912.79 GB)
  Used Dev Size : 445699200 (425.05 GiB 456.40 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Mar  8 20:07:39 2011
          State : active, recovering
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

 Reshape Status : 1% complete
  Delta Devices : 1, (3->4)

           UUID : e49b6850:a534ff16:8c2fd8a1:d51fee00
         Events : 0.11103

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3

[~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdd2[2] sdc2[1] sdb2[0]
      72565376 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]
      1337097600 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
Now we can see that the RAID5 array uses 4 disks. However, the filesystem is still smaller (850.10 GiB), because we never expanded it. To do that, we first need to unmount the filesystem again and then check it for errors. This check is required by resize2fs. It can be skipped, but it is better to perform it. We use the -C 0 option to get a progress bar; otherwise we would never know whether the filesystem is still being checked or the program is sitting idle or has crashed.
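The unmount itself is just the usual command; the /mnt/data mount point below is only an assumption for illustration, not taken from this setup:

[~]# umount /mnt/data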
[~]# fsck.ext3 /dev/md0 -C 0
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 4116267/111427584 files (1.5% non-contiguous), 200844066/222849600 blocks
The check took 35 minutes.
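If resize2fs complains that the filesystem was not checked recently, the check may need to be forced with -f; a hedged variant of the same command:

[~]# fsck.ext3 -f -C 0 /dev/md0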
Then we run the resize itself. We use the -p option to show a progress bar, for the same reason.
[~]# resize2fs /dev/md0 -p
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/md0 to 334274400 (4k) blocks.
Begin pass 1 (max = 3401)
Extending the inode table     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/md0 is now 334274400 blocks long.
The resize took 5 minutes.
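The new size can be verified while the filesystem is still unmounted, for example by reading the superblock with tune2fs (the grep pattern is just an illustration):

[~]# tune2fs -l /dev/md0 | grep -i 'block count'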
Now we could mount the partition, but we decided to do one more check:
[~]# fsck.ext3 /dev/md0 -C 0
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 4116267/111427584 files (1.5% non-contiguous), 200844066/222849600 blocks
This check took 40 minutes.
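Mounting is then the usual step; again, /mnt/data is only an assumed mount point for illustration:

[~]# mount /dev/md0 /mnt/data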
After that we mounted the partition and voila! Lots of free space. If we query mdadm again, the array size is now reported as 1275.16 GiB:
[~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Oct 18 22:24:33 2010
     Raid Level : raid5
     Array Size : 1337097600 (1275.16 GiB 1369.19 GB)
  Used Dev Size : 445699200 (425.05 GiB 456.40 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Mar 10 20:37:27 2011
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : e49b6850:a534ff16:8c2fd8a1:d51fee00
         Events : 0.302764

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3

[~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdd2[2] sdc2[1] sdb2[0]
      72565376 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid5 sdd3[3] sdc3[2] sdb3[1] sda3[0]
      1337097600 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
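The extra space can also be confirmed from the filesystem side once the array is mounted; same assumed mount point as above:

[~]# df -h /mnt/data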
We did a similar resize in February 2014, this time on a NAS server with a much lighter dual-core AMD CPU, but with a newer 3.12 kernel and an ext4 filesystem:
Growing software RAID5 array on Linux and resizing ext4 filesystem
We did a further resize in August 2015, again on the NAS server with the lighter dual-core AMD CPU, kernel 3.12, and an ext4 filesystem:
Further growing software RAID5 array on Linux and resizing ext4 filesystem