This is part 2.
For part 1, written back in 2011, see here.
We have a RAID5 array made of 2 x 4 TB disks [WDC WD40EFRX].
We decided to add one more 4 TB disk.
Disk space before we started was 3.6 TB:
[root@nas ~]# df -h /DATA/
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        3.6T  2.7T  734G  79% /DATA
...and /proc/mdstat
[root@nas ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[2] sda1[0]
      3906886144 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/2] [UU]
      bitmap: 0/30 pages [0KB], 65536KB chunk
...and full details
[root@nas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Dec 22 01:08:58 2013
     Raid Level : raid5
     Array Size : 3906886144 (3725.90 GiB 4000.65 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Feb 13 00:19:40 2014
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nas.xxxx:0  (local to host nas.xxxx)
           UUID : e7aef406:83f7794d:017b0d81:24cf4fbf
         Events : 87839

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1
We hot-plugged the new 4 TB HDD and repartitioned it. The commands are not shown in the captured session, but that step takes place here.
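For reference, a minimal sketch of that partitioning step, assuming the new disk appeared as /dev/sdc and gets a single RAID partition spanning the whole disk (the device name and layout are assumptions, not taken from the original session):

# create a GPT label and one full-size partition flagged for RAID (assumed layout)
parted -s /dev/sdc mklabel gpt
parted -s /dev/sdc mkpart primary 0% 100%
parted -s /dev/sdc set 1 raid on
# make the kernel re-read the new partition table
partprobe /dev/sdc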
Add the disk as a spare. The prompt returns almost immediately.
[root@nas ~]# mdadm --add /dev/md0 /dev/sdc1
mdadm: added /dev/sdc1
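Had /dev/sdc1 been part of another md array in a previous life, mdadm could pick up its stale metadata; clearing it before the --add avoids that. It was not needed here and is shown only as a precaution:

# wipe a leftover md superblock from a previously used partition (precaution only)
mdadm --zero-superblock /dev/sdc1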
...and full details to confirm it has been added properly
[root@nas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Dec 22 01:08:58 2013
     Raid Level : raid5
     Array Size : 3906886144 (3725.90 GiB 4000.65 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 2
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Feb 13 18:37:19 2014
          State : active
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nas.xxxx:0  (local to host nas.xxxx)
           UUID : e7aef406:83f7794d:017b0d81:24cf4fbf
         Events : 87840

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1

       3       8       33        -      spare   /dev/sdc1
Grow the array. The prompt returns almost immediately.
[root@nas ~]# mdadm --grow /dev/md0 --raid-devices=3
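The reshape started without complaint here. Some mdadm versions refuse to reshape a RAID5 without a backup file for the critical section; in that case the same command takes a --backup-file pointing at a scratch file on a device outside the array (the path below is just an example):

# only if mdadm asks for it; the backup file must not live on /dev/md0 itself
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.backup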
...and full details
[root@nas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Dec 22 01:08:58 2013
     Raid Level : raid5
     Array Size : 3906886144 (3725.90 GiB 4000.65 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Feb 13 18:38:43 2014
          State : active, reshaping
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 0% complete
  Delta Devices : 1, (2->3)

           Name : nas.xxxx:0  (local to host nas.xxxx)
           UUID : e7aef406:83f7794d:017b0d81:24cf4fbf
         Events : 87854

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1
       3       8       33        2      active sync   /dev/sdc1
...and /proc/mdstat
[root@nas ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3] sdb1[2] sda1[0]
      3906886144 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      [>....................] reshape = 0.0% (1225740/3906886144) finish=2252.1min speed=28903K/sec
      bitmap: 0/30 pages [0KB], 65536KB chunk
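A simple way to keep an eye on the reshape without retyping the command:

# refresh the reshape progress every 60 seconds
watch -n 60 cat /proc/mdstat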
Speed up the reconstruction...
[root@nas ~]# echo 2500 > /proc/sys/dev/raid/speed_limit_min
[root@nas ~]# echo 500000 > /proc/sys/dev/raid/speed_limit_max
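The same limits can also be set via sysctl, and on RAID5/6 a larger stripe cache usually helps reshape throughput as well (the value below is only an example; the cache is counted in pages per device and costs RAM):

# sysctl form of the two echo commands above
sysctl -w dev.raid.speed_limit_min=2500
sysctl -w dev.raid.speed_limit_max=500000
# enlarge md0's stripe cache (default is 256 pages)
echo 8192 > /sys/block/md0/md/stripe_cache_size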
Reconstruction took 38 hours.
...and full details
[root@nas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Dec 22 01:08:58 2013
     Raid Level : raid5
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sat Feb 15 08:42:53 2014
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nas.xxxx:0  (local to host nas.xxxx)
           UUID : e7aef406:83f7794d:017b0d81:24cf4fbf
         Events : 100288

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1
       3       8       33        2      active sync   /dev/sdc1
...and /proc/mdstat
[root@nas ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3] sdb1[2] sda1[0]
      7813772288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk
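The offline e2fsck and resize below need the filesystem unmounted. That step is not in the captured session, but with the mount point from above it is simply:

# take the filesystem offline before checking and resizing it
umount /DATA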
FSCK prior to the resize. The prompt returns in 3 min.
[root@nas ~]# e2fsck -f /dev/md0 -C 0
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
MAINDATA: 2840913/244187136 files (0.1% non-contiguous), 737148357/976721536 blocks
Resize. The prompt returns in 5 min
[root@nas ~]# resize2fs /dev/md0 -p
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/md0 to 1953443072 (4k) blocks.
The filesystem on /dev/md0 is now 1953443072 blocks long.
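As a quick sanity check, the new filesystem size matches the grown array exactly: 7813772288 KiB of array space split into 4 KiB blocks gives the same 1953443072 that resize2fs reports.

# 7813772288 KiB * 1024 bytes/KiB / 4096 bytes per block
echo $(( 7813772288 * 1024 / 4096 ))    # prints 1953443072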
Additional FSCK after the resize. The prompt returns in 5 min
[root@nas ~]# e2fsck -f /dev/md0 -C 0
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
MAINDATA: 2840913/488366080 files (0.1% non-contiguous), 752470180/1953443072 blocks
Mount and check free space :)
[root@nas ~]# mount /DATA/
[root@nas ~]# df -h /DATA
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        7.2T  2.7T  4.2T  40% /DATA
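If the array is assembled from a config file at boot (typically /etc/mdadm.conf or /etc/mdadm/mdadm.conf, depending on the distribution), it is worth checking that the recorded ARRAY line still matches the grown array:

# print the current ARRAY line and compare it with the one in the config file
mdadm --detail --scan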
We did a further resize of the same NAS in August 2015:
Further growing software RAID5 array on Linux and resizing ext4 filesystem