This is part 3.
For part 2, written back in 2014, check here.
For part 1, written back in 2011, check here.
We have a RAID5 array made of 3 x 4 TB disks [WDC WD40EFRX].
We decided to add one more 4 TB disk.
Usable disk space before we started was 7.2 TB.
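(With RAID5 the usable capacity is (n - 1) x disk size, so 2 x 4 TB = 8 TB raw, which shows up as roughly 7.2 TiB once the decimal/binary difference and filesystem overhead are accounted for.)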
We hot-plugged the new 4 TB HDD and partitioned it. The partitioning is not shown, but it takes place at this point.
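For reference, the partitioning could have looked roughly like this: a sketch, assuming a GPT label and a single partition spanning the disk; the exact commands we used are not part of the original log.
[root@nas ~]# parted -s /dev/sde mklabel gpt
[root@nas ~]# parted -s /dev/sde mkpart primary 0% 100%
[root@nas ~]# parted -s /dev/sde set 1 raid on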
Add the disk as a spare. The prompt returns almost immediately.
[root@nas ~]# mdadm --add /dev/md0 /dev/sde1
mdadm: added /dev/sde1
...and the full details to confirm it was added properly
[root@nas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Dec 22 01:08:58 2013
     Raid Level : raid5
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Aug 7 09:56:30 2015
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nas.xxxx:0  (local to host nas.xxxx)
           UUID : e7aef406:83f7794d:017b0d81:24cf4fbf
         Events : 100950

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       33        1      active sync   /dev/sdc1
       3       8       17        2      active sync   /dev/sdb1

       4       8       65        -      spare   /dev/sde1
Grow the array. The prompt returns almost immediately.
[root@nas ~]# mdadm --grow /dev/md0 --raid-devices=4
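We ran the plain grow command above. mdadm can also be pointed at a backup file during the reshape, so the critical section can be recovered if the machine dies mid-grow; a variant with a backup file would look like the line below (the path is just an example and should live on a disk outside the array).
[root@nas ~]# mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup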
...and full details
[root@nas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Dec 22 01:08:58 2013
     Raid Level : raid5
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Aug 7 09:57:34 2015
          State : clean, reshaping
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 0% complete
  Delta Devices : 1, (3->4)

           Name : nas.xxxx:0  (local to host nas.xxxx)
           UUID : e7aef406:83f7794d:017b0d81:24cf4fbf
         Events : 100962

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       33        1      active sync   /dev/sdc1
       3       8       17        2      active sync   /dev/sdb1
       4       8       65        3      active sync   /dev/sde1
...and /proc/mdstat
[root@nas ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdb1[3] sda1[0] sdc1[2]
      7813772288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  0.0% (357892/3906886144) finish=3092.5min speed=21052K/sec
      bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>
Speed up the reconstruction...
[root@nas ~]# echo 2500 > /proc/sys/dev/raid/speed_limit_min
[root@nas ~]# echo 500000 > /proc/sys/dev/raid/speed_limit_max
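The same limits can be set through sysctl, which is handy if you want to script it; a sketch mirroring the echoes above:
[root@nas ~]# sysctl -w dev.raid.speed_limit_min=2500
[root@nas ~]# sysctl -w dev.raid.speed_limit_max=500000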
Reconstruction took 48 hours.
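While the reshape was running, progress can be followed with something as simple as this (not part of the original log):
[root@nas ~]# watch -n 60 cat /proc/mdstat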
...and full details
[root@nas ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Dec 22 01:08:58 2013
     Raid Level : raid5
     Array Size : 11720658432 (11177.69 GiB 12001.95 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Aug 9 03:33:15 2015
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : nas.xxxx:0  (local to host nas.xxxx)
           UUID : e7aef406:83f7794d:017b0d81:24cf4fbf
         Events : 115947

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       33        1      active sync   /dev/sdc1
       3       8       17        2      active sync   /dev/sdb1
       4       8       65        3      active sync   /dev/sde1
...and /proc/mdstat
[root@nas ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sde1[4] sdb1[3] sda1[0] sdc1[2]
      11720658432 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk
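If the array is pinned in /etc/mdadm.conf, the existing entry keeps working after the grow as long as it is matched by UUID, but it does no harm to compare it against a freshly scanned one (the path and workflow here are assumptions, not from the original session):
[root@nas ~]# mdadm --detail --scan
[root@nas ~]# grep ^ARRAY /etc/mdadm.conf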
FSCK prior to the resize (the filesystem is unmounted at this point). The prompt returns in 3 min 40 s.
[root@nas ~]# e2fsck -f /dev/md0 -C 0
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
MAINDATA: 2921535/488366080 files (0.2% non-contiguous), 1754817165/1953443072 blocks
Resize the filesystem. The prompt returns in 6 min.
[root@nas ~]# resize2fs /dev/md0 -p
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/md0 to 2930164608 (4k) blocks.
The filesystem on /dev/md0 is now 2930164608 (4k) blocks long.
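A quick sanity check of the new size: 2930164608 blocks of 4 KiB each should match the array size mdadm reported above (12001.95 GB).
[root@nas ~]# echo $((2930164608 * 4096))
12001954234368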
Additional FSCK after the resize. The prompt returns in 4 min 40 s.
[root@nas ~]# e2fsck -f /dev/md0 -C 0
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
MAINDATA: 2921535/732545024 files (0.2% non-contiguous), 1770138988/2930164608 blocks
Mount and check free space :)
[root@nas ~]# mount /DATA
[root@nas ~]# df -h /DATA
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         11T  6.5T  3.8T  63% /DATA
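To double-check that the filesystem really spans the grown device, the superblock block count can be compared with the raw device size; a sketch, not from the original session:
[root@nas ~]# tune2fs -l /dev/md0 | grep -E 'Block count|Block size'
[root@nas ~]# blockdev --getsize64 /dev/md0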