Welcome to FreeSoftwareServers Confluence Wiki

The setup for this example:

I have about 1.2 TB of data on one of my disks.

I have 4x5TB disks.
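For reference, RAID5 keeps one disk's worth of parity, so usable capacity is (number of disks - 1) x disk size. A quick sketch of the math for the disks above:

```shell
# RAID5 usable capacity: (number of disks - 1) * disk size
disks=4
size_tb=5
usable=$(( (disks - 1) * size_tb ))
echo "${usable} TB usable"   # -> 15 TB usable
```

So the finished 4-disk array ends up with roughly 15 TB of usable space, not 20 TB.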

When creating a RAID5, according to this doc you can only have 1 missing device; I originally misunderstood this to mean you must have at least 2 clean disks to create the array. In this example we will follow what I actually did, but what I should have done is create a 4-device RAID5 array with 3 disks present and 1 missing, instead of a 3-device array that I later had to grow.

What I did was create a RAID5 with 2 disks and 1 missing, AKA degraded, and then later added another disk. (Each addition took ~2-3 days.)

First, create the array with clean, formatted disks and copy the data over; then format the disk you just copied FROM and add it to the array. Adding the missing disk to a RAID5 array with 1.2 TB of existing data took me ~3 days. You can theoretically use the array during that time, but it lengthens the rebuild and probably increases the chance of something going wrong. Either way, I did not use the array during the rebuild.

Fire up GParted from a LiveISO. I like Parted Magic, but use whatever works for you. GParted has its own LiveISO, but I found it ran from the HD and therefore locked the HD, whereas Parted Magic runs in RAM, allowing all disks to be manipulated.

Format the disk to ext4 and then ADD THE RAID FLAG. (This is important!)
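If you would rather stay on the command line, parted can do the same partitioning and set the RAID flag; a sketch, assuming the target disk is /dev/sdb (an example device; adjust to yours, and note this destroys its contents):

```shell
# WARNING: destructive; /dev/sdb is an assumed example device
sudo parted /dev/sdb --script mklabel gpt
sudo parted /dev/sdb --script mkpart primary ext4 0% 100%
sudo parted /dev/sdb --script set 1 raid on   # the RAID flag
sudo mkfs.ext4 /dev/sdb1
```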


Install MDADM

sudo apt-get install -y mdadm

Find the UUIDs of the disks that are freshly formatted and ready to be used in the array:

sudo blkid 

If you have a disk with data on it, use df -h to figure out which /dev/sd* is the disk you don't want to add.

df -h 

Create Degraded Array

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing 

Check that the device node was created:

ls /dev/md* 

Look for

/dev/md0 or /dev/md127 

Note: Not sure why, but I have read that it is common for mdadm to use md127 instead of md0.
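Whether the kernel picked md0 or md127, you can read the actual name from /proc/mdstat rather than guessing; a sketch using a hypothetical mdstat line (the device name is the first field):

```shell
# Hypothetical array line as it appears in /proc/mdstat;
# on a live system you could use: grep ^md /proc/mdstat
line='md127 : active raid5 sdc1[1] sdb1[0]'
dev=${line%% *}          # strip everything after the first space
echo "/dev/$dev"         # -> /dev/md127
```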

Now check the array details via:

sudo mdadm --detail /dev/md0

The state should be degraded.

Format the array to ext4:

sudo mkfs.ext4 /dev/md0 

Create a mount point and mount the array via fstab:

sudo mkdir -p /mnt/mdadm
sudo nano /etc/fstab && sudo mount -a 
/dev/md0    /mnt/mdadm   ext4    defaults,nobootwait,nofail     0    2 

Now copy over your data, and then we will add the missing device via:

sudo mdadm --manage /dev/md0 --add /dev/sdd1 

Now mdadm needs to "rebuild" the array; you can watch progress via:

watch cat /proc/mdstat 

This might take days.
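During the rebuild, /proc/mdstat shows a recovery line with a percentage; a sketch of pulling that number out, using a hypothetical recovery line (the figures are made up):

```shell
# Hypothetical recovery line from /proc/mdstat during a rebuild
line='[=>...................]  recovery =  7.9% (123456/1570816) finish=120.3min speed=10288K/sec'
pct=$(echo "$line" | grep -o '[0-9.]*%')
echo "$pct"              # -> 7.9%
```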
Once it completes, check again:

sudo mdadm --detail /dev/md0 

The state should be clean.

To GROW or ADD a disk to an already created mdadm array, use the following (after formatting the new disk to ext4 and adding the RAID flag!):

sudo mdadm --add /dev/md0 /dev/sda1
sudo mdadm --grow --raid-devices=4 --backup-file=/path/outside/array/grow_md0.bak /dev/md0 

This might take days.
Once it completes, check again:

sudo mdadm --detail /dev/md0 

The state should be clean.
BUT you will notice if you use

df -h 

that the size of the filesystem has not changed! You must grow the filesystem to fill the array via:

sudo resize2fs /dev/md0 

Create the mdadm.conf entry, which is used to assemble md0 at reboot, via:

sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak && sudo sh -c "mdadm --examine --scan >> /etc/mdadm/mdadm.conf" 
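It's worth sanity-checking that the scan actually appended ARRAY lines (one per array). A sketch against a hypothetical scan result (the hostname and UUID here are made up):

```shell
# Hypothetical output of `mdadm --examine --scan`; UUID is made up
scan='ARRAY /dev/md0 metadata=1.2 name=myhost:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd'
count=$(printf '%s\n' "$scan" | grep -c '^ARRAY')
echo "$count array line(s)"   # -> 1 array line(s)
# On a live system: grep -c '^ARRAY' /etc/mdadm/mdadm.conf
```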

Now reboot and check the array via the following:

df -h      #If md0 was mounted it should show here with correct size parameters

sudo blkid #If md0 was not mounted, but correctly presented itself to the OS it should appear under blkid 

I had the issue where md0 was created as md127. Originally I just used md127 in fstab and it worked. I eventually fixed it, but I am not sure exactly how; I was replacing mdadm.conf and experimenting a lot, and now it's fixed :P. As rebuilds take days, I will not be re-testing this problem any time soon.
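For what it's worth, the usual explanation for md127 is that the initramfs assembles the array at early boot before /etc/mdadm/mdadm.conf has been read. The commonly suggested fix (untested here; Debian/Ubuntu) is to regenerate the initramfs after updating mdadm.conf:

```shell
# Rebuild the initramfs so the updated mdadm.conf is available at
# early boot (commonly suggested fix for the md127 naming issue)
sudo update-initramfs -u
```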
