I previously wrote about using different interfaces to improve redundancy in a RAID setup. One point I did not clear up in that article is why I chose to use striped mirrors. The choice was motivated purely by my desire to scale flexibly: I weighed the increased capacity of parity redundancy against the simplicity of a RAID 10-like setup. Additionally, I happened to have pairs of disks of comparable size, not the 3-tuples needed by parity RAID levels.
I should point out that, for simplicity's sake, the previous article listed each pair's capacity as the size of its smaller drive. For instance, instead of two 320 GB drives I actually have a 320 and a 400.
I recently came into two external USB drives, one of 1 TB and one of 250 GB. I decided to take this opportunity to grow my storage pool.
Before upgrading, my storage pool was as follows:
Storage Pool:
  320 GB Mirror
    320 GB USB
    400 GB ATA
  640 GB Mirror
    640 GB SATA
    640 GB USB
  120 GB Mirror
    120 GB SATA
    160 GB ATA
I had no intention of taking these newly received disks out of their USB cases, so other disks would have to be moved to maintain the interface redundancy while increasing the capacity of the pool.
First I drew up the optimal setup for including the new drives. Rather than show it to you straight away and risk losing you in all the changes, let me take you through the steps I took to perform the upgrade while maintaining access and full redundancy.
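Every move below boils down to one of three ZFS operations. Here they are as a quick reference, sketched against a hypothetical pool named tank (the pool and device names in the snippets throughout are illustrative, not my actual ones):

  zpool attach tank existing-disk new-disk   # add new-disk to the mirror containing existing-disk
  zpool detach tank disk                     # remove disk from its mirror
  zpool replace tank old-disk new-disk       # swap old-disk for new-disk, resilvering onto the new one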
I started by adding the 1 TB drive to my largest mirror, thereby minimizing its wasted capacity.
Storage Pool:
  320 GB Mirror
    320 GB USB
    400 GB ATA
  640 GB Mirror
    640 GB SATA
    640 GB USB
    1 TB USB
  120 GB Mirror
    120 GB SATA
    160 GB ATA
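In zpool terms this is a single attach. A sketch, assuming the 640 GB SATA drive shows up as ad1 and the new 1 TB USB drive as da2:

  # Attach the 1 TB drive alongside an existing member of the
  # 640 GB mirror, making it a three-way mirror:
  zpool attach tank ad1 da2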
After this setup had resilvered (resilvering is the term ZFS uses for copying data onto new drives), I moved the 640 GB USB drive to the second largest mirror, attaching it via SATA.
Storage Pool:
  320 GB Mirror
    320 GB USB
    400 GB ATA
    640 GB SATA
  640 GB Mirror
    640 GB SATA
    1 TB USB
  120 GB Mirror
    120 GB SATA
    160 GB ATA
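This move is a detach followed, once the drive is re-cabled, by an attach. Sketched with the same hypothetical names (da1 being the 640 GB drive's USB incarnation, ad4 its new SATA name, and ad0 the 400 GB ATA disk):

  # Drop the 640 GB drive out of the three-way mirror:
  zpool detach tank da1
  # ...slide it into the case and cable it to a SATA port...
  # then attach it to the 320 GB mirror under its new name:
  zpool attach tank ad0 ad4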
You should note that at no time was redundancy compromised. The only downtime was the time it took me to slide the 640 GB drive into a vacant 3.5” slot and plug in the SATA and power cables. Resilvering does not prevent access to the pool while it completes.
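Resilver progress, and the health of the pool in general, can be checked at any point:

  zpool status tank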
Storage Pool:
  400 GB Mirror
    400 GB ATA
    640 GB SATA
  640 GB Mirror
    640 GB SATA
    1 TB USB
  120 GB Mirror
    120 GB SATA
    160 GB ATA
    320 GB USB
This move brings the first capacity increase: with the 320 GB drive relocated to the smallest mirror, the smallest drive in the medium mirror is now the 400 GB one, raising that mirror's capacity from 320 to 400 GB.
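Again a detach and an attach, assuming the 320 GB USB drive is da0 and the 120 GB SATA drive is ad2:

  # Remove the 320 GB drive from what is now the 400 GB mirror:
  zpool detach tank da0
  # ...and add it as a third member of the smallest mirror:
  zpool attach tank ad2 da0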
Storage Pool:
  400 GB Mirror
    400 GB ATA
    640 GB SATA
  640 GB Mirror
    640 GB SATA
    1 TB USB
  160 GB Mirror
    160 GB ATA
    320 GB USB
This step simply adds 40 GB of capacity by cutting back on some superfluous redundancy: detaching the 120 GB SATA drive from the three-way mirror raises that mirror's floor from 120 to 160 GB.
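Continuing with the hypothetical names, this is a single detach:

  # Drop the 120 GB disk; the mirror's capacity rises to 160 GB:
  zpool detach tank ad2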
Storage Pool:
  400 GB Mirror
    400 GB ATA
    640 GB SATA
  640 GB Mirror
    640 GB SATA
    1 TB USB
  250 GB Mirror
    250 GB USB
    320 GB USB
In this step we replace the 160 GB drive with the aforementioned 250 GB one, increasing this mirror's capacity by an additional 90 GB. You will note, however, that this breaks the interface redundancy, as the mirror now relies solely on USB. We fix this in the final step, where we briefly shut down the server (for the second time) to install the 320 GB disk, freed from its USB case, into a free ATA 3.5” slot.
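The swap itself is a one-liner, assuming the 160 GB ATA drive is ad3 and the 250 GB USB drive is da3; ZFS resilvers onto the new disk and drops the old one when done. The final move should need no command at all: ZFS identifies disks by the labels it writes on them, so the 320 GB drive ought to be recognized on its new bus after the reboot.

  # Replace the 160 GB drive with the 250 GB drive in place:
  zpool replace tank ad3 da3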
Storage Pool:
  400 GB Mirror
    400 GB ATA
    640 GB SATA
  640 GB Mirror
    640 GB SATA
    1 TB USB
  250 GB Mirror
    250 GB USB
    320 GB ATA
You will note that in these steps the two smallest drives were removed, and two of the remaining drives were moved, to maximize the capacity increase while maintaining interface redundancy.
The total capacity increase is only 80 + 40 + 90 = 210 GB, bringing the new capacity to 1.26 TB. It should, however, be noted that the new setup is ready to receive another 1 TB drive, which, through another cascade of disk movements, could yield as much as an additional 694 GB, for a total capacity of 1.94 TB.