On a NAS I tend to go with RAID6 (it survives two failed drives, even with only four disks). My HP servers (spinning rust) only have four bays, but they do have an eSATA port on the back. Nothing is hot-swappable though, not even the eSATA. It wasn't sensible to have an eSATA caddy sat on top holding a hot spare: too easily knocked.

That was until I retired CentOS in favour of Debian and had a brainwave (those get further apart with age): slap an SSD into the eSATA caddy and install Debian on that. In fact I attached the SSD to my desktop PC and created a virtual machine to install Debian onto it. I copied the config files off the first NAS, shut them both down and stuck the SSD into the NAS eSATA caddy. Aside from one glitch(*) it booted straight up. Then it was just a matter of fiddling with /etc/fstab to bring the spinning rust into play.
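Something along these lines in /etc/fstab does the trick (a minimal sketch: the UUID, device name and mount point here are placeholders, not my actual setup):

    # SSD root lives on the eSATA caddy; the four-bay array holds the data
    UUID=xxxx-xxxx   /         ext4   defaults,noatime   0  1
    /dev/md0         /srv/nas  ext4   defaults,nofail    0  2

The nofail option is worth having so the box still boots if the array hasn't assembled.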
"I've only ever used mdadm on an rpi once, as an afternoon experiment."

I've used it twice, and never in anger. Once when writing the RAID section of my NAS guide, and once to test performance on a 4B as backing store for the mass storage gadget (with a pair of USB2 thumb drives it was unusably slow). Both cases were RAID1.
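For anyone curious, that sort of afternoon experiment amounts to something like this (a sketch only: the device names are hypothetical, and on a Pi the pair would be USB sticks):

    # Build a two-device RAID1 mirror out of a pair of drives
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    sudo mkfs.ext4 /dev/md0
    sudo mount /dev/md0 /mnt
    # Record the array so it assembles at boot (Debian and friends)
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u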
RAID10 doesn't make sense to me on flash drives: they don't suffer the seek latency caused by head movement on spinning rust, which is most of what the striping buys you.
FWIW, my eventual solution was btrfs filesystems with nightly snapshots that are rsynced to a different physical device.
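The nightly job is essentially a read-only snapshot followed by an rsync; a sketch along these lines (the paths and hostname are made up for illustration):

    #!/bin/sh
    # Take a read-only btrfs snapshot named after today's date...
    SNAP=/srv/nas/.snapshots/$(date +%F)
    btrfs subvolume snapshot -r /srv/nas/data "$SNAP"
    # ...then push it to a different physical device over rsync
    rsync -a --delete "$SNAP"/ backupbox:/backup/nas/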
"I suspect most users will follow a RAID1 Google tutorial and we'll hear nothing until it falls in a heap!"

Very probably. Or worse, they'll follow mine.
(*) Funny in hindsight, but I've just seen the time and I need to be out early.
Statistics: Posted by swampdog — Sun Mar 24, 2024 12:34 am