2 RAID 1 or 1 RAID 10 for VM Server Host
-
@scottalanmiller Yup. In my case I have limited slots, so I guess I will put the host and VM guests on my 2TB RAID 1 SATA HDD, including the non-critical VMs there, and put the critical VMs on the RAID 1 SSD.
Correct?
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@scottalanmiller Yup. In my case I have limited slots, so I guess I will put the host and VM guests on my 2TB RAID 1 SATA HDD, including the non-critical VMs there, and put the critical VMs on the RAID 1 SSD.
Correct?
This is correct.
Just make sure you only use a 20-50GB partition for the host.
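For illustration, a minimal sketch of that kind of split, assuming the array shows up as /dev/md0 (device names and sizes here are examples, not a prescription):

```sh
# Hypothetical split of the 2TB RAID 1 (md0 is a placeholder)
parted --script /dev/md0 mklabel gpt
parted --script /dev/md0 mkpart host 1MiB 50GiB   # ~50GB for the hypervisor OS
parted --script /dev/md0 mkpart vms 50GiB 100%    # everything else for VM storage
```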
-
@jaredbusch Cool. Will set it up tomorrow and get back to you guys when I hit a wall.
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@scottalanmiller Yup. In my case I have limited slots, so I guess I will put the host and VM guests on my 2TB RAID 1 SATA HDD, including the non-critical VMs there, and put the critical VMs on the RAID 1 SSD.
Correct?
That makes the most sense, yes.
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@jaredbusch Cool. Will set it up tomorrow and get back to you guys when I hit a wall.
Good luck.
-
@scottalanmiller said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@jaredbusch OK, noted with thanks. I got the idea now.
This is a spot where Jared and I differ. He prefers an "old drive" for the hypervisor; I generally prefer that it share space on the main array. But both approaches work. The one thing you never do is invest in a high performance array just for the hypervisor; it just doesn't matter. Also, Jared's approach really only makes sense if you have spare old drives; if you have nothing, it doesn't make much sense. Most people have old drives around, so it can often be done easily. But not always.
This also assumes you have a slot to put that extra drive into.
-
@tim_g said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@jaredbusch OK, noted with thanks. I got the idea now.
I usually cut a small partition out of the big RAID10 (or in your case the RAID1). Like 50 GB for the hypervisor, and use the rest for DATA (virtual disks).
Yup, I do the same. OS volume is around 50GB and the rest goes to something like /var/vms or the default /var/lib/libvirt/images.
Although I do put logs on their own volume.
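Expressed with LVM, that layout might look something like this (vg0 and the volume names are just examples):

```sh
# Hypothetical LVM layout on top of the array
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 50G -n root vg0          # hypervisor OS, ~50GB
lvcreate -L 10G -n log vg0           # separate volume for /var/log
lvcreate -l 100%FREE -n vms vg0      # remainder for /var/lib/libvirt/images
```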
-
@scottalanmiller About this: is it correct to say I should use LVM for all arrays and partitions, and use XFS as the filesystem? Avoid ext4 at all costs?
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@scottalanmiller About this: is it correct to say I should use LVM for all arrays and partitions, and use XFS as the filesystem? Avoid ext4 at all costs?
Yes, that makes sense.
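Continuing the hypothetical vg0 layout from above, formatting and mounting with XFS would be along these lines:

```sh
# XFS on each logical volume (names are examples)
mkfs.xfs /dev/vg0/root
mkfs.xfs /dev/vg0/log
mkfs.xfs /dev/vg0/vms
# matching /etc/fstab entry for the VM volume:
# /dev/vg0/vms  /var/lib/libvirt/images  xfs  defaults  0 0
```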
-
Since I use RAID 1 for the SATA HDDs, can I upgrade to RAID 10 in the future without losing data? Assuming I use MD software RAID.
After reading lots of online references, it seems better to use IT mode to avoid dependency on specific hardware RAID controllers, and better to go with MD, so I have better compatibility for swapping the HDDs to another machine in case of machine failure. MD also uses very minimal CPU and memory and is generally faster for SSDs. Hardware RAID truly needs to be a high-end controller with a BBU, otherwise it is not recommended, especially for SSDs in case of power loss. Is it all true? CMIIW.
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
Since I use RAID 1 for the SATA HDDs, can I upgrade to RAID 10 in the future without losing data? Assuming I use MD software RAID.
No, there is really no software or controller that will let you move from RAID 1 to RAID 10. You will need to delete and recreate to do that.
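To make the delete-and-recreate step concrete, a rough sketch with mdadm (back up everything first; device names are examples only):

```sh
# Tear down the old RAID 1 and build a 4-disk RAID 10 in its place
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda1 /dev/sdb1
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --detail --scan >> /etc/mdadm.conf   # persist the new array definition
```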
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
After reading lots of online references, it seems better to use IT mode to avoid dependency on specific hardware RAID controllers, and better to go with MD, so I have better compatibility for swapping the HDDs to another machine in case of machine failure. MD also uses very minimal CPU and memory and is generally faster for SSDs. Hardware RAID truly needs to be a high-end controller with a BBU, otherwise it is not recommended, especially for SSDs in case of power loss. Is it all true? CMIIW.
That's correct; MD uses almost no resources and is realistically faster than any hardware RAID. Hardware RAID is not for speed; it is for features like blind swap, flash backing, and making things easier for IT departments.
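For reference, the standard ways to watch an MD array and see how little it is doing:

```sh
cat /proc/mdstat            # array state, members, resync progress
mdadm --detail /dev/md0     # full detail for one array (md0 as an example)
```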
-
@scottalanmiller How about power loss with MD? Will it be as safe and reliable as hardware RAID with a BBU?
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@scottalanmiller How about power loss with MD? Will it be as safe and reliable as hardware RAID with a BBU?
No. Software RAID depends on you to ensure absolutely solid power external to the chassis. You cannot let software RAID lose power.
-
@scottalanmiller Is there any case where the RAID is totally gone due to power loss with MD? Or just acceptable file corruption during the power loss?
What normally happens when power is lost on an MD RAID?
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@scottalanmiller Is there any case where the RAID is totally gone due to power loss with MD? Or just acceptable file corruption during the power loss?
What normally happens when power is lost on an MD RAID?
Anything could happen. Corruption could easily cause full array loss.
Often you are fine. But it is a high risk. There is no protection.
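If the box does come back after an unclean shutdown, one sanity step is asking MD to verify the mirror through sysfs; this reports inconsistencies but cannot tell you which copy was correct (md0 is a placeholder):

```sh
echo check > /sys/block/md0/md/sync_action   # start a consistency check
cat /sys/block/md0/md/mismatch_cnt           # non-zero = mismatched blocks found
```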
-
@scottalanmiller After digging further, it seems enterprise SSDs have power loss protection capacitors to finish flushing the write cache, so they should be safe. So theoretically, MD is safer than hardware RAID without a BBU.
I assume the same applies to enterprise SATA HDDs, hence the price difference compared to desktop HDDs.
CMIIW.
-
@kuyaz said in 2 RAID 1 or 1 RAID 10 for VM Server Host:
@scottalanmiller After digging further, it seems enterprise SSDs have power loss protection capacitors to finish flushing the write cache, so they should be safe. So theoretically, MD is safer than hardware RAID without a BBU.
I assume the same applies to enterprise SATA HDDs, hence the price difference compared to desktop HDDs.
CMIIW.
No, not safe. You are mixing up the drives and the RAID. Software RAID is not protected by the disk cache; that's unrelated.
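If the drive cache is the worry, one partial mitigation (at a real write-performance cost) is turning off the volatile write cache on each member disk; note this does nothing for MD's own exposure, which is the point above:

```sh
hdparm -W 0 /dev/sda          # SATA: 0 = disable the volatile write cache
smartctl -g wcache /dev/sda   # verify the current cache setting
```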
-
MD has no protection. Enterprise hardware RAID has non-volatile flash or, at worst, BBU protection. How is protection worse than no protection? MD leaves you exposed; enterprise hardware RAID does not.
-
A good thing to have is a UPS that can start an auto-shutdown process when the battery gets to a certain level.
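A sketch of that with Network UPS Tools (NUT), assuming a USB UPS and the nut packages installed; the UPS name and credentials below are placeholders:

```sh
# /etc/ups/ups.conf:
#   [ups]
#       driver = usbhid-ups
#       port = auto
# /etc/ups/upsmon.conf: upsmon runs SHUTDOWNCMD when the UPS
# reports low battery:
#   MONITOR ups@localhost 1 monuser secret master
#   SHUTDOWNCMD "/sbin/shutdown -h +0"
# Test the whole chain (this WILL shut the host down):
upsmon -c fsd
```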