Hyper-V Server - RAID Best Practices
-
@Harry-Lui said in HyperV Server - Raid Best Practices:
OS on RAID 1 and DATA on RAID 5 is so outdated.
Think about it: why would you want the Hyper-V OS to be fast, but the data to be slow? You reboot once every 6 months, so you want that boot-up to be super fast, but the data you use every second to be super slow?
If you have to choose, you'd want the data to be fast, NOT slow.
I reboot monthly. Waiting 6 months is silly.
-
I'd say Option 2 is the way to go.
For the rest of you, do you typically store both the VHDs and the VM configuration files on the same data partition?
Before I'm FFS'd, I know I've asked this before, but it was on Telegram. I figured we'd have the discussion chronicled on the forum.
-
@EddieJennings said in HyperV Server - Raid Best Practices:
I'd say Option 2 is the way to go.
For the rest of you, do you typically store both the VHDs and the VM configuration files on the same data partition?
Before I'm FFS'd, I know I've asked this before, but it was in on Telegram. I figured we'd have the discussion chronicled on the forum.
Here are the setups that I've seen:
Setup 1:
Virtual Hard Disks: V:\Hyper-V\Virtual Hard Disks\
Virtual Machines: V:\Hyper-V\
Setup 2:
Virtual Hard Disks: V:\Hyper-V\
Virtual Machines: V:\Hyper-V\
And then sometimes when creating your VM, the option to
Store the virtual machine in a different location
is selected too. In that case the folder has the same name as the VM and all of its contents are stored there.
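For anyone scripting this, a minimal PowerShell sketch (hypothetical VM name, paths, and sizes) of what that checkbox amounts to: New-VM's -Path parameter creates a folder named after the VM under the location you give it, and the configuration files live there.

```powershell
# Hypothetical example: config and VHDX both land under V:\Hyper-V\TestVM,
# mirroring the "Store the virtual machine in a different location" option.
New-VM -Name "TestVM" `
       -Path "V:\Hyper-V" `
       -Generation 2 `
       -MemoryStartupBytes 2GB `
       -NewVHDPath "V:\Hyper-V\TestVM\Virtual Hard Disks\TestVM.vhdx" `
       -NewVHDSizeBytes 60GB
```
-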
@EddieJennings said in HyperV Server - Raid Best Practices:
I'd say Option 2 is the way to go.
For the rest of you, do you typically store both the VHDs and the VM configuration files on the same data partition?
Before I'm FFS'd, I know I've asked this before, but it was in on Telegram. I figured we'd have the discussion chronicled on the forum.
Why wouldn't you? Or more to the point, why are you even looking at changing defaults?
Of course, the real answer is: it depends.
For me the answer is I always "move" them, but the form of that move depends on the hypervisor. That is because I almost always install the hypervisor on a SATA disk (SSD/spinner/wtfever) connected to the motherboard and not to the RAID controller. So the RAID array is separate.
I do the same with KVM and Hyper-V.
VMware, I use USB/SD (but it's been a long time since I used VMware).
With KVM, I simply mount the RAID array space at the default location,
/var/lib/libvirt/images
or something like that.
With Hyper-V I update the default because it is now drive D:. It also uses a really stupid location, but I still just shorten it from
D:\blah\blah\blah\Hyper-V
to
D:\Hyper-V
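For what it's worth, that default change is two lines of PowerShell (a sketch, assuming the Hyper-V module is available and D:\Hyper-V is the target):

```powershell
# Check the current defaults, then point both at the RAID array.
Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath
Set-VMHost -VirtualHardDiskPath "D:\Hyper-V" -VirtualMachinePath "D:\Hyper-V"
```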
-
@scottalanmiller said in HyperV Server - Raid Best Practices:
Now I might want to put the hypervisor, DC, and file server onto 4x spinners, and have a RAID 1 SSD pair for the SQL Server. But if you aren't putting the SQL Server onto SSD, then nothing should be on SSD.
Man - I keep forgetting about options like this - this could be a great option.
Install Hyper-V on the spinning disks, leaving all the storage space of the SSDs for the SQL Server - assuming that's enough storage for that job.
Though if you don't need the IOPS for SQL, then the spend might not be worth it to have SSD there, and going with HDDs would be better.
-
That said, I don't care where the VM configuration files are, because they are not important.
Why? Because the backup software/script/process/WTFever backs them up.
-
@Joel said in HyperV Server - Raid Best Practices:
Hi guys.
I'm torn between two setup scenarios for a new server.
Option 1:
2x 240GB SATA 6Gb/s SSD (for OS)
4x 2TB 12Gb/s (for Data)
I was planning on using RAID 1 for the OS and then RAID 5/6 for the data storage.
Option 2:
6x 2TB drives in OBR10 for everything, then creating two partitions (one for OS and one for data).
Are there any better options? What would you do?
The environment will be running Windows. The server (bare metal) will run Hyper-V Server and the data drive will house 3x VMs (1x SQL, 1x DC, and 1x file server).
Thoughts welcomed and appreciated.
I'd go with the Option 1 setup with the following changes:
RAID 10 the 4x 2TB drives, and partition that for C: (OS) and D: (Data).
RAID 1 the SSDs (E:), store the database virtual disks there, and the rest on D:.
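A hedged PowerShell sketch of the D: carve-out (hypothetical disk number and label; the C: partition would normally be created during OS setup):

```powershell
# Use the remaining space on the RAID 10 virtual disk (disk 0 here)
# as the D: data volume after the OS install.
New-Partition -DiskNumber 0 -DriveLetter D -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```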
-
@Obsolesce said in HyperV Server - Raid Best Practices:
@Joel said in HyperV Server - Raid Best Practices:
Hi guys.
I'm torn between two setup scenarios for a new server.
Option 1:
2x 240GB SATA 6Gb/s SSD (for OS)
4x 2TB 12Gb/s (for Data)
I was planning on using RAID 1 for the OS and then RAID 5/6 for the data storage.
Option 2:
6x 2TB drives in OBR10 for everything, then creating two partitions (one for OS and one for data).
Are there any better options? What would you do?
The environment will be running Windows. The server (bare metal) will run Hyper-V Server and the data drive will house 3x VMs (1x SQL, 1x DC, and 1x file server).
Thoughts welcomed and appreciated.
I'd go with the Option 1 setup with the following changes:
RAID 10 the 4x 2TB drives, and partition that for C: (OS) and D: (Data).
RAID 1 the SSDs (E:), store the database virtual disks there, and the rest on D:.
Using SSD for most SMB database needs is overkill. They run perfectly fine on R10 spinners.
-
If the server has eight 2.5" bays, then fill them with 10K SAS in RAID 6.
8x 10K SAS in RAID 6 sustains 800 MiB/s of throughput and 250 to 450 IOPS per disk, depending on the storage stack format.
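As a rough sanity check (assumed figures, not vendor specs), the array-level math with the standard RAID 6 write penalty of 6 looks like this:

```powershell
# Back-of-the-napkin RAID 6 estimate for 8x 10K SAS.
$disks        = 8
$iopsPerDisk  = 250       # low end of the 250-450 range above
$writePenalty = 6         # RAID 6: each write costs ~6 back-end IOs
$readIops     = $disks * $iopsPerDisk                                  # ~2,000
$writeIops    = [math]::Floor(($disks * $iopsPerDisk) / $writePenalty) # ~333
"Random reads: $readIops IOPS; random writes: $writeIops IOPS"
```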
Set up two logical disks:
1: 75GB for the host OS
2: Balance for virtual machines.
Use FIXED VHDX for the VM OS disks, and the same for data VHDX files of 250GB or less.
Then, use one DYNAMIC VHDX for the file server's data to expand with. This setup limits the performance degradation that would otherwise happen over time due to fragmentation. It also allows for better disaster recovery options, as moving around a 1TB VHDX with only 250GB in it would be painful.
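A minimal sketch of that fixed/dynamic split (hypothetical paths and sizes; New-VHD's -Fixed and -Dynamic switches select the allocation type):

```powershell
# Fixed VHDX for the VM OS disks and small data disks (resists fragmentation)...
New-VHD -Path "D:\Hyper-V\SQL01\SQL01-OS.vhdx" -SizeBytes 80GB -Fixed
# ...and one dynamic VHDX for the file server's data, so it can grow over time.
New-VHD -Path "D:\Hyper-V\FS01\FS01-Data.vhdx" -SizeBytes 1TB -Dynamic
```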
If there's a need for more performance, then go with at least four Intel SSD D3-S4610 series SATA in RAID 5 (more risky) or five in RAID 6 (less risky). We'd go for eight smaller SSDs versus five larger ones for more performance, using the above formula.
Blog Post: Disaster Preparedness: KVM/IP + USB Flash = Recovery. Here's a Guide
-
@JaredBusch said in HyperV Server - Raid Best Practices:
@Obsolesce said in HyperV Server - Raid Best Practices:
@Joel said in HyperV Server - Raid Best Practices:
Hi guys.
I'm torn between two setup scenarios for a new server.
Option 1:
2x 240GB SATA 6Gb/s SSD (for OS)
4x 2TB 12Gb/s (for Data)
I was planning on using RAID 1 for the OS and then RAID 5/6 for the data storage.
Option 2:
6x 2TB drives in OBR10 for everything, then creating two partitions (one for OS and one for data).
Are there any better options? What would you do?
The environment will be running Windows. The server (bare metal) will run Hyper-V Server and the data drive will house 3x VMs (1x SQL, 1x DC, and 1x file server).
Thoughts welcomed and appreciated.
I'd go with the Option 1 setup with the following changes:
RAID 10 the 4x 2TB drives, and partition that for C: (OS) and D: (Data).
RAID 1 the SSDs (E:), store the database virtual disks there, and the rest on D:.
Using SSD for most SMB database needs is overkill. They run perfectly fine on R10 spinners.
Having 6x drives is probably overkill, too.
In reality, maybe just two SSDs in RAID 1 is all that is needed.
-
We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.
It's set up with:
2x 240GB Intel SSD DC S4500 RAID 1 for host OS
2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs
It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
-
@PhlipElder said in HyperV Server - Raid Best Practices:
We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.
It's set up with:
2x 240GB Intel SSD DC S4500 RAID 1 for host OS
2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs
It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
What you have is wasted SSD performance, cost, and storage capacity.
-
@DustinB3403 said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.
It's set up with:
2x 240GB Intel SSD DC S4500 RAID 1 for host OS
2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs
It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
What you have is wasted SSD performance, cost, and storage capacity.
The performance pays for itself when they're in full swing, with very little to no noticeable latency. And updates run a lot faster.
Cost-wise, it's not that much of a step up.
-
@scottalanmiller said in HyperV Server - Raid Best Practices:
@JaredBusch said in HyperV Server - Raid Best Practices:
@Obsolesce said in HyperV Server - Raid Best Practices:
@Joel said in HyperV Server - Raid Best Practices:
Hi guys.
I'm torn between two setup scenarios for a new server.
Option 1:
2x 240GB SATA 6Gb/s SSD (for OS)
4x 2TB 12Gb/s (for Data)
I was planning on using RAID 1 for the OS and then RAID 5/6 for the data storage.
Option 2:
6x 2TB drives in OBR10 for everything, then creating two partitions (one for OS and one for data).
Are there any better options? What would you do?
The environment will be running Windows. The server (bare metal) will run Hyper-V Server and the data drive will house 3x VMs (1x SQL, 1x DC, and 1x file server).
Thoughts welcomed and appreciated.
I'd go with the Option 1 setup with the following changes:
RAID 10 the 4x 2TB drives, and partition that for C: (OS) and D: (Data).
RAID 1 the SSDs (E:), store the database virtual disks there, and the rest on D:.
Using SSD for most SMB database needs is overkill. They run perfectly fine on R10 spinners.
Having 6x drives is probably overkill, too.
In reality, maybe just two SSDs in RAID 1 is all that is needed.
Well - other than the sheer volume of storage.
-
@PhlipElder said in HyperV Server - Raid Best Practices:
@DustinB3403 said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.
It's set up with:
2x 240GB Intel SSD DC S4500 RAID 1 for host OS
2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs
It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
What you have is wasted SSD performance, cost, and storage capacity.
The performance pays for itself when they're in full swing, with very little to no noticeable latency. And updates run a lot faster.
Cost-wise, it's not that much of a step up.
What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.
As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?
-
@Dashrender said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
@DustinB3403 said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.
It's set up with:
2x 240GB Intel SSD DC S4500 RAID 1 for host OS
2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs
It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
What you have is wasted SSD performance, cost, and storage capacity.
The performance pays for itself when they're in full swing, with very little to no noticeable latency. And updates run a lot faster.
Cost-wise, it's not that much of a step up.
What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.
As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?
I was thinking more for the guests than the host.
A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time-consuming.
EDIT: A pair of Intel SSD DC series SATA drives is not expensive at 240GB or smaller. In the overall scheme of things, the biggest costs on the host are the CPUs and memory, then the storage subsystem, depending on setup.
-
@PhlipElder it's still added cost for little to no gain.
Try and justify this poor decision all you want. But it was and is still a poor decision.
-
@DustinB3403 said in HyperV Server - Raid Best Practices:
@PhlipElder it's still added cost for little to no gain.
Try and justify this poor decision all you want. But it was and is still a poor decision.
To each their own.
-
@PhlipElder said in HyperV Server - Raid Best Practices:
@Dashrender said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
@DustinB3403 said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.
It's set up with:
2x 240GB Intel SSD DC S4500 RAID 1 for host OS
2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs
It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
What you have is wasted SSD performance, cost, and storage capacity.
The performance pays for itself when they're in full swing, with very little to no noticeable latency. And updates run a lot faster.
Cost-wise, it's not that much of a step up.
What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.
As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?
I was thinking more for the guests than the host.
A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time-consuming.
EDIT: A pair of Intel SSD DC series SATA drives is not expensive at 240GB or smaller. In the overall scheme of things, the biggest costs on the host are the CPUs and memory, then the storage subsystem, depending on setup.
OK fine - the CUs for the hypervisor get big - and? You're likely only patching them monthly, if even. How much time are you saving by using SSD there compared to HDD? 1 min? Maybe 5? That's not much time saved to justify the cost of SSDs, even if they are only $99 each. Why have them at all? Why not just install the hypervisor on the RAID 10 your VMs are on and save yourself $200?
Yeah, I get it - it's a $3,000+ server and $200 is nothing... but $200 of $3,000 is still close to 7%... so it's not nothing...
-
@Dashrender said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
@Dashrender said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
@DustinB3403 said in HyperV Server - Raid Best Practices:
@PhlipElder said in HyperV Server - Raid Best Practices:
We have one of our boxes (R2208GZ4GC) that's being used as a platform for a client recovery.
It's set up with:
2x 240GB Intel SSD DC S4500 RAID 1 for host OS
2x 1.9TB Intel SSD D3-S4610 RAID 1 for VMs
It's working great for the seven or eight VMs currently stood up on it. RAID is provided by an Intel RMS25CB080 (IIRC) series module.
What you have is wasted SSD performance, cost, and storage capacity.
The performance pays for itself when they're in full swing, with very little to no noticeable latency. And updates run a lot faster.
Cost-wise, it's not that much of a step up.
What? The performance of the hypervisor is meaningless - assuming that's what's on the SSDs and not the VMs. So yeah, the SSDs would be a waste.
As for updates - how often are you updating the hypervisor that you care that much about how fast updates run on the hypervisor?
I was thinking more for the guests than the host.
A spindled RAID 1 for the host OS would be a real pain for updating the host. The CUs get yuge and time-consuming.
EDIT: A pair of Intel SSD DC series SATA drives is not expensive at 240GB or smaller. In the overall scheme of things, the biggest costs on the host are the CPUs and memory, then the storage subsystem, depending on setup.
OK fine - the CUs for the hypervisor get big - and? You're likely only patching them monthly, if even. How much time are you saving by using SSD there compared to HDD? 1 min? Maybe 5? That's not much time saved to justify the cost of SSDs, even if they are only $99 each. Why have them at all? Why not just install the hypervisor on the RAID 10 your VMs are on and save yourself $200?
Yeah, I get it - it's a $3,000+ server and $200 is nothing... but $200 of $3,000 is still close to 7%... so it's not nothing...
Y'all realize WD/HGST no longer make 2.5" SAS spindles? Seagate won't be too far behind. 2.5" SAS spinners have reached the end of the road.
As far as the dollar figure goes, where there's value, there's value. That too is in the eye of the beholder. Our clients see the value: we're delivering flash in our standalone and clustered systems.
We shall need to agree to disagree.
TTFN