Need to Improve Disk Utilization on XenServer 7.2
-
A co-worker has the same laptop as I do, except the 15" version. Everything else is exactly the same.
He's using EXT4 + LVM, and qcow2 for virtual disks.
I'm using XFS + LVM, and RAW (.img) for virtual disks.
My Win10 VM gets twice the I/O his does. Neither of us had anything running in the background.
I know this doesn't help with Xen, but food for thought. (We're both using M.2 SSDs.) Mine was over 2,000 MB/s on reads versus his 1,000, and 400 MB/s on writes versus his 200.
Edit: We're both running Fedora 26. At the time, I was running GNOME and he was running Cinnamon.
-
@tim_g what would the comparison be, in IOPS, between a RAID 10 of spinning rust and a single SSD?
-
@krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:
@scottalanmiller Well, like they say, all good things come to an end, XS included.
I like XO; I hope he considers porting it to KVM.
Oh, he is considering it. I just don't know what he's decided.
-
@tim_g said in Need to Improve Disk Utilization on XenServer 7.2:
A co-worker has the same laptop as I do, except the 15" version. Everything else is exactly the same.
He's using EXT4 + LVM, and qcow2 for virtual disks.
I'm using XFS + LVM, and RAW (.img) for virtual disks.
My Win10 VM gets twice the I/O his does. Neither of us had anything running in the background.
I know this doesn't help with Xen, but food for thought. (We're both using M.2 SSDs.) Mine was over 2,000 MB/s on reads versus his 1,000, and 400 MB/s on writes versus his 200.
Edit: We're both running Fedora 26. At the time, I was running GNOME and he was running Cinnamon.
But both on KVM, right? No Xen involved?
-
@krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:
@tim_g what would be the comparison speed of a raid 10 off spinning rust to 1 ssd in iops?
A SATA 7200 RPM drive is ~100 IOPS, so four of them in RAID 10 is ~400 read IOPS.
A typical SATA SSD is 10K to 100K IOPS.
You would need hundreds of SATA drives in a massive RAID 10 with a huge cache to come close to a single $100 SSD, let alone a nice M.2 drive.
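Napkin math with those ballpark figures, if it helps (the per-drive numbers are assumptions from this thread, not benchmarks):

```python
# Napkin math using the ballpark figures above (assumptions, not benchmarks).

HDD_IOPS = 100  # typical SATA 7200 RPM drive

def raid10_read_iops(drives: int, per_drive: int) -> int:
    """RAID 10 reads can be serviced by every member,
    so read IOPS scale roughly with total drive count."""
    return drives * per_drive

def raid10_write_iops(drives: int, per_drive: int) -> int:
    """Every write lands on both halves of a mirror pair,
    so write IOPS scale with half the drive count."""
    return (drives // 2) * per_drive

print(raid10_read_iops(4, HDD_IOPS))   # 400
print(raid10_write_iops(4, HDD_IOPS))  # 200
```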
-
@scottalanmiller said in Need to Improve Disk Utilization on XenServer 7.2:
@tim_g said in Need to Improve Disk Utilization on XenServer 7.2:
A co-worker has the same laptop as I do, except the 15" version. Everything else is exactly the same.
He's using EXT4 + LVM, and qcow2 for virtual disks.
I'm using XFS + LVM, and RAW (.img) for virtual disks.
My Win10 VM gets twice the I/O his does. Neither of us had anything running in the background.
I know this doesn't help with Xen, but food for thought. (We're both using M.2 SSDs.) Mine was over 2,000 MB/s on reads versus his 1,000, and 400 MB/s on writes versus his 200.
Edit: We're both running Fedora 26. At the time, I was running GNOME and he was running Cinnamon.
But both on KVM, right? No Xen involved?
Correct.
-
@tim_g said in Need to Improve Disk Utilization on XenServer 7.2:
A co-worker has the same laptop as I do, except the 15" version. Everything else is exactly the same.
He's using EXT4 + LVM, and qcow2 for virtual disks.
I'm using XFS + LVM, and RAW (.img) for virtual disks.
Why RAW (.img)? I thought qcow2 is/was preferred.
-
@fateknollogee said in Need to Improve Disk Utilization on XenServer 7.2:
@tim_g said in Need to Improve Disk Utilization on XenServer 7.2:
A co-worker has the same laptop as I do, except the 15" version. Everything else is exactly the same.
He's using EXT4 + LVM, and qcow2 for virtual disks.
I'm using XFS + LVM, and RAW (.img) for virtual disks.
Why RAW (.img)? I thought qcow2 is/was preferred.
I don't need any special features like snapshotting or anything like that, only performance. RAW is presented as-is to the VM and gives the best I/O performance.
I can always convert if need be, and there are other ways of snapshotting/checkpointing.
There are many other differences too, but it's better to Google comparisons than to have me try to quickly explain it all while preoccupied.
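For reference, converting later really is a one-liner each way with qemu-img. A minimal sketch (the file names are made up for illustration):

```python
# Minimal sketch: convert a raw image to qcow2 and back with qemu-img.
# Assumes qemu-img is installed; the file names here are hypothetical.
import subprocess

# raw -> qcow2
subprocess.run(
    ["qemu-img", "convert", "-f", "raw", "-O", "qcow2",
     "win10.img", "win10.qcow2"],
    check=True,
)

# qcow2 -> raw
subprocess.run(
    ["qemu-img", "convert", "-f", "qcow2", "-O", "raw",
     "win10.qcow2", "win10.img"],
    check=True,
)
```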
-
@tim_g said in Need to Improve Disk Utilization on XenServer 7.2:
@fateknollogee said in Need to Improve Disk Utilization on XenServer 7.2:
@tim_g said in Need to Improve Disk Utilization on XenServer 7.2:
A co-worker has the same laptop as I do, except the 15" version. Everything else is exactly the same.
He's using EXT4 + LVM, and qcow2 for virtual disks.
I'm using XFS + LVM, and RAW (.img) for virtual disks.
Why RAW (.img)? I thought qcow2 is/was preferred.
I don't need any special features like snapshotting or anything like that, only performance. RAW is presented as-is to the VM and gives the best I/O performance.
I can always convert if need be, and there are other ways of snapshotting/checkpointing.
There are many other differences too, but it's better to Google comparisons than to have me try to quickly explain it all while preoccupied.
If you preallocate the qcow2s you get close to raw speeds.
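A minimal sketch of creating one that way, in case it's useful (the name and size are made up):

```python
# Minimal sketch: create a fully preallocated qcow2 image with qemu-img.
# Assumes qemu-img is installed; the file name and size are hypothetical.
import subprocess

subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-o", "preallocation=full",  # other modes: off, metadata, falloc
     "win10.qcow2", "60G"],
    check=True,
)
```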
-
For what I'm doing qcow2 is fine, except I can't snapshot "q35/UEFI" VMs... a fix is coming sometime in the future.
-
Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!
-
@krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:
Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!
Absolutely. That's why no one uses RAID 10 for SSDs; it doesn't make sense when the leap in performance is already so big. That's why RAID 5 is about all that gets used.
-
@scottalanmiller said
You can also install the GUI on the server and have local management tools. Obviously managing purely remotely is better. But as this is a desktop anyway, local management tools are not out of the question, and you can switch later once you are comfortable with it. There is no lock-in to your GUI or tool choices like with Hyper-V.
"Obviously"
Hey, do you consider Cockpit a GUI?
-
@scottalanmiller
Hey Scott, I'm going to swap out my SATA 7200 RPM spinning rust, probably tonight, and switch to an SSD (luckily I have a bunch sitting around). With that in mind, I'm trying to wrap my head around the per-VM performance impact of an SSD vs. an HDD. I realize the available IOPS get divided up per VM with spinning rust. Do my IOPS drop per VM with an SSD too?

Also, just to give some context: on my two Dell R530s at work (thank God, remember those oldy goldy days I had, SAM? haha), I went with 7200 RPM SAS drives on my H700 with 512 MB cache. The thing runs like a champ with just four 2 TB HDDs in RAID 10. I have experienced literally no IOPS problems, and I have about 30 VMs running. With that in mind, does it make sense to consider swapping to full SSD at work?
The only server I have that's pure storage is a Ubiquiti NVR; short of that, nothing else runs slow or has even blipped at me wrong. No startup sprawl either, which, when I look back at my old craptacular Dell T110 tower... it died from sprawl.
-
@scottalanmiller said in Need to Improve Disk Utilization on XenServer 7.2:
@krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:
Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!
Absolutely. That's why no one uses RAID 10 for SSDs; it doesn't make sense when the leap in performance is already so big. That's why RAID 5 is about all that gets used.
If you used 4 SSDs in a RAID 10, do you still get ever-increasing levels of performance?
-
@brrabill said in Need to Improve Disk Utilization on XenServer 7.2:
@scottalanmiller said
You can also install the GUI on the server and have local management tools. Obviously managing purely remotely is better. But as this is a desktop anyway, local management tools are not out of the question, and you can switch later once you are comfortable with it. There is no lock-in to your GUI or tool choices like with Hyper-V.
"Obviously"
Hey, do you consider Cockpit a GUI?
Yes, do you consider it local?
-
@jmoore said in Need to Improve Disk Utilization on XenServer 7.2:
@scottalanmiller said in Need to Improve Disk Utilization on XenServer 7.2:
@krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:
Scott, I knew it was a vast difference, but not 100x! Dude, a 4-SSD RAID 10 is basically all you need!
Absolutely. That's why no one uses RAID 10 for SSDs; it doesn't make sense when the leap in performance is already so big. That's why RAID 5 is about all that gets used.
If you used 4 SSDs in a RAID 10, do you still get ever-increasing levels of performance?
RAID is RAID. That the disks are SSDs isn't a factor to the RAID system, so yes, performance keeps scaling as you add drives.
What can be a factor is if you use a RAID controller that caps out lower than the RAID subsystem beneath it.
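To put numbers on that, here's an illustrative sketch (the per-drive IOPS and the controller ceiling are made-up assumptions, not vendor specs):

```python
# Illustrative model: RAID 10 read scaling with a controller throughput cap.
# Per-drive IOPS and the cap are assumptions for illustration only.

SSD_IOPS = 50_000         # assumed per-drive SSD IOPS
CONTROLLER_CAP = 300_000  # assumed controller ceiling, in IOPS

def raid10_read_iops(drives: int, per_drive: int, cap: int) -> int:
    """Reads can hit every member, so they scale with drive count
    until the controller becomes the bottleneck."""
    return min(drives * per_drive, cap)

for drives in (2, 4, 8, 16):
    print(drives, raid10_read_iops(drives, SSD_IOPS, CONTROLLER_CAP))
# 2 100000, 4 200000, 8 300000, 16 300000  <- the cap bites past 6 drives
```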
-
@krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:
@scottalanmiller
Hey Scott, I'm going to swap out my SATA 7200 RPM spinning rust, probably tonight, and switch to an SSD (luckily I have a bunch sitting around). With that in mind, I'm trying to wrap my head around the per-VM performance impact of an SSD vs. an HDD. I realize the available IOPS get divided up per VM with spinning rust. Do my IOPS drop per VM with an SSD too?
No, SSDs do not have contention the way spinning drives do.
-
@jmoore
In short, yes: RAID 10 on SSDs would be astronomically faster. I think of it like this:
Four spinning-rust HDDs at 7200 RPM will only net you a max of about 400 read IOPS, and you can keep adding more HDDs in pairs to keep improving the speed (I assume that doesn't include overhead). I mean, if you get a server and want pure storage capacity first and speed second, sticking with RAID 10 lets you keep incrementally improving.
But when I put that into the context of what Scott is saying, it's like I would have to use ten arrays of hard drives to equal the performance of one SSD! And that also increases risk: what if a drive fails? That's a lot of drives to babysit!
I did some digging: ~550 MB/s is roughly where most consumer SSDs, and I assume some enterprise drives that don't use NVMe, cap off. That's with a conservative base of 10,000 IOPS, going all the way up to 2 million IOPS!
https://kb.sandisk.com/app/answers/detail/a_id/16376/~/sandisk-ultra-ii-ssd-specifications
Looking at it from a different perspective, I'd find little need for most small companies to ever go past 4 SSDs in RAID 10.
To get the equivalent speed from spinning disks, you would have to stuff in so many internal and external RAID controllers that you'd have paid well more than needed, even if you went 15K SAS!
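Running that math with the thread's ballpark numbers, it's actually even more lopsided than ten arrays (these figures are assumptions, not benchmarks):

```python
# Sanity check of the "how many spindles to match one SSD" math.
# Figures are this thread's ballpark assumptions, not benchmarks.
import math

HDD_IOPS = 100     # SATA 7200 RPM spindle
SSD_IOPS = 10_000  # the conservative SATA SSD figure cited above

# In RAID 10, each mirror pair adds roughly 2x HDD_IOPS of read capacity.
pairs = math.ceil(SSD_IOPS / (2 * HDD_IOPS))
print(pairs, "mirror pairs =", pairs * 2, "drives")  # 50 mirror pairs = 100 drives
# ...and that only matches the *conservative* 10K IOPS figure.
```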
-
@krisleslie said in Need to Improve Disk Utilization on XenServer 7.2:
Also, just to give some context: on my two Dell R530s at work (thank God, remember those oldy goldy days I had, SAM? haha), I went with 7200 RPM SAS drives on my H700 with 512 MB cache. The thing runs like a champ with just four 2 TB HDDs in RAID 10. I have experienced literally no IOPS problems, and I have about 30 VMs running. With that in mind, does it make sense to consider swapping to full SSD at work?
So each NL-SAS drive there has ~20% more IOPS than its SATA counterpart. Then RAID 10 on top of that. Then the "million IOPS" cache on top of that. Your base RAID there is getting nearly 5x the IOPS of the single SATA drive in your new machine, and then that cache makes it act many, many times bigger than that. It's dramatic.
As far as whether it's worth moving to SSD, it all depends on whether more IOPS would be beneficial or not. If you already have plenty, what value would faster storage bring?
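For the curious, the "nearly 5x" works out like this (ballpark assumptions from this thread, not measurements):

```python
# Ballpark math behind the "nearly 5x" comparison above.
# All figures are this thread's assumptions, not measurements.

SATA_IOPS = 100                    # single 7200 RPM SATA drive
NLSAS_IOPS = int(SATA_IOPS * 1.2)  # NL-SAS: ~20% more IOPS -> 120

# Four NL-SAS drives in RAID 10: reads scale with all members.
r530_read_iops = 4 * NLSAS_IOPS
print(r530_read_iops)              # 480
print(r530_read_iops / SATA_IOPS)  # 4.8x a lone SATA spindle
# ...and the H700's 512 MB cache multiplies the effective figure further.
```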