How to take advantage of virtualization. Major products get updated
-
@obsolesce said in How to take advantage of virtualization. Major products get updated:
@dustinb3403 said in How to take advantage of virtualization. Major products get updated:
Yes, the skills required to use KVM at a production level are different, but it is no different from managing any other hypervisor.
Yup, it's easier to use in every way I can think of. You don't easily get agentless backups for the VMs, but that doesn't have to be a deal breaker. Other than that, KVM is easier and better.
It does actually get agentless backups, they're just not very good ones. I have agentless backups on my KVM systems. Better than VMware Free, but not anything close to Hyper-V.
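If anyone wants to see what that looks like in practice, here's a minimal sketch of the agentless approach using libvirt external snapshots. The VM name `web01`, the disk target `vda`, and the paths are all hypothetical:

```bash
# Take an external, disk-only snapshot so the base image stops changing
# (--quiesce requires the qemu guest agent running inside the VM)
virsh snapshot-create-as web01 backup-snap \
  --disk-only --atomic --quiesce --no-metadata

# Copy the now-frozen base image somewhere safe
cp /var/lib/libvirt/images/web01.qcow2 /backups/web01-$(date +%F).qcow2

# Merge the snapshot overlay back into the base image and clean up
# (libvirt names the overlay <base>.<snapshot-name> by default)
virsh blockcommit web01 vda --active --pivot
rm -f /var/lib/libvirt/images/web01.backup-snap
```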
-
For those new to KVM: What is the general consensus on having a Fedora-based KVM host that boots and runs from an SSD but uses software RAID on spinning rust for the VM storage?
-
@obsolesce I was wondering about that. When she introduced the writer, she said he had some MVP title or something like that. He should know precisely what is wrong and right with the technologies he was talking about. He reminded me, instead, of a plain writer who tried to research something for an article and just got it all wrong.
-
@brandon220 said in How to take advantage of virtualization. Major products get updated:
For those new to KVM: What is the general consensus on having a Fedora-based KVM host that boots and runs from an SSD but uses software RAID on spinning rust for the VM storage?
That is one option. Why not make the software RAID bootable, install Fedora on a small partition, and use the SSD as cache for the LVM volumes?
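In case it helps anyone trying that layout, a rough sketch with lvmcache (dm-cache under the hood); device names and sizes here are just examples:

```bash
# md0 is the spinning-rust software RAID, sdc is the SSD
vgcreate vg_vms /dev/md0 /dev/sdc

# Big logical volume for VM images on the slow disks
lvcreate -n vms -l 90%FREE vg_vms /dev/md0

# Cache pool on the SSD, then attach it to the slow LV
lvcreate --type cache-pool -n vms_cache -L 100G vg_vms /dev/sdc
lvconvert --type cache --cachepool vg_vms/vms_cache vg_vms/vms
```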
-
@brandon220 Completely wrong. Instead, use a couple of spinning rust drives for the hypervisor and the SSD for the VMs.
-
@brandon220 said in How to take advantage of virtualization. Major products get updated:
For those new to KVM: What is the general consensus on having a Fedora-based KVM host that boots and runs from an SSD but uses software RAID on spinning rust for the VM storage?
In theory you want fast disks where the workload is. So if in doubt, just use the fast disks for the VMs, not for the hypervisor.
-
I have some small SSDs that are not big enough to store VMs but would run a host machine very efficiently. I realize you want the performance where the workload is. Let's forget about the SSD for a minute... Is software RAID in Linux acceptable to use? I always use hardware RAID controllers in servers but wanted to venture into trying software RAID.
-
@brandon220 said in How to take advantage of virtualization. Major products get updated:
I have some small SSDs that are not big enough to store VMs but would run a host machine very efficiently. I realize you want the performance where the workload is. Let's forget about the SSD for a minute... Is software RAID in Linux acceptable to use? I always use hardware RAID controllers in servers but wanted to venture into trying software RAID.
The host doesn't do anything, so SSDs have no benefit besides making things that aren't VM related go faster... such as Fedora updates, which are already fast... and other minor things.
Software RAID is good in Linux. I'd use LVM.
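To make that concrete, here's a minimal sketch of doing the RAID inside LVM itself; disk names and sizes are examples:

```bash
# Initialize two whole disks and put them in a volume group
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc

# RAID 1 logical volume for VM storage, mirrored across both disks
lvcreate --type raid1 -m 1 -L 500G -n vm_store vg_data
mkfs.xfs /dev/vg_data/vm_store
```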
-
@brandon220 said in How to take advantage of virtualization. Major products get updated:
I have some small SSDs that are not big enough to store VMs but would run a host machine very efficiently. I realize you want the performance where the workload is. Let's forget about the SSD for a minute... Is software RAID in Linux acceptable to use? I always use hardware RAID controllers in servers but wanted to venture into trying software RAID.
Software RAID is acceptable to use anywhere that the people managing it have the skills to do so.
That is why hardware RAID is everywhere. It takes no skill. You put in disks and configure. Once configured, there is nothing to ever do again, even on disk failure, as most hardware RAID systems have blind-swap capability.
-
@brandon220 said in How to take advantage of virtualization. Major products get updated:
I have some small SSDs that are not big enough to store VMs but would run a host machine very efficiently. I realize you want the performance where the workload is. Let's forget about the SSD for a minute... Is software RAID in Linux acceptable to use? I always use hardware RAID controllers in servers but wanted to venture into trying software RAID.
Assuming you have three of them, put them in a RAID 5 and run some small workloads on them.
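If it's useful, roughly what that looks like with mdadm (device names are examples):

```bash
# Three SSDs into a single RAID 5 array
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
  /dev/sdb /dev/sdc /dev/sdd

# Persist the array definition (Fedora's config location) and format
mdadm --detail --scan >> /etc/mdadm.conf
mkfs.xfs /dev/md0
```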
-
I plan on setting this up in the lab soon. dm-cache also sounds interesting. I've never touched software RAID because 95% of my environment has been MS for a long time. I've always gone the hardware RAID route.
-
@scottalanmiller said in How to take advantage of virtualization. Major products get updated:
@obsolesce said in How to take advantage of virtualization. Major products get updated:
@dustinb3403 said in How to take advantage of virtualization. Major products get updated:
Yes, the skills required to use KVM at a production level are different, but it is no different from managing any other hypervisor.
Yup, it's easier to use in every way I can think of. You don't easily get agentless backups for the VMs, but that doesn't have to be a deal breaker. Other than that, KVM is easier and better.
I just did a new install this week and it turned out actually easier than even VMware at this point! Which is amazing, because VMware is so easy.
Installing Fedora so you can set up KVM was a bit of a challenge.
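For anyone who wants to try it, the KVM side itself is only a few commands once Fedora is on the box; roughly:

```bash
# Install KVM, libvirt, and friends via the virtualization group
sudo dnf install @virtualization

# Start libvirt and sanity-check hardware/kernel support
sudo systemctl enable --now libvirtd
virt-host-validate
```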
-
Has this article been rewritten yet? I was looking forward to reading it.
-
@travisdh1 said in How to take advantage of virtualization. Major products get updated:
Out of these four products, only Citrix XenServer comes with an in-memory read caching feature.
ESXi has host-local DRAM-based caching in two forms:
- vSAN Client Cache
- CBRC, which has the added benefit of including deduplication.
Note that the IO path for this doesn't require anything strange like going through a Dom0 VM.
-
@travisdh1 said in How to take advantage of virtualization. Major products get updated:
VMware ESXi provides vVols and VAAI technologies that no other virtualization products provide.
Nobody else calls it the same thing, but the same functionality is always available to use.
vVols (especially the block implementation that allows sub-LUN object creation against block) isn't replicated anywhere else. VASA-3 isn't replicated anywhere else. Some of the VAAI primitives are represented in other places (XCOPY being a common one), but some of the others don't exist elsewhere (the NFS primitives, TP_STUN) or lack quite a bit of finesse in implementation (UNMAP). This also gets back to "someone checked in a module" vs. a 3rd party with enterprise support that will actually support you enabling that feature (Xen Remus was a great example of this).
VASA as a two-way control channel is pretty badass. Cinder/SMI-S are NOT the same thing. There isn't an end-to-end control, manage, and monitor framework like it.
-
@scottalanmiller said in How to take advantage of virtualization. Major products get updated:
I just did a new install this week and it turned out actually easier than even VMware at this point! Which is amazing, because VMware is so easy.
"Attach ISO and mash Enter and F2 once" was what I jokingly called the ESXi installer back in the day.
Dell/HPE/Cisco/Lenovo/SuperMicro will PRE-INSTALL ESXi to an M.2 mirror or SD cards. "It was already installed on the damn box" is hard to beat. If you're talking about AutoDeploy, ESXi supports PXE-booted stateless hosts as well.
And that's before we look at the templated deployments using Razor/Puppet/OpenManage/UCS that exist. Now, you could do the same with other OSes (and use the state tool of your choice). Hell, I've seen people use DSC to manage ESXi hosts even (weird, but it's a thing I guess).
-
@brandon220 said in How to take advantage of virtualization. Major products get updated:
I plan on setting this up in the lab soon. dm-cache also sounds interesting. I've never touched software RAID because 95% of my environment has been MS for a long time. I've always gone the hardware RAID route.
https://www.redhat.com/en/blog/improving-read-performance-dm-cache
Red Hat did some testing with an SSD, but it looks ugly: 5 passes and no performance improvement using an SSD. I suspect they are hamstrung by the patent minefield that is ARC (IBM, of all people, has this patent BTW) and the subsidiary cool optimizations that have been made to it (ARC was intended for CPU cache originally; your storage fun fact of the day!). Also, I suspect the IO path on this thing isn't the cleanest. It looks like the Linux kernel file cache is going to be faster, and if I'm using that, I might as well just give the memory to the guest and let it sort it out (especially with the lack of dedupe or single-instancing in this cache).
If you're looking to speed stuff up, I say get "the good stuff".
We got some PMEM DIMMs in the lab, and this stuff is face-meltingly fast. You can "bolt" it on with a DAX file system, but the best way to use it is with applications that have been redesigned to support it. We forked Redis to support this and got latency 12x better than using local NVMe drives, and 2.8x better than DAX.
Oracle had 57x better operational latency.
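For anyone wanting to play with the DAX route on PMEM hardware, the basic flow looks something like this (device and mount point are examples):

```bash
# Carve the persistent memory into an fsdax namespace
ndctl create-namespace --mode=fsdax

# DAX-capable filesystem, mounted with -o dax so loads/stores
# bypass the page cache (older kernels need mkfs.xfs -m reflink=0)
mkfs.xfs /dev/pmem0
mount -o dax /dev/pmem0 /mnt/pmem
```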