I don't know how StarWind vSAN is run, but if it sits on a hypervisor it's severely limited by I/O congestion through the kernel. NVMe drives expose problems that were of no concern whatsoever with spinners. Both KVM and Xen have done a lot of work to cut their I/O latency and now use polling techniques, but it's still a problem. That's why you really need SR-IOV on NVMe drives, so any VM can bypass the hypervisor and have only its own kernel left to slow things down.
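To make the SR-IOV point concrete, here is a minimal sketch of the generic PCI plumbing on Linux, assuming a drive that actually advertises SR-IOV, a loaded vfio-pci module, and root privileges. The PCI address and VF count are placeholders, and any NVMe-specific secondary-controller setup (which is vendor- and tool-specific) is left out:

```python
#!/usr/bin/env python3
"""Sketch: spawn SR-IOV virtual functions on a PCIe device and rebind one VF
to vfio-pci so it can be passed through to a VM. Requires root, a device that
supports SR-IOV, and the vfio-pci module loaded. Addresses are placeholders."""
import os
from pathlib import Path

PF = "0000:04:00.0"                              # physical function (placeholder)
pf_sysfs = Path("/sys/bus/pci/devices") / PF

# 1. Ask the device to spawn virtual functions (fails if SR-IOV is unsupported).
(pf_sysfs / "sriov_numvfs").write_text("4")

# 2. Pick the first VF; 'virtfn0' is a symlink to the VF's own PCI device node.
vf = os.path.basename(os.readlink(pf_sysfs / "virtfn0"))   # e.g. "0000:04:00.1"
vf_sysfs = Path("/sys/bus/pci/devices") / vf

# 3. Detach the VF from whatever kernel driver grabbed it (e.g. nvme).
driver = vf_sysfs / "driver"
if driver.exists():
    (driver / "unbind").write_text(vf)

# 4. Force vfio-pci to claim the VF, then trigger a re-probe.
(vf_sysfs / "driver_override").write_text("vfio-pci")
Path("/sys/bus/pci/drivers_probe").write_text(vf)

# The VF can now be handed to a guest, e.g. with QEMU:
#   -device vfio-pci,host=0000:04:00.1
print(f"VF {vf} bound to vfio-pci, ready for passthrough")
```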
Anton: There are no problems with polling these days. You normally spawn an SPDK-enabled VM (Linux is unbeatable here, as most of the new-gen I/O development happens there), pass through RDMA-capable network hardware (a virtual function via SR-IOV or the whole card via PCIe pass-through, this is really irrelevant...) along with the NVMe drives, and... magic starts happening. This is how our NVMe-oF target works on ESXi & Hyper-V (KVM & Xen have no architectural advantage here, which is where you're either wrong or I failed to get your argument).

It's possible to port SPDK to Windows user mode, but the lack of polling drivers for NVMe and NICs takes away all the fun: to move the same amount of data we normally burn ~4x more CPU horsepower with the "pure Windows" model vs. the "Linux SPDK VM on Windows" model. Microsoft is trying to bring SPDK into the Windows kernel (so is VMware, from what I know), but it needs a lot of work from NIC and NVMe engineers and... nobody wants to contribute. Really.
Just my $0.02
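For illustration, a rough sketch of the kind of setup described above, assuming an SPDK nvmf_tgt is already running inside the Linux VM with an NVMe drive and an RDMA-capable VF passed through; the socket path, PCI address, NQN, and listen address are placeholders. It drives the target over SPDK's JSON-RPC socket: claim the drive with the user-space NVMe driver, then export it over NVMe-oF/RDMA:

```python
#!/usr/bin/env python3
"""Sketch: configure a running SPDK nvmf_tgt over its JSON-RPC Unix socket.
Claims a passed-through NVMe drive with SPDK's user-space PCIe driver and
exports it as an NVMe-oF subsystem over RDMA. All addresses are placeholders."""
import itertools
import json
import socket

SOCK_PATH = "/var/run/spdk.sock"        # SPDK's default RPC socket
_ids = itertools.count(1)

def rpc(method, params=None):
    """Send one JSON-RPC 2.0 request to the SPDK target and return its reply."""
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK_PATH)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise RuntimeError("RPC socket closed before a full reply")
            buf += chunk
            try:
                return json.loads(buf)          # return once the reply is complete
            except json.JSONDecodeError:
                continue                        # partial read, keep receiving

# 1. Create the RDMA transport: I/O is polled in user space, no kernel block layer.
rpc("nvmf_create_transport", {"trtype": "RDMA"})

# 2. Claim the passed-through drive with SPDK's user-space NVMe/PCIe driver.
rpc("bdev_nvme_attach_controller",
    {"name": "Nvme0", "trtype": "PCIe", "traddr": "0000:04:00.0"})

# 3. Export it: subsystem + namespace + RDMA listener on the passed-through VF.
nqn = "nqn.2016-06.io.spdk:cnode1"
rpc("nvmf_create_subsystem", {"nqn": nqn, "allow_any_host": True})
rpc("nvmf_subsystem_add_ns", {"nqn": nqn, "namespace": {"bdev_name": "Nvme0n1"}})
rpc("nvmf_subsystem_add_listener",
    {"nqn": nqn,
     "listen_address": {"trtype": "RDMA", "adrfam": "IPv4",
                        "traddr": "192.168.0.10", "trsvcid": "4420"}})
print("NVMe-oF target configured")
```

Everything here stays in user space and is polled, which is what keeps the hypervisor's storage stack out of the data path.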