If all hypervisors were priced the same...
-
@emad-r said in If all hypervisors were priced the same...:
KVM, not because of KVM itself, but because it runs on and is actively supported and updated on Linux OSes, so eventually we will get all the features and benefits of ESXi (if not more) via external packages like mdraid + Cockpit. You can build a pretty strong system, but the learning curve can scare people away.
People talk a lot about MDRAID, but given how hit-or-miss hot-add is on HBAs (glares at HPE), or that it's commonly done on AHCI controllers (garbage performance, a queue depth of 25 shared by ALL drives!), I don't see what the big deal is about buying a proper RAID controller that you can access through out-of-band management (iLO/iDRAC), has proper hot-add support, and has an NVDIMM cache, or layering a distributed SDS system on top (in which case you don't use MDRAID anyway). Even Red Hat was requiring a local RAID controller for their clustered HCI offering last time I checked.
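For reference, you can read the queue depth Linux actually gave each device straight out of sysfs. A minimal sketch in Python, assuming the standard /sys/block/<dev>/device/queue_depth path (present for SATA/SAS devices behind the SCSI layer; NVMe and some HBAs expose this differently, so missing files are simply skipped):

#!/usr/bin/env python3
# Print the queue depth the kernel reports for each SCSI/SATA device.
# Sketch only: the values you see depend on the controller/driver in front of the disk.
import glob

for path in sorted(glob.glob("/sys/block/*/device/queue_depth")):
    dev = path.split("/")[3]  # /sys/block/<dev>/device/queue_depth
    with open(path) as f:
        print(f"{dev}: queue depth {f.read().strip()}")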
-
@bnrstnr said in If all hypervisors were priced the same...:
I've never even used VMware, but if every single feature were available for free (like all the other hypervisors), I'm pretty sure that's a no-brainer.
It's not just features but the ecosystem to consider. Hypervisor X may work for what you do, but what if you need to run XenDesktop? It's not a supported hypervisor for Citrix's PVS/MCS automation. What if you need FIPS 140-2 compliance, or need a DISA STIG?
What if you need NSX/microsegmentation and service insertion support? NSX-T can cover KVM, but for Hyper-V or Xen you'll need to deploy a gateway.
Hypervisor requirements tend to not live in a vacuum, and that drives a lot of stuff.
-
@storageninja said in If all hypervisors were priced the same...:
What value does Fedora Server bring for actually running on the KVM hosts?
Control, access, security, open source, etc.
Cockpit, for example, and any Linux tools at your disposal.
I never said anything about installing kitchen sinks. Dumb assumption.
-
@storageninja said in If all hypervisors were priced the same...:
@emad-r said in If all hypervisors were priced the same...:
KVM, not because of KVM itself, but because it runs on and is actively supported and updated on Linux OSes, so eventually we will get all the features and benefits of ESXi (if not more) via external packages like mdraid + Cockpit. You can build a pretty strong system, but the learning curve can scare people away.
People talk a lot about MDRAID, but given how hit-or-miss hot-add is on HBAs (glares at HPE), or that it's commonly done on AHCI controllers (garbage performance, a queue depth of 25 shared by ALL drives!), I don't see what the big deal is about buying a proper RAID controller that you can access through out-of-band management (iLO/iDRAC), has proper hot-add support, and has an NVDIMM cache, or layering a distributed SDS system on top (in which case you don't use MDRAID anyway). Even Red Hat was requiring a local RAID controller for their clustered HCI offering last time I checked.
You can always start with CentOS minimal or Fedora minimal, then install KVM on top; you will be surprised how lean and small the system is. Regarding why Fedora, check this:
https://mangolassi.it/topic/16450/meltdown-shows-why-to-avoid-lts-releases
Second, the performance is not bad in my tests; it's very good even with 2 degraded disks. And why purchase a RAID controller when you get a good amount of reliability using software RAID? Linux software RAID has been tested over and over, and many big enterprise NAS vendors utilize it. I understand that a hardware RAID controller works most of the time for nearly anything, and that software RAID will most probably fail due to end-user fault, because it has a learning curve.
Other than that, if you look at the modular approach here:
Any Linux OS, really
Linux RAID
Cockpit
KVM
That's pretty sweet, and KVM is getting a nice management interface by accident, which means we can build a very reliable system on the cheap.
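To make that stack concrete, here's a small sanity-check sketch. It assumes the libvirt Python bindings are installed (the python3-libvirt package on Fedora/CentOS) and that your md arrays already exist; it parses /proc/mdstat for degraded members and confirms libvirt can reach the local KVM daemon, the same socket Cockpit talks to:

#!/usr/bin/env python3
# Sanity-check the "Linux RAID + KVM" stack described above (sketch).
import libvirt  # from the python3-libvirt package

# In /proc/mdstat's status lines (e.g. [UU]), '_' marks a failed/missing member.
with open("/proc/mdstat") as f:
    mdstat = f.read()
print(mdstat)
if "_" in mdstat:
    print("WARNING: at least one md array looks degraded")

# Connect to the local QEMU/KVM daemon and list domains.
conn = libvirt.open("qemu:///system")
print("Hypervisor:", conn.getType())
for dom in conn.listAllDomains():
    print(dom.name(), "running" if dom.isActive() else "stopped")
conn.close()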
Not sure what you mean by proper hot-add support? You either have it or you don't.
And there are good chipsets lately from AMD and Intel, especially after Ryzen, and we have truly enterprise-quality SATA disks from WD; the Red series is very proven, reliable, and durable, using PMR. Just steer away from those HAMR ones and you will be good to go.
-
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
-
@emad-r said in If all hypervisors were priced the same...:
why purchase a RAID controller when you get a good amount of reliability using software RAID? Linux software RAID has been tested over and over, and many big enterprise NAS vendors utilize it. I understand that a hardware RAID controller works most of the time for nearly anything, and that software RAID will most probably fail due to end-user fault, because it has a learning curve.
You accelerate end-user faults because of issues with SES not working correctly (getting the right drive light to blink is strangely hard with DAS shelves), or because of a lack of end-to-end testing (good luck getting hot-add to work on hot-swap with some HBAs). You cripple performance at scale doing it on AHCI controllers (a queue depth of 25 for all drives vs. 600+ for a proper RAID controller or HBA).
SATA drives are fine for home-backup-type stuff (I have Reds at home too), but for production workloads 5400 RPM means ~20 IOPS at low latency before they kind of fall over. I have a Ryzen desktop system, and I just boot from NVMe (M.2). Intel's vROC is interesting, but I haven't seen any server OEMs adopt it yet.
-
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
It's a fork of XenServer that tries to bring back the APIs and features that Citrix has locked out of the free version (and also back-ports security patches to older versions, since Citrix now only does that for paid users beyond 6 months). It's being run by a small community group.
Citrix only sees XenServer as a means to an end for VDI (and they have been slowly stepping down their investment in it). The Linux Foundation (which technically has Xen) doesn't really care (they are backing KVM). So it's up to a ragtag band of rebels to keep Xen going...
-
@storageninja said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
It's a fork of XenServer that tries to bring back the APIs and features that Citrix has locked out of the free version (and also back-ports security patches to older versions, since Citrix now only does that for paid users beyond 6 months). It's being run by a small community group.
Sorry, I meant it looks like an acronym; if so, what does it mean?
-
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
Xen Cloud Platform
https://wiki.xen.org/wiki/XCP_Overview
-
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
-
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
I'm not sure, but it does make more sense.
-
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
I'm not sure, but it does make more sense.
Possibly... https://github.com/xcp-ng
-
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@black3dynamite said in If all hypervisors were priced the same...:
@bbigford said in If all hypervisors were priced the same...:
@dustinb3403 said in If all hypervisors were priced the same...:
XCP
What does xcp-ng mean? I couldn't find it in the introduction.
Xen Cloud Platform
NG=New Generation?
I'm not sure, but it does make more sense.
Possibly... https://github.com/xcp-ng
Well, that's it: Xen Cloud Platform New Generation. That has to top the list of longest hypervisor project names.
-
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMware.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (that can have layered UIs/CLIs) rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
It's just a newer kernel and some newer packages over CentOS/RHEL. But it also has some trade-offs.
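On the "tiny hypervisor, manage via APIs" point: with KVM the libvirt API is remotable, so nothing beyond libvirtd has to live on the host. A minimal sketch, assuming libvirt-python on the admin machine and SSH access to a hypothetical host kvm1.example.com:

#!/usr/bin/env python3
# Manage a KVM host over the remote libvirt API instead of installing
# tooling on the hypervisor itself. 'kvm1.example.com' is a placeholder.
import libvirt

# qemu+ssh tunnels the libvirt protocol over plain SSH.
conn = libvirt.open("qemu+ssh://root@kvm1.example.com/system")
print("Connected to", conn.getHostname())

# The same calls Cockpit or virt-manager layer their UIs on top of.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    print(dom.name(), "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running")
conn.close()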
-
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMware.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (that can have layered UIs/CLIs) rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
Red Hat has been looking at running their OpenStack platform in OpenShift on RHEL Atomic. Not as small as ESXi but it’s around 700MB.
-
@stacksofplates said in If all hypervisors were priced the same...:
@storageninja said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMware.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
I've always liked a tiny hypervisor, pushing the management off to APIs (that can have layered UIs/CLIs) rather than installing the damn kitchen sink on the hypervisor. What value does Fedora Server bring for actually running on the KVM hosts? Do you need to run containers in ring 0 or something weird?
Red Hat has been looking at running their OpenStack platform in OpenShift on RHEL Atomic. Not as small as ESXi but it’s around 700MB.
That way nothing is installed in the OS at all. You can actually rebase between Fedora and CentOS/RHEL in Atomic and it doesn’t touch any of your apps.
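For anyone curious, that rebase is a single rpm-ostree transaction plus a reboot. A hedged sketch; the refspec below is just an example from the Fedora Atomic era, so check what your configured remotes actually ship before pointing at one:

#!/usr/bin/env python3
# Rebase an Atomic host to a different tree with rpm-ostree (sketch).
# The refspec is an example only; list your remotes' refs before rebasing.
import subprocess

REFSPEC = "fedora-atomic:fedora/27/x86_64/atomic-host"  # example refspec

subprocess.run(["rpm-ostree", "status"], check=True)           # show current deployments
subprocess.run(["rpm-ostree", "rebase", REFSPEC], check=True)  # stage the new tree
# The new deployment activates on the next boot; apps and data are untouched.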
-
@storageninja said in If all hypervisors were priced the same...:
@emad-r said in If all hypervisors were priced the same...:
why purchase a RAID controller when you get a good amount of reliability using software RAID? Linux software RAID has been tested over and over, and many big enterprise NAS vendors utilize it. I understand that a hardware RAID controller works most of the time for nearly anything, and that software RAID will most probably fail due to end-user fault, because it has a learning curve.
You accelerate end-user faults because of issues with SES not working correctly (getting the right drive light to blink is strangely hard with DAS shelves), or because of a lack of end-to-end testing (good luck getting hot-add to work on hot-swap with some HBAs). You cripple performance at scale doing it on AHCI controllers (a queue depth of 25 for all drives vs. 600+ for a proper RAID controller or HBA).
SATA drives are fine for home-backup-type stuff (I have Reds at home too), but for production workloads 5400 RPM means ~20 IOPS at low latency before they kind of fall over. I have a Ryzen desktop system, and I just boot from NVMe (M.2). Intel's vROC is interesting, but I haven't seen any server OEMs adopt it yet.
Noted, but at the same time every project is different. I tend toward small-to-medium business, and in the Middle East region that means something different from small-to-medium business in the US.
Your Ryzen system can support 6 SATA ports + 2 SATA Express. The OEM can reuse the SATA Express ports; to my understanding, each SATA Express port basically consists of 2 dedicated SATA ports + power, so theoretically the chipset can support 10 normal SATA ports. Realistically, an OEM can give you a $200 motherboard with 8 SATA ports. Now add to this the fact that Linux loves common hardware and can be installed on anything.
Also, the same $200 board can have an NVMe M.2 slot, which can be great for a separate OS install.
You start to look at things differently. We have desktop systems, and AMD's new way of doing things is giving us more for less; actually they have been doing this for some time, but only now are we really getting something good, with a good CPU. Since when has AMD had 8 cores at 65 W? (Disregard the 16 threads, because with KVM you may want to disable SMT: https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liabp/liabpmicrothread.htm; see the sketch at the end of this post.)
And we're talking about a $200 motherboard and a $300 CPU. What was the cost of a good, fancy RAID controller that can support up to 8 drives again?
It seems good ones will cost you at least $250+, and the availability of RAID cards where I live is not as good as that of CPUs + motherboards.
Living in this region teaches you a lot of hacks and tricks, but if the system is durable enough, then why not? Sure it will be slower, but you just don't tax the chipset; don't fill it up if possible. Use
8 TB x 4
instead of 4 TB x 8.
I am actually going to go for a similar build very soon, and I'm feeling confident, because I was able to simulate the environment using VMware Workstation. Yes, I can't afford a real physical machine as a home lab (I can, actually, but the software one is good enough); salaries are also different here, and I am considered to have a very high salary for my age in my country (currently earning $1600 per month). But Workstation Pro can simulate and pass the CPU's AMD-V through to guest VMs, so I can get a rough idea of the real deal, and of the pitfalls and strengths. Trust me, for the price and how easy it is to manage, it will rock and blow anything out of the water.
Sure, only one person in the country will know how to run it (me), but I guess that's an extra point.
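On disabling SMT for KVM, a minimal sketch; /sys/devices/system/cpu/smt/control is the runtime knob on reasonably recent kernels (older kernels only offer the nosmt boot parameter):

#!/usr/bin/env python3
# Check, and optionally disable, SMT before using the box as a KVM host.
# Sketch: needs a kernel new enough to expose smt/control; run as root to change it.
SMT_CONTROL = "/sys/devices/system/cpu/smt/control"

with open(SMT_CONTROL) as f:
    print("SMT is currently:", f.read().strip())  # on / off / forceoff / notsupported

# Uncomment to turn SMT off at runtime (requires root):
# with open(SMT_CONTROL, "w") as f:
#     f.write("off")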
-
My 2 cents: if they had all been priced the same, they might not have come this far. Take VMware, for example.
-
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMware.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all. I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
-
@olivier said in If all hypervisors were priced the same...:
@tim_g said in If all hypervisors were priced the same...:
If features and costs (free) were identical across the board, I would choose KVM hands down.
I love being able to run off Fedora Server, plus all the doors that open up by doing that... which you can't get from Hyper-V or VMware.
Sure, Xen can be installed on there too, but it's dying and I'm less familiar with it.
Can you stop with that FUD? Thanks. It's not dying at all. I've been hearing this since 2006. It's like saying Linux is not secure because it's open source.
No fear or doubt, just uncertainty. But this is only because of how Citrix is treating XenServer, and how Amazon is moving from Xen to KVM.
I feel the only thing that can save Xen is XCP-ng. I'm really hoping for its success and have high hopes for it.