Infrastructure Needed for Hypervisor Cluster
-
If I were building my own for a lab, I'd install whatever hypervisor on RAID 1 or 10 (or RAID 5, if I can get SSDs), put StarWind VSAN on both of them, and go...
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
But SAN and VLAN don't when you purchase 1 SAN and X servers on top of it to go back and connect to it.
They still "do for storage what virtualization did for computing" for most people - which is allow consolidation and abstraction.
I suppose if you are going from a bunch of 1U servers with six 300GB 10 NL disks to two 1U servers with 2 disks and a SAN sitting behind it, it looks consolidated...
SAN has always been for storage consolidation. That was its only real purpose for a long time. Using it for anything else was a recent concept. SAN's primary functionality from inception to today was "cost savings through consolidation at the expense of all other primary factors such as performance, reliability, etc."
-
@scottalanmiller But my point is, looking at it in layman's terms, seeing 3 boxes versus seeing 6 boxes means "WOOT, I saved money."
When the reality is that it likely cost as much as or more than a well-designed, more reliable approach.
-
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
Sort of, but it then begs the question of "Didn't SAN and VLAN already do that?" And they did, so it's not a great definition all on its own.
VLANs don't provide end-to-end transport across long distances (unless you're that insane person who believes in running layer 2 between continents or data centers at the physical underlay, and wants to risk the spanning tree gods destroying your data center). VLANs don't provide portability of networks across sites. VLANs don't provide consistent layer 3 and layer 7 security and edge services between hardware. Yes, I know PVLANs exist, and no, they don't do all, or really any, of this (they're just useful for guest-to-guest isolation). Microsegmentation, security service insertion, VXLAN gateways and overlays, policies that stick to VMs (or users of VMs) and follow them, etc., fall under modern network virtualization services.
Hypervisors provided similar features to mainframes of old (LPAR), but did so on generic servers, without the need for proprietary hardware. SANs typically ended up with proprietary disk arrays, and while storage virtualization is a thing, it's generally tied to one proprietary platform that everything hair-pins through. SDS systems also exist, but you're dedicating compute to those platforms, while HCI is about being able to flex that pool of resources between storage, compute, and networking functions.
Notice I say generic servers and not just x86. ARM HCI is upon us.
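To put one of those differences in numbers: a VLAN tag carries a 12-bit segment ID, while a VXLAN header carries a 24-bit VNI, which is a big part of why overlays scale to multi-tenant environments. A quick sketch of the math (plain Python, just illustrating the two ID spaces):

```python
# VLAN IDs are 12 bits in the 802.1Q tag; VXLAN VNIs are 24 bits in the VXLAN header.
VLAN_ID_BITS = 12
VXLAN_VNI_BITS = 24

vlan_segments = 2 ** VLAN_ID_BITS      # 4096 (and IDs 0 and 4095 are reserved)
vxlan_segments = 2 ** VXLAN_VNI_BITS   # 16,777,216 possible overlay segments

print(vlan_segments)   # 4096
print(vxlan_segments)  # 16777216
```

So an overlay gives you roughly 4,000× the segment namespace before you even get to the portability and policy features above.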
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
I suppose if you are going from a bunch of 1U servers with six 300GB 10 NL disks to two 1U servers with 2 disks and a SAN sitting behind it, it looks consolidated...
I'm more a fan of not using spinning drives for boot devices: flash SATADOM or M.2 devices. Even USB/SD cards (slower on boot, and you have to redirect logs) tend to have better thermal resistance than spinning disks.
-
@StorageNinja said in Infrastructure Needed for Hypervisor Cluster:
VLANs don't provide end-to-end transport across long distances (unless you're that insane person who believes in running layer 2 between continents or data centers at the physical underlay, and wants to risk the spanning tree gods destroying your data center). VLANs don't provide portability of networks across sites. VLANs don't provide consistent layer 3 and layer 7 security and edge services between hardware. Yes, I know PVLANs exist, and no, they don't do all, or really any, of this (they're just useful for guest-to-guest isolation). Microsegmentation, security service insertion, VXLAN gateways and overlays, policies that stick to VMs (or users of VMs) and follow them, etc., fall under modern network virtualization services.
HC doesn't address any of that, either, though.
-
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I want to use KVM on the nodes, or perhaps Failover Cluster Manager if I want to use Hyper-V on the nodes.
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I want to use KVM on the nodes, or perhaps Failover Cluster Manager if I want to use Hyper-V on the nodes.
Yeah, that's a way to go. oVirt can be external, too.
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I want to use KVM on the nodes, or perhaps Failover Cluster Manager if I want to use Hyper-V on the nodes.
I don't think two nodes is enough if you want to play with clusters. Better to have more nodes with less RAM/CPU and storage, like 4 or 6 or something.
Maybe try to find a used multi-node server. Many manufacturers make them: Dell, HPE, Fujitsu, IBM, Supermicro, Intel, etc. They're not blade servers; more often they're 2U servers with 2 or 4 motherboards inside. I guess you could go for blade servers too. Search for "node server" and you'll find them.
PS. I see you have Dell servers. In the Dell world, it's the PowerEdge C series that are their multi-node machines.
-
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I want to use KVM on the nodes, or perhaps Failover Cluster Manager if I want to use Hyper-V on the nodes.
I don't think two nodes is enough if you want to play with clusters. Better to have more nodes with less RAM/CPU and storage, like 4 or 6 or something.
Maybe try to find a used multi-node server. Many manufacturers make them: Dell, HPE, Fujitsu, IBM, Supermicro, Intel, etc. They're not blade servers; more often they're 2U servers with 2 or 4 motherboards inside. I guess you could go for blade servers too. Search for "node server" and you'll find them.
PS. I see you have Dell servers. In the Dell world, it's the PowerEdge C series that are their multi-node machines.
I'd love to have more, but two is what I have. I think my initial goal of just learning to build a cluster of more than one server can still be achieved.
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
So if HA isn't necessary, you could potentially have nodes with various hardware -- such as in my lab where I've accumulated two different servers with different hardware specs: a Dell R310 and a Dell T420.
Sure. People do that all of the time.
Excellent. So back in my lab with two dissimilar servers, I'd have the same hypervisor on each, then one of them would have a VM running the application used to create and manage the cluster. Example applications would be oVirt if I want to use KVM on the nodes, or perhaps Failover Cluster Manager if I want to use Hyper-V on the nodes.
I don't think two nodes is enough if you want to play with clusters. Better to have more nodes with less RAM/CPU and storage, like 4 or 6 or something.
Maybe try to find a used multi-node server. Many manufacturers make them: Dell, HPE, Fujitsu, IBM, Supermicro, Intel, etc. They're not blade servers; more often they're 2U servers with 2 or 4 motherboards inside. I guess you could go for blade servers too. Search for "node server" and you'll find them.
PS. I see you have Dell servers. In the Dell world, it's the PowerEdge C series that are their multi-node machines.
I'd love to have more, but two is what I have. I think my initial goal of just learning to build a cluster of more than one server can still be achieved.
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
But I guess you could even do it with just one server and use nested virtualization. The question is how realistic it's going to be compared to a real cluster. But maybe it will be enough.
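For anyone who does try the nested route on a Linux/KVM host, the kernel exposes whether nesting is enabled through a module parameter. A rough sketch of a check, assuming a Linux host with the kvm_intel or kvm_amd module loaded (the parameter file reports Y/N on older kernels and 1/0 on newer ones):

```python
from pathlib import Path

def nested_flag_enabled(flag_text: str) -> bool:
    """The kernel reports 'Y' (older kernels) or '1' (newer ones) when nesting is on."""
    return flag_text.strip() in ("Y", "1")

def kvm_nested_enabled() -> bool:
    # Intel and AMD expose the parameter under different module names.
    for module in ("kvm_intel", "kvm_amd"):
        param = Path(f"/sys/module/{module}/parameters/nested")
        if param.exists():
            return nested_flag_enabled(param.read_text())
    return False  # KVM module not loaded, or not a Linux/KVM host

if __name__ == "__main__":
    print(kvm_nested_enabled())
```

This only tells you the host will advertise VT-x/AMD-V to its guests; whether the nested guests perform realistically enough for lab purposes is the question Pete raises above.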
-
If you wanted to purchase cheap servers (excluding xByte, as they are more for production in terms of cost and warranty), you might get more bang for your buck from a vendor like OrangeComputers.com.
-
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
But I guess you could even do it with just one server and use nested virtualization. The question is how realistic it's going to be compared to a real cluster. But maybe it will be enough.
Yeah. I'm not interested in trying to pull off nested virtualization, since like you say, it's not something that I'll deal with in the real world.
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
But I guess you could even do it with just one server and use nested virtualization. The question is how realistic it's going to be compared to a real cluster. But maybe it will be enough.
Yeah. I'm not interested in trying to pull off nested virtualization, since like you say, it's not something that I'll deal with in the real world.
Pick a hypervisor and add StarWind VSAN to both nodes. You get your shared storage that way, and for a lab environment, that should be perfect.
-
@dafyre said in Infrastructure Needed for Hypervisor Cluster:
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
But I guess you could even do it with just one server and use nested virtualization. The question is how realistic it's going to be compared to a real cluster. But maybe it will be enough.
Yeah. I'm not interested in trying to pull off nested virtualization, since like you say, it's not something that I'll deal with in the real world.
Pick a hypervisor and add StarWind VSAN to both nodes, and make sure both nodes have enough room for holding all the VMs... You get your shared storage that way, and for a lab environment, that should be perfect.
-
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
I don't think two nodes is enough if you want to play with clusters. Better to have more nodes with less ram/cpu and storage. Like 4 or 6 or something.
Not only is it enough, it's often recommended. There is no such requirement for three servers.
-
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
That's a VMware requirement, not a clustering requirement. And even on VMware, it's not an actual requirement, they just need a third witness that isn't part of the cluster.
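The reason a witness is needed at all is quorum math: a cluster keeps running only while a strict majority of votes is reachable, so two nodes alone can't survive a split, but two nodes plus a one-vote witness can. A minimal sketch of the majority rule (illustrative only, not any vendor's actual algorithm):

```python
def has_quorum(reachable_votes: int, total_votes: int) -> bool:
    """Strict-majority quorum: more than half of all votes must be reachable."""
    return reachable_votes > total_votes // 2

# Two nodes, no witness: losing either node loses quorum (1 of 2 is not a majority).
print(has_quorum(1, 2))  # False

# Two nodes plus a witness (3 votes total): surviving node + witness = 2 of 3.
print(has_quorum(2, 3))  # True
```

That 2-of-3 case is exactly why the witness can sit outside the cluster: it only has to contribute a tie-breaking vote, not run workloads.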
-
@EddieJennings said in Infrastructure Needed for Hypervisor Cluster:
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
But I guess you could even do it with just one server and use nested virtualization. The question is how realistic it's going to be compared to a real cluster. But maybe it will be enough.
Yeah. I'm not interested in trying to pull off nested virtualization, since like you say, it's not something that I'll deal with in the real world.
Nothing like that is needed; two-node clusters are the standard for the SMB. Three and larger is for scalable clusters (where you can grow by just adding a node).
-
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
That's a VMware requirement, not a clustering requirement. And even on VMware, it's not an actual requirement, they just need a third witness that isn't part of the cluster.
But what most people do is put that witness on their 2-node cluster...
Which could cause issues for all kinds of reasons...
-
@DustinB3403 said in Infrastructure Needed for Hypervisor Cluster:
@scottalanmiller said in Infrastructure Needed for Hypervisor Cluster:
@Pete-S said in Infrastructure Needed for Hypervisor Cluster:
When I looked at it I came to the conclusion that I would need a minimum of three nodes and they should be the same CPU generation and have the same NIC configuration.
That's a VMware requirement, not a clustering requirement. And even on VMware, it's not an actual requirement, they just need a third witness that isn't part of the cluster.
But what most people do is put that witness on their 2-node cluster...
Which could cause issues for all kinds of reasons...
Yeah, but there's no need for the witness node outside of VMware.