New Infrastructure to Replace Scale Cluster
-
@Pete-S said in New Infrastructure to Replace Scale Cluster:
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
@Pete-S said in New Infrastructure to Replace Scale Cluster:
@mroth911 said in New Infrastructure to Replace Scale Cluster:
2x 6-core, 72GB of RAM. I just installed oVirt with 2x 300GB SAS as the OS, with 4TB storage, on each server.
That's not far from what you have in the Scale cluster. I'd say build up the cluster on the R710s, move the VMs, re-purpose the Scale servers to KVM cluster nodes, move back the VMs.
If your Scale is 3 years old then it's newer than the R710s and you should be able to get another couple of years out of them - if you can put in generic disks and spare parts.
IMHO, the R710s are a little bit too old already to be running for a few more years. But as a temporary solution, why not?
It's a bit more than in the Scales: 50% more cores per node and 8GB more RAM.
I would not move back. His "new" hardware is bigger than the old.
Oops, I'm mathematically challenged.
But if the Scale is 3 years old then the computers are much younger than the R710s (which are 7-8 years old).
Probably R320
-
@scottalanmiller said in New Infrastructure to Replace Scale Cluster:
Probably R320
According to Scale, the only 1150 model that has a 480GB SSD and 3x 1TB has an E5-2620 V4 CPU (8-core, 2.1GHz).
E5-2600 V4 support came in the 13th gen PowerEdge. The R330 is E3 series, so it has to be an R430 or some special cloud model.
Xeon E5 is also the dual-CPU series, but the Scale unit only has 2x 400W power supplies, so it might not be able to run dual CPUs.
-
Anyway, the R710 is 5500 or 5600 series CPUs. Then you had E5-2600 V1, E5-2600 V2, E5-2600 V3 and then E5-2600 V4.
So there are a couple of generations between them, and every generation is faster. I think the 8-core CPU in the R430 will be pretty evenly matched with the 2x 6-cores in the R710.
But there are 20-core, even 22-core, CPUs in the E5-2600 V4 series too, so it's possible to go all-out if you want to upgrade the performance on those servers. In the Xeon 5600 series you only have 6-core CPUs, so the R710s are maxed out already.
-
I haven't read the thread, so apologies if I repeat anyone else's words.
Here are some points:
- Central storage is not an SPOF if done right: it will have redundant parts that can keep it going in case of a component failure, and it can be cloned. I've never seen a well-built SAN go completely down in over 20 years of working with them.
- On the other hand, hyperconvergence is a resource drain, with systems like Gluster and Ceph eating up resources they share with the hypervisor, neither being aware of the other, and VMs end up murdered by the OOM killer or just stalled due to CPU overcommitment.
- Gluster and other regular network-based storage systems are going to be the bottleneck for VM performance. So unless you don't care about everything being sluggish, you should think about getting a separate fabric for the storage traffic, even if you hyperconverge.
- oVirt can be really nice, but you have to understand what it was built for, and not try to bend it out of shape with ridiculous requirements. A well-built and pretty much zero-maintenance oVirt setup will have central storage, proper power management (you do have DRACs, right? - there's a rough sketch of that just after this list) and doesn't use Hosted Engine. That will require more than 3 hosts.
- How many VMs, and how powerful, will they be? I would really go with a two-node cluster, and use the third host as a NAS plus a standalone libvirt VM for the engine. This is the usual approach for a budget setup, where you can't afford something better.
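On the power management point above: for illustration, here's a rough sketch of what enabling fencing on a host could look like with the oVirt Python SDK (ovirtsdk4). The engine URL, host name, iDRAC address and credentials are all placeholders, and the same thing can of course be done from the web UI.

```python
# Hypothetical sketch: enable power management (fencing) on an oVirt host
# via the ovirtsdk4 Python SDK. All names, addresses and passwords below
# are placeholders, not values from this thread.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='engine-admin-password',
    insecure=True,  # use ca_file='ca.pem' in a real setup
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=hv1')[0]
host_service = hosts_service.host_service(host.id)

# Register the iDRAC as an IPMI fence agent for this host
host_service.fence_agents_service().add(
    types.Agent(
        address='10.0.0.101',      # iDRAC IP (placeholder)
        type='ipmilan',
        username='root',
        password='idrac-password',
        order=1,
    )
)

# Turn power management on so the engine can fence the host
host_service.update(
    types.Host(
        power_management=types.PowerManagement(enabled=True),
    )
)

connection.close()
```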
-
@dyasny Wow, wow... no Hosted Engine? How come everyone keeps pushing HE?
Why no HE?
-
@FATeknollogee because it doesn't scale. For a small setup it will work (because you don't want to waste a machine on it), but at scale you will keep getting hit by problems. Remember, the engine runs two Postgres databases, both under stress, as well as a Java-based engine, which is also a resource hog (it's Java, after all). Add the fact that it's doing a lot of network traffic, polling all those hypervisors and pulling a lot of data about everything they do every 2 seconds, and you have a VM that is doing a LOT.
For a few hypervisors it will not be a huge issue, but drive that up past a point and you end up in a world of hurt. So for anything large-ish, or where reliability is important, just avoid HE.
-
@dyasny The 300 host install you mentioned in the other thread is non-HE?
-
@FATeknollogee absolutely. Pretty much every setup with over 20 hosts that I've ever built wasn't using HE.
-
@dyasny What do you lose without HE?
Without HE, does it become a manual setup where one can't use Cockpit to set it up?
-
@FATeknollogee Yes, it's a simple setup where you run ovirt-engine-setup and it asks you a few questions on the command line. For ease of management, I usually deploy it in a standalone VM on a separate machine. This way, if I need more resources, I can stop the machine, give it some more cores/RAM or move its disk to faster storage, and start it up again. Backing it all up is as simple as copying the VM disk.
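For illustration, a minimal sketch of that stop / resize / restart flow for a standalone engine VM, using the libvirt Python bindings (the same steps work with plain virsh). The domain name 'ovirt-engine' and the new sizes are assumptions, not anything from this thread:

```python
# Hypothetical sketch: grow a standalone engine VM's vCPUs and memory.
# Domain name and sizes are placeholders.
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('ovirt-engine')

# Ask the guest to shut down cleanly, then wait until it is actually off
if dom.isActive():
    dom.shutdown()
    while dom.isActive():
        time.sleep(5)

# Bump the persistent (next boot) vCPU count: maximum first, then current
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_CONFIG | libvirt.VIR_DOMAIN_VCPU_MAXIMUM)
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

# Bump memory the same way (values are in KiB): 16 GiB here
new_mem_kib = 16 * 1024 * 1024
dom.setMemoryFlags(new_mem_kib, libvirt.VIR_DOMAIN_AFFECT_CONFIG | libvirt.VIR_DOMAIN_MEM_MAXIMUM)
dom.setMemoryFlags(new_mem_kib, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

# Start the engine VM back up with the new configuration
dom.create()
conn.close()
```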
-
@dyasny What are you deploying as a standalone VM? I thought you said no HE?
-
@FATeknollogee just a regular libvirt/KVM VM, usually. If there is a multi-vendor virt environment, I install the engine in the second setup (VMware/Hyper-V), and often the vCenter is installed in RHV.
-
@dyasny Next time I do HE, I think I'll install it as a separate VM instead of the VM-inside-a-VM approach.
-
@FATeknollogee that really depends on your cluster size. If you can afford to dedicate a separate host to it, then why not. Besides scalability, your main benefit will be not having to deal with all the hosted-engine clustering overhead. It really makes life simpler.
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
@FATeknollogee that really depends on your cluster size. If you can afford to dedicate a separate host to it, then why not.
You mean a separate host where the HE vm lives on?
-
@FATeknollogee said in New Infrastructure to Replace Scale Cluster:
@dyasny said in New Infrastructure to Replace Scale Cluster:
@FATeknollogee that really depends on your cluster size. If you can afford to dedicate a separate host to it, then why not.
You mean a separate host where the HE vm lives on?
Yes, it's your choice whether to do it in a VM though, it can be on baremetal
-
@dyasny said in New Infrastructure to Replace Scale Cluster:
Yes, it's your choice whether to do it in a VM though, it can be on baremetal
Is doing it in a VM bad? That would be my choice unless there is some compelling reason to do it on baremetal.
-
@FATeknollogee doing it in a VM is convenient. You can always move that VM to another host, you can easily back it up by copying its disk and domxml, and you can even set it up as an HA cluster with pacemaker protecting the libvirt service. Databases, though, are more comfortable on baremetal, so if you're going to build something with hundreds of hosts, I'd suggest you invest in the engine host as well.
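As a rough sketch of that copy-the-disk-and-domxml backup, using the libvirt Python bindings. The domain name and backup path are placeholders, and it assumes file-backed disks and that the engine VM is shut down (or snapshotted) so the copy is consistent:

```python
# Hypothetical sketch: back up a libvirt VM by saving its XML definition
# and copying its disk images. Names and paths are placeholders.
import os
import shutil
import xml.etree.ElementTree as ET
import libvirt

DOMAIN = 'ovirt-engine'        # standalone engine VM (placeholder)
BACKUP_DIR = '/backup/engine'  # destination (placeholder)

os.makedirs(BACKUP_DIR, exist_ok=True)

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName(DOMAIN)

# 1. Save the domain definition (the "domxml")
xml_desc = dom.XMLDesc(0)
with open(os.path.join(BACKUP_DIR, DOMAIN + '.xml'), 'w') as f:
    f.write(xml_desc)

# 2. Copy every file-backed disk referenced in the definition
root = ET.fromstring(xml_desc)
for disk in root.findall("./devices/disk[@device='disk']"):
    source = disk.find('source')
    if source is not None and 'file' in source.attrib:
        shutil.copy2(source.attrib['file'], BACKUP_DIR)

conn.close()
```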
-
@dyasny so to clarify for me, as I'm fighting a headache.
This design is similar to that of ESXi with vSphere, in that you should have 3 physical hosts, one of which has the vSphere service installed on it.
Correct?
-
@DustinB3403 no, in this particular setup you have two options. The original one would be to go hyperconverged, installing both the storage and hypervisor services on all 3 hosts, and also deploying the engine (the vSphere equivalent) as a VM inside the setup (that's called self-hosted engine).
The better option, IMO, is to use two hosts as hypervisors, and pack the third with disks and use it as the storage device (NFS or iSCSI). And also install the engine on it, as a VM or on baremetal - doesn't matter.
You will have fewer hypervisors, true, but having a storage service on the hypervisors is a resource drain, so you don't actually lose as much in terms of resources. And you gain a proper storage server, less management headache, and a setup that can scale nicely if you decide to add hypervisors or buy a real SAN. Performance will also be better, and you might even end up with more available disk space, because you will not have to keep 3 replicas of every byte like Gluster/Ceph require you to do.
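For what it's worth, once the engine is installed, registering the two hypervisors and the third box's NFS export can be scripted with the oVirt Python SDK (ovirtsdk4). A rough sketch, with every hostname, password and export path being a placeholder:

```python
# Hypothetical sketch: add two hypervisor hosts and an NFS data domain to a
# fresh oVirt engine via ovirtsdk4. All values are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='engine-admin-password',
    insecure=True,  # use ca_file='ca.pem' in a real setup
)
system = connection.system_service()

# Register the two hypervisor hosts in the Default cluster
hosts_service = system.hosts_service()
for name, address in [('hv1', 'hv1.example.com'), ('hv2', 'hv2.example.com')]:
    hosts_service.add(
        types.Host(
            name=name,
            address=address,
            root_password='host-root-password',
            cluster=types.Cluster(name='Default'),
        )
    )

# Create a data storage domain backed by the storage box's NFS export.
# (It still has to be attached to the data center afterwards, via the UI or SDK.)
system.storage_domains_service().add(
    types.StorageDomain(
        name='nfs-data',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='hv1'),
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='storage.example.com',
            path='/exports/data',
        ),
    )
)

connection.close()
```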