Looking at New Virtual Host Servers (ESXi)



  • @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    A pair of 6134 would avoid the Windows Server core tax. It’s the best bang for the GHz buck and our go-to for most builds.

    If you need more pRAM, then go with the 6134M to gain access to 3TB per node.
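
For the "core tax" point above, a quick sketch of how Windows Server per-core licensing lands. The 8-core-per-CPU and 16-core-per-server minimums are Microsoft's published rules; the function itself is only an illustration:

```python
# Sketch of the Windows Server "core tax": per-core licensing with a
# minimum of 8 cores per CPU and 16 cores per server (sold in 2-core
# packs; pack rounding is ignored here since the counts are even).

def licensed_cores(sockets: int, cores_per_socket: int) -> int:
    """Cores that must be licensed: real cores, subject to the minimums."""
    per_socket = max(cores_per_socket, 8)   # 8-core minimum per CPU
    return max(sockets * per_socket, 16)    # 16-core minimum per server

# A pair of 8-core Xeon Gold 6134s lands exactly on the 16-core minimum,
# so nothing is licensed but unused:
assert licensed_cores(sockets=2, cores_per_socket=8) == 16

# Dual 12-core CPUs would mean licensing all 24 cores:
assert licensed_cores(sockets=2, cores_per_socket=12) == 24

# A single 8-core CPU still pays the 16-core server minimum:
assert licensed_cores(sockets=1, cores_per_socket=8) == 16
```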



  • Even though it's a small workload, I would still look at storage performance requirements closely before you make a purchase so you get the correct speed of drives. How is the OBR10 with 7200 RPM drives performing today? Would looking at 10K RPM drives improve performance and make a true business impact with your applications?



  • And since you have already paid for Essentials Plus, I can see how something like Starwind makes sense to pool your storage together. And I like the idea of a single CPU. Even though your vSphere license covers up to 2 CPUs in each host, you can start with one and add a second physical CPU later only if needed, saving a little cost on the front side.

    I saw VxRail mentioned, and I saw Starwind mentioned. But there is another option here. You could go single CPU and license vSAN for either two or three hosts. A two-host configuration does require a witness (basically a VM that must run outside the cluster, even if on a free ESXi host), where a three-node cluster would not. With vSAN Standard, you can do a hybrid vSAN and use disk groups made up of one caching drive (SSD) and multiple capacity drives. With a three-node cluster, there would be a copy of each VMDK on two hosts in the cluster and a witness component on the third host, allowing one host to be put in maintenance mode or even to completely fail without losing data.

    https://cormachogan.com/2017/03/27/debunking-behavior-myths-3-node-vsan-cluster

    Remember, as you are looking to do this, let your decision fall on something that will give you more capacity, better performance, and ease of management for future upgrades so you can stop focusing quite so much on keeping the lights on and spend more time innovating on other projects. Regardless of what you go with, I would plan this project so that you leave open drive bays in the hosts you are getting so you can scale up the storage in the future if you have the need.
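
As a rough illustration of the three-node capacity math above: with FTT=1 mirroring, every object is stored twice, and VMware's sizing guidance suggests keeping roughly 25-30% slack space free. The drive counts and sizes below are made-up example numbers:

```python
# Back-of-the-envelope usable capacity for a hybrid vSAN cluster using
# FTT=1 mirroring (two full copies of each object; witness components are
# negligible in size). The 30% slack reservation follows VMware's general
# sizing guidance; the drive layout is a made-up example.

def vsan_usable_tb(hosts: int, capacity_drives_per_host: int,
                   drive_tb: float, slack: float = 0.30) -> float:
    raw = hosts * capacity_drives_per_host * drive_tb
    mirrored = raw / 2              # FTT=1: every object stored twice
    return mirrored * (1 - slack)   # leave slack for rebuilds/rebalancing

# Example: 3 hosts, each with one SSD cache drive fronting 4x 2TB
# capacity drives -> 24TB raw, about 8.4TB usable.
print(round(vsan_usable_tb(hosts=3, capacity_drives_per_host=4,
                           drive_tb=2.0), 1))
```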



  • @networknerd said in Looking at New Virtual Host Servers (ESXi):

    Even though it's a small workload, I would still look at storage performance requirements closely before you make a purchase so you get the correct speed of drives. How is the OBR10 with 7200 RPM drives performing today? Would looking at 10K RPM drives improve performance and make a true business impact with your applications?

    You're correct that it's important to first look at storage performance requirements closely.

    OBR10 with big 7200 RPM drives is still slow as hell for a big hypervisor. I know this for a fact and experienced it first-hand on a host running about 50 VMs (at the time) on 6x 8TB 3.5" spinners (RAID10) as the main storage for the VMs, with a bunch of 1.8" SSDs for read/write caching.

    When I had the SSD caching disabled for some scheduled maintenance, the whole thing crawled. You do not want to run a bunch of VMs on a few 7200 RPM drives. You can't get high-capacity HDDs at 10K+ RPM, so if you're limited to 4-8 or so 3.5" bays, you generally need the big, slow ones.

    Basically, if you will be running a large number of VMs on a small number of 7200 RPM spinners, even in a RAID10, you'll typically need some kind of read or read/write caching technology if your VMs are doing any real work.
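
To put numbers on why a few spinners crawl under many VMs, here is a back-of-the-envelope IOPS sketch. The ~80 IOPS per 7200 RPM drive and the 70/30 read/write mix are common planning figures, not measurements:

```python
# Rule-of-thumb throughput of a small RAID10 of 7200 RPM spinners,
# showing why a handful of big slow drives crawls under many VMs.
# ~80 IOPS per 7200 RPM drive and a 70/30 read/write mix are common
# planning assumptions, not measurements.

def raid10_iops(drives: int, per_drive_iops: int = 80,
                read_pct: float = 0.7) -> float:
    reads = drives * per_drive_iops        # all spindles can serve reads
    writes = drives * per_drive_iops / 2   # RAID10 write penalty of 2
    return read_pct * reads + (1 - read_pct) * writes

# The 6x 8TB RAID10 from the post: a few hundred blended IOPS shared
# across ~50 VMs works out to roughly 10 IOPS per VM.
print(round(raid10_iops(drives=6)))   # about 408
```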



  • @obsolesce said in Looking at New Virtual Host Servers (ESXi):

    @networknerd said in Looking at New Virtual Host Servers (ESXi):

    Even though it's a small workload, I would still look at storage performance requirements closely before you make a purchase so you get the correct speed of drives. How is the OBR10 with 7200 RPM drives performing today? Would looking at 10K RPM drives improve performance and make a true business impact with your applications?

    You're correct that it's important to first look at storage performance requirements closely.

    OBR10 with big 7200 RPM drives is still slow as hell for a big hypervisor. I know this for a fact and experienced it first-hand on a host running about 50 VMs (at the time) on 6x 8TB 3.5" spinners (RAID10) as the main storage for the VMs, with a bunch of 1.8" SSDs for read/write caching.

    When I had the SSD caching disabled for some scheduled maintenance, the whole thing crawled. You do not want to run a bunch of VMs on a few 7200 RPM drives. You can't get high-capacity HDDs at 10K+ RPM, so if you're limited to 4-8 or so 3.5" bays, you generally need the big, slow ones.

    Basically, if you will be running a large number of VMs on a small number of 7200 RPM spinners, even in a RAID10, you'll typically need some kind of read or read/write caching technology if your VMs are doing any real work.

    It could be an option to have both: one RAID1 array with SSDs, say 2x 4TB, and one RAID1 with 3.5" HDDs, for example 2x 10TB Ultrastar He10. Fast SSD storage for the VMs that need it, and plenty of slow storage for the VMs that just need capacity.


  • @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    A pair of 6134 would avoid the Windows Server core tax. It’s the best bang for the GHz buck and our go-to for most builds.

    If you need more pRAM, then go with the 6134M to gain access to 3TB per node.

    That would reduce CPU performance, though, just to get access to RAM sizes above 600% of his current need; not much of a benefit.



  • @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    ...That's way more performance per thread (just because these are two generations newer machines) and double the threads and reducing the CPU to CPU overhead...

    Are you talking about the workload having to shift from one pCPU to the other as some kind of bottleneck? If so, I've never thought of it this way, but it would be an interesting point.



  • @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    A pair of 6134 would avoid the Windows Server core tax. It’s the best bang for the GHz buck and our go-to for most builds.

    If you need more pRAM, then go with the 6134M to gain access to 3TB per node.

    That would reduce CPU performance, though, just to get access to RAM sizes above 600% of his current need; not much of a benefit.

    I'm not sure I understand?


  • @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    A pair of 6134 would avoid the Windows Server core tax. It’s the best bang for the GHz buck and our go-to for most builds.

    If you need more pRAM, then go with the 6134M to gain access to 3TB per node.

    That would reduce CPU performance, though, just to get access to RAM sizes above 600% of his current need; not much of a benefit.

    I'm not sure I understand?

    CPU performance will be impacted a little, which means workloads will run slower, with the only benefit being that if he later needed a RAM increase of a completely absurd amount that would never, ever happen, he theoretically could do it.

    While it sounds nice to have access to memory options greater than 1.5TB, it's not of any real-world value to the OP; he doesn't need anywhere close to that. But having slower CPUs will affect him, even if just a tiny bit, in the real world every day that they own the server.


  • Also, with very rare exception, single-CPU approaches use less power, meaning lower operating cost, a better carbon footprint, less HVAC load, less noise, etc.
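
To put a rough number on the power point, a sketch with assumed figures. The 125W TDP and $0.12/kWh are placeholders, real draw varies with load, and cooling adds more on top:

```python
# Directional annual power-cost difference for a second CPU. The 125W TDP
# and $0.12/kWh figures are assumptions for the example; real draw varies
# with load, and HVAC overhead adds more on top of the raw delta.

TDP_WATTS = 125            # a Xeon Gold-class part, assumed at full TDP
KWH_PRICE = 0.12           # USD per kWh, assumed
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

print(f"~${annual_cost(TDP_WATTS):.0f}/year per host for the second CPU")
```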


  • And if we are talking about the theoretical "well, but he could grow significantly" kind of thing, then far more realistically, if he needed more than 1.5TB of RAM, he's very likely to need more CPU, too. By going with a single CPU now, he leaves open the option of doubling the CPU in the future as well, which is what you'd want in any normal case of needing 2TB of RAM or more, since you'd need more than the small amount of CPU listed here.


  • @donahue said in Looking at New Virtual Host Servers (ESXi):

    @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    ...That's way more performance per thread (just because these are two generations newer machines) and double the threads and reducing the CPU to CPU overhead...

    Are you talking about the workload having to shift from one pCPU to the other as some kind of bottleneck? If so, I've never thought of it this way, but it would be an interesting point.

    Workloads shifting between CPUs definitely causes a bottleneck, as does a split cache, and in many cases a workload may be forced to run partially on one CPU and partially on another, which causes a lot of extra latency.
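
A quick way to see the cross-CPU (NUMA) effect in sizing terms: once a VM's vCPU count exceeds the cores of a single socket, it has to span sockets and pay inter-CPU-link latency. The core counts below are example values:

```python
# Sizing view of the cross-CPU (NUMA) effect: a VM whose vCPU count fits
# within one socket keeps its memory accesses local; one that exceeds it
# must span sockets. Core counts below are example values.

def fits_one_numa_node(vm_vcpus: int, cores_per_socket: int) -> bool:
    return vm_vcpus <= cores_per_socket

# On the OP's current dual 4-core hosts, an 8-vCPU VM must span sockets;
# on a single 16-core CPU it stays local:
assert not fits_one_numa_node(8, cores_per_socket=4)
assert fits_one_numa_node(8, cores_per_socket=16)
```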



  • @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    A pair of 6134 would avoid the Windows Server core tax. It’s the best bang for the GHz buck and our go-to for most builds.

    If you need more pRAM, then go with the 6134M to gain access to 3TB per node.

    That would reduce CPU performance, though, just to get access to RAM sizes above 600% of his current need; not much of a benefit.

    I'm not sure I understand?

    CPU performance will be impacted a little, which means workloads will run slower, with the only benefit being that if he later needed a RAM increase of a completely absurd amount that would never, ever happen, he theoretically could do it.

    While it sounds nice to have access to memory options greater than 1.5TB, it's not of any real-world value to the OP; he doesn't need anywhere close to that. But having slower CPUs will affect him, even if just a tiny bit, in the real world every day that they own the server.

    Okay, I understand. The 6134 series is equivalent to the 3/7 series in the E5-2600 CPUs: lower-core-count, higher-GHz parts. We almost always deploy for GHz before core count unless business needs and budget allow for the top-end processors that have both.



  • While at Ignite, Dell had their new R7415, an AMD EPYC single-socket 2U, on display. There's also a 1U version in the R6415.

    Because of the extra PCIe lanes available in the EPYC CPU setup, along with the extra memory channels, one can get close to dual-processor performance out of a single-CPU setup. So, go 16-core EPYC single socket, load up the needed memory and storage, and off you go.

    I suggest having a boo at this setup. We are ...
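
The single-socket EPYC argument rests on published first-generation platform specs (128 PCIe lanes and 8 memory channels per Naples socket versus 48 lanes and 6 channels per first-gen Xeon Scalable socket), summarized here for comparison:

```python
# Published first-generation platform figures behind the single-socket
# EPYC argument: one Naples socket exposes 128 PCIe lanes and 8 DDR4
# channels, versus 48 lanes and 6 channels per first-gen Xeon Scalable
# socket. Summarized here for comparison only.

epyc_1s = {"pcie_lanes": 128, "mem_channels": 8}
xeon_1s = {"pcie_lanes": 48,  "mem_channels": 6}
xeon_2s = {"pcie_lanes": 96,  "mem_channels": 12}   # two sockets combined

# One EPYC socket beats even a dual-socket Xeon board on PCIe lanes:
assert epyc_1s["pcie_lanes"] > xeon_2s["pcie_lanes"]
# and a single socket has two more memory channels than a single Xeon:
assert epyc_1s["mem_channels"] > xeon_1s["mem_channels"]
```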


  • @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    While at Ignite, Dell had their new R7415, an AMD EPYC single-socket 2U, on display. There's also a 1U version in the R6415.

    Because of the extra PCIe lanes available in the EPYC CPU setup, along with the extra memory channels, one can get close to dual-processor performance out of a single-CPU setup. So, go 16-core EPYC single socket, load up the needed memory and storage, and off you go.

    I suggest having a boo at this setup. We are ...

    Much like how IBM and Oracle have been designing servers for years.