Looking at New Virtual Host Servers (ESXi)



  • And if we are talking about the theoretical "well, but he could grow significantly" scenario, then far more realistically, if he needed more than 1.5TB of RAM he would very likely need more CPU, too. By going with a single CPU now, he leaves open the option of doubling the CPU in the future as well. In any normal case of needing 2TB of RAM or more, you'd want more CPU than the small amount listed here.
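The 1.5TB ceiling discussed in this thread follows from simple DIMM math. A quick sketch, using assumed but typical figures for this platform class (6 memory channels per socket, 2 DIMMs per channel, 64GB RDIMMs):

```python
# Illustrative capacity math; channel/DIMM counts are assumptions for this
# platform class, not specs pulled from the OP's quotes.
CHANNELS_PER_SOCKET = 6
DIMMS_PER_CHANNEL = 2
DIMM_SIZE_GB = 64  # e.g. 64 GB RDIMMs

def max_ram_tb(sockets: int) -> float:
    """Total installable RAM in TB for a given populated socket count."""
    return sockets * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL * DIMM_SIZE_GB / 1024

print(max_ram_tb(1))  # 0.75 TB with one socket populated
print(max_ram_tb(2))  # 1.5 TB with both sockets populated
```

This is why a single-socket build today still leaves a large growth path: populating the second socket later doubles both CPU and memory capacity at once.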



  • @donahue said in Looking at New Virtual Host Servers (ESXi):

    @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    ...That's way more performance per thread (just because these are two generations newer machines) and double the threads and reducing the CPU to CPU overhead...

    Are you talking about the workload having to shift from one pCPU to the other as some kind of bottleneck? If so, I've never thought of it this way, but it would be an interesting point.

    Workloads shifting between sockets definitely causes a bottleneck, as does a split cache, and in many cases people may be forced to have a workload running partially on one CPU and partially on another, which causes a lot of extra latency.
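The cross-socket penalty being described here is what NUMA distance tables express (e.g. the output of `numactl --hardware` on Linux: 10 means local access, larger numbers mean slower remote access). A small sketch using illustrative distance values, not measurements from any machine in this thread:

```python
# SLIT-style NUMA distances for a two-socket box: 10 = local node access,
# larger values = relatively slower remote access. Values are illustrative.
distances = {
    (0, 0): 10, (0, 1): 21,
    (1, 0): 21, (1, 1): 10,
}

def remote_overhead(dist: dict) -> float:
    """Relative cost of remote (cross-socket) vs. local memory access."""
    local = dist[(0, 0)]
    remote = dist[(0, 1)]
    return remote / local

print(remote_overhead(distances))  # 2.1 -> remote access ~2x the local cost
```

A VM whose vCPUs and memory straddle both sockets pays something like that remote penalty on every cross-node access, which is the latency cost being described above.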



  • @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @scottalanmiller said in Looking at New Virtual Host Servers (ESXi):

    @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    @wrx7m said in Looking at New Virtual Host Servers (ESXi):

    Should I stick with 2 CPUs? We currently have 4 cores per CPU and 2 CPUs per server. I would be looking at increasing the core count, too. I don't think adding pCPUs would benefit me.

    A pair of 6134s would avoid the Windows Server core tax. It's the best bang for the GHz buck and our go-to for most builds.

    Need more pRAM? Then the 6134M, to gain access to 3TB per node.

    That would reduce CPU performance, though, in order to get access to RAM sizes above 600% of his current need; not much of a benefit.

    I'm not sure I understand?

    CPU performance will be impacted a little, which means workloads will run slower, with the only benefit being that, in the unlikely event he later needed an absurdly large RAM increase that would never actually happen, he theoretically could do it.

    While it sounds nice to have access to memory options greater than 1.5TB, it's of no real-world value to the OP; he doesn't need anywhere close to that. But having slower CPUs will affect him, even if just a tiny bit, in the real world every day that they own the server.

    Okay, I understand. The 6134 series are equivalent to the 3/7 series in the E5-2600 CPUs: lower-core-count, higher-GHz parts. We almost always deploy for GHz before core count unless business needs, and budget, allow for the top-end processors that have both.



  • While at Ignite, Dell had their new R7415, a single-socket AMD EPYC 2U, on display. There's also a 1U version in the R6415.

    Because of the extra PCIe lanes available in the EPYC CPU setup, along with the extra memory channels, one can get close to dual-processor performance out of a single-CPU setup. So, go with a 16-core EPYC single socket, load up the needed memory and storage, and off you go.

    I suggest having a boo at this setup. We are ...
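The single-socket EPYC argument rests on I/O and memory headroom. For reference, these are the commonly cited platform figures (first-generation EPYC vs. Skylake-SP Xeon; listed here as context, not specs from the posts above):

```python
# Commonly cited per-socket platform figures for this generation.
platforms = {
    "EPYC 7xx1 (1 socket)": {"pcie_lanes": 128, "mem_channels": 8},
    "Xeon SP (1 socket)":   {"pcie_lanes": 48,  "mem_channels": 6},
}

for name, spec in platforms.items():
    print(f"{name}: {spec['pcie_lanes']} PCIe lanes, "
          f"{spec['mem_channels']} memory channels")
```

A single EPYC socket offers more PCIe lanes than two Xeon SP sockets combined, which is why a one-socket EPYC box can approach dual-socket I/O capacity with no NUMA boundary to cross.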



  • @phlipelder said in Looking at New Virtual Host Servers (ESXi):

    While at Ignite, Dell had their new R7415, a single-socket AMD EPYC 2U, on display. There's also a 1U version in the R6415.

    Because of the extra PCIe lanes available in the EPYC CPU setup, along with the extra memory channels, one can get close to dual-processor performance out of a single-CPU setup. So, go with a 16-core EPYC single socket, load up the needed memory and storage, and off you go.

    I suggest having a boo at this setup. We are ...

    Much like how IBM and Oracle have been designing servers for years.



  • @wrx7m There are a lot of models which should fit your needs. You can find more information on this page: https://www.starwindsoftware.com/starwind-hyperconverged-appliance .
    Also, you can request a demo on that page to see how HA works in real life - it's free 😊

