Help choosing replacement Hyper-V host machines and connected storage
-
And the other is this: Xeon X5365 @ 3.00GHz, 2993 MHz
Same as above: quad core, no HT, 2007. This one has a 150W thermal envelope instead of 80W, so it's really wasteful on power. Discontinued in 2009 instead of 2010. But basically the same performance.
-
If high availability was not a concern, I am pretty confident that we could fit everything into a single R730xd without a problem and cut the overall project cost in half.
-
Just confirming: the R730xd won't just match the performance here, it will demolish it. Each of those R900s is a total of 16 eight-year-old cores, built on much slower technology than today's cores. That's 16 threads per chassis. And those threads were significantly hampered by being in a quad-socket system, which is just not efficient for the hardware to handle.
The R730xd can not only deliver more, it can deliver more power per processor. It doesn't just deliver more per processor than the R900 can in total, it can nearly match both R900s with a single processor! The R730xd has an optional 18-core Xeon processor with hyperthreading. That means 36 threads per processor: 36 threads from a single, highly efficient processor instead of 32 threads spread across eight processors split between two physical servers! The difference here is pretty dramatic. And, of course, the R730xd can hold two of those processors for a total of 72 threads in a single chassis, and each of those threads likely outperforms one of the threads from the R900 era.
So the degree to which the R730xd can potentially replace not one, not two, but FOUR fully loaded R900s (all four processors populated), all in a single chassis, is pretty dramatic.
Here is one of the procs from the R730xd:
http://ark.intel.com/products/81061/Intel-Xeon-Processor-E5-2699-v3-45M-Cache-2_30-GHz
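As a back-of-the-envelope sketch of the thread math above (core counts taken from the linked Intel ARK pages; per-thread performance differences between generations are deliberately not modeled here):

```python
# Rough hardware-thread comparison for the options discussed above.
# Counts come from the Intel ARK pages linked in the thread.

def threads_per_chassis(sockets, cores_per_socket, hyperthreading):
    """Total hardware threads in one server chassis."""
    return sockets * cores_per_socket * (2 if hyperthreading else 1)

# Dell R900: quad-socket, quad-core Xeons (X5365 era), no hyperthreading.
r900 = threads_per_chassis(sockets=4, cores_per_socket=4, hyperthreading=False)

# Dell R730xd: dual-socket E5-2699 v3, 18 cores per socket, with hyperthreading.
r730xd = threads_per_chassis(sockets=2, cores_per_socket=18, hyperthreading=True)

print(f"Two R900s:  {2 * r900} threads")   # 32
print(f"One R730xd: {r730xd} threads")     # 72
```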
-
Now that is the extreme approach, but worth considering. The R730xd will give you enormous headroom in a single chassis; nothing could be easier (or faster).
The other option is dual R510 units. The largest processor option there is a hex-core, with dual procs for a total of twelve raw cores per chassis compared to the sixteen of the R900. Of course these are faster, more efficient processors, but only by one Dell generation, so the leap is not the same as the one we were talking about above.
But the generation gap is enough to get us hyperthreading. So while by core count we are losing 25%, by thread count we get 24 instead of 16, which is a pretty major improvement. So we still, almost certainly, get quite a bit more performance per chassis than we did from the R900. We likely still need two chassis, but the cost of each chassis is low.
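The 25%-fewer-cores versus 50%-more-threads trade described above, as quick arithmetic (core counts from the thread; nothing here models per-core speed):

```python
# R900: quad-socket quad-core, no hyperthreading -> threads == cores.
r900_cores = 4 * 4             # 16 cores, 16 threads

# R510: dual-socket hex-core X5675 with hyperthreading.
r510_cores = 2 * 6             # 12 cores
r510_threads = r510_cores * 2  # 24 threads

core_deficit = 1 - r510_cores / r900_cores   # fraction of cores lost
thread_gain = r510_threads / r900_cores - 1  # fraction of threads gained

print(f"{core_deficit:.0%} fewer cores, {thread_gain:.0%} more threads")
# 25% fewer cores, 50% more threads
```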
So we have some excellent options both in single high end servers or dual older servers.
-
Oh, and here is the hex-core processor from the R510: http://ark.intel.com/products/52577/Intel-Xeon-Processor-X5675-12M-Cache-3_06-GHz-6_40-GTs-Intel-QPI
-
@Dashrender said:
Your new storage seems to be many times more than what you are currently using. Is that right?
What are you using for a backup solution?
Sorry for not replying sooner, the weekend sucked and I didn't have a chance to look at any of this.
You are correct, the new storage is larger since I'm looking to consolidate physical servers and Hyper-V clients onto these two servers. For backup, I'm looking at a Quantum Scalar i40 2x LTO-5 Fibre 40-slot tape library autoloader.
-
@JohnFromSTL said:
Sorry for not replying sooner, the weekend sucked and I didn't have a chance to look at any of this.
Weekends off? That's not how this works!
-
@JohnFromSTL said:
@Dashrender said:
Your new storage seems to be many times more than what you are currently using. Is that right?
What are you using for a backup solution?
Sorry for not replying sooner, the weekend sucked and I didn't have a chance to look at any of this.
You are correct, the new storage is larger since I'm looking to consolidate physical servers and Hyper-V clients onto these two servers. For backup, I'm looking at a Quantum Scalar i40 2x LTO-5 Fibre 40-slot tape library autoloader.
40 slots - WTH? How much is that thing?
-
@Dashrender said:
@JohnFromSTL said:
@Dashrender said:
Your new storage seems to be many times more than what you are currently using. Is that right?
What are you using for a backup solution?
Sorry for not replying sooner, the weekend sucked and I didn't have a chance to look at any of this.
You are correct, the new storage is larger since I'm looking to consolidate physical servers and Hyper-V clients onto these two servers. For backup, I'm looking at a Quantum Scalar i40 2x LTO-5 Fibre 40-slot tape library autoloader.
40 slots - WTH? How much is that thing?
$2,000 w/25 licenses
-
Wow, not bad. Not bad at all.
-
@scottalanmiller said:
@JohnFromSTL said:
Sorry for not replying sooner, the weekend sucked and I didn't have a chance to look at any of this.
Weekends off? That's not how this works!
The funny thing is, I would much rather have been at work this weekend than dealing with installing a new tub/shower faucet at home. I hate plumbing!
-
@scottalanmiller said:
Wow, not bad. Not bad at all.
Would a PowerVault MD3200i work with any of these servers that have been mentioned above?
-
@JohnFromSTL said:
@scottalanmiller said:
Wow, not bad. Not bad at all.
Would a PowerVault MD3200i work with any of these servers that have been mentioned above?
In what role?
-
@JohnFromSTL said:
Would a PowerVault MD3200i work with any of these servers that have been mentioned above?
That's a low-end iSCSI SAN, so compatibility is not a consideration: iSCSI is iSCSI and everything supports it. But there is no use case for you where this would make any sense. This would be about the worst possible option, right? Slow, expensive, and incredibly dangerous - no failover and very fragile. It would defeat everything that you are trying to accomplish. Isn't this the antithesis of your goals?
-
@scottalanmiller said:
@JohnFromSTL said:
Would a PowerVault MD3200i work with any of these servers that have been mentioned above?
That's a low-end iSCSI SAN, so compatibility is not a consideration: iSCSI is iSCSI and everything supports it. But there is no use case for you where this would make any sense. This would be about the worst possible option, right? Slow, expensive, and incredibly dangerous - no failover and very fragile. It would defeat everything that you are trying to accomplish. Isn't this the antithesis of your goals?
You're correct, it's the opposite of what I need. I'm frustrated that the purse-strings here are tighter than a frog's backside - and that's watertight.
-
@JohnFromSTL said:
You're correct, it's the opposite of what I need. I'm frustrated that the purse-strings here are tighter than a frog's backside - and that's watertight.
Even with tight purse-strings, though, doesn't the MD SAN approach make it worse? The cost of the SAN PLUS the cost of the drives, all for naught, since it doesn't accomplish any of the goals. If we were willing to skip all of the goals, we could do lots of things far more cheaply.
-
@scottalanmiller @Dashrender @StrongBad @Reid-Cooper
Apologies in advance if I missed anyone. So, there are several options on the table here.
@scottalanmiller said:
That is a huge amount of SQL Server workloads. Figuring out the CPU and memory needs for that will be the biggest part of doing capacity planning.
We have a lot of SQL databases, but they are not accessed that frequently. I'm working on gathering metrics for our three SQL Server instances.
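For that capacity-planning step, once counter samples are exported (e.g. CPU % or memory use per SQL Server instance from Performance Monitor), sizing usually keys off the average plus a high percentile rather than the raw peak. A minimal, stdlib-only sketch; the sample values below are made up purely for illustration:

```python
import statistics

def sizing_summary(samples):
    """Return (average, 95th percentile) of counter samples,
    using the simple nearest-rank percentile method."""
    ordered = sorted(samples)
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return statistics.mean(ordered), ordered[idx]

# Hypothetical CPU% samples from one SQL Server instance.
cpu_samples = [12, 8, 15, 40, 9, 11, 55, 13, 10, 14]

avg, p95 = sizing_summary(cpu_samples)
print(f"avg {avg:.1f}%, p95 {p95}%")  # avg 18.7%, p95 55%
```

Sizing hosts to the 95th percentile rather than the mean keeps occasional bursts from being averaged away, without over-buying for a single one-off spike.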
-
Lots of information to digest, I know.
-
@scottalanmiller said:
@JohnFromSTL said:
You're correct, it's the opposite of what I need. I'm frustrated the purse-strings are more watertight than a frog.
Even with tight purse-strings, though, doesn't the MD SAN approach make it worse? The cost of the SAN PLUS the cost of the drives, all for naught, since it doesn't accomplish any of the goals. If we were willing to skip all of the goals, we could do lots of things far more cheaply.
Reducing power consumption and BTUs is most important.
I'm not opposed to having multiple Hyper-V host machines, but that would require twin servers for failover and load sharing.
Did I list each of the servers I'm currently using?
-
If you don't need redundant servers for high availability, then going down to a single R720xd or R730xd is all that you need. One server, only two CPUs. A fraction of the cost of what we have been proposing, far easier to manage, and a tiny fraction of the power that you are burning up today.