Help choosing replacement Hyper-V host machines and connected storage
-
Okay then, I guess the smaller 3TB drives are just a silly choice. I can only imagine that this is because they are in short supply or something.
-
So I'm bored and looking into drive options. Here is another one...
1TB NL-SAS 2.5" drive. This would work in the other R720xd with the 25x 2.5" drives. They are $260 so this would cost a lot more but let you go for performance by having lots of drives. But that would more than double the storage cost and I think only RAID 6 is an option to get enough to fit into the chassis. So other than more spindles it does not work out very well. Still just 7200 RPM NL-SAS so not super fast.
-
Here is the more interesting small-drive option: 900GB 10K SAS 2.5" for $280 each. You would need even more of these and they are not cheap per GB, but they are a lot faster than the NL-SAS options.
25 of these in an R720xd would be $7,000 just for the drives in one of the two servers, so that is the entire budget just for drives. With RAID 10 you could get 10.8TB, which is not enough to even consider it. RAID 6 would be the only option, and that would be 20.7TB, which is plenty. So you could use one of the drives as a hot spare, or buy a few fewer drives to save money, but then you would be losing performance again.
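If anyone wants to play with the math, here is a minimal sketch of that calculation. The $280 price and 900GB size are just the figures quoted in this post, and the helper function is hypothetical, not from any tool:
```python
# Back-of-the-envelope math for the 900GB 10K SAS option discussed above.
# Prices and sizes are the figures quoted in this thread, nothing official.

def raid_usable_tb(drive_count, drive_tb, level):
    """Usable capacity of a single array, ignoring hot spares and formatting overhead."""
    if level == "raid10":
        return (drive_count // 2) * drive_tb   # half of the drives hold mirror copies
    if level == "raid6":
        return (drive_count - 2) * drive_tb    # two drives' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

drive_price, drive_tb = 280, 0.9
print(f"25 drives per server: ${25 * drive_price:,}")                             # $7,000
print(f"RAID 10 (24 drives):  {raid_usable_tb(24, drive_tb, 'raid10'):.1f} TB")   # 10.8 TB
print(f"RAID 6  (25 drives):  {raid_usable_tb(25, drive_tb, 'raid6'):.1f} TB")    # 20.7 TB
```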
-
1.8TB 10K SAS 2.5" drives. These would fix the RAID 10 capacity problem, but they are NOT cheap. Not a good option.
-
Okay, I think that we have a pretty good roundup of the options at this point. Until we have more information from the OP, such as whether there is more money, new requirements, or specific disk requirements, this seems to be the consensus:
Solution 1: The bare-bones, cost-saving solution is a two-node Dell R510 or R720xd cluster with 8x 4TB NL-SAS drives in RAID 10 (add drives as needed for performance, up to 12). The 4TB 3.5" drives are just too cheap not to use, and RAID 10 probably makes them the reasonably fast choice even though they are NL-SAS rather than 10K. Use Hyper-V and StarWind to do the clustering and failover. Might be able to come in somewhere around the assumed budget limits.
Solution 2: The expensive but easy approach. Scale with three nodes and everything included (hyperconverged) in a single package. A fraction of the work to set up or maintain. Will grow easily in the future. Likely far more expensive than the OP can justify.
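For a rough sense of where Solution 1 lands on capacity, here is a minimal sketch; it assumes plain RAID 10 math and that the two StarWind nodes mirror each other, so cluster-usable space is roughly one node's worth:
```python
# Quick capacity check for Solution 1: 4TB NL-SAS drives in RAID 10 per node,
# starting at 8 drives and growing toward 12. With two-node StarWind replication
# the nodes mirror each other, so the cluster gets roughly one node's capacity.
drive_tb = 4
for drives in (8, 10, 12):
    usable = (drives // 2) * drive_tb   # RAID 10 keeps half of the raw capacity
    print(f"{drives} x 4TB in RAID 10: {usable} TB usable per node")
```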
-
I agree, that seems to be where we are. RAID 10 makes sense; that workload is almost all database, and there don't seem to be affordable 10K SAS drive options, so we are a bit stuck there. There are not many options with those kinds of budgetary constraints. You kind of just have to do what has to be done.
-
@scottalanmiller said:
What is your current setup for storage? How many IOPS do you have available to your systems today?
I haven't obtained those numbers yet from our current servers.
I neglected to mention that we are using physical servers for SQL 2008 R2 and SQL 2012.
SQL 2008 R2:
PowerEdge 2950 Gen-III
CPU: (2) Xeon E5450 @ 3.00GHz, 2993 MHz
RAM: 32GB
PERC 6/i, RAID-5
(4) x 146.8GB Seagate Savvio 15K.2 SAS 15K RPM 16MB Cache 6Gb/s 2.5"
(4) x 146.8GB Hitachi Ultrastar C10K147 SAS 10K RPM 16MB Cache 3Gb/s 2.5"
SQL 2012:
PowerEdge 2950, Gen-II
CPU: (2) x Xeon X5365 @ 3.00GHz, 2993 MHz
RAM: 32GB
PERC 5/i, RAID-10
(4) x 600GB Toshiba AL13SXB600N SAS 15K RPM 64MB Cache 6Gb/s 2.5"
-
@StrongBad said:
Okay, I think that we have a pretty good roundup of the options at this point. Until we have more information from the OP, such as whether there is more money, new requirements, or specific disk requirements, this seems to be the consensus:
Solution 1: The bare-bones, cost-saving solution is a two-node Dell R510 or R720xd cluster with 8x 4TB NL-SAS drives in RAID 10 (add drives as needed for performance, up to 12). The 4TB 3.5" drives are just too cheap not to use, and RAID 10 probably makes them the reasonably fast choice even though they are NL-SAS rather than 10K. Use Hyper-V and StarWind to do the clustering and failover. Might be able to come in somewhere around the assumed budget limits.
Solution 2: The expensive but easy approach. Scale with three nodes and everything included (hyperconverged) in a single package. A fraction of the work to set up or maintain. Will grow easily in the future. Likely far more expensive than the OP can justify.
Would the CPU options on the Dell R510 or R720xd provide enough horsepower for the VMs?
-
@scottalanmiller said:
If the R910 is maxing out at, say, 20% CPU, then my guess is that an R720xd will do the trick to take over its load. The R720xd has two procs, each faster than the R910's. Not only are the individual procs faster, but by moving from quad procs to dual procs you gain a small amount of efficiency just from that one move. So faster procs, more efficient proc usage, and then cutting the total number of procs in half... it seems like you will be okay.
I apologize if I said R910, but I'm actually using 2 x PowerEdge R900 as Hyper-V host machines.
-
@JohnFromSTL said:
@StrongBad said:
Okay, I think that we have a pretty good roundup of the options at this point. Until we have more information from the OP, such as whether there is more money, new requirements, or specific disk requirements, this seems to be the consensus:
Solution 1: The bare-bones, cost-saving solution is a two-node Dell R510 or R720xd cluster with 8x 4TB NL-SAS drives in RAID 10 (add drives as needed for performance, up to 12). The 4TB 3.5" drives are just too cheap not to use, and RAID 10 probably makes them the reasonably fast choice even though they are NL-SAS rather than 10K. Use Hyper-V and StarWind to do the clustering and failover. Might be able to come in somewhere around the assumed budget limits.
Solution 2: The expensive but easy approach. Scale with three nodes and everything included (hyperconverged) in a single package. A fraction of the work to set up or maintain. Will grow easily in the future. Likely far more expensive than the OP can justify.
Would the CPU options on the Dell R510 or R720xd provide enough horsepower for the VMs?
The R720xd is better than the R510 in that regard. But if you are only hitting 10 - 15% on a monster R910, the R720 / R720xd with suitable processors should do just fine and probably not be much above 30%.
-
@JohnFromSTL said:
@scottalanmiller said:
If the R910 is maxing out at, say, 20% CPU, then my guess is that an R720xd will do the trick to take over its load. The R720xd has two procs, each faster than the R910's. Not only are the individual procs faster, but by moving from quad procs to dual procs you gain a small amount of efficiency just from that one move. So faster procs, more efficient proc usage, and then cutting the total number of procs in half... it seems like you will be okay.
I apologize if I said R910, but I'm actually using 2 x PowerEdge R900 as Hyper-V host machines.
Oh, that makes this that much easier (and makes that much more sense, as the R910 is not that old). That's a whole generation older than we were trying to match. I think the R510 has a decent chance of keeping up, and the R720xd will have no problem at all.
-
I'm about to sit down to eat with my kids. Once done I'll do some processor comparisons to come up with a good feel for what I think will make sense and see if the R510 seems reasonable.
We have three R510 units ourselves and love them. We use them as XenServer nodes. They work great. My favourite box from the x1x generation.
-
Your new storage seems to be many times more than you are currently using. Is that right?
What are you using for a backup solution?
-
Okay so the one processor is this: Intel Xeon Processor E5450 (12M Cache, 3.00 GHz, 1333 MHz FSB)
That's a quad-core, no hyper-threading processor from 2007. I think that we are going to be doing okay replacing this. With two of them, that box is a total of eight cores and eight threads; we can get single processors significantly faster than this today.
-
And the other is this: Xeon X5365 @ 3.00GHz, 2993 MHz
Same as above: quad core, no HT, 2007. This one is a 150W power envelope instead of 80W, so really wasteful on power. Discontinued in 2009 instead of 2010, but basically the same performance.
-
If high availability were not a concern, I am pretty confident that we could fit everything into a single R730xd without a problem and cut the overall project cost in half.
-
Just confirming: the R730xd won't just match the performance here, it will demolish it. Each of those R900s is a total of 16 eight-year-old cores, a much slower technology than cores today, and that's 16 threads per chassis. Those threads were also significantly hampered by being in a quad-socket system, which is just not efficient for the hardware to handle.
The R730xd can not only deliver more, it can deliver more power per processor. It doesn't just deliver more from one processor than an R900 does in total; it can nearly match both R900s with a single processor! The R730xd has an optional 18-core Xeon processor with hyper-threading. That means 36 threads per processor: 36 threads from a single, highly efficient processor instead of 32 threads spread out over eight processors split between two physical servers! The difference here is pretty dramatic. And, of course, the R730xd can hold two of those processors for a total of 72 threads in a single chassis split between two processors. Each of those threads likely outperforms one of the threads from the R900 era.
So the degree to which the R730xd could potentially replace not one, not two, but FOUR R900s fully loaded with all four processors, all in a single chassis, is pretty dramatic.
Here is one of the procs from the R730xd:
http://ark.intel.com/products/81061/Intel-Xeon-Processor-E5-2699-v3-45M-Cache-2_30-GHz
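A quick sketch of the thread arithmetic above, using the core counts from the public specs (it ignores per-core speed, which also heavily favours the newer parts):
```python
# Thread counts per chassis: quad-socket R900 with quad-core, non-HT procs
# versus a dual-socket R730xd with 18-core, hyper-threaded E5-2699 v3 procs.
def threads(sockets, cores_per_socket, hyperthreaded):
    return sockets * cores_per_socket * (2 if hyperthreaded else 1)

r900 = threads(4, 4, False)      # 16 threads per chassis
r730xd = threads(2, 18, True)    # 72 threads per chassis, 36 per processor

print(f"One R900:   {r900} threads")
print(f"Two R900s:  {2 * r900} threads")
print(f"One R730xd: {r730xd} threads ({threads(1, 18, True)} per processor)")
```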
-
Now that is the extreme approach, but it is worth considering. The R730xd would give you a huge surplus of power in a single chassis; nothing could be easier (or faster).
The other option is dual R510 units. Those top out at six-core processors, with dual procs for a total of twelve raw cores per chassis compared to the sixteen of the R900. Of course these are faster, more efficient processors, but only by one Dell generation, so the leap is not the same as we were talking about above.
But the generation gap is enough to move us to hyper-threading. So while by core count we are losing 25%, by threading we get 24 instead of 16, which is a pretty major improvement. So we still, almost certainly, get quite a bit more performance per chassis than we did in the R900. We likely still need two chassis, but the cost of each chassis is low.
So we have some excellent options here: either a single high-end server or a pair of older servers.
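And the same style of arithmetic for the R510 option, as a tiny sketch:
```python
# Dual six-core X5675 procs with hyper-threading versus the R900's
# sixteen plain cores per chassis.
r510_threads = 2 * 6 * 2   # dual procs, six cores each, HT doubles the thread count -> 24
r900_threads = 4 * 4       # quad procs, quad cores, no HT -> 16
print(f"R510: {r510_threads} threads per chassis vs R900: {r900_threads}")
```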
-
Oh, and here is the six-core processor from the R510: http://ark.intel.com/products/52577/Intel-Xeon-Processor-X5675-12M-Cache-3_06-GHz-6_40-GTs-Intel-QPI
-
@Dashrender said:
Your new storage seems to be many times more than you are currently using. Is that right?
What are you using for a backup solution?
Sorry for not replying sooner; the weekend sucked and I didn't have a chance to look at any of this.
You are correct, the new storage is larger since I'm looking to consolidate physical servers and Hyper-V guests onto these two servers. For backup, I'm looking at a Quantum Scalar i40 2x LTO-5 Fibre 40-slot tape library autoloader.