ProLiant buying advice
-
Our head office currently has three ProLiant DL380 G6s. One was bought in June 2009 and the other two in July 2010. I'm looking to replace the oldest - partly because it's nearly six years old, and partly because we're replacing our ERP system and need some extra resources during the transition. The other two I'll keep.
I don't have a great deal of confidence in our reseller's advice but am overwhelmed by the choices HP offer, and I'm not a hardware guy. I'm hoping for some good MangoAdvice. I appreciate that specification should start with analysing our existing workload, but I don't know where to start. I'd rather put the money I'd spend on paying someone to analyse my workload towards over-specifying the hardware. Besides, our environment changes, so I don't want to put too much emphasis on the present when things could be different in one, two or three years' time. I'm expecting our workload to reduce dramatically when/if we move to Office 365 next year. I'd rather get a basic but decent ProLiant initially, then upgrade parts if I get any user complaints (by adding more disks, etc.). We don't get any alarms going off in vSphere, so our existing infrastructure is OK.
So, I'll get to the point!
Old server is a DL380 G6 with 2 x Xeon X5560 CPUs, a P410i controller, 12GB RAM and 8 x 146GB 15k disks in a single RAID 10 array. Cost was $11,400 in 2009. This runs only one VM: our soon-to-be-replaced ERP system with SQL Server 2008.
I've been through the HP Quickspecs and pretty much picked a unit out of the air. This is:
DL380 Gen9 with 2 x E5-2620 v3 CPUs, P440ar controller, 64GB RAM and 8 x 600GB 10k disks. Cost is $8,100. My reseller "recommends" upgrading the controller to a P440/4G, the fans to a "high performance temperature fan kit", and the disks to 15k disks. Is this a good idea? 15k disks are very expensive - I'd have thought it would be better to add more 10k disks if performance is an issue (the unit supports 24 disks). I'm not expecting massive amounts of disk activity, as this host is likely to only be used for SQL Server applications and the databases will all fit into available RAM. It's the only area where we are theoretically downgrading, as our old server has 15k disks.
Am I right to go with Gen9? Gen8 units are still available, and I presume they're discounted quite a bit, but I figure it's always best to buy the latest generation - not least to protect against loss of VMware support in the future, as we may run these servers until VMware stops supporting them.
Thanks
-
First thing that comes to mind is Scott's adage: don't buy more than you need unless you already know what you're going to need in the near future; more often than not it will end up being wasted.
That said, do you know what your current IOPS usage is? By downgrading to 8 x 10K drives, you're losing roughly a third versus what you have today. Maybe this isn't a problem, maybe it is. Even if it's not a problem today, who's to say you won't come up short later if you put more services on this server in the future.
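To put rough numbers on that (the per-drive IOPS figures below are generic rule-of-thumb assumptions, not specs for your exact drives), a quick sketch:

```python
# Rough RAID 10 comparison. Per-drive IOPS figures are generic
# rule-of-thumb estimates, not measured values for these drives.
PER_DRIVE_IOPS = {"15k": 175, "10k": 125}

def raid10_iops(drives, kind, write_frac=0.5):
    raw = drives * PER_DRIVE_IOPS[kind]    # total backend IOPS
    # Each frontend write costs two backend IOs (one per mirror),
    # so backend cost per frontend IO = reads + 2 * writes.
    return raw / ((1 - write_frac) + 2 * write_frac)

old = raid10_iops(8, "15k")  # existing G6 array
new = raid10_iops(8, "10k")  # proposed Gen9 array
print(f"old ~{old:.0f} IOPS, new ~{new:.0f} IOPS, "
      f"loss ~{(1 - new / old):.0%}")
```

With a 50/50 read/write mix that comes out around a 29% drop, whatever per-drive figures you plug in, since the loss is just the 10K/15K ratio.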
If the DBs are small enough, you might consider a pair of SSDs for the DBs and everything else on the HDDs.
-
Use the recently posted IOPS calculations to determine what you have now versus what you will have with the new one, then look at actual usage in vSphere and make your decision.
A lot of SMBs do not need 15K even with big SQL databases, because big to an SMB is nothing to SQL.
-
@Dashrender said:
That said, do you know what your current IOPS usage is? By downgrading to 8 x 10K drives, you're losing roughly a third versus what you have today.
No I don't. A third? Does the fact that the drives and controller are six years newer affect performance (i.e. has disk performance improved in recent years)? Does the fact that they are 600GB versus 146GB make a difference? Also, would I be right in thinking that it is probably more economical to have a large number of 10k disks instead of a small number of 15k, given that 15k are nearly twice the price, so I could have almost 16 x 10k disks instead of 8 x 15k?
If the DBs are small enough, you might consider a pair of SSDs for the DBs and everything else on the HDDs.
As I mentioned, the DBs will probably only reach around 20GB, so with 64GB RAM, SQL Server should place the entire DB into memory. I think that means disk performance won't matter (but I'm not sure on the technicalities).
Existing IOPS on the old server wouldn't help much, as we're replacing our ERP system. I'll give it a go though - is it easy to do in vSphere?
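For what it's worth, this is the back-of-envelope RAM budget I have in mind (the overhead figures are my guesses for illustration, not measurements from the actual host):

```python
# Back-of-envelope RAM budget. The overhead figures are guesses
# for illustration, not measurements.
host_ram_gb = 64
hypervisor_overhead_gb = 4    # assumed ESXi + VM memory overhead
guest_os_and_apps_gb = 8      # assumed Windows + non-SQL working set
db_size_gb = 20               # expected size of the databases

headroom = host_ram_gb - hypervisor_overhead_gb - guest_os_and_apps_gb
print(f"{headroom} GB available for the SQL buffer pool; "
      f"a {db_size_gb} GB database "
      f"{'fits entirely in RAM' if db_size_gb <= headroom else 'spills to disk'}")
```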
-
@Carnival-Boy said:
No I don't. A third? Does the fact that the drives and controller are six years newer affect performance (i.e. has disk performance improved in recent years)?
Pretty much, no. Controllers have not been a bottleneck for a long time, and disk IOPS are constrained by rotational speed. So a good controller from ten years ago with 10K drives would perform nearly identically to a new controller with 10K drives today.
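Roughly speaking, each random IO costs an average seek plus half a rotation on average, which is why spindle speed is the ceiling. A quick sketch (the seek times are assumed typical values, not specs for any particular drive):

```python
# Why spindle speed caps IOPS: each random IO pays an average seek
# plus, on average, half a rotation. Seek times are assumed values.
def hdd_random_iops(rpm, avg_seek_ms):
    half_rotation_ms = 60_000 / rpm / 2   # ms per half revolution
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(f"10K rpm: ~{hdd_random_iops(10_000, 4.5):.0f} IOPS")
print(f"15K rpm: ~{hdd_random_iops(15_000, 3.5):.0f} IOPS")
```

That gives roughly 130 and 180 IOPS respectively, and no controller can make the platter spin faster.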
-
The 4GB cache on the P440 controller is a big deal. Really big deal. That is the one place where performance potentially leaps forward. That is a very large cache indeed and can absorb a lot of delay on the array itself.
-
@Carnival-Boy said:
No I don't. A third? Does the fact that the drives and controller are six years newer affect performance (i.e. has disk performance improved in recent years)? Does the fact that they are 600GB versus 146GB make a difference? Also, would I be right in thinking that it is probably more economical to have a large number of 10k disks instead of a small number of 15k, given that 15k are nearly twice the price, so I could have almost 16 x 10k disks instead of 8 x 15k?
You can look the IOPS up on the drives, but they really haven't changed that much from what I've seen. You get real change when you look at SSD vs HDD.
This wiki page describes it well.
In looking at the table, 15K SAS drives top out around 210 IOPS, SSDs can be over 100K IOPS, and other options can put you over a million.
That said, as JaredBusch mentioned, being an SMB your usage could be low enough that it really might not matter, especially combined with the possibility that you could load the entire DB into RAM.
As for whether or not 16 x 10K disks is better than 8 x 15K, there's more than the sheer number of drives to look at. There's power consumption, cooling, a server large enough to hold 16 drives, etc.
If your DB is really only 20 Gig, you might be better off dumping the HDDs altogether and instead going with 2 SSDs in RAID 1. You'll have less heat, less power draw, WAY faster drives, etc.
Current cost and future growth become the questions then.
-
@Carnival-Boy said:
Also, would I be right in thinking that it is probably more economical to have a large number of 10k disks instead of a small number of 15k, given that 15k are nearly twice the price so I could have almost 16 x 10k disks instead of 8 x 15k?
Using the numbers in the post @scottalanmiller linked a few days back, it looks like this.
8x drives:
http://i.imgur.com/nCHEJS0.jpg
16x drives:
http://i.imgur.com/pT1di5M.jpg
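In case the screenshots don't load, here's the same comparison worked through with the rule-of-thumb per-drive figures used earlier in the thread (assumed averages, not your drives' actual specs):

```python
# RAID 10 comparison with assumed per-drive averages: 175 IOPS for
# 15K drives, 125 IOPS for 10K drives, 50/50 read/write mix.
def raid10_iops(drives, per_drive, write_frac=0.5):
    raw = drives * per_drive
    # Each frontend write costs two backend IOs (one per mirror).
    return raw / ((1 - write_frac) + 2 * write_frac)

print(f" 8 x 15K: ~{raid10_iops(8, 175):.0f} blended IOPS")
print(f"16 x 10K: ~{raid10_iops(16, 125):.0f} blended IOPS")
```

So twice as many 10K drives comes out well ahead of the 8 x 15K array on raw IOPS, before you even get to the price difference.
-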
@Carnival-Boy said:
As I mentioned, the DBs will probably only reach around 20GB, so with 64GB RAM, SQL Server should place the entire DB into memory. I think that means disk performance won't matter (but I'm not sure on the technicalities).
That is mostly true, although writes always have to go to disk. That is where the 4GB cache, especially when assigned mostly to writes, can pay huge dividends.
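To give a feel for the scale (the average write size and the sustained flush rate below are assumptions for illustration, not measurements):

```python
# How big a write burst a 4GB write-back cache can absorb. The IO
# size and sustained flush rate are assumed values.
cache_gib = 4
avg_write_kib = 64            # assumed average write size
array_write_iops = 600        # assumed rate the disks drain the cache

writes_buffered = cache_gib * 1024 * 1024 // avg_write_kib
drain_seconds = writes_buffered / array_write_iops
print(f"~{writes_buffered:,} writes buffered, "
      f"~{drain_seconds:.0f}s of burst soaked up at {array_write_iops} IOPS")
```

On those assumptions the cache can soak up tens of thousands of writes, which is why bursty database commits feel so much faster behind a big cache.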
-
@Dashrender said:
If your DB is really only 20 Gig, you might be better off dumping the HDDs altogether and instead going with 2 SSDs in RAID 1. You'll have less heat, less power draw, WAY faster drives, etc.
This makes a lot of sense. Cheaper, many times faster.
-
The databases are small, but there is a lot of other data that will need plenty of storage. I'm guessing that database performance is the bottleneck rather than general file serving and other miscellaneous applications, so if I take away that concern (by ensuring plenty of RAM), I'm not sure I should be too concerned with other disk performance issues.
Are you saying that the best bang for my buck could be buying a P440 rather than P440ar? There isn't a lot of difference in price, but double the amount of write cache (4GB versus 2GB). HP don't do a unit that comes pre-configured with a P440 for some reason.
-
How much write cache do you have on the current machine?
-
512MB, battery backed.
-
The 4GB cache along with your 8 drives will probably be enough, but without real metrics on what your new ERP's requirements will be, no one can be sure.
-
@Dashrender said:
The 4GB cache along with your 8 drives will probably be enough, but without real metrics on what your new ERP's requirements will be, no one can be sure.
It's amazing how much difference a good cache can make.
-
These are the controller options:
Can anyone tell me the difference between "Flexible" Smart Array Controllers and the others. What are the ports for (and why would I want 2 ports rather than 1)?
Will 4GB always be faster than 2GB, or will it only have an effect if the 2GB gets filled whilst writing to disk and therefore has to wait (I'm not sure if I'm talking crap here or not)?
What is FIO?
And what does 'ar' stand for in the name P440ar?
-
@Carnival-Boy said:
Will 4GB always be faster than 2GB, or will it only have an effect if the 2GB gets filled whilst writing to disk and therefore has to wait (I'm not sure if I'm talking crap here or not)?
Not "always" but anytime that you are doing any amount of disk IO. If you don't have a total of 2GB of storage, for example, then 4GB of cache would be overkill. But given the size of modern storage (and certainly with your database being more than 2GB and your OS being larger than 2GB) you are into a range where yes, 4GB will always be faster. There is no situation where you will not use at least 2GB of disk reads or writes on any given boot up. That number is just so tiny compared to the size of your storage that while technically it might be too big for some workloads, no real world ones and certainly not yours. You would be safe buying 16GB or more of cache and knowing for certain that bigger kept meaning faster.
-
FIO, I believe, stands for "Factory Integrated Option" - HP's designation for parts that only ship pre-installed in a factory-built server rather than as standalone options. That's why you see the same controllers listed with and without FIO in the QuickSpecs.
-
Each port is another full SAS channel with full SAS bandwidth. You want that if you have a place to use it. You don't, so it does not matter to you. It is often used when you have an internal array plus an external array, or a massive internal array that you want to split.
-
I can't figure out what "ar" stands for, but it appears to mean that it is a mezzanine card rather than an add-on card.