SATA vs NL-SAS vs SAS For New Array
-
As you have seen from another thread, I am going to need to replace the SSD array in my new server.
This server will be running XenServer, and the VMs will be running a DC, a data server, and a mail server.
I mainly chose SSD because we do not upgrade that often, and I figured for the low storage we need, it would be a good upgrade. But it was not to be.
1TB of total space will be more than enough at this point for our needs.
I am planning to set up a RAID 10 array for these disks. My question is: what type of drive would you go with for this new array?
-
I run more than one client on 7.2k SATA drives. DC, Exchange, SQL.
My recent purchases have been NL SAS since Xbyte has them cheap.
-
I run 7200 SATA almost everywhere...
-
I guess my thinking was that faster is always better.
If money were no object, would this be true? Are the decisions to go with 7.2K purely financial?
Or is there really very little difference among all these drives for these kinds of applications? I mean, obviously SSD is super fast, but perhaps not needed.
-
@JaredBusch said:
I run more than one client on 7.2k SATA drives. DC, Exchange, SQL.
My recent purchases have been NL SAS since Xbyte has them cheap.
I agree with JB - I do the same - I currently have 6 VMs running on an 8-drive RAID 10 array of 7.2K NL SAS drives with no problems.
But you did mention a data server - is that just for files, or a DB?
-
@BRRABill said:
I guess my thinking was that faster is always better.
If money were no object, would this be true? Are the decisions to go with 7.2K purely financial?
Or is there really very little difference among all these drives for these kinds of applications? I mean, obviously SSD is super fast, but perhaps not needed.
If money is no object - why not buy Dell SSDs and be done with it? lol, because money is an object.
You'd need to look at the IOPS difference between 7.2K and 10K - 15K today is almost never done because the drive price is pretty darned close to SSD, and since SSDs let you start using RAID 5 again, they can save you a bundle.
Your load is what matters - if you don't know the load, you don't know what you'll really need.
-
@Dashrender said:
Your load is what matters - if you don't know the load, you don't know what you'll really need.
But since everyone seems to be running on the lower speed drives, what kind of load would ever require the higher speed?
Is it a small percentage that ever needs it?
I understand it is all based on load. But do you load test every server, and then always just end up with 7.2K or 10K?
Sounds like 7.2K NL-SAS might just be the way to go.
-
You may have heard about servers with hundreds of VMs running on them? Those servers definitely have higher IOPS needs - perhaps not really beyond what HDDs can provide, but assuming it's just a throughput thing and not an amount-of-storage thing, SSDs can do it with fewer devices, meaning less heat, fewer parts to fail, etc.
-
So the other main question I have is ... RAID1 or RAID10.
I know the consensus is always RAID10. But I also know that @scottalanmiller always says to "be planning our arrays holistically and not after the number of drives is determined" so since I am at square 1 here, I'm thinking of options.
Since my storage requirements are low, would it be acceptable to just buy 2 larger drives and put them in a RAID1 array?
To do (4) 600GB 10K drives would cost $800. I could also buy (2) 1.2TB 10K drives for $660. Same amount of usable storage either way.
An even better example is the NL-SAS drives. The 1TB drive is $199; the 3TB version of the same drive is $159. (Based on ... inventory, I guess?) I could have three times the storage for less money. (Quick cost-per-TB math in the sketch below.)
I don't want to overcomplicate my situation. (I've been told I like to try to implement enterprise solutions in a SOHO space, which is a mistake.) But I know RAID10 is often considered "the safest of all choices, it is fast and safe".
But would RAID1 also work here, or am I nuts?
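Just to sanity-check the numbers, here's a quick back-of-the-envelope script. The per-drive prices for the 10K options are just the totals above divided by drive count, and usable capacity assumes mirroring (RAID 1 / RAID 10), so half of raw:

```python
# Rough cost-per-usable-TB comparison for the drive options quoted above.
options = [
    # (description, drive count, TB per drive, price per drive in $)
    ("4x 600GB 10K  (RAID 10)", 4, 0.6, 200),
    ("2x 1.2TB 10K  (RAID 1)",  2, 1.2, 330),
    ("2x 1TB NL-SAS (RAID 1)",  2, 1.0, 199),
    ("2x 3TB NL-SAS (RAID 1)",  2, 3.0, 159),
]

for name, drives, tb, price in options:
    total = drives * price
    usable = drives * tb / 2  # mirroring keeps half the raw capacity
    print(f"{name}: ${total} total, {usable:.1f}TB usable, ${total / usable:.0f}/TB")
```

The 3TB NL-SAS pair comes out around $106 per usable TB versus $667 for the 600GB 10K set, which is why the RAID1 option is so tempting.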
-
Holistically. So the question is: how many IOPS do you need? Let's assume you need 400 IOPS. Can you get 400 IOPS from 2 larger HDDs? Probably not.
Standard 7.2K NL SAS gives between 75 and 125 IOPS per drive. Right away we can see that we can't get enough IOPS using RAID 1, since two drives won't give us the needed 400 IOPS.
Assuming the low end, you need 6 drives in RAID 0 to get over 400 IOPS; on the high side you need 4. Now with RAID 10 you get half of all drives for writes and all drives for reads, so working from the write side, you would need between 8 and 12 drives to cover your bases on RAID 10 (see the sketch below). This tells us RAID 1 is out and we need RAID 10.
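A minimal sketch of that math, assuming the 75-125 IOPS per-drive range and the 400 IOPS target from above (controller cache ignored):

```python
import math

TARGET_IOPS = 400  # assumed workload from the example above

for per_drive in (75, 125):  # rough 7.2K NL-SAS per-spindle range
    # Striped (RAID 0) or the read side: every spindle contributes fully.
    striped = math.ceil(TARGET_IOPS / per_drive)
    # RAID 10 write side: each write lands on both halves of a mirror,
    # so only half the spindles count toward write IOPS.
    raid10 = 2 * math.ceil(TARGET_IOPS / per_drive)
    print(f"{per_drive} IOPS/drive: {striped} drives striped, "
          f"{raid10} drives in RAID 10 (write-limited)")
```

That prints 6 and 12 drives at the low end, 4 and 8 at the high end - the same 8-12 drive RAID 10 range as above.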
-
@Dashrender All of that completely ignores the onboard cache.
-
@brianlittlejohn It's like an extra $30 to get a NL-SAS over an Enterprise SATA drive....
-
With de-duplication, compression, and RAID 5/6, flash drives are cheaper than 10K RPM drives. We did the price comparisons when VSAN 6.2 came out, and 10K is officially "dead" unless all your data is encrypted or something. (Rough shape of the math in the sketch below.)
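For a feel of how that works out, here's an illustrative script - the prices and the 3:1 data-reduction ratio are made-up placeholders, not our actual comparison numbers:

```python
def cost_per_effective_tb(price, tb, drives, raid_efficiency, data_reduction):
    usable = drives * tb * raid_efficiency   # capacity left after RAID overhead
    effective = usable * data_reduction      # capacity after dedupe + compression
    return drives * price / effective

# 10K HDD in RAID 10: half of raw usable, no data reduction.
hdd = cost_per_effective_tb(330, 1.2, 8, raid_efficiency=0.5, data_reduction=1.0)

# SSD in an 8-drive RAID 5: 7/8 of raw usable, assumed 3:1 dedupe + compression.
ssd = cost_per_effective_tb(700, 0.96, 8, raid_efficiency=7/8, data_reduction=3.0)

print(f"10K RAID 10: ${hdd:.0f} per effective TB")
print(f"SSD RAID 5:  ${ssd:.0f} per effective TB")
```

With encrypted data the reduction ratio drops to roughly 1:1 and the flash advantage evaporates - hence the caveat.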
-
@John-Nicholson said:
@brianlittlejohn It's like an extra $30 to get a NL-SAS over an Enterprise SATA drive....
Yeah the NL-SAS stuff is crazy cheap.
-
@BRRABill It is also CRAZY slow ("low-latency tape" is what we call it). Useless for most workloads without a large cache in front of it.
-
@John-Nicholson said:
@BRRABill It is also CRAZY slow ("low-latency tape" is what we call it). Useless for most workloads without a large cache in front of it.
Then how is it so many people here are using it for their servers?
-
@BRRABill said:
@John-Nicholson said:
@BRRABill It is also CRAZY slow ("low-latency tape" is what we call it). Useless for most workloads without a large cache in front of it.
Then how is it so many people here are using it for their servers?
Perspective. I believe @John-Nicholson works in a large place running tons of workloads on each host.
-
@John-Nicholson said:
With de-duplication, compression, and RAID 5/6, flash drives are cheaper than 10K RPM drives. We did the price comparisons when VSAN 6.2 came out, and 10K is officially "dead" unless all your data is encrypted or something.
What are you using for de-dup and compression? Is that something native in hypervisors now? If not, it adds to the cost column.
-
@JaredBusch said:
@BRRABill said:
@John-Nicholson said:
@BRRABill It is also CRAZY slow ("low-latency tape" is what we call it). Useless for most workloads without a large cache in front of it.
Then how is it so many people here are using it for their servers?
Perspective. I believe @John-Nicholson works in a large place running tons of workloads on each host.
Agreed - HDD might be dead for the large companies, the big players, but in the SMB we have at least a year left, maybe two.
-
@Dashrender said:
@John-Nicholson said:
With de-duplication, compression, and RAID 5/6, flash drives are cheaper than 10K RPM drives. We did the price comparisons when VSAN 6.2 came out, and 10K is officially "dead" unless all your data is encrypted or something.
What are you using for de-dup and compression? Is that something native in hypervisors now? If not, it adds to the cost column.
There's dedupe in Win2K12 at the OS level, assuming you are deduplicating NTFS file systems. If you are using encryption, OS-level dedupe is the only way you will be able to dedupe the data, since encrypted blocks look unique to the array.
We use Pure Storage SANs, which support native dedupe at the block level. And it appears that VSAN supports block-level dedupe as well (toy sketch of the idea after the link below).
https://blogs.vmware.com/virtualblocks/2016/02/10/whats-new-vmware-virtual-san-6-2/
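If it helps to picture what block-level dedupe is actually doing, here's a toy sketch of the concept (real arrays like Pure and VSAN do this inline and far more cleverly; this is just the idea):

```python
import hashlib

BLOCK_SIZE = 4096  # 4 KB, a common dedupe granularity

def dedupe(data: bytes):
    store = {}   # block hash -> block contents (unique blocks only)
    recipe = []  # ordered hashes that reconstruct the original stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy seen
        recipe.append(digest)
    return store, recipe

# 100 identical 4 KB blocks (think: the same OS bits in every VM image)
# dedupe down to a single stored block.
pattern = b"0123456789abcdef" * 256  # exactly one 4 KB block
store, recipe = dedupe(pattern * 100)
print(f"logical blocks: {len(recipe)}, unique blocks stored: {len(store)}")
```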