SAS SSD vs SAS HDD in a RAID 10?
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
OBR5 is the standard if you are going to be using an SSD
Are there any good sources that express that as best practice? I'm looking for myself now too and by the way....
There can never be a best practice of this sort. It's standard practice to start with RAID 5 for SSD due to the risk types and levels, but not on HDs for the same reason. RAID 10 tends to saturate RAID controllers with SSD, but not with HDs.
As with all RAID, it comes down to price / risk / performance. And for most deployments, RAID 5 gives the best blend with SSDs; and RAID 10 gives the best blend for HDs. But in both cases, RAID 6 is the second most popular choice, and RAID 10 is an option with SSDs.
With SSDs, you rarely do RAID 10. If you really need the speed, you tend to do RAID 1 with giant NVMe cards instead.
Yeah, sorry, I guess I shouldn't have said "best practice". I was more or less looking for some information that would help validate what Dustin said. I wanted to look into it more and educate myself as much as possible.
Well I think if I am able to go with the SSD drives, I will do a RAID 6. I am creating a few different server builds as options that display different levels of performance and cost.
Speaking of my RAID card, I am looking at the H740P which has 8GB of NV cache memory and flash backed cache. I still need to educate myself on this stuff as well because I'm not sure if this is overkill or not. My other option was the H330, which has none of that.
EDIT: Never mind on the H330; it doesn't offer RAID 6 as an option.
NV cache is important. Battery-backed cache adds a maintenance item: the batteries wear out or die outright at some point, and when they do the RAID engine flips over to write-through, which hurts performance. The pain would be noticeable on rust, maybe not so much on SSD, depending on throughput needs.
With today's RAID engines running two or more processors, having more cache RAM is a good thing. More than 4GB of cache RAM? That depends on the setup and on which advanced features would be used, such as SSD cache add-ons when mixing SSD and rust.
Since this setup will be SQL, I suggest running a Telegraf/InfluxDB/Grafana stack to baseline the current SQL server's usage patterns. That would give both a big-picture and a close-up view to work from when extrapolating future performance needs as things grow.
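If standing up the full stack feels like too much at first, the core of it is just sampling counters on an interval and pushing them into a time-series database; Grafana is only the front-end that graphs what InfluxDB stores. A minimal sketch of that collection loop (not Telegraf itself; it assumes the psutil and influxdb Python packages and an InfluxDB 1.x instance on localhost) would look something like this:

```python
# Rough sketch of what a Telegraf-style collector does: sample disk counters
# every few seconds and push them to InfluxDB so Grafana can graph them.
# Assumes `pip install psutil influxdb` and an InfluxDB 1.x server on localhost.
import time
import psutil
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="baseline")
client.create_database("baseline")

prev = psutil.disk_io_counters()
while True:
    time.sleep(10)
    cur = psutil.disk_io_counters()
    client.write_points([{
        "measurement": "disk_io",
        "fields": {
            # Counters are cumulative, so diff them to get per-interval rates.
            "read_iops": (cur.read_count - prev.read_count) / 10.0,
            "write_iops": (cur.write_count - prev.write_count) / 10.0,
            "read_MiBps": (cur.read_bytes - prev.read_bytes) / 10.0 / 2**20,
            "write_MiBps": (cur.write_bytes - prev.write_bytes) / 10.0 / 2**20,
        },
    }])
    prev = cur
```

Telegraf replaces a script like this with ready-made inputs, including SQL Server and Windows performance counter plugins, so the setup is mostly editing a config file.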
Suffice it to say, we'd run with the maximum count of smaller-capacity SAS SSDs in RAID 6 on a RAID controller with at least 2GB of NVRAM. That should yield at least 15K IOPS per disk and more than enough MiB/second of throughput.
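To put rough numbers on that, here's a back-of-the-envelope sketch; the drive count, capacity, and per-disk IOPS below are illustrative assumptions only:

```python
# Back-of-the-envelope RAID 6 sizing: usable capacity is (n - 2) drives' worth,
# and small random writes pay a parity penalty of roughly 6 back-end I/Os each.
# All of the inputs here are illustrative assumptions.
n_drives = 8            # "maximum count, smaller capacity"
drive_tb = 1.92         # per-drive capacity in TB
drive_iops = 15_000     # conservative per-disk IOPS for a SAS SSD

usable_tb = (n_drives - 2) * drive_tb
read_iops = n_drives * drive_iops
write_iops = n_drives * drive_iops / 6   # RAID 6 small-write penalty

print(f"Usable capacity: {usable_tb:.2f} TB")          # 11.52 TB
print(f"Aggregate reads: ~{read_iops:,.0f} IOPS")      # ~120,000
print(f"Random writes:   ~{write_iops:,.0f} IOPS")     # ~20,000
```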
Suggestion: Make sure the entire storage stack is set up with 64KB block sizes to balance IOPS and throughput.
-
@phlipelder said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@dustinb3403 said in SAS SSD vs SAS HDD in a RAID 10?:
OBR5 is the standard if you are going to be using an SSD
Are there any good sources that express that as best practice? I'm looking for myself now too and by the way....
There can never be a best practice of this sort. It's standard practice to start with RAID 5 for SSD due to the risk types and levels, but not on HDs for the same reason. RAID 10 tends to saturate RAID controllers with SSD, but not with HDs.
As with all RAID, it comes down to price / risk / performance. And for most deployments, RAID 5 gives the best blend with SSDs; and RAID 10 gives the best blend for HDs. But in both cases, RAID 6 is the second most popular choice, and RAID 10 is an option with SSDs.
With SSDs, you rarely do RAID 10. If you really need the speed, you tend to do RAID 1 with giant NVMe cards instead.
Yeah, sorry, I guess I shouldn't have said "best practice". I was more or less looking for some information that would help validate what Dustin said. I wanted to look into it more and educate myself as much as possible.
Well I think if I am able to go with the SSD drives, I will do a RAID 6. I am creating a few different server builds as options that display different levels of performance and cost.
Speaking of my RAID card, I am looking at the H740P which has 8GB of NV cache memory and flash backed cache. I still need to educate myself on this stuff as well because I'm not sure if this is overkill or not. My other option was the H330, which has none of that.
EDIT: Never mind on the H330; it doesn't offer RAID 6 as an option.
NV cache is important. Battery-backed cache adds a maintenance item: the batteries wear out or die outright at some point, and when they do the RAID engine flips over to write-through, which hurts performance. The pain would be noticeable on rust, maybe not so much on SSD, depending on throughput needs.
With today's RAID engines running two or more processors, having more cache RAM is a good thing. More than 4GB of cache RAM? That depends on the setup and on which advanced features would be used, such as SSD cache add-ons when mixing SSD and rust.
Since this setup will be SQL, I suggest running a Telegraf/InfluxDB/Grafana stack to baseline the current SQL server's usage patterns. That would give both a big-picture and a close-up view to work from when extrapolating future performance needs as things grow.
Something like Grafana is only the front-end, right? Would InfluxDB be the logging component? I would be interested in gathering performance data, but I fear that setting something up would be time-consuming and end up not working, as most of this stuff seems to go.
Suffice it to say, we'd run with the maximum count of smaller-capacity SAS SSDs in RAID 6 on a RAID controller with at least 2GB of NVRAM. That should yield at least 15K IOPS per disk and more than enough MiB/second of throughput.
Yeah, I was also considering a RAID 6 with 5 SAS SSD drives, but then in the discussion on here some were saying RAID 1 or 5 would be good too. I'm still undecided though.
Suggestion: Make sure the entire storage stack is set up with 64KB block sizes to balance IOPS and throughput.
Hmm.. I usually leave the defaults on that sort of thing until I know more about the technology. Is 64K usually the default?
-
Hmm.. I usually leave the defaults on that sort of thing until I know more about the technology. Is 64K usually the default?
No. When we deploy (note that we deploy on Storage Spaces), we make sure the stack from the platters/SSD up to the OS is configured with 64KB block sizes for database-driven systems. There are exceptions to the rule, such as highly active IOPS setups with smaller write sizes, where we might push that stack to 32KB to get more IOPS out.
For setups doing fairly mundane day-to-day file work, 128KB or 256KB (the usual default) is okay.
For archival storage, or storage that hosts something like 4K video files, we'd push out to 512KB or 1024KB depending on the network fabric.
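That balance falls out of simple arithmetic: an array has both an IOPS ceiling and a bandwidth ceiling, and the block size decides which one you hit first. A quick sketch with made-up ceilings (both figures are illustrative assumptions, not measurements):

```python
# Crude model of the IOPS/throughput balance: the array has both an IOPS
# ceiling (small random I/O) and a bandwidth ceiling (large sequential I/O),
# and the block size decides which limit you hit first.
# Both ceilings below are illustrative assumptions, not measurements.
IOPS_CEILING = 100_000          # small-block random I/O limit
BANDWIDTH_CEILING_MIB = 4_000   # large-block sequential limit, MiB/s

for block_kb in (4, 32, 64, 128, 256, 512, 1024):
    iops = min(IOPS_CEILING, BANDWIDTH_CEILING_MIB * 1024 / block_kb)
    mib_per_s = iops * block_kb / 1024
    print(f"{block_kb:>5} KB blocks: ~{iops:>9,.0f} IOPS, ~{mib_per_s:>7,.0f} MiB/s")
```

Smaller blocks buy IOPS at the cost of throughput and larger blocks do the reverse, which is the trade-off behind picking 32KB, 64KB, or larger per workload.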
-
@phlipelder I don't understand why we keep talking about SAS/SATA SSDs and RAID performance when it's a dead technology, suitable for legacy applications only?
NVMe drives are many hundreds of percent faster, have much higher IOPS, lower latency and the software stack is much more optimized.
-
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
@phlipelder I don't understand why we keep talking about SAS/SATA SSDs and RAID performance when it's a dead technology, suitable for legacy applications only?
NVMe drives are many hundreds of percent faster, have much higher IOPS, lower latency and the software stack is much more optimized.
NVMe is nowhere near as mature a technology as SAS. The resilience that's built into SAS is just not there yet with NVMe. That's why hyper-converged is such a big thing.
Locally attached storage, such as NVMe, gets shared out across nodes with resilience built in at the node-local storage level and up.
-
I'm reading all about 512n vs 512e right now but I'm not certain what I should go with. Any recommendations?
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm reading all about 512n vs 512e right now but I'm not certain what I should go with. Any recommendations?
That's referring to the sector size on the drive itself. It really doesn't matter at all.
-
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm reading all about 512n vs 512e right now but I'm not certain what I should go with. Any recommendations?
That's referring to the sector size on the drive itself. It really doesn't matter at all.
I can't imagine that it doesn't matter at all...
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm reading all about 512n vs 512e right now but I'm not certain what I should go with. Any recommendations?
That's referring to the sector size on the drive itself. It really doesn't matter at all.
I can't imagine that it doesn't matter at all...
What you care about with drives is speed and capacity. What in that table makes you think the bytes-per-sector or bytes-per-physical-sector values matter?
The piece of the table you show is literally talking about how the drive electronics read and write sectors to the drive medium. No modern OS cares, and it will perform, for all intents and purposes, exactly the same.
-
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm reading all about 512n vs 512e right now but I'm not certain what I should go with. Any recommendations?
That's referring to the sector size on the drive itself. It really doesn't matter at all.
I can't imagine that it doesn't matter at all...
What you care about with drives is speed and capacity. What in that table makes you think the bytes-per-sector or bytes-per-physical-sector values matter?
The piece of the table you show is literally talking about how the drive electronics read and write sectors to the drive medium. No modern OS cares, and it will perform, for all intents and purposes, exactly the same.
Isn't there potentially less usable storage space with 512e? Isn't 512n older than 512e and aren't there slight performance differences? I read stuff online but I try to steer clear of random people saying things on random forums since there's no way to tell if they know what they are talking about. I'm reading through this document right now in the hope that it leads me to the best decision.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm reading all about 512n vs 512e right now but I'm not certain what I should go with. Any recommendations?
That's referring to the sector size on the drive itself. It really doesn't matter at all.
I can't imagine that it doesn't matter at all...
What you care about with drives is speed and capacity. What in that table makes you think the bytes-per-sector or bytes-per-physical-sector values matter?
The piece of the table you show is literally talking about how the drive electronics read and write sectors to the drive medium. No modern OS cares, and it will perform, for all intents and purposes, exactly the same.
Isn't there potentially less usable storage space with 512e? Isn't 512n older than 512e and aren't there slight performance differences?
Yes, there are performance differences. If you can actually notice them in real usage, then you'll be the first I've heard of it.
I read stuff online but I try to steer clear of random people saying things on random forums since there's no way to tell if they know what they are talking about. I'm reading through this document right now in the hope that it leads me to the best decision.
White paper = sales brochure. At best, they're trying to confuse you in the hopes that you'll buy the more expensive stuff.
The performance difference is so small as to be statistically irrelevant.
IOPS will be the same, and it's IOPS that we really care about for servers.
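For what it's worth, the only real mechanism behind a 512e penalty is alignment: the drive exposes 512-byte logical sectors but reads and writes 4KiB physically, so a write that doesn't line up with those 4KiB boundaries costs a read-modify-write inside the drive. A tiny, purely illustrative sketch of that check:

```python
# 512e drives present 512-byte logical sectors but use 4096-byte physical
# sectors underneath. A write that doesn't cover whole physical sectors
# forces the drive to read, modify, and rewrite a full 4KiB sector.
PHYSICAL = 4096

def needs_rmw(offset_bytes: int, length_bytes: int) -> bool:
    """True if a write at this offset/length touches a partial physical sector."""
    return offset_bytes % PHYSICAL != 0 or length_bytes % PHYSICAL != 0

print(needs_rmw(offset_bytes=1_048_576, length_bytes=65_536))  # False: aligned 64KB write
print(needs_rmw(offset_bytes=512, length_bytes=512))           # True: sub-sector write
```

Modern OSes and RAID controllers typically align partitions at 1MiB and issue 4KiB-or-larger I/O, which is why the difference is so hard to see in practice.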
-
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@travisdh1 said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
I'm reading all about 512n vs 512e right now but I'm not certain what I should go with. Any recommendations?
That's referring to the sector size on the drive itself. It really doesn't matter at all.
I can't imagine that it doesn't matter at all...
What you care about with drives is speed and capacity. What in that table makes you think the bytes-per-sector or bytes-per-physical-sector values matter?
The piece of the table you show is literally talking about how the drive electronics read and write sectors to the drive medium. No modern OS cares, and it will perform, for all intents and purposes, exactly the same.
Isn't there potentially less usable storage space with 512e? Isn't 512n older than 512e and aren't there slight performance differences?
Yes, there are performance differences. If you can actually notice them in real usage, then you'll be the first I've heard of it.
I read stuff online but I try to steer clear of random people saying things on random forums since there's no way to tell if they know what they are talking about. I'm reading through this document right now in the hope that it leads me to the best decision.
White paper = sales brochure. At best, they're trying to confuse you in the hopes that you'll buy the more expensive stuff.
The performance difference is so small as to be statistically irrelevant.
IOPS will be the same, and it's IOPS that we really care about for servers.
OK then, where do you go to get technical information on different technologies like this?
-
The PM863A's at xByte are refurbs. They are $1,299 each and include the correct tray with a 1-year warranty. If you buy them in a current-gen server, they include Dell's NBD onsite warranty. If you need a better SLA on the warranty, we can do that as well!
-
We also have the 3.84TB MU TLC NVMe (PM1725a). Not too much more at $1,999 each. The R740XD will allow up to 24 NVMe drives and the R640 will allow up to 8.
-
@bradfromxbyte said in SAS SSD vs SAS HDD in a RAID 10?:
We also have the 3.84TB MU TLC NVMe (PM1725a). Not too much more at $1,999 each. The R740XD will allow up to 24 NVMe drives and the R640 will allow up to 8.
Do you put those NVMe cards in a RAID config or what? I'm looking at an R640 build right now, and I saw one of those 3.84TB NVMe cards was like $5K.
-
It's not hardware RAID. It bypasses the PERC completely and goes from the backplane directly to the processor. Any management is done via the OS.
-
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
ok, well if I want to do a RAID 1 then, I've got these as options as they are almost 4TB:
- 3.84TB SSD SAS Read Intensive 12Gbps 512n 2.5in Hot-plug Drive, PX05SR, 1 DWPD, 7008 TBW - $4,673.84/ea.
- 3.84TB SSD SAS Read Intensive 12Gb 512e 2.5in Hot-plug Drive, PM1633a, 1 DWPD, 7008 TBW - $4,391.49/ea.
- 3.84TB SSD SATA Read Intensive 6Gbps 512n 2.5in Hot-plug Drive, PM863a - $3,262.09/ea.
- 3.84TB SSD SATA Read Intensive 6Gbps 512e 2.5in Hot-plug Drive, S4500, 1 DWPD, 7008 TBW - $3,262.09/ea.
And I could toss out the H740P and go back to the H330
You pay a severe Dell tax on those prices.
PM863a is a Samsung drive and the real price is around $1500.
The S4500 is Intel but an older, slower model; the newer one is the S4510. Real price on the newer model is around $1500. I don't have prices on the PX05SR (Toshiba) or PM1633a (Samsung), but similar drives like the HGST Ultrastar SS300 are around $2800 and the Seagate 1200.2 is around $2500.
With real price I mean what you pay if you buy one drive from just about anywhere.
I wouldn't waste any money on SAS 12Gbps drives (unless you need dual port) because if you need maximum performance, U.2 NVMe is what you want. Don't be fooled by "read intensive" either - 1 DWPD means you can write 3.8TB per day for 5 years.
Damn Dell prices... They are so high. I see on xbyte, the PM863a is a lot cheaper, though I can't tell if that's a used/refurb part. What other places would you suggest I look?
Used and refurb are different. Used has been used, refurb has not.
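As a quick sanity check on the DWPD point quoted above, the endurance math works out directly from the spec-sheet numbers:

```python
# Endurance math for the quoted "1 DWPD, 7008 TBW" rating on a 3.84TB drive:
# TBW = drive writes per day x capacity x days in the warranty period.
capacity_tb = 3.84
dwpd = 1
warranty_years = 5

tbw = dwpd * capacity_tb * 365 * warranty_years
print(f"{tbw:.0f} TBW")  # 7008 TBW, matching the drive's rating
```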
-
@bradfromxbyte said in SAS SSD vs SAS HDD in a RAID 10?:
It's not hardware RAID. It bypasses the PERC completely and goes from the backplane directly to the processor. Any management is done via the OS.
Oh, well, I want redundancy...
-
@scottalanmiller said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
@pete-s said in SAS SSD vs SAS HDD in a RAID 10?:
@dave247 said in SAS SSD vs SAS HDD in a RAID 10?:
ok, well if I want to do a RAID 1 then, I've got these as options as they are almost 4TB:
- 3.84TB SSD SAS Read Intensive 12Gbps 512n 2.5in Hot-plug Drive, PX05SR, 1 DWPD, 7008 TBW - $4,673.84/ea.
- 3.84TB SSD SAS Read Intensive 12Gb 512e 2.5in Hot-plug Drive, PM1633a, 1 DWPD, 7008 TBW - $4,391.49/ea.
- 3.84TB SSD SATA Read Intensive 6Gbps 512n 2.5in Hot-plug Drive, PM863a - $3,262.09/ea.
- 3.84TB SSD SATA Read Intensive 6Gbps 512e 2.5in Hot-plug Drive, S4500, 1 DWPD, 7008 TBW - $3,262.09/ea.
And I could toss out the H740P and go back to the H330
You pay a severe Dell tax on those prices.
PM863a is a Samsung drive and the real price is around $1500.
The S4500 is Intel but an older, slower model; the newer one is the S4510. Real price on the newer model is around $1500. I don't have prices on the PX05SR (Toshiba) or PM1633a (Samsung), but similar drives like the HGST Ultrastar SS300 are around $2800 and the Seagate 1200.2 is around $2500.
With real price I mean what you pay if you buy one drive from just about anywhere.
I wouldn't waste any money on SAS 12Gbps drives (unless you need dual port) because if you need maximum performance, U.2 NVMe is what you want. Don't be fooled by "read intensive" either - 1 DWPD means you can write 3.8TB per day for 5 years.
Damn Dell prices... They are so high. I see on xbyte, the PM863a is a lot cheaper, though I can't tell if that's a used/refurb part. What other places would you suggest I look?
Used and refurb are different. Used has been used, refurb has not.
I thought refurb could have been used... Doesn't refurb mean it's been ordered and returned, but not necessarily used?