IOPS for SSD?



  • I am building out a new R740XD and am curious what the IOPS would be for a mixed-use 2.5" SAS SSD. Specifically, this one -
    https://www.dell.com/en-us/shop/dell-800gb-ssd-sas-mix-use-12gbps-512e-25in-hot-plug-drive-pm1645/apd/400-azii/storage-drives-media#polaris-pd

    Is there a standard ball-park number to assign to SSDs?
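    Vendors publish exact IOPS per model, but for rough planning the relationship between throughput, block size, and IOPS is enough for a ballpark. A minimal sketch (the numbers below are illustrative assumptions, not specs for the PM1645):

```python
def iops_from_throughput(throughput_mb_s: float, block_size_kb: float) -> float:
    """Back-of-envelope IOPS: how many blocks of a given size fit
    into the sustained throughput each second (using 1 MB = 1000 KB)."""
    return throughput_mb_s * 1000 / block_size_kb

# Illustrative only: a drive sustaining 400 MB/s of 4KB random I/O
print(iops_from_throughput(400, 4))  # 100000.0
```

    Real drives fall well short of this at low queue depths, so treat it as an upper bound and check the drive's datasheet for rated 4K random read/write IOPS.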



  • @wrx7m said in IOPS for SSD?:

    I am building out a new R740XD and am curious what the IOPS would be for a mix use 2.5" SAS SSD. Specifically, this one -
    https://www.dell.com/en-us/shop/dell-800gb-ssd-sas-mix-use-12gbps-512e-25in-hot-plug-drive-pm1645/apd/400-azii/storage-drives-media#polaris-pd

    Is there a standard ball-park number to assign to SSDs?

    Specs for that drive:
    PM1645.png



  • @wrx7m
    BTW, do you really need "mixed use"-drives? Most people's workloads can be handled easily with "read intensive" drives.

    The key metric is the DWPD. Are you writing 800x3=2.4TB each day on average to the drive?

    Read-intensive drives are usually rated around 1 DWPD. That equals 800GB per day on 800GB drives.

    These numbers scale with the size of the array, because writes are spread across the drives. For instance, 800GB of data written to a four-drive RAID10 array is just 400GB per drive. So a four-drive RAID10 array can handle 1.6TB per day using read-intensive drives. That is still a huge amount of data.

    But let the price decide. Dell has huge margins on their SSD drives. That's why they are twice the price of buying the same drive directly from the manufacturer.
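    The endurance arithmetic above can be sketched as follows (a rough model: RAID10 mirrors every host write, so total device writes are 2x the host writes, spread evenly across the drives):

```python
def raid10_per_drive_writes(host_writes_gb: float, drive_count: int) -> float:
    """RAID10 mirrors every write, so total device writes are 2x the
    host writes, spread evenly across all drives in the array."""
    return host_writes_gb * 2 / drive_count

def array_daily_write_budget_gb(drive_count: int, capacity_gb: float, dwpd: float) -> float:
    """Max host writes per day before exceeding the drives' DWPD rating.
    Each drive tolerates dwpd * capacity_gb per day; mirroring halves
    the host-visible budget."""
    return drive_count * capacity_gb * dwpd / 2

# Example from the post: 800 GB of host writes to a four-drive RAID10
print(raid10_per_drive_writes(800, 4))         # 400.0 GB per drive
# Four 800 GB read-intensive (1 DWPD) drives in RAID10
print(array_daily_write_budget_gb(4, 800, 1))  # 1600.0 GB per day
```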



  • @Pete-S said in IOPS for SSD?:

    BTW, do you really need "mixed use"-drives? Most people's workloads can be handled easily with "read intensive" drives.

    Good point. Mixed use tends to be write heavy databases.



  • @Pete-S said in IOPS for SSD?:

    So a four drive RAID10 array can handle 1.6TB per day using read-intensive drives. That is still a huge amount of data.

    It's a bit, yeah 😉



  • @Pete-S said in IOPS for SSD?:

    But let the price decide. Dell have huge margins on their SSD drives. That's why they are twice the price compared to buying the same drive from the manufacturer directly.

    As Scott has pointed out to me before, the higher price isn't just padding (though that is definitely part of it, IMO); the manufacturers also put their own custom firmware on these drives to interact with their backend systems, RAID cards, etc.



  • @scottalanmiller said in IOPS for SSD?:

    @Pete-S said in IOPS for SSD?:

    BTW, do you really need "mixed use"-drives? Most people's workloads can be handled easily with "read intensive" drives.

    Good point. Mixed use tends to be write heavy databases.

    Well, we do have quite a few DBs, including a SQL server that has all data replicated from our ERP system in real time. I would say that and the PRTG server see the most writes. That being said, I don't think they are "traditionally" write intensive. I picked mixed-use because the drive would be part of a RAID5 array where the workload would be mixed.

    Should I pursue SSDs spec'd for read?



  • @wrx7m said in IOPS for SSD?:

    Should I pursue SSDs spec'd for read?

    Probably



  • @scottalanmiller said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    Should I pursue SSDs spec'd for read?

    Probably

    That will be good then. The drives for Read are much less expensive.

    Edit: Not considerably, but notably less expensive. About $200 cheaper, and you get an additional 160GB.



  • @Pete-S Where did you find this matrix? I am looking for one for a 960GB RI SSD now.



  • @wrx7m said in IOPS for SSD?:

    @Pete-S Where did you find this matrix? I am looking for one for a 960GB RI SSD now.

    You need to find out what drive it actually is first. Usually it's Samsung or Intel.

    Samsung drives are called things like PM1635 or SM883, and Intel drives have names like P4610, S3510, etc.
    Sometimes you'll find the actual part number, like MZ7KM960HMJP-00005. Just search for that.



  • @Pete-S Thanks. This one is harder to find. Dell has several 960GB 2.5" SSD RI SAS drives on their site. The server configurator doesn't list the part numbers but does list DWPD and TBW, while the individual drive purchase options don't list those figures. At least, not that I have seen for this capacity.



  • @wrx7m said in IOPS for SSD?:

    @Pete-S Thanks. This one is harder to find. Dell has several 960GB 2.5" SSD RI SAS drives on their site and the server configuration doesn't list the part numbers, but does list DWPD and TBW, but then their individual drive purchase options don't list those figures. At least, not that I have seen for this capacity.

    Do you have a link?



  • @wrx7m said in IOPS for SSD?:

    @scottalanmiller said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    Should I pursue SSDs spec'd for read?

    Probably

    That will be good then. The drives for Read are much less expensive.

    Edit: Not considerably, but notably less expensive. about $200 cheaper and you get an additional 160GB.

    This is totally expected. Write heavy SSDs need more cells to move into when you hit write thresholds.



  • @Pete-S said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    @Pete-S Thanks. This one is harder to find. Dell has several 960GB 2.5" SSD RI SAS drives on their site and the server configuration doesn't list the part numbers, but does list DWPD and TBW, but then their individual drive purchase options don't list those figures. At least, not that I have seen for this capacity.

    Do you have a link?

    Could be this one, but there are several that it could be on their search results.
    https://www.dell.com/en-us/work/shop/accessories/apd/400-bdqr



  • This is the storage config for the server-
    bb6a7942-953c-4839-ad9a-9ef14b78df3a-image.png



  • @Dashrender said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    @scottalanmiller said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    Should I pursue SSDs spec'd for read?

    Probably

    That will be good then. The drives for Read are much less expensive.

    Edit: Not considerably, but notably less expensive. about $200 cheaper and you get an additional 160GB.

    This is totally expected. Write heavy SSDs need more cells to move into when you hit write thresholds.

    That is normally not the case. They use different NAND chips in the different models.
    You will not get three times the endurance just by reserving a couple of hundred extra gigs.

    For instance Samsung's latest pairing PM883 and SM883 have the same capacity models. PM883 uses Samsung 64-layer TLC V-NAND while the higher endurance SM883 uses Samsung 64-layer MLC V-NAND.



  • @wrx7m said in IOPS for SSD?:

    @Pete-S said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    @Pete-S Thanks. This one is harder to find. Dell has several 960GB 2.5" SSD RI SAS drives on their site and the server configuration doesn't list the part numbers, but does list DWPD and TBW, but then their individual drive purchase options don't list those figures. At least, not that I have seen for this capacity.

    Do you have a link?

    Could be this one, but there are several that it could be on their search results.
    https://www.dell.com/en-us/work/shop/accessories/apd/400-bdqr

    The full name of that says: Dell 960GB SSD SATA Read Intensive 6Gbps 512e, 2.5in Drive in 3.5in Hybrid Carrier S4510.

    S4510 in the product description being the magic number here. That's an Intel drive. Just look for Intel S4510 960GB drive and you'll find it.

    PS. Here you go:
    https://ark.intel.com/content/www/us/en/ark/products/134912/intel-ssd-d3-s4510-series-960gb-2-5in-sata-6gb-s-3d2-tlc.html



  • @Pete-S said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    @Pete-S said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    @Pete-S Thanks. This one is harder to find. Dell has several 960GB 2.5" SSD RI SAS drives on their site and the server configuration doesn't list the part numbers, but does list DWPD and TBW, but then their individual drive purchase options don't list those figures. At least, not that I have seen for this capacity.

    Do you have a link?

    Could be this one, but there are several that it could be on their search results.
    https://www.dell.com/en-us/work/shop/accessories/apd/400-bdqr

    The full name of that says: Dell 960GB SSD SATA Read Intensive 6Gbps 512e, 2.5in Drive in 3.5in Hybrid Carrier S4510.

    S4510 in the product description being the magic number here. That's an Intel drive. Just look for Intel S4510 960GB drive and you'll find it.

    PS. Here you go:
    https://ark.intel.com/content/www/us/en/ark/products/134912/intel-ssd-d3-s4510-series-960gb-2-5in-sata-6gb-s-3d2-tlc.html

    Oh, I see. OK. Thanks for pointing out the obvious 🙂
    Too bad the server config doesn't tell you exactly which drives they are.



  • I overlooked the fact that one was only 6Gbps. I found another one that showed 12Gbps, and it had KPM5XRUG960G at the end. I googled that, and it seems to be a Kioxia/Toshiba drive. The DWPD matches the Dell server config description of "1".

    https://www.span.com/product/KIOXIA-PM5-R-Toshiba-SSD-Read-Intensive-SIE-KPM5XRUG960G-2-5-SAS-12Gb-960GB-SSD~69373
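    Since Dell's configurator lists DWPD in one place and TBW in another, it helps that the two figures convert into each other via capacity and warranty period. A rough sketch (assuming the common 5-year warranty; check the actual drive datasheet for the real terms):

```python
def tbw_from_dwpd(capacity_gb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total terabytes written over the warranty, implied by a DWPD rating."""
    return capacity_gb / 1000 * dwpd * 365 * warranty_years

def dwpd_from_tbw(capacity_gb: float, tbw: float, warranty_years: float = 5) -> float:
    """DWPD implied by a TBW figure over the warranty period."""
    return tbw * 1000 / (capacity_gb * 365 * warranty_years)

# Illustrative: a 960 GB drive rated 1 DWPD over a 5-year warranty
print(tbw_from_dwpd(960, 1))  # 1752.0 TBW
```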



  • @wrx7m said in IOPS for SSD?:

    This is the storage config for the server-
    bb6a7942-953c-4839-ad9a-9ef14b78df3a-image.png

    I'm not sure about the state of NVMe support on VMware and Dell. But it might be an option.
    I mean, I'm sure it's supported, but the question is what options you have for redundancy.

    NVMe drives have ridiculous IOPS and transfer rates, and a single drive will normally outperform an SSD array.

    I'm interested in this myself, as I have a customer that is looking at ESXi on Dell R740 servers with the same type of CPU that you have, but with less storage capacity overall.



  • @Pete-S said in IOPS for SSD?:

    @wrx7m said in IOPS for SSD?:

    This is the storage config for the server-
    bb6a7942-953c-4839-ad9a-9ef14b78df3a-image.png

    I'm not sure about the state of NVMe support on VMware and Dell. But it might be an option.
    I mean, I'm sure it's supported, but the question is what options you have for redundancy.

    NVMe drives have ridiculous IOPS and transfer rates, and a single drive will normally outperform an SSD array.

    I'm interested in this myself, as I have a customer that is looking at ESXi on Dell R740 servers with the same type of CPU that you have, but with less storage capacity overall.

    The SSD array will be a significant upgrade and probably more speed than we would need. However, for the second tier we need significantly more storage, and the IOPS are less of a concern, as it is for file storage.

    I can look at prices to see what the cost difference is. I don't think I even considered NVMe for the servers.



  • @Pete-S said in IOPS for SSD?:

    I'm interested in this myself, as I have a customer that is looking at ESXi on Dell R740 servers with the same type of CPU that you have, but with less storage capacity overall.

    I just looked at the cost. You have to select a different chassis - "Chassis up to 24 x 2.5" Hard Drives including 24 NVMe Drives, Max of 8 SAS/SATA" - which adds about $1300.
    However, the 960GB NVMe drive is only about $50 more than the 960GB 12Gbps SAS drive I was looking at.

    The problem for me is that I can't get the storage density I need when using 2.5" drives. Also, it would cost quite a bit more if I could.



  • @wrx7m said in IOPS for SSD?:

    @Pete-S said in IOPS for SSD?:

    I'm interested in this myself, as I have a customer that is looking at ESXi on Dell R740 servers with the same type of CPU that you have, but with less storage capacity overall.

    I just looked at the cost. You have to select a different chassis - "Chassis up to 24 x 2.5" Hard Drives including 24 NVMe Drives, Max of 8 SAS/SATA" - which adds about $1300.
    However, the 960GB NVMe drive is only about $50 more than the 960GB 12Gbps SAS drive I was looking at.

    The problem for me, is that I can't get the storage density I need when using 2.5" drives. Also, it would cost quite a bit more if I could.

    Interesting. I suppose the question is - do you need the IOPS of NVMe? If so, then perhaps an external DAS shelf for the 3.5" drives for the file server would be the way to go.

    Of course, if you're using something like StarWind to create a vSAN for shared storage, that really ups the cost a lot (doubling everything and all).



  • @Dashrender said in IOPS for SSD?:

    Interesting. I suppose the question is - do you need the IOPS of NVMe?

    Since NVMe drives have the best performance and the NVMe driver technology is superior, it makes sense to pick NVMe whenever you can. NVMe drives and SAS-3 drives are priced roughly the same, so you get the extra performance for free (if the chassis can take NVMe).

    NVMe drives are attached directly to the PCIe bus on the CPU, which is why Intel, for instance, refers to SATA/SAS as legacy technology. The PCIe interface is also the reason why they are faster.



  • @Pete-S said in IOPS for SSD?:

    @Dashrender said in IOPS for SSD?:

    Interesting. I suppose the question is - do you need the IOPS of NVMe?

    Since NVMe drives have the best performance and the NVMe driver technology is superior, it makes sense to pick NVMe whenever you can. NVMe drives and SAS-3 drives are priced roughly the same, so you get the extra performance for free (if the chassis can take NVMe).

    NVMe drives are attached directly to the PCIe bus on the CPU, which is why Intel, for instance, refers to SATA/SAS as legacy technology. The PCIe interface is also the reason why they are faster.

    Of course - but @wrx7m said the chassis is $1300 more expensive. Assuming the NVMe backplane can't take SATA/SAS SSDs and is limited to NVMe, he can't use less expensive drives for fileshares - which then forces him to use a DAS to host those drives (@wrx7m already mentioned that the size/slot limitations of the NVMe chassis prevented him from having enough storage for the file server). So that's even more expense than just the $1300.

    Plus @wrx7m is doubling everything up, so the costs just keep on climbing.



  • @Dashrender said in IOPS for SSD?:

    @Pete-S said in IOPS for SSD?:

    @Dashrender said in IOPS for SSD?:

    Interesting. I suppose the question is - do you need the IOPS of NVMe?

    Since NVMe drives have the best performance and the NVMe driver technology is superior, it makes sense to pick NVMe whenever you can. NVMe drives and SAS-3 drives are priced roughly the same, so you get the extra performance for free (if the chassis can take NVMe).

    NVMe drives are attached directly to the PCIe bus on the CPU, which is why Intel, for instance, refers to SATA/SAS as legacy technology. The PCIe interface is also the reason why they are faster.

    Of course - but @wrx7m said the chassis is $1300 more expensive. Assuming the NVMe backplane can't take SATA/SAS SSDs and is limited to NVMe, he can't use less expensive drives for fileshares - which then forces him to use a DAS to host those drives (@wrx7m already mentioned that the size/slot limitations of the NVMe chassis prevented him from having enough storage for the file server). So that's even more expense than just the $1300.

    Plus @wrx7m is doubling everything up, so the costs just keep on climbing.

    Yes, you're right. NVMe is not a good fit in this case. Even if it's more of a limitation imposed by Dell than anything else.



  • @Pete-S said in IOPS for SSD?:

    @Dashrender said in IOPS for SSD?:

    @Pete-S said in IOPS for SSD?:

    @Dashrender said in IOPS for SSD?:

    Interesting. I suppose the question is - do you need the IOPS of NVMe?

    Since NVMe drives have the best performance and the NVMe driver technology is superior, it makes sense to pick NVMe whenever you can. NVMe drives and SAS-3 drives are priced roughly the same, so you get the extra performance for free (if the chassis can take NVMe).

    NVMe drives are attached directly to the PCIe bus on the CPU, which is why Intel, for instance, refers to SATA/SAS as legacy technology. The PCIe interface is also the reason why they are faster.

    Of course - but @wrx7m said the chassis is $1300 more expensive. Assuming the NVMe backplane can't take SATA/SAS SSDs and is limited to NVMe, he can't use less expensive drives for fileshares - which then forces him to use a DAS to host those drives (@wrx7m already mentioned that the size/slot limitations of the NVMe chassis prevented him from having enough storage for the file server). So that's even more expense than just the $1300.

    Plus @wrx7m is doubling everything up, so the costs just keep on climbing.

    Yes, you're right. NVMe is not a good fit in this case. Even if it's more of a limitation imposed by Dell than anything else.

    Also, it looks like you can use NVMe drives on the R740xd for a $470 premium, not $1300.

    But Dell has limitations on how you can configure it, so it's possible that something else will become more expensive instead.

    In the case of @wrx7m it won't work, since he needs 3.5" drives, but it might for others.

    dell_poweredge_740xd.png



  • @Pete-S They dropped the price to $1,061.24 since I posted. lol Interesting. Yes, but that is a max of 12 NVMe. I may have misunderstood that option with 8 SAS/SATA. I am guessing that the max of 12 would allow for more SAS/SATA, although it doesn't mention it. My issue was also with the available drive capacities and cost per TB for spinning disks in the 2.5" form factor.



  • @wrx7m said in IOPS for SSD?:

    @Pete-S They dropped the price to $1,061.24 since I posted. lol Interesting. Yes, but that is a max of 12 NVMe. I may have misunderstood that option with 8 SAS/SATA. I am guessing that the max of 12 would allow for more SAS/SATA, although it doesn't mention it. My issue was also with the available drive capacities and cost per TB for spinning disks in the 2.5" form factor.

    Yeah, especially direct from the OEM. Have you thought about buying the storage from xByte instead?