NVMe and RAID?
-
@scottalanmiller said in NVMe and RAID?:
@PhlipElder said in NVMe and RAID?:
@marcinozga said in NVMe and RAID?:
@PhlipElder said in NVMe and RAID?:
@marcinozga said in NVMe and RAID?:
@PhlipElder said in NVMe and RAID?:
@biggen said in NVMe and RAID?:
I appreciate all the help, guys. Yeah, I'm compiling a price list, but it ain't cheap. The server alone would be about $7k, and that's on the low end with smaller NVMe drives (1.6TB). Then I still have to purchase the switch and the 10GbE NICs for the workstations themselves.
It's a large investment that I bet never sees the light of day. It will turn into "I have $2k, what can you build with that?"
FleaBay is your best friend.
10GbE pNIC: Intel X540: $100 to $125 each.
For a 10GbE switch, go for the NETGEAR XS712T, XS716T, or XS728T depending on the port density needed. The 12-port is $1K.
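To make the sticker shock concrete, here's a quick tally in Python using the ballpark numbers above (the server price and workstation count are just placeholders; swap in your own):

```python
# Rough budget tally for the 10GbE + NVMe build, using the ballpark
# figures from this thread. Workstation count and server price are
# placeholders -- adjust to your situation.
used_x540_nic = 110      # USD each, used Intel X540 off eBay
switch_xs712t = 1000     # 12-port NETGEAR 10GbE switch
server_build = 7000      # low-end server quote with 1.6TB NVMe drives
workstations = 5         # hypothetical number of client machines

nic_total = workstations * used_x540_nic
total = server_build + switch_xs712t + nic_total

print(f"NICs:   ${nic_total:>6,}")
print(f"Switch: ${switch_xs712t:>6,}")
print(f"Server: ${server_build:>6,}")
print(f"Total:  ${total:>6,}")
```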
As far as the server goes, is this a proof of concept driven project?
- ASRock Rack Board
  - Dual 10GbE on board (designated by -2T)
- Intel Xeon Scalable or AMD EPYC Rome
- Crucial/Samsung ECC Memory
- Power Supply
The board should have at least one SlimSAS x8, or preferably two. Each of those ports gives you two NVMe drives. An SFF-8654 Y-cable would be needed to connect to a two-drive enclosure. I suggest ICY DOCK.
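Once it's cabled up, it's worth a quick check that the OS actually sees every drive. A minimal sketch for a Linux host, reading the NVMe controller entries from sysfs (on Windows you'd check Device Manager or Get-PhysicalDisk instead):

```python
# List the NVMe controllers the kernel has enumerated via Linux sysfs.
# Sketch only -- assumes a Linux host; these paths won't exist elsewhere.
from pathlib import Path

for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = (ctrl / "model").read_text().strip()
    serial = (ctrl / "serial").read_text().strip()
    print(f"{ctrl.name}: {model} (SN {serial})")
```

If a drive is missing here, suspect the cable or the port's NVMe/bifurcation setting in the BIOS before blaming the drive.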
The build will cost a fraction of a Tier 1 box.
Once the PoC has been run and the kinks worked out, then go for the Tier 1 box tailored to your needs.
I love ASRock Rack products, and their support is great if they can actually fix the damn issues; if not, you're SOL. My next server refresh will have this board: https://www.asrockrack.com/general/productdetail.asp?Model=ROMED8-2T#Specifications
We just received two ROMED6U-2L2T boards:
https://www.asrockrack.com/general/productdetail.asp?Model=ROMED6U-2L2T#Specifications
They are a perfect board for our cluster storage nodes with two built-in 10GbE ports. An AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC Memory, four NVMe via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity and we have a winner.
FYI: We only use EPYC Rome processors with a TDP of 155 watts or higher. Cost-wise, there's very little increase, while the performance benefits are there.
EDIT: Missed the Slimline x8 beside the MiniSAS HD ports. That's six NVMe drives if we go that route.
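For rough planning of the cache-to-capacity split on a node like that, a back-of-the-napkin calc (the drive sizes below are placeholders, not what we actually deploy):

```python
# Back-of-the-napkin cache vs. capacity sizing for one storage node.
# Drive counts follow the board's ports; sizes are placeholders.
nvme_cache_drives = 4        # via the two SlimSAS x8 ports
nvme_size_tb = 1.6           # hypothetical U.2 drive size

sata_capacity_drives = 12    # SATA SSDs/HDDs on the MiniSAS HD ports
sata_size_tb = 4.0           # hypothetical capacity drive size

cache_tb = nvme_cache_drives * nvme_size_tb
capacity_tb = sata_capacity_drives * sata_size_tb

print(f"Cache tier:    {cache_tb:.1f} TB raw")
print(f"Capacity tier: {capacity_tb:.1f} TB raw")
print(f"Cache ratio:   {cache_tb / capacity_tb:.1%} of capacity")
```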
You're probably overpaying with that CPU. Here's a deal not many know about: an EPYC 7302P for $713.
https://www.provantage.com/hpe-p16667-b21~7CMPTCR7.htm
We're in Canada. We overpay for everything up here. :S
And even when you pay a lot, you often can't get things. We tried to order stuff from Insight Canada for our Montreal office and after a week of not being able to ship, they eventually just told us that they couldn't realistically service Canada.
We're creative with our procurement process so we don't have issues with getting product.
Insight is tied to Ingram Micro. If they don't have it, Insight doesn't.
Our Canadian distribution network used to be quite homogeneous, with all three major distributors having similar line cards. The competition was good, though pricing was fairly consistent across the three.
We have a number of niche suppliers that help when we can't get product from the Big Three, always making sure we're dealing with legit product, not grey market. We verify that with our vendor contacts.
PING if you need anything.
-
@biggen said in NVMe and RAID?:
Yeah, I have no problem whiteboxing stuff for me (or close family), but when you do it for others, they expect tech support for life. I don't really want to go down that road.
But a PoC build may be more "in line" with his budget needs. Thanks for that @PhlipElder!
That's what we do as a business.
We've been system builders since day one of MPECS in 2003, and since the late 1990s for myself.
We have a parts bin full of broken promises.
But we also have a defined solution set that we know works, so we run with it.
Our support terms are clearly defined and require a contract.
We are either building a mutually beneficial business relationship or it ain't gonna happen. We don't do one-offs unless there's good reason to.
-
The ROMED6U-2L2T is mATX? What's the advantage there over a full-size ATX board?
-
@biggen said in NVMe and RAID?:
@PhlipElder said in NVMe and RAID?:
EPYC Rome 7262
The ROMED6U-2L2T is mATX? What's the advantage there over a full-size ATX board?
It's smaller, so it takes up less space.
-
Ha, I just found an AnandTech article about that exact board: https://www.anandtech.com/show/15835/asrock-rack-offers-rome-matx-motherboard-with-only-6-memory-channels
-
@biggen said in NVMe and RAID?:
The ROMED6U-2L2T is mATX? What's the advantage there over a full-size ATX board?
Smaller chassis. It's the next best thing to Mini-ITX but without the pains of dealing with Mini-ITX.
-
@biggen said in NVMe and RAID?:
So this Icy Dock enclosure would connect to both of those SlimSAS ports with what exactly? Four of these?
Edit: No, that wouldn't work. Like you said, you need a Y-cable. Something like this?
Correct on both counts.
https://blog.mpecsinc.com/2020/07/27/custom-build-s2d-the-elusive-slimsas-8x-sff-8654-cable/
-
@PhlipElder Excellent! Bookmarking your blog as well.
On a side note, I really, really like Mangolassi.it. Actual real-life sysadmins who you can bounce stuff off of and ask questions. Glad this site is doing well. It's always my first search for something technical that I know someone in here will have dealt with at some point in their career.
-
The SFF-8654 to dual SFF-8643 is a bit of a unicorn, isn't it? Heck, the SFF-8654 isn't even listed in the SAS wiki.
-
@biggen said in NVMe and RAID?:
The SFF-8654 to dual SFF-8643 is a bit of a unicorn, isn't it? Heck, the SFF-8654 isn't even listed in the SAS wiki.
They are now. Finding them was a real challenge. And even then, we need to order them in bulk.
We may put a few up for sale for folks doing custom builds since they are so hard to find.
We have plans for them.
-
@PhlipElder said in NVMe and RAID?:
They are a perfect board for our cluster storage nodes with two built-in 10GbE ports. An AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC Memory, four NVMe via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity and we have a winner.
Just a side note - 4 NVMe drives is a typical Supermicro config that they have on a plethora of motherboards and chassis. So your config is not unusual at all and you could buy one off the shelf from Supermicro. Supermicro is not HPE or Dell - they probably have 20 times as many models, maybe more. And they cater to OEM system builders.
-
@PhlipElder said in NVMe and RAID?:
@biggen said in NVMe and RAID?:
The SFF-8654 to dual SFF-8643 is a bit of a unicorn, isn't it? Heck, the SFF-8654 isn't even listed in the SAS wiki.
They are now. Finding them was a real challenge. And even then, we need to order them in bulk.
We may put a few up for sale for folks doing custom builds since they are so hard to find.
We have plans for them.
It could be good to know that Broadcom/LSI have them. They're used on Broadcom's Tri-Mode storage adapters (SAS/SATA/NVMe).
I think it's this model you want: 05-60002-00
-
@Pete-S said in NVMe and RAID?:
@PhlipElder said in NVMe and RAID?:
They are a perfect board for our cluster storage nodes with two built-in 10GbE ports. An AMD EPYC Rome 7262 processor, 96GB or 192GB of ECC Memory, four NVMe via SlimSAS x8 on board, and up to twelve SATA SSDs or HDDs for capacity and we have a winner.
Just a side note - 4 NVMe drives is a typical Supermicro config that they have on a plethora of motherboards and chassis. So your config is not unusual at all and you could buy one off the shelf from Supermicro. Supermicro is not HPE or Dell - they probably have 20 times as many models, maybe more. And they cater to OEM system builders.
I love SuperMicro, so easy to customize.
-
@PhlipElder What cases/heatsinks are you using when building these custom systems?
-
NVMe drives really don't produce heat. Of course the CPU and power supplies do, but the standard ventilation that comes with those, with vent controls, would likely handle that just fine.
-
@DustinB3403 said in NVMe and RAID?:
NVMe drives really don't produce heat. Of course the CPU and power supplies do, but the standard ventilation that comes with those, with vent controls, would likely handle that just fine.
Actually, they do become hot. Especially M.2, and that is one reason they are not suitable for intensive workloads. As they become too hot, they throttle down the performance to save themselves from damage.
The benefit of 2.5" U.2 is the cooling properties and the hot-swap capability. And larger capacity, since it's bigger than M.2.
Also, the NVMe drives that go right into the slot (aka HHHL) feature a heatsink for cooling.
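If you want to keep an eye on that, drive temperature is easy to poll. A minimal sketch for a Linux box, reading the kernel's hwmon entries for NVMe devices (sysfs layout assumed; the throttling thresholds vary by drive model, so check the spec sheet):

```python
# Print the current temperature sensors for each NVMe drive via Linux hwmon.
# Sketch only -- assumes a Linux host exposing NVMe sensors through hwmon.
from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    if (hwmon / "name").read_text().strip() != "nvme":
        continue
    for temp in sorted(hwmon.glob("temp*_input")):
        label_file = hwmon / temp.name.replace("_input", "_label")
        label = label_file.read_text().strip() if label_file.exists() else temp.stem
        print(f"{hwmon.name} {label}: {int(temp.read_text()) / 1000:.1f} C")
```

`nvme smart-log /dev/nvme0` from nvme-cli will show the composite temperature as well if you'd rather not poke at sysfs.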
-
Also, to put things in perspective:
An NVMe U.2 drive uses up to 15W, which is about the same as two conventional spinning drives (SAS enterprise drives).
Heat is only something you have to think about if you're doing DIY. If you use a real server chassis made for SAS/NVMe/whatever, you don't even have to consider it.
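To put numbers on it, a quick worst-case drive power tally (drive counts are just the example build from earlier in the thread):

```python
# Rough worst-case power draw for the drive bays alone.
# 15 W per U.2 NVMe is the upper bound mentioned above; the HDD figure
# assumes roughly half that per spinning SAS drive. Counts are placeholders.
nvme_u2_watts = 15
sas_hdd_watts = 7.5

nvme_drives = 4
sas_drives = 12

nvme_total = nvme_drives * nvme_u2_watts
sas_total = sas_drives * sas_hdd_watts

print(f"NVMe tier: {nvme_total} W")
print(f"HDD tier:  {sas_total:.0f} W")
print(f"Drives total: {nvme_total + sas_total:.0f} W")
```

Modest next to the CPU and PSU losses, which is why a purpose-built chassis handles it without any extra thought.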
-
But what about the server case itself? What models are you putting these components in? I'd probably do a tower for the initial build.
-
@biggen said in NVMe and RAID?:
But what about the server case itself? What models are you putting these components in? I'd probably do a tower for the initial build.
Pedestal: Silverstone CS381.
Rack Chassis: We go barebones from a variety of vendors: Intel, TYAN, ASRock Rack, and others.
Rack Chassis Standalone: Chenbro comes to mind. Silverstone also makes them. We've looked into iStar and Rosewill, though we never jumped on board.