6Gb SAS vs 8Gb Fibre
-
huh - cool - looks like I might have been thinking about it all wrong.
Sorry I'm possibly putting some bad info into your thread @bbiAngie at least I'm learning some cool stuff along the way.
-
@Dashrender No problem! It's all about learning. I still don't get it, though....
-
@Dashrender Hey - I'm waiting for someone to pop in and smack me around any second. But until they do...
Just kidding. Like I said, they're limited to the connection speed of the card, the chassis and the drives.
-
@Dave.Creamer said:
@Dashrender Not if the physical connection to the host server is only capable of 6Gb throughput. It's not local storage, so like any attached storage it's limited to the speed of the adapter.
Why would it matter whether it's local or not? (DAS is kind of like local storage anyway.)
The backplane in DAS is just like a backplane inside the server. Typically two backplanes, each with a cable to the controller.
If what Dave says is correct (and now I'm inclined to think he is) either those channels, or the whole card could be limited to 6 Gb.
But it wouldn't matter whether it's internal or DAS; they would both be the same speed.
-
@Dashrender said:
huh - cool - looks like I might have been thinking about it all wrong.
Sorry I'm possibly putting some bad info into your thread @bbiAngie at least I'm learning some cool stuff along the way.
You can take a gander at the low-end stuff I know of on Newegg; it's one of the dumbest search terms I've used that actually worked.
-
@travisdh1 What we are looking at cannot be acquired from newegg...
-
@Dave.Creamer said:
Just kidding. Like I said, they're limited to the connection speed of the card, the chassis and the drives.
OK, so let's work from there. The RAID controller is the limiting factor: 6Gb on SAS or 8Gb on fibre. Now the question is... are the drives saturating that? If the answer is no, go with the less expensive option; if the answer is yes, go with fibre.
At least that makes sense.
-
@bbiAngie said:
@travisdh1 What we are looking at cannot be acquired from newegg...
I figured. It's the easy example for someone not at all familiar with how these things connect. Also good for you, most of those don't reach "home lab" level.
-
@travisdh1 said:
DAS units normally have a backplane of some sort. So you are really hooking 4 to 8 drives to a single SAS channel. With spinning rust this doesn't matter so much, as current SAS/SATA standards are much faster than the drives can move data... but start dropping SSDs into a SAS-attached DAS and you could cause yourself a bottleneck.
As Travis says, you're probably not in a bottleneck situation with upwards of 20 spinning rust drives, but if that's 20 SSDs, you're maxing that thing out.
-
@travisdh1 said:
@bbiAngie said:
@travisdh1 What we are looking at cannot be acquired from newegg...
I figured. It's the easy example for someone not at all familiar with how these things connect. Also good for you, most of those don't reach "home lab" level.
While I did know that there is not a cable from every drive to the controller, and that typically there is a backplane or two... what I have to admit is that I assumed each drive behind that backplane had a full-bandwidth (6Gb) connection to the RAID card.
So again, with that assumption, the card could potentially talk to all drives on both channels at full speed (i.e. 6Gb x the number of drives), but would then be limited to the card's maximum output on the PCIe bus to the system, which in the stated case is 6Gb.
Now in the past, this hasn't been an issue - why not? As Travis said earlier, spinning rust (Winchester) drives normally don't saturate a 6Gb connection.
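Since the backplane (via an expander) shares one uplink across all its drives, each drive's worst-case share is just the uplink divided by the number of simultaneously busy drives. A quick sketch, again with illustrative numbers (4-lane 6Gb/s SAS wide port, 20 drives):

```python
# Worst-case per-drive bandwidth when a backplane/expander shares one
# wide-port uplink across many drives. Illustrative numbers only.

UPLINK_MB_S = 2400  # 4-lane 6Gb/s SAS wide port, ~600 MB/s usable per lane
DRIVES = 20

per_drive_share = UPLINK_MB_S / DRIVES
print(f"~{per_drive_share:.0f} MB/s per drive when all {DRIVES} are busy")
# ~120 MB/s: fine for a spinning disk, a hard ceiling for a SATA/SAS SSD
```

That ~120 MB/s share is roughly what a single spinning disk sustains anyway, which is why the shared backplane has historically not been a problem.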
-
Always choose simplicity over complexity unless there's a mitigating factor.
SAS eliminates a whole boatload of other junk you'd need to buy, set up, and support, and it removes a ton of failure points as well.
-
@Dashrender said:
Can you even compare these two items?
6Gb SAS is the speed of the drives
8 Gb fibre is the connection speed between the host and the DAS chassis.
Right?
Most common DAS chassis connectivity is 6Gb/s SAS.
-
So 8Gb/s FC is going to be marginally faster than 6Gb/s SAS in a pure "how much data can I push down this pipe in theory" perspective.
SAS is a little lighter protocol than FC, so it isn't a full win there. SAS is easier to deal with too.
And of course in the real world, price always matters and that makes SAS way better 99% of the time.
And if this is all that you are comparing it suggests that the 8Gb/s FC has a 6Gb/s SAS bottleneck sitting behind it, anyway.
And then the question is also... why not 12Gb/s SAS?
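For the pure "how big is the pipe" comparison, usable throughput depends on both the line rate and the encoding scheme. A quick calculation using the commonly published line rates and encodings (treat these as approximations; framing and protocol overhead shave the real numbers down a bit further):

```python
# Back-of-the-envelope usable throughput per link direction, accounting
# for encoding overhead. Line rates/encodings are the commonly published
# figures; real-world throughput lands somewhat lower.

def usable_mb_s(line_rate_gbaud, efficiency):
    return line_rate_gbaud * efficiency * 1000 / 8

links = {
    "6Gb/s SAS (per lane)":  usable_mb_s(6.0,    8 / 10),   # 8b/10b
    "8Gb FC":                usable_mb_s(8.5,    8 / 10),   # 8b/10b
    "12Gb/s SAS (per lane)": usable_mb_s(12.0,   8 / 10),   # 8b/10b
    "16Gb FC":               usable_mb_s(14.025, 64 / 66),  # 64b/66b
}

for name, mb_s in links.items():
    print(f"{name}: ~{mb_s:.0f} MB/s per direction")
```

So per single lane/link, 8Gb FC edges out 6Gb SAS, and 12Gb SAS edges out 8Gb FC; and remember SAS is usually run as a 4-lane wide port, multiplying its figure.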
-
@scottalanmiller said:
And then the question is also... why not 12Gb/s SAS?
If that's the determining factor, then why not 16Gbps FC, or even the upcoming massive speeds?
Of course, if we are talking about pure speed with regards to interconnects between machines, then Infiniband blows them all away:
https://en.wikipedia.org/wiki/InfiniBand
Fibre Channel has its place. It's much more scalable; why do you think every multi-tenant environment uses it? SAS interconnects are good for onesie/twosie type things, but when you need to scale more and more storage, it's much easier to pop it into the FC fabric. We provision new storage in a matter of hours, integrating into our already large environment. The FC storage devices tend to be much more useful than straight SAS-connected storage as well, with the ability to manage the suckers without so much as a blip of downtime.
If money is no object, I would buy Infiniband infrastructure and storage. If I need lots of it on the cheap, SAS. If I need more features, Fibre Channel.
-
@PSX_Defector said:
And then the question is also... why not 12Gb/s SAS?
If that's the determining factor, then why not 16Gbps FC, or even the upcoming massive speeds?
Because one is already in the readily available range and the other is expensive.
-