ServerBear Specs on Scale HC3
-
@hobbit666 said:
Not silly money, what sort of storage comes with the basic starter kit?
New units are right around the corner so I don't want to say for sure, but the HC1000 and HC2000 clusters both use 12x SATA drives in RAIN throughout the cluster. So the performance varies a little, but the IOPS are more or less what they are; you can see those above. The capacity is around 21.6TB raw.
-
Currently the HC1000 is 7200RPM SATA and the HC2000 is usually 10K SAS.
-
@scottalanmiller said:
@hobbit666 said:
Not silly money, what sort of storage comes with the basic starter kit?
New units are right around the corner so I don't want to say for sure, but the HC1000 and HC2000 clusters both use 12x SATA drives in RAIN throughout the cluster. So the performance varies a little, but the IOPS are more or less what they are; you can see those above. The capacity is around 21.6TB raw.
I don't know what RAIN is - so what is the usable storage?
-
@Dashrender RAIN = Redundant Array of Independent Noodles...
-
@RojoLoco said:
@Dashrender RAIN = Redundant Array of Independent Noodles...
Mmm, noodles, it's past my lunchtime.
-
@travisdh1 mine too. Now I want noodles, redundant or otherwise.
-
@RojoLoco said:
@travisdh1 mine too. Now I want noodles, redundant or otherwise.
They have to be redundant. Eating JBON is much less satisfying.
-
@RojoLoco said:
@travisdh1 mine too. Now I want noodles, redundant or otherwise.
Great, now I want redundant tacos.
-
@coliver said:
@RojoLoco said:
@travisdh1 mine too. Now I want noodles, redundant or otherwise.
They have to be redundant. Eating JBON is much less satisfying.
You win this thread... we will be sending you an award soon.
-
It's oddly satisfying to know that you can relate RAID to almost anything... we had a hamster thread not too long ago as well.
-
What are the specs on the servers?
I am trying to think of how you would create a poor man's cluster....
Couldn't I just create a XenServer Cluster with Xen Orchestra, and get the same thing?
-
@Dashrender said:
I don't know what RAIN is - so what is the usable storage?
It's mirrored, so cut it in half: roughly 10.8TB usable.
-
@Dashrender said:
@scottalanmiller said:
@hobbit666 said:
Not silly money, what sort of storage comes with the basic starter kit?
New units are right around the corner so I don't want to say for sure, but the HC1000 and HC2000 clusters both use 12x SATA drives in RAIN throughout the cluster. So the performance varies a little, but the IOPS are more or less what they are; you can see those above. The capacity is around 21.6TB raw.
I don't know what RAIN is - so what is the usable storage?
RAIN is Redundant Array of Independent Nodes. The redundancy and/or mirroring (both in this case) are done at the node level, not at a disk pair level. So in many ways, like capacity, it acts just like RAID 10, but the performance balancing and survivability is different.
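Since the 2-way mirrored RAIN used here behaves like RAID 10 capacity-wise, the usable-space math is easy to sketch. A minimal illustration (my own, assuming every block is simply stored twice):

```python
# Usable capacity under n-way block mirroring (RAID 10-like math).
# 21.6 is the raw TB figure quoted above for these HC clusters.
def usable_tb(raw_tb, copies=2):
    """Usable capacity when every block is stored `copies` times."""
    return raw_tb / copies

print(usable_tb(21.6))  # 2-way mirror of 21.6TB raw -> 10.8TB usable
```

Note that a RAIN system with 3 copies per block would drop that to a third, which is why (as mentioned below) RAIN by itself doesn't imply a specific utilization rate.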
-
@Dashrender said:
so what is the usable storage?
Outside of the Scale world, there are RAIN systems that are not mirrored, so RAIN itself does not mean a specific utilization rate.
-
@aaronstuder said:
What are the specs on the servers?
The HC2000 in question (we have the fastest one there is; this is the very latest unit with the Winchesters, technically an HC2100) is built on Dell R430 single-CPU nodes, with 64GB of RAM per node.
-
@aaronstuder said:
Couldn't I just create a XenServer Cluster with Xen Orchestra, and get the same thing?
Nowhere close, I'm afraid. The thing that makes the Scale cluster important is the RAIN-based scale-out RLS system on which it is built. XS with XO would give you the same basic "single pane of glass" interface stuff, but you aren't getting a Scale for the interface. It's mostly the storage, and secondarily the HA management, that make it valuable.
The RAIN storage here mirrors at the block level across the cluster, providing a very high-durability storage layer. And very importantly, that's a native storage layer, in the kernel. There is no VSA here; this is a more advanced and more powerful approach. The storage layer runs right in the hypervisor kernel.
Then on top of that, there is integrated storage and compute management, so each layer knows what the other is doing. Performance, fault, and capacity data are all transparent between the two. So this provides a level of storage performance, scale-out, and reliability that you cannot easily replicate on your own.
-
Now, assuming that you do want to take the "poor man's" approach and try to build something like this on your own, of course there are tools for that. Using either KVM or Xen, you can add a scale-out storage layer. The two key ones on the market are Gluster and Ceph. You would have to build that component yourself. DRBD is great for two nodes but is not scale-out; it's a good product but a different animal here. So you need to build your own durable, HA, scale-out storage layer on which to run Xen or KVM, then layer the HA and management on top of that.
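As a rough sketch of the Gluster route, here's what a 3-node replicated volume looks like. This is a provisioning outline, not a turnkey recipe: hostnames (node1..node3), brick paths, and the mount point are all assumptions, and it presumes glusterfs-server is already installed and running on every node.

```shell
# From node1, join the other nodes into the trusted pool
gluster peer probe node2
gluster peer probe node3

# Create a 3-way replicated volume: every file exists on all three nodes.
# (Note: 3 full copies, so ~33% usable -- more conservative than the
# 2-copy RAIN described above.)
gluster volume create vmstore replica 3 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
gluster volume start vmstore

# Mount it where the hypervisor expects to find VM disks
mount -t glusterfs node1:/vmstore /mnt/vmstore
```

Even with this working, you still only have the storage layer: it's file-level replication in user space rather than in-kernel block RAIN, and the HA and management integration still has to be built on top.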
-
@scottalanmiller said:
…which it is built. XS with XO would give you the same basic "single pane of glass" interface stuff, but you aren't getting a Scale for…
How do you have three nodes and only lose 50% of the storage, yet lose nothing when a node fails?
-
@Dashrender said:
@scottalanmiller said:
…which it is built. XS with XO would give you the same basic "single pane of glass" interface stuff, but you aren't getting a Scale for…
How do you have three nodes and only lose 50% of the storage, yet lose nothing when a node fails?
RAIN mirroring. In RAID terms, think of it as network RAID 1+.
-
The blocks are mirrored. No matter what you write to one node, it is replicated to at least one additional node. But no node is a "pair".
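A toy model of that placement idea (my own illustration, not Scale's actual algorithm): every block lands on two different nodes, but no node has a fixed mirror partner the way RAID 1 pairs do, and losing any single node never loses a block.

```python
# Toy model of RAIN-style 2-way block placement across a cluster.
# Hypothetical illustration only -- not Scale's real placement logic.

def place_blocks(num_blocks, nodes):
    """Map each block to a (primary, replica) pair on distinct nodes."""
    n = len(nodes)
    return {b: (nodes[b % n], nodes[(b + 1) % n]) for b in range(num_blocks)}

def survives_node_loss(placement, failed):
    """True if every block still has a live copy after one node fails."""
    return all(any(node != failed for node in copies)
               for copies in placement.values())

placement = place_blocks(12, ["node1", "node2", "node3"])
# Every replica lives on a different node than its primary...
assert all(p != r for p, r in placement.values())
# ...so losing any single node never loses data.
assert all(survives_node_loss(placement, n) for n in ["node1", "node2", "node3"])
```

The capacity cost is the same as RAID 10 (two copies of everything, so 50% usable), but because copies are spread across all nodes rather than locked to a partner, reads and rebuilds can be balanced across the whole cluster.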