Our New Scale Cluster Arrives Tomorrow
-
I was looking at switches that were mainly Gb and had 4 SFP+ ports, as I only have 3 ESXi hosts with local storage.
-
@wrx7m said:
I was looking at switches that were mainly Gb and had 4 SFP+ ports, as I only have 3 ESXi hosts with local storage.
Problem is, you need six ports for that many hosts.
-
@scottalanmiller Two switches?
-
-
@scottalanmiller You scared me. I had a brain fart and was trying to figure out if I didn't plan correctly.
-
So you have two switches, each with 3 links to the ESXi hosts and one uplink to a central 10GigE switch?
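Rough sketch of the port math I'm picturing; the three hosts come from the thread, but two 10GbE links per host and one uplink per switch are my assumptions, not the actual design:

```python
# Back-of-the-envelope port count for the two-switch layout asked about above.
# Assumptions (not confirmed in the thread): two 10GbE links per host for
# redundancy, one link to each switch, and a single uplink per switch.

HOSTS = 3
LINKS_PER_HOST = 2
SWITCHES = 2
UPLINKS_PER_SWITCH = 1

host_facing_ports = HOSTS * LINKS_PER_HOST                              # 6 total -- why 4 SFP+ ports on one switch fall short
ports_per_switch = host_facing_ports // SWITCHES + UPLINKS_PER_SWITCH   # 4 -- fits a switch with 4 SFP+ ports

print(host_facing_ports, ports_per_switch)
```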
-
The NEW Scale Cluster just arrived!! Three more nodes. Our last one was an HC2100. This one is the HC2150, the pure SSD cluster. Can't wait to test this out!
-
@scottalanmiller said:
The NEW Scale Cluster just arrived!! Three more nodes. Our last one was an HC2100. This one is the HC2150, the pure SSD cluster. Can't wait to test this out!
:drool:
-
Yeah, it's HC2150 #1. The first off the line. Not yet on the market. We are uber excited.
Once it is up, we are going to merge it with the HC2100 to make a six-node, storage-tiered cluster.
-
So the NTG Lab main cluster is about to move from 192GB of RAM to 384GB.
-
Curious to see how the mixed drive cluster works out. FWIR it's supported but not "recommended". I'm sure it'll be great, just kinda curious to see how it goes for ya.
-
@crustachio said:
Curious to see how the mixed drive cluster works out. FWIR it's supported but not "recommended". I'm sure it'll be great, just kinda curious to see how it goes for ya.
Us too! It's the first one out so a lot of testing to be done.
-
@scottalanmiller More advanced? *raises eyebrow*
SCRIBE is an object storage system (as is VSAN). I've seen the Scale guys say VMware copied them. They didn't; I've seen the original R&D proposal, and Scale was using GPFS back in 2011.
-
@scottalanmiller said:
Gluster
Gluster is terrible for VM storage (no caching beyond client memory, and brick healing can kick off a file lock and crash a VM). I tried to make a VSA off of it around 2013.
Ceph can, in theory, be used, but its performance is bad for streaming workloads (strangely, random doesn't suck). Most serious OpenStack deployments use something else for storage for this reason (or go all flash).
I do agree that what makes an "HCI system" is the simplicity of management and the integration between the storage and the hypervisor. I built a VSA DRBD system on VMware back in ~2009. It was kludgy to manage, painful to expand, and slow. Modern systems that can leverage flash and avoid file system or nested-VM overheads are a LOT better idea.
-
@crustachio The VDX's bugginess issues were with layer 3 bridging back in the day. The only outstanding issues I'm aware of involve Cisco wireless controller gratuitous ARPs. They are limited on multicast; PIM sparse mode was another gotcha on the big chassis. They are pretty stable these days, and most people doing "serious" BGP edge layer 3 use MLXe's anyway. The VDX is more about having an L2MP fabric for heavy east/west workloads.
-
@crustachio I thought their tier system was based on a mix of drives within a node, not a mixture of host types in the cluster. From my understanding of SCRIBE, you would end up with half the IO coming from flash and half coming from NL-SAS disks (until you fill up one tier). That's going to make for interesting latency consistency, unless they added some extra intelligence on top (so both copies always land on one tier or the other).
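To put some napkin math behind that latency-consistency worry (a rough sketch only; the latency figures and the 50/50 split are my assumptions, not SCRIBE internals):

```python
# Napkin math for the mixed-tier concern above. Figures are assumed
# placeholders, not measured Scale/SCRIBE numbers.

FLASH_LATENCY_MS = 0.2    # assumed typical SSD read latency
NLSAS_LATENCY_MS = 8.0    # assumed typical 7.2k NL-SAS read latency

def blended_latency_ms(flash_fraction: float) -> float:
    """Weighted average latency when a given fraction of IO hits flash."""
    return flash_fraction * FLASH_LATENCY_MS + (1 - flash_fraction) * NLSAS_LATENCY_MS

# With half the IO on each tier the average looks fine (~4.1 ms), but any
# single IO is either ~0.2 ms or ~8 ms, which is the consistency problem.
print(blended_latency_ms(0.5))
```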
-
New nodes are being joined to the cluster today... very excited.
-
God I really want a Scale system in my rack!
-
@hobbit666 said in Our New Scale Cluster Arrives Tomorrow:
God I really want a Scale system in my rack!
Especially a six node, Winchester / SSD tiered one!
-
With the holiday I've been pretty caught up in stuff. Back at my desk today and hopefully this will all be back and online very soon!