Our New Scale Cluster Arrives Tomorrow
-
Not saying the Dells would or wouldn't be our first recommendation; I'm not the one managing them, and they are in the lab, so they were chosen partially for cost, partially because they are specifically tested against the Scale HC3 cluster, and partially based on availability and the need for optical media. If we were doing this for other purposes, we might be on Netgear ProSafe like normal with copper connections.
-
@scottalanmiller That's why the N4032s are pretty attractive. You can get them in standard 10GBASE-T copper configs, they have good port density, they're stackable, performance is decent all around, and the price isn't insane. A good middle-of-the-road option based on my research.
-
@wrx7m said:
@scottalanmiller said:
We are using Dells. @art_of_shred or @Mike-Ralston would have to tell you which models, they have physical access to them.
Interested in the model numbers. I am pushing for some Extreme switches and have been planning an upgrade and expansion for about a year, including several "It is going to be about $25K-30K" remarks to my boss. I got it all ironed out to just under the $30K mark after some back and forth with the vendor and Extreme, and then I get the "Wow, that is a lot more than I thought" when submitting the proposal. Waiting to hear back from the owner next week. Might have to go back to the drawing board.
Not sure what your requirements are, but my shortlist of 10GbE switches for our baby VSAN project is:
- Dell N4032
- HP FlexFabric 5700
- Juniper EX4550
I was excited about the Brocade ICX/VDX stuff, but I read lots of buggy-firmware horror stories, and the port licensing model for 10GbE really made the price jump. (Note: I did not run that through any major vendors to see how much padding is in those license prices.)
Yes, Cisco stuff is conspicuously absent from my list. I don't particularly trust the 3850X for storage switching, the Nexus stuff gets pricey fast, and I just don't like Cisco much. But I am no storage-switching expert, so take my thoughts with, like, a hogshead's worth of salt.
-
I was looking at switches that were mainly Gb and had 4 SFP+ ports, as I only have 3 ESXi hosts with local storage.
-
@wrx7m said:
I was looking at switches that were mainly Gb and had 4 SFP+ ports, as I only have 3 ESXi hosts with local storage.
Problem is, you need six ports for that many hosts (two per host).
-
@scottalanmiller Two switches?
-
@scottalanmiller You scared me. I had a brain fart and was trying to figure out if I didn't plan correctly.
-
So you have two switches, each with 3 links to the ESXi hosts and one uplink to a central 10GigE switch?
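If I'm reading the layout right, the port math per switch works out like this. A rough sketch only; the one-link-per-host-per-switch and single-uplink counts are my reading of the thread, not something confirmed here:
```python
# Rough port-count sketch for the two-switch SFP+ layout described above.
# Assumptions (mine, not confirmed): each ESXi host gets one 10GbE link to
# each switch for redundancy, and each switch has a single uplink to the
# central 10GigE switch.

hosts = 3
links_per_host_per_switch = 1   # one SFP+ link from each host to each switch
uplinks_per_switch = 1          # uplink to the central 10GigE switch

ports_per_switch = hosts * links_per_host_per_switch + uplinks_per_switch
total_host_facing_ports = hosts * links_per_host_per_switch * 2  # two switches

print(f"SFP+ ports needed per switch: {ports_per_switch}")          # 4
print(f"Total host-facing 10GbE ports: {total_host_facing_ports}")  # 6
```
Which would explain why a switch with only 4 SFP+ ports still works per switch, even though the hosts need six 10GbE ports in total.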
-
The NEW Scale Cluster just arrived!! Three more nodes. Our last one was an HC2100. This one is the HC2150, the pure SSD cluster. Can't wait to test this out!
-
@scottalanmiller said:
The NEW Scale Cluster just arrived!! Three more nodes. Our last one was an HC2100. This one is the HC2150, the pure SSD cluster. Can't wait to test this out!
:drool:
-
Yeah, it's HC2150 #1. The first off the line. Not yet on the market. We are uber excited.
Once it is up, we are going to merge it with the HC2100 to make a six node storage-tiered cluster.
-
So the NTG Lab main cluster is about to move from 192GB of RAM to 384GB of RAM.
-
Curious to see how the mixed drive cluster works out. FWIR it's supported but not "recommended". I'm sure it'll be great, just kinda curious to see how it goes for ya.
-
@crustachio said:
Curious to see how the mixed drive cluster works out. FWIR it's supported but not "recommended". I'm sure it'll be great, just kinda curious to see how it goes for ya.
Us too! It's the first one out, so there's a lot of testing to be done.
-
@scottalanmiller "More advanced"? *raises eyebrow*
SCRIBE is an object storage system (as is VSAN). I've seen the Scale guys say VMware copied them. They didn't; I've seen the original R&D proposal, and Scale was using GPFS back in 2011.
-
@scottalanmiller said:
Gluster
Gluster is terrible for VM storage (no caching beyond client memory, and brick healing can kick off a file lock and crash a VM). I tried to make a VSA out of it around 2013.
Ceph in theory can be used, but its performance is bad for streaming workloads (strangely, random doesn't suck). Most serious OpenStack deployments use something else for storage for this reason (or go all-flash).
I do agree that what makes an "HCI system" is the simplicity of management and the integration between the storage and the hypervisor. I built a DRBD-based VSA system on VMware back in ~2009. It was kludgy to manage, painful to expand, and slow. Modern systems that can leverage flash and don't have file system or nested-VM overheads, etc., are a LOT better idea.
-
@crustachio The VDX's bugginess issues were with layer 3 bridging back in the day. The only outstanding issues I'm aware of involve Cisco wireless controller gratuitous ARPs. They are limited on multicast, and PIM sparse mode was another gotcha on the big chassis. They are pretty stable these days, and most people use MLXe's for "serious" BGP edge layer 3 anyway. The VDX is more about having an L2MP fabric for heavy east/west workloads.
-
@crustachio I thought their tiering system was based on a mix of drives in a node, not a mix of node types in the cluster. From my understanding of SCRIBE, you would end up with half the IO coming from flash and half coming from NL-SAS disks (until you fill up one tier). That's going to make for interesting latency consistency, unless they added some extra intelligence on top of it (so both copies always land on one tier or the other).
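To put a rough number on that latency-consistency worry, here's a toy model. Purely illustrative: the latency figures and the 50/50 split are my assumptions, not anything from Scale:
```python
# Toy model of blended read latency when replicas are split across tiers.
# The latency figures below are illustrative guesses, not measured values.

ssd_latency_ms = 0.2    # assumed SSD read latency
nlsas_latency_ms = 8.0  # assumed NL-SAS read latency
flash_fraction = 0.5    # half the reads served from the flash tier

avg_latency_ms = (flash_fraction * ssd_latency_ms
                  + (1 - flash_fraction) * nlsas_latency_ms)

print(f"Average read latency: {avg_latency_ms:.2f} ms")
# The average looks fine (~4.1 ms), but individual reads bounce between
# ~0.2 ms and ~8 ms depending on which tier holds the copy being read,
# which is the consistency question raised above.
```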
-
New nodes are being joined to the cluster today... very excited.