Our New Scale Cluster Arrives Tomorrow
-
@Breffni-Potter said:
@NattNatt said:
Hah, well if you ever decide to go multi-location, get in touch. And yeah, we have lots of space in our DCs for expansion!
Oooo, who are you?
Hah, a lowly tech who works at a big hosting company based in the UK.
-
The next three nodes are on their way now! The first three nodes were the SATA tier. We have an SSD tier coming now as well.
-
@scottalanmiller said:
The next three nodes are on their way now! The first three nodes were the SATA tier. We have an SSD tier coming now as well.
Ahhhh... to have a budget for such things must be nice.
-
I was hot and heavy for Scale.
Fast forward a month and now I'm neck-deep in planning a from-scratch VSAN cluster.
The doesn't-want-to-administer-VMware side of me is lusting for Scale, but the government-pricing-makes-VSAN-practically-free side of my boss trumps all.
-
Should I add... there's a second 10GigE switch coming too.
-
We are using Dells. @art_of_shred or @Mike-Ralston would have to tell you which models, they have physical access to them.
-
I wanted the Mellanox 40GigE ones, but no luck.
-
@scottalanmiller yeah I started looking at Mellanox really excitedly and then stopped when I realized I was out of my depth.
-
@scottalanmiller said:
We are using Dells. @art_of_shred or @Mike-Ralston would have to tell you which models, they have physical access to them.
Interested in the model numbers. I am pushing for some Extreme switches and have been planning an upgrade and expansion for about a year, including several "It is going to be about $25K-30K" remarks to my boss. I got it all ironed out to just under the $30K mark after some back-and-forth with the vendor and Extreme, and then got a "Wow, that is a lot more than I thought" when submitting the proposal. Waiting to hear back from the owner next week. Might have to go back to the drawing board.
-
I'm not saying the Dells would or wouldn't be our first recommendation. I'm not the one managing them, and they are in the lab, so they were chosen partly for cost, partly because they are specifically tested against the Scale HC3 cluster, and partly based on availability and the need for optical connections. If we were doing it for other purposes, we might be on Netgear ProSafe like normal, with copper connects.
-
@scottalanmiller that's why the N4032s are pretty attractive. You can get them in standard 10GBASE-T copper configs, with good port density, stacking, all-around decent performance, and a price that isn't insane. A good middle-of-the-road option, based on my research.
-
@wrx7m said:
@scottalanmiller said:
We are using Dells. @art_of_shred or @Mike-Ralston would have to tell you which models, they have physical access to them.
Interested in the model numbers. I am pushing for some Extreme switches and have been planning an upgrade and expansion for about a year, including several "It is going to be about $25K-30K" remarks to my boss. I got it all ironed out to just under the $30K mark after some back-and-forth with the vendor and Extreme, and then got a "Wow, that is a lot more than I thought" when submitting the proposal. Waiting to hear back from the owner next week. Might have to go back to the drawing board.
Not sure what your requirements are, but my shortlist of 10GbE switches for our baby VSAN project is:
- Dell N4032
- HP FlexFabric 5700
- Juniper EX4550
I was excited about the Brocade ICX/VDX stuff, but I read lots of buggy-firmware horror stories, and the port licensing model for 10GbE really made the price jump. (Note: I did not run that through any major vendors to see how much padding is in those license prices.)
Yes, Cisco stuff is conspicuously absent from my list. I don't particularly trust the 3850X for storage switching, the Nexus stuff gets pricey fast, and I just don't like Cisco much. But I am no storage switching expert, so take my thoughts with, like, a hogshead's worth of salt.
-
I was looking at switches that were mainly GbE with 4 SFP+ ports, as I only have three ESXi hosts with local storage.
-
@wrx7m said:
I was looking at switches that were mainly Gb and had 4 SFP+ ports, as I only have 3 ESXi hosts with local storage.
Problem is, you need six SFP+ ports for that many hosts: two links per host, one to each switch, for redundancy.
-
@scottalanmiller Two switches?
-
@scottalanmiller You scared me. I had a brain fart and was trying to figure out if I didn't plan correctly.
-
So you have two switches, each with 3 links to the ESXi hosts and one uplink to a central 10GigE switch?
-
The NEW Scale Cluster just arrived!! Three more nodes. Our last one was an HC2100. This one is the HC2150, the pure SSD cluster. Can't wait to test this out!