The Four Things That You Lose with Scale Computing HC3
-
@scottalanmiller said
The real question should be... why would you assume that it would cost more?
Because if everyone else has been selling apples at $10 a pack for years and a new shop opens selling them at $5 a pack, then either:
The new shop is doing something screwy, or
Everyone else is ripping you off / charging because they can. 9 times out of 10, the cheaper guy is doing something screwy, but every once in a while you get a nice surprise.
-
If you were to price out the Scale hardware yourself, you could figure out where their profit opportunity is. Then factor in that they buy at volume, so they get better deals than you will. If all you wanted was the Scale hardware, you could do it much cheaper. Their system is designed to need minimal support, which keeps their support costs low. Since the software they use is in-house or open source, there is no hard cost associated with it. So the difference between the sales price and the hardware cost is the margin, and it is very clearly there. They need it, as there is a lot of in-house development and such, but you can see that they have solid margins built in.
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
Because if everyone else has been selling apples at $10 a pack for years and a new shop opens selling them at $5 a pack.
But the issue isn't that someone was selling apples and now someone else is selling apples. It's that someone was selling oranges at $10 and now someone has a pear for $5. It's a different thing; there is no reason to assume that it would cost the same.
And this costs more, not less. If you compare three Dell R430s without Scale, that's the $5. If you look at the Scale, it's $10. So they aren't selling the same thing for less, they are selling it for more (with more value added, of course).
That's the "apples to apples" pricing difference. What EMC and 3PAR do is unrelated; it's only a talking point because they are similarly appliances, not competing devices.
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
Everyone else is ripping you off/charging because they can.
9 times out of 10, the cheaper guy is doing something screwy, but every once in a while you get a nice surprise.
Cisco, VMware, IBM, Microsoft... they all charge an arm and a leg because they've taught people that big names are worth any price and they sell through managers, not IT people.
All of them have low cost or free competitors that blow their doors off... Ubiquiti, Xen, Dell, CentOS.
If you price out a big name that sells on marketing muscle, the price is nearly always double what it should be. So no surprises there.
-
https://www.scalecomputing.com/wp-content/uploads/2015/01/networking-guidelines.pdf
Hmm. I think some testing with Ubiquiti gear would be welcome.
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
https://www.scalecomputing.com/wp-content/uploads/2015/01/networking-guidelines.pdf
Hmm. I think some testing with Ubiquiti gear would be welcome.
It would be, but UBNT doesn't have 10GigE switches yet. You "almost always" want to be on 10GigE with a Scale cluster, so the GigE gear doesn't get top priority. We would love to be on UBNT for all of our switching, but they just don't have what we need. So we have Dell 10GigE switches that feed into our UBNT gear.
-
@scottalanmiller said
It would be, but UBNT doesn't have 10GigE switches yet. You "almost always" want to be on 10GigE with a Scale cluster,
Assuming you have two clusters in two locations, how does it perform over, say, a 50 down / 50 up WAN connection?
-
Can the Scale units interlink with each other directly over 10GigE, while user connections to resources use standard GigE?
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
@scottalanmiller said
It would be, but UBNT doesn't have 10GigE switches yet. You "almost always" want to be on 10GigE with a Scale cluster,
Assuming you have 2 clusters in 2 locations, how is it over say 50 down, 50 up WAN connections?
That's very different. The backplane is where you want 10GigE, because that is your local storage network talking to itself. Reads and writes traverse that backplane. It's not the LAN traffic; it's the internal cluster traffic.
Going between clusters depends on a lot of different things. I've not tested it, but it would be very workload dependent, and dependent on how you set up the two locations to work together. But replication is async, so the speed of the WAN link does not slow down the cluster; the speed of the backplane does.
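To make the async point concrete, here is a conceptual sketch (not Scale's actual code, just an illustration of the idea): local writes are acknowledged against the backplane immediately, while a background queue drains changes to the remote cluster over the WAN at whatever rate the link allows.

```python
# Conceptual sketch of async replication: a slow WAN grows the
# pending queue but never sits on the local write path.
from collections import deque

class AsyncReplicator:
    def __init__(self):
        self.pending = deque()   # changes waiting to cross the WAN
        self.remote = []         # stand-in for the remote cluster

    def write(self, block):
        # The local write is acknowledged right away; WAN speed is
        # not on the critical path.
        self.pending.append(block)
        return "ack"

    def drain(self, wan_budget):
        # Ship up to `wan_budget` blocks per cycle; a slower WAN
        # just means a longer pending queue, not slower writes.
        for _ in range(min(wan_budget, len(self.pending))):
            self.remote.append(self.pending.popleft())

r = AsyncReplicator()
for i in range(5):
    assert r.write(i) == "ack"        # every write acks locally
r.drain(wan_budget=2)                 # slow WAN: only 2 blocks ship
print(len(r.pending), len(r.remote))  # prints: 3 2
```

With synchronous replication the `write` call would have to wait for the remote side, so the WAN would gate cluster performance; async trades that for a replication lag instead.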
-
If there is one thing I've learned in IT, it's to go with reliability over cost every day of the week. Even thousands of dollars seems minor when you are in the heat of critical systems going down.
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
Can the Scale units interlink with each other directly over 10GigE, while user connections to resources use standard GigE?
Yes, there are two different networks. A backplane and a LAN / frontplane. They are not directly related in any way. So one could be 10GigE and one could be GigE.
-
@scottalanmiller said
Yes, there are two different networks. A backplane and a LAN / frontplane. They are not directly related in any way. So one could be 10GigE and one could be GigE.
OK, but do I need a separate switch for the backplane network, or do the nodes have the ports to do it by themselves?
-
@IRJ said in The Four Things That You Lose with Scale Computing HC3:
If there is one thing I've learned in IT, it's to go with reliability over cost every day of the week. Even thousands of dollars seems minor when you are in the heat of critical systems going down.
But sometimes you pay for the perception of reliability at a higher price.
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
@scottalanmiller said
Yes, there are two different networks. A backplane and a LAN / frontplane. They are not directly related in any way. So one could be 10GigE and one could be GigE.
OK, but do I need a separate switch for the backplane network, or do the nodes have the ports to do it by themselves?
It's all ethernet, so you could mix it together on a single switch and just VLAN them apart from each other.
It's two ports for the backplane and two ports for the LAN on each node. Nothing weird, think of it as two SAN ports and two LAN ports. Same concept.
-
@scottalanmiller said in The Four Things That You Lose with Scale Computing HC3:
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
They've got maybe 12 installs when I last checked. Versus how many non-Scale deployments?
12? I know more than that many customers personally. They have lots of deployments.
Thanks @scottalanmiller. We have 1500+ cluster deployments worldwide (https://www.scalecomputing.com/press_releases/record-growth-in-2015-for-scale-computing/). I don't think we've made any statements on the UK specifically that I can point to, but I'd be happy to put you in touch with any existing customers in the area.
-
@scottalanmiller said
It's all ethernet, so you could mix it together on a single switch and just VLAN them apart from each other.
I'm trying to be blunt.
Do I need a switch at all for the backplane or can they communicate directly? Do I need to factor in 10GigE switches for redundancy as well?
-
I have no clients that need an on-premises server design large enough to justify a Scale setup.
I look forward to acquiring a client with that need, though. Scale is a rock solid solution for the price point.
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
@scottalanmiller said
It's all ethernet, so you could mix it together on a single switch and just VLAN them apart from each other.
I'm trying to be blunt.
Do I need a switch at all for the backplane or can they communicate directly? Do I need to factor in 10GigE switches for redundancy as well?
Oh, can you skip the switch? No, you NEED a switch. Each node has two backplane ports, one primary and one failover (it's active/passive, not load balanced, for latency reasons), so there is no free port. Going switchless would require two free ports per extra node, so four ports minimum on a three-node cluster, and the cost would explode as you scale up. Adding a fourth node, for example, would require each original node to add two more 10GigE ports.
Yes, you "need" redundant backplane switches. It will work without them, but it is not recommended.
-
We recommend four switches: two in a high availability pair for the backplane and two in a high availability pair for the normal network traffic. The goal here is a totally highly available system, not just for the Scale HC3 cluster but for the network itself. Having your Scale cluster up and running won't do you any good if the network it is attached to is down. But the system will run with less.
-
@Breffni-Potter said in The Four Things That You Lose with Scale Computing HC3:
@scottalanmiller said
It's all ethernet, so you could mix it together on a single switch and just VLAN them apart from each other.
I'm trying to be blunt.
Do I need a switch at all for the backplane or can they communicate directly? Do I need to factor in 10GigE switches for redundancy as well?
Did we manage to answer your questions?