What is a Blade Server
-
Whatever - my blade server is better than yours.
-
@Bob-Beatty said:
What they say: Once you go blade, you never go back....
I've actually never once worked with a shop that tried blades and didn't go back. Running them side by side with rackmounts over a long period of time and across multiple vendors, the experience was always the same: increased risk, more problems from all of the complexity, extra training and dependence on the vendors, higher cost and lower performance. And I worked with them in shops that bought servers by the thousands and were able to hit the kind of scale that blades are supposedly built for, and they couldn't find a way to get them to break even on cost and maintenance. Once anything would go wrong with them the complexity would rear its ugly head and little issues would turn into big ones. Networking, especially, tends to be problematic. And the firmware issues... oh the firmware issues.
-
@Bob-Beatty said:
Whatever - my blade server is better than yours.
Same one, UCS B200
The UCS were actually the worst. HP was the best, but still bad enough to throw them out, which is what we did.
-
You must have had lemons. Never had an issue with mine and the firmware updates were a piece of cake. But I only managed one - if I had several I would have hated it.
-
@Bob-Beatty said:
You must have had lemons. Never had an issue with mine and the firmware updates were a piece of cake. But I only managed one - if I had several I would have hated it.
It was always "as designed." Just lots of problems with the extra blade complexity. Whole layers of unnecessary management, oversight, things to fail, things to address. All unnecessary. Blades only add complexity, they don't make anything easier. There is just more firmware to deal with, more component interactions, more things that mess with each other. Even if they work flawlessly, they, at best, mimic a normal server. Any deviation and they get worse.
-
@scottalanmiller said:
@Bob-Beatty said:
What I loved most about it was the simplicity of how it worked for our business and how easy it was to expand by adding a new blade. There were no local hard drives to worry about, just an internal thumb drive (or SSD option if you choose) to house the hypervisor. vMotion and failover were immediate.
ALL of those features are standard on every enterprise rackmount server. None of that is unique to blades. The blades only add the complexity, not the features that make them seem valuable.
Dell had redundant SD cards and an internal USB port for ESXi long before UCS had them.
-
@Jason said:
Dell had redundant SD cards and an internal USB port for ESXi long before UCS had them.
HP too. We were doing that on the G5.
-
I've worked with UCS before. They're okay.
We have a ton of datacenter space here, so we can buy a boatload of Dell 1U servers, pack them with 256GB+ of RAM, and be better off than with blades and not locked in.
I would like to pick up a used blade, maybe a UCS, for home though, just because I don't have much space.
-
The UCS weren't "bad", only bad in comparison to the competition. The problem was that they never exceeded the minimum bar in any area. At best they met it. But as they fell below it at other times, they just never lived up to the expectations for a minimal server. Nothing about them was ever better than easier options, and sometimes worse. That's all it takes to be a "never buy": lacking any compelling reason to be considered.
-
I don't have anything to compare them to, except for rack servers. I thought it was pretty awesome - guess I was missing out on the real fun.
-
@Bob-Beatty said:
I don't have anything to compare them to, except for rack servers. I thought it was pretty awesome - guess I was missing out on the real fun.
I think that blade sales guys do a good job of taking standard features that people often haven't used in the past, making them sound cool and new and somehow associated with blades, and putting a lot of effort into pushing those features.
Literally the only thing that makes them blades is the lack of discrete electrical components - it's purely a tradeoff of risk for buying fewer parts. Risk vs. density. Any feature beyond that would exist in both worlds.
-
Any feature beyond that would exist in both worlds.
Size is pretty much the only benefit. And possibly power usage (a few big PSUs are more efficient than lots of small ones, if they are designed well), but the real-world impact of either is pretty much intangible.
-
@Jason said:
Any feature beyond that would exist in both worlds.
Size is pretty much the only benefit. And possibly power usage (a few big PSUs are more efficient than lots of small ones, if they are designed well), but the real-world impact of either is pretty much intangible.
Yeah, all about density. Although few datacenters are designed in a way that leverages those aspects.
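The PSU efficiency point above is easy to put rough numbers on. A minimal sketch, with entirely made-up figures (16 servers at 300 W each, 88% for small per-server PSUs vs. 94% for a well-designed shared chassis supply are illustrative assumptions, not vendor specs):

```python
# Illustrative comparison of per-server PSUs vs. consolidated chassis PSUs.
# All efficiency and load numbers below are hypothetical examples.

servers = 16
dc_load_w = 300  # DC load per server, in watts (assumed)

def wall_power(dc_load_w, efficiency):
    """AC power drawn from the wall to deliver a given DC load."""
    return dc_load_w / efficiency

# Rackmounts with individual small PSUs (assumed 88% efficient)
rackmount_total = servers * wall_power(dc_load_w, 0.88)

# Blade chassis with a few large shared PSUs (assumed 94% efficient)
blade_total = servers * wall_power(dc_load_w, 0.94)

savings = rackmount_total - blade_total
print(f"rackmount: {rackmount_total:.0f} W, blade: {blade_total:.0f} W, "
      f"savings: {savings:.0f} W ({savings / rackmount_total:.1%})")
```

Even with a generous efficiency gap, the difference works out to a few hundred watts across sixteen servers, which is why the posters above call the real-world impact intangible.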
-
Blades seem to make you give up a lot of flexibility. With an old-fashioned server I can run them diskless today and add disks tomorrow if the way that I want to use them changes. But if I have a blade, I'm stuck.