    What is a Blade Server

    IT Discussion
    Tags: servers, blade
    • scottalanmiller

      There are three basic kinds of standard enterprise servers in the world of IT: towers, rack mounts and blades. Of these, towers exist almost exclusively in the SMB or branch office worlds (those without datacenters) and racks dominate nearly everything else. (There is another category, the full rack footprint server that is shaped like a rack but is actually a self-contained tower: basically a single server so large that it cannot be racked and so is built into a rack form factor. Typically these are the biggest mainframes from IBM and UNIX machines from vendors like Oracle and HP, such as the Superdome. These are unique and not included here.)

      Tower servers are easy to identify because they are meant to stand, on their own, on a floor, desk or shelf. Tower servers lean toward entry level designs, as larger businesses generally do not want tower servers sprinkled around; they are very difficult to physically manage at any scale. Tower servers are typically used only in businesses with one or two servers total (per physical location, at least), because with any more, the effort to deal with them is greater than getting a short rack and using industry standard rack servers.

      Rack servers are servers built to be mounted in an industry standard rack and are measured in "rack units" of height such as 1U, 2U, and so forth. Rack servers, like towers, are normal, self-contained servers; it is only the form factor that differs between them. Nearly all enterprise servers are made in a rack form factor. This makes rack mountable servers the easiest and most affordable option in most cases, simply due to the scale of manufacturing that goes into mainline rack servers.

      Blade servers are special in that what makes them blades is that individual "nodes", meaning the servers themselves, plug into a shared chassis or "enclosure." The enclosure contains shared components like power supplies, out of band management and networking / IO paths. This, in theory, allows blade servers to be cheaper and maintain a higher density footprint than other types of servers by sharing critical components among many blades - typically four to sixteen per enclosure. Nearly all blade enclosures are themselves rack mountable, but nothing in the blade concept requires that the resulting enclosure have a rack mountable footprint; market pressures, of course, apply.
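
      As a rough sketch of the density math (the rack height, enclosure height and blade count below are illustrative assumptions, not figures for any particular product):

        # Rough density comparison; enclosure height and blade count are
        # illustrative assumptions, not figures from any specific vendor.
        RACK_UNITS = 42

        def rack_server_density(units_per_server=1):
            # discrete rack servers: density is just rack space / server height
            return RACK_UNITS // units_per_server

        def blade_density(enclosure_units=10, blades_per_enclosure=16):
            # blades come only in whole enclosures; leftover rack space adds nothing
            enclosures = RACK_UNITS // enclosure_units
            return enclosures * blades_per_enclosure

        print(rack_server_density())  # 42 servers in the rack
        print(blade_density())        # 4 enclosures x 16 blades = 64 servers in the rack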

      Blades, therefore, carry risks that other server types do not. Instead of each server having discrete components, components are shared, so issues with failing power supplies, backplanes, networking, firmware and other shared components can and do cause blades to fail - and when blades fail, they often fail as a full enclosure, meaning up to sixteen servers failing at once instead of one at a time. The blade approach can magnify risk considerably.
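
      And a toy illustration of that magnification (the fault rate and enclosure size are assumed numbers, there only to show the shape of the argument):

        # Toy failure-domain comparison; fault rate and blade count are assumptions.
        blades_per_enclosure = 16
        p_shared_fault = 0.02   # assumed yearly chance of a hard shared-component fault

        # a comparable fault in a discrete rack server takes down just that one box
        expected_nodes_lost_rack = p_shared_fault * 1
        # in a blade enclosure the same class of fault can take down every node in it
        expected_nodes_lost_blade = p_shared_fault * blades_per_enclosure

        print(expected_nodes_lost_rack)   # 0.02 expected nodes lost per year
        print(expected_nodes_lost_blade)  # 0.32 expected nodes lost per year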

      A blade can never be used on its own, by definition. A single blade lacks the necessary components to be functional on its own. And, of course, an empty blade enclosure is just as useless.

      Understanding what blades are and what makes them uniquely blades compared to other server form factors is necessary to understand why they are used, when they might make sense and what additional risks and caveats they carry.

      • Bob Beatty

        My experience with blade servers (Cisco UCS B200) is a pleasant one. For the first time in my career, I watched a vendor set up a server; I didn't have anything to do with it. The enclosure was rack mounted with an FCoE backbone providing 40Gb network capability (connections to an EMC SAN and dual F5 BIG-IP appliances).

        This was set up at a datacenter, and was the perfect fit for the type of business. We had ESXi Enterprise licensing, so the 5 hosts were in an HA and DRS cluster with a single EMC SAN for shared storage. The management of the blade server was quite a learning curve with this product, but once I had some time with it, I understood how to manage it properly. What I loved most about it was the simplicity of how it worked for our business and how easy it was to expand by adding a new blade. There were no local hard drives to worry about, just an internal thumb drive (or SSD option if you choose) to house the hypervisor. vMotion and failover were immediate.

        What they say: Once you go blade, you never go back....

        • scottalanmiller @Bob Beatty

          @Bob-Beatty said:

          My experience with blade servers (Cisco UCS B200) is a pleasant one. For the first time in my career, I watched a vendor set up a server; I didn't have anything to do with it. The enclosure was rack mounted with an FCoE backbone providing 40Gb network capability (connections to an EMC SAN and dual F5 BIG-IP appliances).

          I've worked with those some. The Cisco UCS blades that we used cost more than their Dell rackmount competitors, took far more IT support to manage due to their extra complexity and underperformed the rackmounts dramatically. The UCS enclosures are so complex that they actually offer a certification in just that!

          • scottalanmiller @Bob Beatty

            @Bob-Beatty said:

            What I loved most about it was the simplicity of how it worked for our business and how easy it was to expand by adding a new blade. There were no local hard drives to worry about, just an internal thumb drive (or SSD option if you choose) to house the hypervisor. vMotion and failover were immediate.

            ALL of those features are standard on every enterprise rackmount server. None of that is unique to blades. The blades only add the complexity, not the features that make them seem valuable.

            • Bob Beatty

              Whatever - my blade server is better than yours.

              • scottalanmiller @Bob Beatty

                @Bob-Beatty said:

                What they say: Once you go blade, you never go back....

                I've actually never once worked with a shop that tried blades and didn't go back. Running side by side with rackmounts over a long period of time and across multiple vendors, the experience was always the same: increased risk, more problems from all of the complexity, extra training and dependence on the vendors, higher cost and lower performance. And I worked with them in shops that bought servers by the thousands and were able to hit the kind of scale that blades are supposedly built for, and they couldn't find a way to get them to break even on cost and maintenance. Once anything would go wrong with them, the complexity would rear its ugly head and little issues would turn into big ones. Networking, especially, tends to be problematic. And the firmware issues... oh, the firmware issues.

                • scottalanmiller @Bob Beatty

                  @Bob-Beatty said:

                  Whatever - my blade server is better than yours.

                  Same one, UCS B200 🙂

                  The UCS were actually the worst. HP was the best, but still bad enough to throw them out, which is what we did.

                  • Bob Beatty

                    You must have had lemons. Never had an issue with mine and the firmware updates were a piece of cake. But I only managed one - if I had several I would have hated it.

                    • scottalanmiller @Bob Beatty

                      @Bob-Beatty said:

                      You must have had lemons. Never had an issue with mine and the firmware updates were a piece of cake. But I only managed one - if I had several I would have hated it.

                      It was always "as designed." Just lots of problems with the extra blade complexity. Whole layers of unnecessary management, oversight, things to fail, things to address. All unnecessary. Blades only add complexity, they don't make anything easier. There is just more firmware to deal with, more component interactions, more things that mess with each other. Even if they work flawlessly, they, at best, mimic a normal server. Any deviation and they get worse.

                      • Jason @scottalanmiller

                        @scottalanmiller said:

                        @Bob-Beatty said:

                        What I loved most about it was the simplicity of how it worked for our business and how easy it was to expand by adding a new blade. There were no local hard drives to worry about, just an internal thumb drive (or SSD option if you choose) to house the hypervisor. vMotion and failover were immediate.

                        ALL of those features are standard on every enterprise rackmount server. None of that is unique to blades. The blades only add the complexity, not the features that make them seem valuable.

                        Dell had redundant SD cards and an internal USB port for ESXi long before UCS had them.

                        • scottalanmiller @Jason

                          @Jason said:

                          Dell had redundant SD cards and an internal USB port for ESXi long before UCS had them.

                          HP too. We were doing that on the G5.

                          • Jason

                            I've worked with UCS before. They're okay.

                            We have a ton of datacenter space here, so we can buy a butt load of Dell 1U servers and pack them with 256GB+ of RAM and be better off than blades and not locked in.

                            I would like to pick up a used blade, maybe a UCS, for home though, just because I don't have much space.

                            • scottalanmiller

                              The UCS weren't "bad", only bad in comparison to the competition. The problem was that they never exceeded the minimum bar in any area. At best they met it. But as they fell below it at other times, they just never lived up to the expectations for a minimal server. Nothing about them was ever better than the easier options, and sometimes it was worse. That's all it takes to be a "never buy": lacking any compelling reason to be considered.

                              • Bob Beatty

                                I don't have anything to compare them to - except for rack servers. I thought it was pretty awesome - guess I was missing out on the real fun.

                                • scottalanmiller @Bob Beatty

                                  @Bob-Beatty said:

                                  I don't have anything to compare them to - except for rack servers. I thought it was pretty awesome - guess I was missing out on the real fun.

                                  I think that blade sales guys do a good job of making standard features that people often have not used in the past sound like they are cool and new and somehow associated with blades, so they put in a lot of effort pushing those features.

                                  Literally the only thing that makes them blades is the lack of discrete electrical components - it's purely a tradeoff of risk for buying fewer parts. Risk vs. density. Any feature beyond that would exist in both worlds.

                                  • Jason @scottalanmiller

                                    @scottalanmiller said:

                                    Any feature beyond that would exist in both worlds.

                                    Size is pretty much the only benefit. And possibly power usage (a few big PSUs are more efficient than lots of small ones - if they are designed well), but the real world impact of either is pretty much negligible.
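
                                    For a rough feel of the power angle (the wattage and efficiency figures below are made-up, illustrative assumptions, not measurements):

                                      # Illustrative only: load and efficiency figures are assumptions.
                                      servers = 16
                                      load_per_server_w = 300   # assumed average draw per node

                                      small_psu_eff = 0.90      # assumed discrete server PSU
                                      shared_psu_eff = 0.94     # assumed well-designed shared PSU pool

                                      def wall_power(load_w, eff):
                                          # wall draw needed to deliver load_w to the components
                                          return load_w / eff

                                      rack_total = servers * wall_power(load_per_server_w, small_psu_eff)
                                      blade_total = servers * wall_power(load_per_server_w, shared_psu_eff)

                                      print(round(rack_total))   # ~5333 W from the wall
                                      print(round(blade_total))  # ~5106 W - a few percent saved, at best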

                                    • scottalanmiller @Jason

                                      @Jason said:

                                      Any feature beyond that would exist in both worlds.

                                      Size is pretty much the only benefit. And possibly power usage (a few big PSUs are more efficient than lots of small ones - if they are designed well), but the real world impact of either is pretty much negligible.

                                      Yeah, all about density. Although few datacenters are designed in such a way as to leverage those aspects.

                                      • StrongBad

                                        Blades seem to make you give up a lot of flexibility. With old fashioned servers I can run them diskless today and add disks tomorrow if the way that I want to use them changes. But if I have a blade, I'm stuck.
