
    Our New Scale Cluster Arrives Tomorrow

    IT Discussion
    Tags: scale, ntg lab, hyperconvergence, virtualization, kvm, scale hc3, hc2000
    • crustachio @wrx7m

      @wrx7m said:

      @scottalanmiller said:

      We are using Dells. @art_of_shred or @Mike-Ralston would have to tell you which models, they have physical access to them.

      Interested in the model numbers. I am pushing for some Extreme switches and have been planning an upgrade and expansion for about a year, including several "It is going to be about $25K-30K" remarks to my boss. I got it all ironed out to just under the $30K mark after some back-and-forth with the vendor and Extreme, and then got the "Wow, that is a lot more than I thought" when I submitted the proposal. Waiting to hear back from the owner next week. Might have to go back to the drawing board.

      Not sure what your requirements are, but my shortlist of 10GbE switches for our baby VSAN project is:

      • Dell N4032
      • HP FlexFabric 5700
      • Juniper EX4550

      I was excited about the Brocade ICX/VDX stuff, but I read lots of buggy-firmware horror stories, and the port licensing model for 10GbE really made the price jump. (Note: I did not run that through any major vendors to see how much padding is in those license prices.)

      Yes, Cisco stuff is conspicuously absent from my list. I don't particularly trust the 3850X for storage switching and the Nexus stuff gets pricey fast, plus I just don't like Cisco much. But I am no storage switching expert so take my thoughts with, like, a hogshead worth of salt.

    • wrx7m

        I was looking at switches that were mainly Gb and had 4 SFP+ ports, as I only have 3 ESXi hosts with local storage.

    • scottalanmiller @wrx7m

          @wrx7m said:

          I was looking at switches that were mainly Gb and had 4 SFP+ ports, as I only have 3 ESXi hosts with local storage.

          Problem is, you need six ports for that many hosts.

    • wrx7m @scottalanmiller

            @scottalanmiller Two switches?

    • scottalanmiller @wrx7m

              @wrx7m said:

              @scottalanmiller Two switches?

              Oh okay, three each. You are good.
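The port math in this exchange can be sketched quickly. The assumption below (one 10GbE SFP+ uplink from each host to each of two redundant switches) is my reading of the thread, not a stated hardware config:

```python
# Sanity check on SFP+ port counts for a redundant switch pair.
# Assumption (not confirmed in the thread): each ESXi host gets one
# 10GbE uplink to each of the two switches for redundancy.

hosts = 3
switches = 2
uplinks_per_host_per_switch = 1

ports_total = hosts * switches * uplinks_per_host_per_switch
ports_per_switch = ports_total // switches

print(ports_total)       # 6 ports across the pair
print(ports_per_switch)  # 3 ports on each switch
```

Three ports per switch is why a switch with 4 SFP+ ports works out fine here, with one port left over per switch.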

    • wrx7m @scottalanmiller

                @scottalanmiller You scared me. I had a brain fart and was trying to figure out if I didn't plan correctly.

    • scottalanmiller

                  So you have two switches, each with 3 links to the ESXi hosts and one uplink to a central 10GigE switch?

    • scottalanmiller

                    The NEW Scale Cluster just arrived!! Three more nodes. Our last one was an HC2100. This one is the HC2150, the pure SSD cluster. Can't wait to test this out!

    • wrx7m @scottalanmiller

                      @scottalanmiller said:

                      The NEW Scale Cluster just arrived!! Three more nodes. Our last one was an HC2100. This one is the HC2150, the pure SSD cluster. Can't wait to test this out!

                      :drool:

    • scottalanmiller

                        Yeah, it's HC2150 #1. The first off the line. Not yet on the market. We are uber excited.

                        Once it is up, we are going to merge it with the HC2100 to make a six node storage-tiered cluster.

    • scottalanmiller

                          So the NTG Lab main cluster is about to move from 192GB of RAM to 384GB of RAM 🙂

    • crustachio

                            Curious to see how the mixed drive cluster works out. FWIR it's supported but not "recommended". I'm sure it'll be great, just kinda curious to see how it goes for ya.

    • scottalanmiller @crustachio

                              @crustachio said:

                              Curious to see how the mixed drive cluster works out. FWIR it's supported but not "recommended". I'm sure it'll be great, just kinda curious to see how it goes for ya.

                              Us too! It's the first one out so a lot of testing to be done.

    • StorageNinja (Vendor) @scottalanmiller

      @scottalanmiller More advanced? *raises eyebrow*

      Scribe is an object storage system (as is VSAN). I've seen the Scale guys say VMware copied them. They didn't; I've seen the original R&D proposal, and Scale was using GPFS back in 2011.

    • StorageNinja (Vendor)

                                  @scottalanmiller said:

                                  Gluster

      Gluster is terrible for VM storage (no caching beyond client memory, and brick healing can kick off a file lock and crash a VM). I tried to make a VSA off of it around 2013.

      Ceph in theory can be used, but its performance is bad for streaming workloads (strangely, random doesn't suck). Most serious OpenStack deployments use something else for storage for this reason (or go all-flash).

      I do agree that what makes an "HCI system" is the simplicity of management and integration between the storage and hypervisor. I built a DRBD VSA system on VMware back in ~2009. It was kludgy to manage, painful to expand, and slow. Modern systems that can leverage flash and don't have file system or nested-VM overheads are a LOT better idea.

    • StorageNinja (Vendor) @crustachio

      @crustachio The VDX's issues with bugginess were with layer 3 bridges back in the day. The only outstanding issues I'm aware of involve Cisco wireless controller gratuitous ARPs. They are limited on multicast; PIM sparse mode was another gotcha on the big chassis. They are pretty stable these days, and most people use MLXe's anyway for "serious" BGP edge layer 3. The VDX is more about having an L2MP fabric for heavy east/west workloads.

    • StorageNinja (Vendor) @crustachio

      @crustachio I thought their tier system was based on a mix of drives in a node, not a mixture of host types in the cluster. From my understanding of SCRIBE, you would end up with half the IO coming from flash and half coming from NL-SAS disks (until you fill up one tier). That's going to make for interesting latency consistency, unless they added some extra intelligence on top (so that both copies always land on one tier or the other).
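A toy calculation makes the consistency concern above concrete. The latency figures below are illustrative ballpark assumptions for SSD vs. NL-SAS reads, not measurements of SCRIBE or any Scale hardware:

```python
# Toy illustration of latency consistency when IO splits across tiers.
# Latency values are made-up ballpark figures, not Scale/SCRIBE data:
# ~0.2 ms for an SSD read, ~8 ms for an NL-SAS read.

ssd_ms, nlsas_ms = 0.2, 8.0
flash_fraction = 0.5  # half the IO served from each tier

avg_ms = flash_fraction * ssd_ms + (1 - flash_fraction) * nlsas_ms
print(avg_ms)  # 4.1 ms average
```

The average looks respectable, but individual IOs see either ~0.2 ms or ~8 ms depending on which tier serves them, which is exactly the consistency question being raised.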

    • scottalanmiller

                                        New nodes are being joined to the cluster today... very excited.

    • hobbit666

                                          God I really want a scale system in my rack!

    • scottalanmiller @hobbit666

                                            @hobbit666 said in Our New Scale Cluster Arrives Tomorrow:

                                            God I really want a scale system in my rack!

                                            Especially a six node, Winchester / SSD tiered one!
