Connecting a NAS or SAN to a VMware host

IT Discussion
• NetworkNerd

You're also going to need to know how read-heavy or write-heavy your environment is in terms of hitting the database, no?

• scottalanmiller @NetworkNerd

@NetworkNerd Yes, you need to know your read/write mix to know what IOPS you need and where you need them.
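
As a rough illustration of the sizing arithmetic behind that read/write mix point, here is a minimal Python sketch. The RAID write-penalty values are the commonly quoted ones, and the workload numbers are made-up assumptions, not figures from this thread.

```python
# Minimal sketch: back-end IOPS needed for a given front-end load and
# read/write mix. The RAID write penalties (RAID 10 = 2, RAID 5 = 4,
# RAID 6 = 6) are the commonly cited values, used here as assumptions.

def backend_iops(frontend_iops: float, write_fraction: float, write_penalty: int) -> float:
    """Reads cost 1 back-end IO each; each write costs `write_penalty` IOs."""
    reads = frontend_iops * (1.0 - write_fraction)
    writes = frontend_iops * write_fraction
    return reads + writes * write_penalty

# Example: 2,000 front-end IOPS with a 70/30 read/write mix.
for name, penalty in [("RAID 10", 2), ("RAID 5", 4), ("RAID 6", 6)]:
    print(f"{name}: {backend_iops(2000, 0.30, penalty):,.0f} back-end IOPS")
# RAID 10: 2,600 / RAID 5: 3,800 / RAID 6: 5,000
```

The same front-end load costs very different amounts of back-end work depending on the write fraction, which is why the mix matters as much as the raw IOPS number.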

• NetworkNerd

Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?

• A Former User @NetworkNerd

@NetworkNerd said:

> Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?

Yes, but you usually use 10GbE or fiber for it, not your normal 1GbE switch. You also need switches designed for iSCSI/SANs for the best performance.

Granted, it's been about 2.5 years since I've done a major SAN rollout, so it could have changed some since then.

• scottalanmiller @NetworkNerd

@NetworkNerd said:

> Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?

If you are doing NFS (NAS), then yes. It increases the size of the pipe and so improves throughput.

Keep in mind that you cannot team / bond an iSCSI connection. You have to use technologies like MPIO to improve throughput.
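
To illustrate the difference: MPIO treats each path as an independent session and rotates I/O across them rather than aggregating links. Below is a toy Python sketch of that round-robin idea; the path names are made up, and switching after every single I/O is chosen only for clarity.

```python
# Toy illustration of the MPIO idea: each path is its own iSCSI session,
# and a round-robin policy (like ESXi's VMW_PSP_RR) rotates I/Os across
# them. Nothing here talks to real storage; it just shows why multiple
# paths add bandwidth without link aggregation.

from itertools import cycle

class RoundRobinMultipath:
    def __init__(self, paths, ios_per_switch=1):
        # `paths` are independent sessions, e.g. one per vmkernel NIC.
        self._paths = cycle(paths)
        self._ios_per_switch = ios_per_switch  # ESXi's default is 1000; 1 here for clarity
        self._remaining = ios_per_switch
        self._current = next(self._paths)

    def next_path(self):
        if self._remaining == 0:
            self._current = next(self._paths)
            self._remaining = self._ios_per_switch
        self._remaining -= 1
        return self._current

mp = RoundRobinMultipath(["vmk1 -> SAN-A", "vmk2 -> SAN-B"])
for i in range(4):
    print(f"I/O {i} issued on {mp.next_path()}")
# I/O 0 and 2 go out vmk1, I/O 1 and 3 go out vmk2.
```

On ESXi the real equivalent is binding one vmkernel port per NIC to the software iSCSI adapter and setting the round-robin path selection policy; the sketch only mimics the selection logic.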

• scottalanmiller @A Former User

@thecreativeone91 said:

> Yes, but you usually use 10GbE or fiber for it, not your normal 1GbE switch. You also need switches designed for iSCSI/SANs for the best performance.

Most people are still on GigE connections and it works fine for the bulk of users. It is amazing how little throughput you normally need.

You don't need switches designed for iSCSI, just ones that are fast enough. Edison labs uses unmanaged, low-end Netgear switches because they are so fast. Not designed for SAN use, just fast, and that is all that matters.

You can also skip the switches altogether.
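
Some rough arithmetic behind "how little throughput you normally need": small-block random I/O barely dents a gigabit link. The block sizes and IOPS figures below are illustrative assumptions, not numbers from this thread.

```python
# Rough arithmetic behind "GigE is fine for most workloads": random I/O
# at typical small block sizes uses surprisingly little bandwidth.

GIGE_USABLE_MBPS = 115  # ~1 Gbit/s after protocol overhead, roughly 110-118 MB/s

def throughput_mb_s(iops: int, block_size_kb: int) -> float:
    return iops * block_size_kb / 1024

for iops, block_kb in [(2_000, 4), (5_000, 8), (10_000, 4)]:
    mb_s = throughput_mb_s(iops, block_kb)
    pct = 100 * mb_s / GIGE_USABLE_MBPS
    print(f"{iops:>6} IOPS @ {block_kb}K = {mb_s:6.1f} MB/s (~{pct:.0f}% of a single GigE link)")
# Even 10,000 IOPS of 4K random I/O is only ~39 MB/s, about a third of one GigE link.
```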

• NetworkNerd @scottalanmiller

@scottalanmiller said:

> @thecreativeone91 said:
>
> > Yes, but you usually use 10GbE or fiber for it, not your normal 1GbE switch. You also need switches designed for iSCSI/SANs for the best performance.
>
> Most people are still on GigE connections and it works fine for the bulk of users. It is amazing how little throughput you normally need.
>
> You don't need switches designed for iSCSI, just ones that are fast enough. Edison labs uses unmanaged, low-end Netgear switches because they are so fast. Not designed for SAN use, just fast, and that is all that matters.
>
> You can also skip the switches altogether.

Yep - we use an unmanaged Netgear gig switch between our backup target (Drobo B800i SAN) and our ESXi hosts. It works great.

• A Former User @scottalanmiller

@scottalanmiller This was a 42TB video SAN (with multiple nodes for redundancy), so 1GbE would never give enough throughput to work off of.

• scottalanmiller @A Former User

@thecreativeone91 said:

> @scottalanmiller This was a 42TB video SAN (with multiple nodes for redundancy), so 1GbE would never give enough throughput to work off of.

Right, but that is not the norm. That is specifically a throughput-heavy environment, far from the typical case. IOPS typically outweigh throughput by a huge margin.
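
A quick back-of-the-envelope on why multi-stream video is the outlier; the bitrates are ballpark assumptions, not specs from this thread.

```python
# Why a multi-editor video SAN is the exception: sequential video streams
# chew through bandwidth quickly. Bitrates below are ballpark assumptions
# (ProRes 422 HQ 1080p is commonly quoted around 220 Mbit/s).

GIGE_MBIT = 1_000
TEN_GIGE_MBIT = 10_000

for codec, mbit_per_stream in [("ProRes 422 HQ 1080p", 220), ("Uncompressed 10-bit 1080p", 1_300)]:
    print(f"{codec}: ~{GIGE_MBIT // mbit_per_stream} stream(s) per GigE link, "
          f"~{TEN_GIGE_MBIT // mbit_per_stream} per 10GbE link")
# A GigE link holds only about four ProRes HQ streams and cannot carry even
# one uncompressed stream, while 10GbE comfortably handles a small edit team.
```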

• Jaguar

LAGs will only improve throughput so far, and really 10GbE is the way forward at the moment; you'll find you have lower latency and more throughput overall with 10GbE than with 1GbE LAGs. Also, switches that do iSCSI offload are almost never actually used to offload, so it's not saving you much. Just about any decent switch will work.

I wholly agree with the SSDs for databases, and really, anything that has a high number of 'touches' (otherwise known as IOPS!). Databases constantly make little tiny touches to apply changes, which results in a high number of requests going to the storage. This is where higher IOPS makes the difference. For systems like your average desktop, moving a large file takes relatively few IOPS, but more throughput.

This is why the term 'tiered storage' has become popular: you create tiers of storage that, depending on your needs, can be super-fast storage (SSD), fast storage (10K/15K drives), normal storage (7.2K drives), and nearline (~5K drives). Then you deploy your applications depending on how you want them to live and operate.
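
A minimal sketch of that tier-matching idea, purely illustrative: the tier names and threshold numbers are assumptions, not vendor guidance.

```python
# Minimal sketch of the tiering idea from the post above: pick a storage
# tier from a workload's IOPS and latency needs. Thresholds are made up.

TIERS = [
    # (name,               rough IOPS the tier comfortably serves, typical latency ms)
    ("nearline ~5K SATA",   100,   15.0),
    ("7.2K SATA/NL-SAS",    200,   10.0),
    ("10K/15K SAS",         400,    5.0),
    ("SSD",              50_000,    0.5),
]

def pick_tier(required_iops: int, max_latency_ms: float) -> str:
    for name, tier_iops, tier_latency in TIERS:
        if tier_iops >= required_iops and tier_latency <= max_latency_ms:
            return name
    return "SSD (or scale out / add spindles)"

print(pick_tier(required_iops=150, max_latency_ms=20))   # archive share -> 7.2K SATA/NL-SAS
print(pick_tier(required_iops=8_000, max_latency_ms=2))  # busy database -> SSD
```

Iterating from the cheapest tier upward means each workload lands on the least expensive tier that still meets its needs, which is the whole point of tiering.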

• scottalanmiller @Jaguar

@Jaguar said:

> I wholly agree with the SSDs for databases, and really, anything that has a high number of 'touches' (otherwise known as IOPS!). Databases constantly make little tiny touches to apply changes, which results in a high number of requests going to the storage. This is where higher IOPS makes the difference. For systems like your average desktop, moving a large file takes relatively few IOPS, but more throughput.

This large number of small touches is why huge RAM, both in the system and in the controller cache, makes such a big difference to databases. With a good RAID cache you can absorb a ton of write hits and speed up the system while also preserving the SSDs. And a large system memory will do even more, often keeping transactions from hitting the storage subsystem at all.
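
A toy model of that write-absorption effect: a write-back cache coalesces repeated writes to hot blocks, so far fewer physical writes ever reach the SSDs. All of the workload numbers are made-up assumptions.

```python
# Toy model of the RAID controller / RAM cache point above: dirty blocks
# are flushed in batches, and rewrites of a still-dirty block are absorbed,
# so the physical write load is a fraction of the logical one.

import random

def writes_reaching_disk(total_writes: int, hot_blocks: int, cold_blocks: int,
                         hot_ratio: float, flush_every: int) -> int:
    """Count distinct dirty blocks flushed; rewrites of a still-dirty block are free."""
    random.seed(42)
    dirty, flushed = set(), 0
    for i in range(total_writes):
        block = (random.randrange(hot_blocks) if random.random() < hot_ratio
                 else hot_blocks + random.randrange(cold_blocks))
        dirty.add(block)
        if (i + 1) % flush_every == 0:
            flushed += len(dirty)
            dirty.clear()
    return flushed + len(dirty)

total = 100_000
hit_disk = writes_reaching_disk(total, hot_blocks=500, cold_blocks=50_000,
                                hot_ratio=0.8, flush_every=5_000)
print(f"{total:,} logical writes -> {hit_disk:,} physical writes "
      f"({100 * hit_disk / total:.0f}% of the original load)")
```

A battery- or flash-backed controller cache does this in hardware; a database's buffer pool in system RAM does the same thing one layer higher.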
