
    Poor network bandwidth on VM (failover cluster)

    IT Discussion
    hyper-v
    29 Posts 7 Posters 5.9k Views
    • LAH3385 @dafyre

      @dafyre said:

      @LAH3385 said:

      @dafyre said:

      You could also try to move that synchronization slider a few notches towards the middle. That should give you a balance of sync and client access speed. It looks like you have it set to just focus on syncing. This could likely be what is hurting you.

      I moved it to 9/10 client access. Very little to no difference.

      Might be a wise thing to set it back closer to the defaults.

      Look at the Perfmon Counters for Disk Read / Writes and Queue Length for both of your servers that are running Starwind?

      [screenshot: Perfmon counter view]

      What am I looking for? How long should I run the test?

      • Dashrender

        Set the counters, then move some files around, then take a screenshot and post.
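
        For reference, capturing those counters from PowerShell (rather than clicking through Perfmon) could look something like the sketch below; the host names, sample interval, and output path are assumptions:

        # Sample disk read/write and queue-length counters every 5 seconds for 10 minutes
        # on both StarWind hosts, then save a .blg file that can be opened in Perfmon.
        $counters = '\PhysicalDisk(*)\Disk Reads/sec',
                    '\PhysicalDisk(*)\Disk Writes/sec',
                    '\PhysicalDisk(*)\Current Disk Queue Length',
                    '\PhysicalDisk(*)\Avg. Disk Queue Length'
        Get-Counter -ComputerName visor1, visor2 -Counter $counters -SampleInterval 5 -MaxSamples 120 |
            Export-Counter -Path C:\PerfLogs\starwind-disk.blg -FileFormat blg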

        • LAH3385 @Dashrender

          @Dashrender said:

          Set the counters, then move some files around, then take a screenshot and post.

          Idle, 1 minute:
          [screenshot: Perfmon while idle]
          File transfer, 11 minutes:
          [screenshot: Perfmon during the file transfer]

          • LAH3385 @LAH3385

            @LAH3385

            These are some tests I conducted.
            visor1
            visor2 (the host where the VM resides at the moment)
            VM
            File size: 1.8 GB (containing 450 files of 4 MB each)

            visor1 to visor2
            [screenshot: transfer speed]
            visor2 to visor1
            [screenshot: transfer speed]
            visor1 to VM
            [screenshot: transfer speed]
            visor2 to VM
            [screenshot: transfer speed]

            Connection from my PC to visor1/2 or the VM is around 6 MB/s to 11 MB/s (average around 7 MB/s).
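
            As a side note, one way to tell raw network throughput apart from disk throughput in tests like these is a TCP-only test between the hosts; a sketch with iperf3, assuming the tool has been copied to both hypervisors (host names as above):

            # On visor1, start the listening side:
            .\iperf3.exe -s

            # On visor2, run a 30-second test with 4 parallel streams against visor1:
            .\iperf3.exe -c visor1 -t 30 -P 4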

            • dafyre @LAH3385

              @LAH3385 said:

              What am I looking for? How long should I run the test?

              I saw you posted your Perfmon screenshot... Look under the disk performance tabs and see what you are getting.

              • LAH3385 @dafyre

                @dafyre

                Weird thing is, only hypervisor1 is able to view the report; visor2 and the VM return an error.
                [screenshot: Perfmon report error]
                I really do not think my disk is the bottleneck here.
                I did find a setting in GPO [Network / Background Intelligent Transfer Service (BITS)], but it should not have any impact on foreground transfers.
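
                To confirm whether a BITS bandwidth policy is actually applied on these machines (BITS throttling only affects background jobs, so a normal file copy should be unaffected), one quick check is the policy registry key; the path below is the usual policy location but is an assumption here, and the key only exists when a policy has been pushed:

                # Returns the BITS policy values if a GPO is applied; returns nothing otherwise.
                Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\BITS' -ErrorAction SilentlyContinue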

                • original_anvil (Vendor) @DustinB3403

                  @DustinB3403 said:

                  This could be related to the disk performance and not the network performance.

                  Just because the document is being written to a network share doesn't mean that is the issue.

                  +1

                  • original_anvil (Vendor) @LAH3385

                    @LAH3385 said:

                    Starwind console should have graphs available for all kinds of resource utilisation. Btw, you don't need a crossover cable on 1Gbit and faster ethernet cards.

                    I couldn't find where the graph would be located. But this is the setting on Synchronization priority:
                    [screenshot: StarWind synchronization priority slider]

                    It's on the Performance tab, but you can check the Windows performance monitor as well.
                    As for the priority, I'd recommend keeping it in the middle. It relates to synchronization after failures (FastSync or FullSync).

                    • original_anvil (Vendor) @LAH3385

                      @LAH3385 said:

                      These are some tests I conducted. [file-copy results between visor1, visor2, and the VM, quoted from the post above]

                      Connection from my PC to visor1/2 or the VM is around 6 MB/s to 11 MB/s (average around 7 MB/s).

                      Actually, measuring performance with a file copy is not the best way:
                      http://blogs.technet.com/b/josebda/archive/2014/08/18/using-file-copy-to-measure-storage-performance-why-it-s-not-a-good-idea-and-what-you-should-do-instead.aspx

                      I'd recommend running the IOmeter benchmark against a StarWind RAM disk over the network. It should show the real numbers.
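
                      For what it's worth, the linked article points to a dedicated I/O benchmark (DiskSpd) as an alternative to file copies. As an illustration only, a run against the clustered volume might look like this; the test file path, size, block size, and read/write mix are assumptions to tune for the actual workload:

                      # 60-second test: 64 KB blocks, 70/30 read/write mix, 4 threads, 8 outstanding
                      # I/Os per thread, random access, caching disabled, latency statistics enabled.
                      .\diskspd.exe -c2G -b64K -d60 -t4 -o8 -r -w30 -Sh -L C:\ClusterStorage\Volume1\diskspd-test.dat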

                      • original_anvil (Vendor)

                        @LAH3385 BTW, as I mentioned in the other post to you, we would be happy to jump on a remote session with you to look deeper into the issue and try to solve it. I'm going to PM you right now.
